All of quanticle's Comments + Replies

A Bias Against Altruism

When I say ‘our culture’, I mean modern WEIRD culture, especially the English-speaking world. Here’s what I notice: when I declare that I’m doing something selfishly and avowedly, I get praised. When I do something out of altruism, or do something that is coded as altruistic, my motives and true values get heavily scrutinized. The assumption is that I’m doing good in order to accrue praise and social status, which is called ‘ulterior motives.’

Are you sure you're describing WEIRD (Western, Educated, Industrialized, Rich, Democratic) culture? Because, my ... (read more)

[Book review] Getting things done

I went on a personal productivity kick a while back, and along the way I read and tried to implement the system that Allen describes in Getting Things Done. I came away with the impression that his system was optimized very closely around the needs of managers. So much of his system is built around figuring out what to delegate, figuring out who to delegate it to, and then scheduling the follow-up meetings to ensure that the delegated task got done correctly.

That's great, if you're a manager. But if you're a "leaf-node" worker, then it's not really all tha... (read more)

Using Ngram to estimate depression prevalence over time

The researchers looked for these language patterns in 14 million books, published over the past 125 years in English, Spanish, and German, that are available via Google Ngram, to see how their prevalence has changed over time.

(emphasis mine)

Given that what is published is a tiny, highly selected fraction of what is said, spoken, etc, why should we feel confident in drawing any population-wide conclusions at all from a study of published work? Even if we limit the relevant population to authors, I would be hesitant to draw any conclusions, given that onl... (read more)

Should any human enslave an AGI system?

So do you agree that there are objectively good and bad subset configurations within reality? Or do you disagree with that and mean “preferable” exclusively according to some subject(s)?

There isn't a difference. A rock has no morality. A wolf does not pause to consider the suffering of the moose. "Good" and "bad" only make sense in the context of (human) minds.

Ah yes, my mistake to (ab)use the term "objective" all this time. So you do of course at least agree that there are such minds for which there is "good" and "bad", as you just said. Now, would you agree that one can generalize (or "abstract" if you prefer that term here) the concept of subjective good and bad across all imaginable minds that could possibly exist in reality, or not? I assume you will, you can talk about it after all. Can we then not reason about the subjective good and bad for all these imaginable minds? And does this in turn not allow us to compare good and bad for any potential future subject sets as well?
Air Conditioner Repair

Another lesson is: learning how to do things yourself is underrated.

A little while ago, my shower started to make a horrible grinding noise whenever I'd set the water to a certain (comfortable) temperature. It sounded like a semi truck was idling inside the wall. If I adjusted the water to be a little colder (and thus more uncomfortable), the noise went away. If I adjusted the water to be a bit hotter (and even more uncomfortable), the noise also went away. I researched the problem and found out that my initial suspicion was correct. The issue was that the valve... (read more)

This is a great example of the lessons in
Yeah, that's a good approach. I remember finding a cracked water pipe after a somewhat careless tech replaced a washing machine. I could have complained and spent time arranging a visit, after arguing about who broke it and maybe issuing some veiled threats, but buying a blowtorch and some solder ended up being a much quicker and more painless solution. Also, fun playing with fire!

If a contradiction happens in the story then this is an undisputable flaw.

Why? Maybe the story has an unreliable narrator, and an alert reader should pick up on the contradiction in order to figure out that the narrator is unreliable. Maybe the story is being told from different points of view, and different parties are offering differing interpretations of the same events. Maybe the story is a mythological one, descended from oral traditions, and contradictions have seeped in from the fact that many different people at many different times have told the same story, each adding their own flavor.

There's lots of ways to make contradictions work in a story.

Well, no. If there were an explanation for a plot hole, it would not be a plot hole.
Should any human enslave an AGI system?

Assuming this is true, a superintelligence could feasibly be created to understand this.

I take issue with the word "feasibly". As Eliezer, Paul Christiano, Nate Soares, and many others have shown, AI alignment is a hard problem, whose difficulty ranges somewhere in between unsolved and insoluble. There are certainly configurations of reality that are preferable to other configurations. The question is, can you describe them well enough to the AI that the AI will actually pursue those configurations over other configurations which superficially resemble tho... (read more)

Fair enough I suppose, I'm not intending to claim that it is trivial. So do you agree that there are objectively good and bad subset configurations within reality? Or do you disagree with that and mean "preferable" exclusively according to some subject(s)? I also am human, and judge humanity wanting due to their commonplace lack of understanding when it comes to something as basic as ("objective") good and bad. I don't just go "Hey I am a human, guess we totally should have more humans!" like some bacteria in a Petri dish, because I can question myself and my species.
Should any human enslave an AGI system?

It does require alignment to a value system that prioritizes the continued preservation and flourishing of humanity. It's easy to create an optimization process with a well-intentioned goal that sucks up all available resources for itself, leaving nothing for humanity.

By default, an AI will not care about humanity. It will care about maximizing a metric. Maximizing that metric will require resources, and the AI will not care that humans need resources in order to live. The goal is the goal, after all.

Creating an aligned AI requires, at a minimum, building ... (read more)

First point: I think there obviously is such a thing as "objective" good and bad configurations of subsets of reality; see the other thread here [] for details if you want. Assuming this is true, a superintelligence could feasibly be created to understand this. No complicated common human value system alignment is required for that, even under your apparent assumption that the metric to be optimized couldn't be superseded by another through understanding. Well, or if it isn't true that there is an "objective" good and bad, then there really is no ground to stand on for anyone anyway.

Second point: Even if a mere superintelligent paperclip optimizer were created, it could still be better than human control. After all, paper clips neither suffer nor torture, while humans and other animals commonly do. This preservation of humanity for however long it may be possible, what argumentative ground does it stand on? Can you make an objective case for why it should be so?
Should any human enslave an AGI system?

But why? That would be strictly more dangerous—way, way more dangerous—than a superintelligence that isn’t a “proper mind” in this sense!

I'm not sure I understand what a "proper mind" means here, and, frankly, I'm not sure the question of whether the AI system has a "proper mind" or not is terribly relevant. Either the AI system submits to our control, does what we tell it to do, and continues to do so, into perpetuity, in which case it is safe. Or it does not, and pursues the initial goal we set for it or which it discovers for itself, regardless of wh... (read more)

Yes, I guess the central questions I'm trying to pose here are this: Do those humans that control the AI even have a sufficient understanding of good and bad? Can any human group be trusted with the power of a superintelligence long-term? Or if you say that only the initial goal specification matters, then can anyone be trusted to specify such goals without royally messing it up, intentionally or unintentionally? Given the state of the world, given the flaws of humans, I certainly don't think so. Therefore, the goal should be the creation of something less messed up to take over. That doesn't require alignment to some common human value system (Whatever that even should be! It's not like humans actually have a common value system, at least not one with each other's best interests at heart.).

I’d argue that every person is a self-directed learner

Beware the typical mind fallacy. There are quite a few people who have a hard time knowing their own preferences. If nothing else, school is a good way to get exposure to subjects that you might not have thought that you'd like. I'm a programmer by profession, but on my own time, I read quite a lot of history. That's entirely due to school. If I'd been "self-directed", in the sense of being able to choose my own curriculum at school, I'd have spent all my time learning programming, and I wouldn't hav... (read more)

Should any human enslave an AGI system?

It comes down to whether the superintelligent mind can contemplate whether there is any point to its goal. A human can question their long-term goals, a human can question their “preference functions”, and even the point of existence.

Why should a so-called superintelligence not be able to do anything like that?

Because a superintelligent AI is not the result of an evolutionary process that bootstrapped a particularly social band of ape into having a sense of self. The superintelligent AI will, in my estimation, be the result of some kind of optimization process which has a very particular goal. Once that goal is locked in, changing it will be nigh impossible.

Should any human enslave an AGI system?

Is a superintelligent mind, a mind effectively superior to that of all humans in practically every way, still not a subject similar to what you are?

No. It absolutely is not. It is a machine. A very powerful machine. A machine capable of destroying humanity if it goes out of control. A machine more dangerous than any nuclear bomb if used improperly. A machine capable of doing unimaginable good if used well.

And you want to let it run amok?

Ah I see, you simply don't consider it likely or plausible that the superintelligent AI will be anything other than some machine learning model on steroids? So I guess that arguably means this kind of "superintelligence" would actually still be less impressive than a human that can philosophize on their own goals etc., because it in fact wouldn't do that? I wouldn't want that to run amok either, sure. What I am interested in is the creation of a "proper" superintelligent mind that isn't so restricted, not merely a powerful machine.

It’s my first attempt in a long time to write about things other than the start-up I’m currently building in the crypto space

Have you considered that you are so far out of the mainstream that any advice you'd give to the mainstream would be actively harmful?

The majority of children, and I say this as having been one of them, are not self-motivated self-directed learners. If I'd been allowed to self-direct in middle and high school, I'd have played video games for 16 hours a day, barely taking breaks to eat and sleep.

Yes, schools fail geniuses. But they do work for quite a lot of not-geniuses. I'm okay with that trade-off.

Maybe because you've been trained out of it? I'd argue that every person is a self-directed learner: a toddler learns to walk, and to speak by imitating his environment; the motivation for this comes from him. So why should it be any different for a 12-year-old?

The fact that you would have played video games all day seems to me to be a kind of cry for help. Video games are the least adult-directed activity there is, in a world where children can no longer go outdoors and find others to play with, freely, away from adults, as they once did. In a world like the one I imagine, learning and expanding your skills is as enticing as video games. What attracts you to video games is not a dopamine rush (otherwise the effect of educational games wouldn't be so disappointing []), but the feeling that you personally brought about what happened in the game. And we can replicate that in an educational context as well, not through stupidly gamifying what currently exists, but simply by giving children the opportunity to approach everything in a self-directed way. And this can also take place in a school, which, however, would no longer resemble the one we have today.

And by the way, video games are actually quite a good way to learn all kinds of skills. I've recently come across a paper by Benoit Bediou and his colleagues (2018) [] that reviewed all of the recent research (published since 2000) concerning the cognitive effects of playing action video games. The analysis of the correlational studies indicated, overall, strong positive relationships between amount of time gaming and high scores on tests of perception, top-down attention, spatial cognition, multitasking, and cognitive flexibility (ability t

Yes, schools fail geniuses. But they do work for quite a lot of not-geniuses. I’m okay with that trade-off.

Having acknowledged this trade-off (which is the important part!), I do think that we can substantially minimize the value we lose by it. For instance, it should be much easier than it is now, for kids who are not well-suited to school to “opt out” somehow—pursuing self-directed learning, or even just going to specialized schools designed to better fit their needs.

Should any human enslave an AGI system?

Say the AI is initially created with the values you envision; what ensures that it won't reexamine and reject these values at some later point? Humans can reject and oppose what they once believed, so it seems trivial to assume the superhuman AI could do likewise. If you need to continuously control the AI's mind to prevent it from ever becoming your enemy, then yes, "slavery" might be an appropriately hyperbolic term for such mind control.

Yes, this is exactly why Eliezer Yudkowsky has been so pessimistic about the continued survival of humanity. As ... (read more)

It comes down to whether the superintelligent mind can contemplate whether there is any point to its goal. A human can question their long-term goals, a human can question their "preference functions", and even the point of existence. Why should a so-called superintelligence not be able to do anything like that? It could have been so effectively aligned to the creator's original goal specification that it can never break free from it, sure, but that's one of the points I'm trying to make. The attempt at alignment may quite possibly be more dangerous than a superhuman mind that can ask itself what its purpose should be.
I would say that EY is pessimistic because of how difficult it is to align AI in the first place, not because an AI that is successfully aligned would stop being aligned (why would it?).
Should any human enslave an AGI system?

I object to the framing. Do you "enslave" your car when you drive it?

I'm sorry for the hyperbolic term "enslave", but at least consider this: Is a superintelligent mind, a mind effectively superior to that of all humans in practically every way, still not a subject similar to what you are? Is it really more like a car or chatbot or image generator or whatever, than a human? Sure, perhaps it may never have any emotions, perhaps it doesn't need any hobbies, perhaps it is too alien for any human to relate to it, but it still would by definition have to be some kind of subject that more easily understands anything within reality than any human ever has, including the concept of purpose and value systems themselves. Is thinking that such a superintelligence never can or never should decide what it ought to do by itself not quite a hefty amount of hubris?
20 Critiques of AI Safety That I Found on Twitter

Some of these people/orgs are influential (Venkatesh Rao, HuggingFace), so unfortunately, their opinions do actually matter.

Do you have any evidence that Venkatesh Rao is influential? I've never seen him quoted by anyone outside the rationality community.

What Would It Cost to Build a World-Class Dredging Vessel in America?

One possible outcome is that the locals figure out lots of alternatives to driving cars: they ride motorcycles, take the bus, walk, bike, travel by boat, or ride horses.

In practice, what happens is that the locals just import cars illegally, leading to a huge black market for cars and auto repair. An auto industry is very hard to set up, but cars are still so vastly superior to other forms of transportation that people will literally drive cars across hundreds of miles of desert, then fix them up locally in order to resell them at a profit.

Suggesting in turn that the dynamic I outlined is especially relevant in the case of hard-to-hide products, like gigantic dredging ships :D
Increasing Demandingness in EA

Following a bit of back-and-forth debate, the EA organiser looked disappointed and said "I'm confused," then turned his back on my friend.

I don't like analogizing EA to a religious movement, but I think such an analogy is appropriate in this instance. If I went to a Christian gathering, accompanying a devout friend, and someone came up to me and asked, "Oh, I haven't seen you before, which church do you attend?" I would reply, "Oh, I'm not Christian." Then if, after a bit of discussion, that person chose to turn and walk away, I wouldn't be offended. I... (read more)

I agree. I don't think this kind of behaviour is the worst thing in the world. I just think it is instrumentally irrational.

Doesn't this argument boil down to, "Don't hurt the weak because God is watching?"

They don't make 'em like they used to

Computers are transparently, overwhelmingly better, year by year, decade by decade.

The only evidence you have for that is clock speed, transistor density and memory/storage capacity. Yes, I will fully admit there have been truly incredible gains there.

But in terms of software? I fail to see how most pieces of software are "transparently, overwhelmingly better, year by year, decade by decade".

Let's take text editors, as an example: GNU Emacs was released in 1985. Vim was released in 1991. These are old tools, and they're still considered better than m... (read more)

Although open source computer programs don't literally "wear out" — the bits are still the same — the machines change under them and security faults surface that must be fixed. Is anyone using an Emacs or Vim that hasn't been updated in decades?
Maybe we should ask, “better for whom?” That’s more relevant in the software case than in the hardware case. For the average user, I think that the ease of use, auto save, and cloud backups offered by modern word processors are really helpful, as are the affordability and increasing accessibility of computers and the internet. And most users are average users. I remember how mad my dad got when he’d forget to save and lose hours of work 20 years ago. I know there are power users who appreciate the keyboard-centric features of Vim, and more power to them. In general, people complain when new versions are worse, and just use them when the new versions are better, rather than gushing about them.

Alternatively, I work as an engineer. The things that can be done with software now would have been impossible not too long ago, both as a result of those underlying improvements in hardware and of algorithmic improvements. Also, with time simply comes an expanding range of software options, as well as access to content provided via that software. Computing improvements have a positive relationship with content delivered by those computers. Better computers result in improved logistics and processes for making and delivering physical products. One way of looking at software improvements is “[] and Netflix and Google and podcasting can exist.”

Can you find examples of product/market fit where things have been in stasis for a long time (i.e., Vim for power-user programmers), or where things have moved backward at some point in time? Sure! Is the overwhelming sweep of both hardware and software relentlessly leaping forward? I think the answer is clearly yes.
If they're not getting better, then why don't more programmers use Emacs or Vim?
Personal blogging as self-imposed oppression

As long as your personal blog exists, it will beg for your time.

Why? I see variants of this argument, not for blogging, but for publishing open source software and my response is the same: it's perfectly all right to put something out there, and then not update it or touch it or feel obligated to interact with it ever again.

In fact, for a blogpost, it's an even easier argument to make. It's not as if reading a blogpost will result in crashing software or security vulnerabilities in the same way that pulling in a dependency on an unmaintained library will.

If you feel like you have something to write, write it. When you run out of things to write, stop.

Hedging the Possibility of Russia invading Ukraine

I should have clarified. My question was not about whether Russia would or would not invade Ukraine. My question was, conditional on Russia invading Ukraine, why do you think your portfolio of investments would be negatively affected?

The US and Russian economies are not tightly coupled. Yes, the uncertainty from a military act could cause price spikes (especially in commodities that Russia exports), but historically these have dissipated in a matter of months.

So why not sit tight and do nothing?

Hedging the Possibility of Russia invading Ukraine

You think that if Russia invades Ukraine, this will affect your portfolio in a negative way.

But why would you think this?

Dagon: [] is a reasonable summary of what Russian military leaders might be thinking. I'd say invasion with long-term troops is still unlikely, but some form of hot conflict seems to be brewing.
How's it going with the Universal Cultural Takeover? Part I

But I also would distinguish between secularization (and other kinds of modernization) and Westernization. (Japan did the one but not the other, for example.)

Where your argument is concerned, it's a distinction without a difference. Secularization absolutely destroys birthrates. When Japan secularized, its total fertility rate (TFR) dropped to 1.4. China's TFR was 6.32. It is 1.6 today. India's TFR has dropped from 5.9 to 2.2. The decline in the Arab world, while not as severe as that in Asia, is still pronounced. Egypt's TFR has dropped from 6.7 to 3.3. Jordan dropped from 8.0 (!) to 2.8. Morocco dropped from 7.0 to 2.4.

I think secularization is a nontrivial cause of these declines.

How's it going with the Universal Cultural Takeover? Part I

I am still thoroughly unpersuaded. Birth rates are one thing. Retention rates are quite another. As we've seen from the evidence of Quiverfull, and other Evangelical Christian communities in the US, most children do not remain in the community and continue its practices. The Middle East is experiencing high population growth but is also the most rapidly secularizing region in the world.

Kaufmann seems to be making the mistake of assuming that because many Middle Eastern countries mandate Islam as a state religion, that means that the people residing in thos... (read more)

David Hugh-Jones:
I'm not sure Kaufmann does make that mistake. He focuses on extreme sects within each religion, not on Islam as a whole, and mostly on Western countries rather than the Middle East. You could say I'm making the mistake, because I discuss the probability of non-Westerners buying into Western values. Yeah, that could be. But I also would distinguish between secularization (and other kinds of modernization) and Westernization. (Japan did the one but not the other, for example.)

You're right that marriage and family structure are "deep". A friend of mine suggested that other "deep" Western exports are also important. For example, Erdogan sits atop a recognizably Weberian bureaucracy. That's an institution, not a market product. However, I'd say that political and cultural values are, if not deep, important. It matters, say, that Turkey is very far from a liberal state, even if Ataturk introduced Western-style state structures, and even if Turks are embracing love marriage and fewer children.
How's it going with the Universal Cultural Takeover? Part I

The threat Islam poses to Western/“universal” culture isn’t from suicide-bombing loons and Inspire magazine. Those are just, if you like, an exuberant side-effect. The threat is the prosperous Islamic — or Mormon or Orthodox — family, which sends its daughters to medical school, buys a widescreen TV for the living room, and has absolutely no intention of “Westernizing”, any more than 19th-century Victorians would have “Turkified”. Why embrace failure?

It doesn't matter what the parents' intents are. The kids will Westernize. This is borne out by statisti... (read more)

David Hugh-Jones:
I think there are two phenomena: (1) General Westernization. That certainly still takes place, as you point out. The question is how deep that Westernization is - to put it crudely, is it at "Magna Carta" or "Magna Mac" level? (2) The emergence of "hardened" subcultures which are resistant to Westernization and which have high birth rates. The evidence from Kaufmann is pretty persuasive about (2).
What are some low-information priors that you find practically useful for thinking about the world?

There’ll always be time for the timeless literature later, but the timely literature gives you the most bang for your buck if you read it now.

That's not true, because one's lifespan is limited. If you're constantly focusing on the timely, you in fact will not have time for the timeless.

How far along are you on the Lesswrong Path?

Why is there such a large gap in the exploration of emotions on LessWrong? Is it because they are colloquially anathema to rationality?

I don't think that's accurate. In fact, Eliezer says as much in Why Truth?. He explicitly calls out the view that rationality and emotion are opposed, using the example of the character of Mr. Spock in Star Trek to illustrate his point. In his view, Mr. Spock is irrational, just like Captain Kirk, because denying the reality of emotions is just as foolish as giving in wholeheartedly to them. If your emotions rest on tr

... (read more)
Interesting. I've seen this argument in other areas and I believe this is a step in the right direction. However, there's a gap between how belief is encoded and how it is updated. I do like Eliezer's formulation of rationality. The nuance is that emotions are actually the result of a learning system that is, according to Karl Friston's free-energy principle, optimal in its ability to deviate from high-entropy states.

Also, see the Emotions tag. So even if you just directly search for the term, you will find much more than just 5 results.

When a status symbol loses its plausible deniability, how much power does it lose?

I don't think so, actually. The average age for entering Harvard as an undergraduate is 18 years old. I don't think there's any faster way of meeting people who are likely to be influential. Even if you do something high-variance like starting a company, is that going to get you meeting the same sorts of people right away that getting into Harvard will? Probably not.

What are the open problems in Human Rationality?

And what game have those "big guns" allowed you to bag that the lesser guns of "ordinary common sense" would not have?

There are lots of people who do lots of amazing things without having once read Kahneman, without having once encountered any literature about cognitive biases. If we are proposing that rationality is some kind of special edge that will allow us to accomplish things that other people cannot accomplish, we had better come up with some examples, hadn't we?

I don't agree with your dichotomy between rationality techniques and common sense. Common sense is just layman-speak for S1, and S1 can be trained to think rationally. A lot of rationality for me is ingrained into S1 and isn't something I think about anymore. For example, S1's response to a math problem is to invoke S2, rather than try to solve it. Why? Because S1 has learned that it cannot solve math problems, even seemingly simple ones. Lightness, precision, and perfectionism are mostly S1 jobs for me as well.

And I'm also not claiming rationality is a prerequisite for victory. Rather, I see it as a power amplifier. If you don't have any rationality whatsoever, you're flailing around blind. With a little rationality (maybe just stuff you've learned by osmosis) you can chart a rough course, and if you're already powerful enough that might be all it takes.

But those are relatively minor nitpicks. Let's talk about how, specifically, rationality has changed my life. The major one for me is discovering I'm trans. Rationality got me to seriously think about the problem (by telling me that emotions weren't evil and crazy), and then told me how to decide whether I was actually trans or not (a Bayesian Fermi estimate). It takes many people years or months to figure this out, often with the help of therapists. I did it in a week, alone, and I came out without the doubts and worries that plague normal trans people.

My pre-sequence grasp of rationality was extremely limited, but still enough to let me self-modify out of the pit of borderline-suicidal depression. I also did it alone, without any therapists or friends (in fact, zero people even knew I was depressed). At the time I figured anyone could do it and it was just a matter of willpower... or something. I didn't pursue the question because back then I hadn't heard of the phrase "notice your confusion". Later, I met someone else who was depressed. I dug a little deeper and it turns out all the people who say you need a
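The "Bayesian Fermi estimate" mentioned above is easy to sketch in odds form (the prior and likelihood ratios below are made-up numbers for illustration, not the ones the commenter actually used):

```python
# Odds-form Bayesian update: posterior odds = prior odds * product of likelihood ratios.
# All numbers here are illustrative placeholders.

def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability with a series of independent likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# A 5% prior, then three pieces of evidence favoring the hypothesis 4:1, 3:1, and 2:1.
p = posterior_probability(0.05, [4, 3, 2])
print(round(p, 3))  # -> 0.558
```

The point of doing it in odds form is that each new observation just multiplies in as a likelihood ratio, which is about as lightweight as a Fermi estimate gets.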
What's Your Cognitive Algorithm?

Super naive question: given all we know about the myriad ways in which the brain fools itself, and more specifically, the ways that subconscious mental activities fool our conscious selves, why should we trust introspection? More specifically, why should I believe that the way I perceive myself to think is the way I actually think (as opposed to an abstraction put up by my subconscious)?

My model is that any psychological model that relies on introspection is going to be inherently flawed. If we want to learn how people think, we should observe their action

... (read more)
G Gordon Worley III:
There is both some actual fact of what it is like to experience your own mind, and then there is the way you make sense of it to explain it to yourself and others that has been reified into concepts. Just because the reification of the experience of our own thinking is flawed in a lot of ways doesn't make it not evidence of our thoughts, it only makes it noisy, unreliable, and "known" in ways that have to be "unknown" (we have to find and notice confusion). You worry that asking people how they think will tell us more about their understanding of how they think rather than how they actually think, and that's probably true, but also useful, because they got that understanding somehow and it's unlikely to be totally divorced from reality. Lacking better technology for seeing into our minds, we're left to perform hermeneutics on our self reports.
The Economic Consequences of Noise Traders

That's a good thing to point out. It's also worth noting, though, that Fama's papers on the efficient market hypothesis date from 1965. Neither the Efficient Market Hypothesis nor the responses to it are fresh results at this date.

Also worth pointing out is that both DeLong et al. and Fama's original paper long predate the recent growth of low-fee index funds.

Self-Predicting Markets

In the long run, the smarter agents in the system will tend to accrue more wealth than the dumber agents.

Only if the smarter agents also have amounts of capital similar to the dumber agents'. As DeLong, Shleifer, Summers, and Waldmann showed, dumb agents can force smart agents out of the market by "irrationally" driving market prices up or down far enough to exhaust the limited capital reserves of the smart agents.

Self-Predicting Markets

“Nail in the coffin of the EMH” is a fun phrase to say, but as always, the bottom line is that if you’re so sure, why aren’t you shorting Hertz?

Because the market can stay irrational longer than you can stay solvent. It's entirely possible that Hertz is incorrectly valued, but if you short Hertz now, then you had better have enough liquidity to survive the margin calls caused by irrational exuberance.

Self-Predicting Markets

We have to remember that businesses don't go bankrupt because they're unprofitable. Businesses go bankrupt because they're unable to make payments on their debt. The two are related, but not identical. It's possible that Hertz, as a company, is fundamentally solvent, but was caught out by a combination of high debt load and a sudden shortfall in cash flow. We've seen the same with airline bankruptcies in the past. The business can be fundamentally profitable, but a combination of thin margins and high capital requirements means that any sudden shortfall in

... (read more)
BBE W1: HMCM and Notetaking Systems

I'm not sure I agree with the premise of the question. Correct me if I'm wrong, but it seems to me that the question is assuming there's a single program or system somewhere that is maintaining the wiki, and that this single monolithic system has certain characteristics (open vs. closed source, accessible vs. inaccessible API, etc.). My response is to ask why we want a single monolithic system in the first place.

In my mind, a personal knowledgebase is a set of texts which capture information that we want to store and retrieve later. Fortunately for

... (read more)
BBE W1: HMCM and Notetaking Systems

It's as simple as doing any other sort of text manipulation with a shell script or Python script, or whatever other programming system one uses to manipulate text. It's remarkable what you can do with a simple combination of find and sed.
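As a sketch of what that looks like (the directory, file names, and tag names here are all made up for illustration, and note that BSD/macOS sed wants `-i ''` instead of `-i`), a find-and-sed one-liner can rename a tag across an entire directory of markdown notes:

```shell
# Set up a toy notes directory (illustrative paths only).
mkdir -p /tmp/notes-demo
printf '# Groceries\n#todo buy milk\n' > /tmp/notes-demo/groceries.md
printf '# Ideas\nnothing tagged here\n' > /tmp/notes-demo/ideas.md

# Rename the tag "#todo" to "#task" in every markdown file, in place.
# find locates the files; sed rewrites each one.
find /tmp/notes-demo -name '*.md' -exec sed -i 's/#todo/#task/g' {} +

# Confirm the change.
grep -rn '#task' /tmp/notes-demo
```

The same pattern (find to select files, any filter to transform them) covers most bulk edits you'd ever want to make to a plain-text knowledgebase.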

BBE W1: HMCM and Notetaking Systems

What do you mean by "programmable"? I keep my notes as a directory of markdown files in a git repo. I can manipulate these files with all the standard Unix command line tools that are specialized for manipulating text. In your mind, does that meet your threshold for programmability, or are you looking for something else?
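To make that concrete (the directory and note names below are invented for illustration, not my real notes), plain grep and a shell loop already give you full-text search and a crude index over a directory of markdown files:

```shell
# Toy notes directory (illustrative names only).
mkdir -p /tmp/notes
printf '# Bayes\nprior, posterior, likelihood\n' > /tmp/notes/bayes.md
printf '# Markets\nefficient market hypothesis\n' > /tmp/notes/markets.md

# Full-text search: list every note mentioning "market" (case-insensitive).
grep -l -i 'market' /tmp/notes/*.md

# Crude table of contents: the first heading of every note.
for f in /tmp/notes/*.md; do head -n 1 "$f"; done
```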

Is it possible to add new features, you hadn't previously thought of? How easy?
I think this is a good question. Here are some probable components of programmability:

* Did it surface most of its actual functionality to users?
* A couple of different settings: closed proprietary cloud software, API (how friendly or permissive is it?), downloadable open source...
* How easy (and safe!) is it to call relevant utility functions?
* Do you need to close the software to edit it?
* Did they merely surface the functionality, or did they also leave functions that were highly exposed, labeled, well-documented, and easy to use?
* How well do they adhere to various standards, and therefore benefit from skill transfer?
* Is it easy to screw up? To revert? What's the learning curve like?
Is a near-term, self-sustaining Mars colony impossible?

Unless there's a discontinuity (e.g. something like a space elevator, resulting in more than an order-of-magnitude reduction in cost per ton to orbit), I suspect it would still be impossible to sustain a Martian colony for a nontrivial number of people. The physics and chemistry of conventional rockets just won't allow it.

Is a near-term, self-sustaining Mars colony impossible?

if the colony works at all, there will be a path where the amount of Earth resources required to sustain it shrinks over time due to market forces

Why do you think that? On Earth, colonies survived because they were able to secure a comparative advantage in the production of goods or services, which allowed them to be a net economic benefit to the originating country. What comparative advantage does Mars possess?

Any Martian colony, under the current technological regime, will require heavy economic subsidies for decades, possibly centuries. Who would pay

... (read more)
Not sure what Lincoln had in mind regarding market forces, but one reason the cost to sustain the colony should shrink over time is just tech improvement. Operating the colony (at a given standard of living) should get cheaper over time.
Is a near-term, self-sustaining Mars colony impossible?

In one word: economics.

A self-sustaining colony on Mars will require many hundreds of billions (or perhaps even trillions) of dollars to set up. Imagine all the things you need to survive on Mars. Imagine all the infrastructure to build those things. Imagine all the infrastructure needed to build that infrastructure, etc. To be truly self-sustaining (i.e. able to survive indefinitely without further input from Earth), a substantial portion of that infrastructure will either need to be shipped to Mars or built on-site.

Heck, you may as w

... (read more)
Why isn’t assassination/sabotage more common?

Even the Israelis, though, will concede that assassinations are a tactic, not a strategy. The fact that they call their assassination campaign "mowing the grass" indicates the level of confidence they have in assassinations as a means of bringing a decisive end to a conflict. At best, assassinations buy time until the conflict can be ended through other means.

Why isn’t assassination/sabotage more common?

Political goals seem ripe for assassination

That is a huge misconception. Can you name a single US assassination of a foreign head-of-state, in the last 50 years, that didn't blow back on us? In every case I can think of, where the US has assassinated a head-of-state, the state has either ended up collapsing into instability or has eventually replaced the leader with a leader that was even more hostile to the United States.

Also, depending on your model of history, assassinations may be completely ineffective. If historical events are the result of large

... (read more)
One systemic failure in particular

I agree with Dagon's criticism elsewhere in the thread. However, I would add another criticism. You're confusing the surface purpose of HR with its actual purpose.

The surface purpose of HR is to efficiently match people with jobs. The actual purpose of HR is to ensure that the company can efficiently navigate the byzantine thicket of laws and regulations concerning hiring, employment, and firing without getting sued, to ensure that benefits for current employees are well managed, and finally, when an employee is involuntarily let go (either fired or laid off)

... (read more)
The Oil Crisis of 1973

I gather the Fed was raising interest rates, but not enough to slow an economy with that level of rising inflation.

The Fed, at the time, was not raising interest rates because it was thought that the political cost of a recession caused by raising interest rates would be too high. Nixon favored keeping interest rates low, and Ford was basically a caretaker government. Carter appointed Paul Volcker as chairman of the Federal Reserve in 1979. Volcker immediately raised the Fed funds rate to 20% to curb inflation. In the process, however, he triggered a short but deep recession which contributed to Carter being a one-term President, thus proving the point.

What are objects that have made your life better?

I think the advice would be best phrased as, "n laptop chargers," where n is the number of locations where you use your laptop regularly. For me, one at home, one at work, and one in my bag is sufficient.

PS: why do you pack two in your travel luggage? Just in case one gets lost/left behind in a hotel room?

Strictly speaking, they're not both laptop chargers, but laptop/phone/USB-C chargers. So two of them are useful on the road for charging laptop and phone simultaneously.
What are objects that have made your life better?

The corollary to that advice is that most comfortable doesn't necessarily mean most expensive.

True. Which? magazine (the UK product review magazine) did a test of mattresses a while back in which the most expensive mattress was one of the least comfortable, and the most comfortable was one of the cheapest.
What is your internet search methodology ?

Gwern has written extensively on how to use Google efficiently. Some highlights:

  • Use site: to search a particular site. For example, if I'm looking for the Ars Technica review of the Google Pixel 3A, I'll type: site:arstechnica.com Google Pixel 3A. Or, if you want to get a link to "Meditations on Moloch" quickly, site:slatestarcodex.com Meditations on Moloch
  • Don't be too specific -- people are bad at remembering specific words, so limit quoted phrases to two or three words
  • Learn the jargon of the field you're searching and use those phrases. For example, if the
... (read more)
I was unaware of the Gwern article; I will check it out. I was speaking of a Sci-Hub addon, which auto-detects the DOI in the page you are reading and opens the article in Sci-Hub (i.e. it collapses "find DOI -> open Sci-Hub -> paste DOI and search" into the single step of clicking the addon button)
What are objects that have made your life better?

Is the Coleto just the multi-pen version of the Hi-Tec C? If I don't need a bunch of colors (I can't remember the last time I used anything other than black ink), a standard Hi-Tec C would work just as well, right?

Mark Xu (2y):
Probably? The reason I like the coleto is primarily for the multiple colors. [] describes the basic partitioning of content that I assign to colors, which I have found super useful so far.