All of Grant's Comments + Replies

I've also been following COVID-19 for investment reasons. Every study I've read of the disease indicates it is extremely contagious relative to the flu. This recent retrospective study indicates that prior to Feb 5th, R0 in China was between 4.7 and 6.6. Time to double was 2.4 days:

However since then China has made herculean efforts to stop the spread of the disease. R0 has certainly plummeted. So I'm not sure what to think. I would imagine officially reported numbers fr... (read more)
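For concreteness, here's a toy sketch of what a 2.4-day doubling time implies under unchecked exponential growth. The initial case count is made up, and real epidemics bend away from this curve quickly (interventions, depletion of susceptibles), as noted above for China:

```python
# Illustrative only: pure exponential growth implied by a fixed doubling time.
# Not an epidemiological model; interventions change the curve quickly.

def cases_after(days, doubling_time=2.4, initial_cases=100):
    """Case count after `days` of unchecked growth."""
    return initial_cases * 2 ** (days / doubling_time)
```

With these assumed numbers, 100 cases become 200 after one doubling time and over half a million within a month, which is why even rough estimates of R0 and doubling time matter so much.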

Two things to keep in mind here though. First, it is not clear that the current testing is even catching all the cases. I think one of the Chinese CDC-equivalent researchers has said the test they have available is only identifying 30 to 50 percent of the cases but is essentially 100% accurate when it does identify an infection. So I'm not sure we get too much by just testing everyone with potential symptoms (community acquired). I suspect that depends on how plentiful the supply of test kits might be. The other thing: it sounds like this might be a virus that no vaccine will ever be available for. There is not one for SARS nor for HIV, and COVID-19 seems to be similar to both (clearly the same family as SARS).
Would you say the measures taken in Italy and South Korea (particularly the lockdown of towns in Northern Italy) are sufficiently similar to China's? I find that rather unlikely, considering the virus's spread in warm regions like Singapore [].

It looks like other blockchain technologies (altcoins) have been the victim of 51% attacks, so I'm going to read up on their repercussions. I wonder if they were carried out by bitcoiners who don't like competition?

It occurs to me that little can probably be done to stop attacks on distributed systems by large actors with non-monetary goals. If people are willing to throw a lot of resources into destroying a fledgling technology, they will probably succeed.

I do have an idea for a distributed public ledger in which attacks are possible but always negative-s... (read more)

Feel free to post your idea. No one expects you to revolutionize an industry in one post. It's fun to throw ideas around.

Thanks, I did not know about

On the 51% attacks, I was specifically thinking of state actors. However, mightn't any eventuality which leads to a lot of Bitcoiners who aren't enthusiasts or have ulterior motives (Bond villains?) be an issue? The current state of the BC community is probably mostly BC enthusiasts, i.e. people who aren't just in it for the money.

You're right that "wasted" was a poor term; "inefficient" would be better.

State actors would need to acquire mining hardware in order to make a 51% attack. Several million dollars worth of it. Trying to do so would have obvious effects on the small bitcoin mining equipment market, and unless the government turned all their machines on at once, they would spend the time ramping up to a 51% attack actually strengthening the network, and causing other miners to also buy more equipment to keep up with the new difficulty. This is certainly a possibility, if someone in the government recognizes the potential threat bitcoin poses to the government money monopoly. However, anyone smart enough to recognize the threat is also incentivized to not tell the government about it, or even downplay it, while investing in bitcoin themselves, as if the threat materialized they would become significantly wealthier than they would through a government job. As for the inefficiency argument, the operating costs of bitcoin (mining) should be compared to the operating costs of more traditional money transfer services of similar size. Think armored cars, vaults, double-entry accounting, and other such things that a 3-4 billion dollar firm engaged in money transfer would need, and compare those costs to the costs of blockchain mining.

Why does anyone think BitCoin is going to work when its users aren't mostly BitCoin enthusiasts?

I'm specifically referring to the incentives of 51% attacks. The returns on mining seem to increase as computing power eclipses 50%, creating an economy of scale in mining and incentivizing attacks.
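That discontinuity at 50% can be made concrete with the attacker catch-up probability from the Bitcoin whitepaper. This is a simplified gambler's-ruin model, not a full treatment of attack economics (it ignores hardware and electricity costs), but it shows why 50% is the critical threshold:

```python
# Probability that an attacker controlling fraction `q` of total hash power
# ever catches up from `z` blocks behind (simplified gambler's-ruin model,
# per the Bitcoin whitepaper).

def catch_up_probability(q, z):
    p = 1.0 - q  # honest network's share of hash power
    if q >= p:   # at or above 50%, the attacker eventually wins with certainty
        return 1.0
    return (q / p) ** z
```

At 30% of hash power, the chance of rewriting a 6-block-deep history is well under 1%; at 51% it is a certainty. That jump from "exponentially unlikely" to "guaranteed" is exactly the economy-of-scale worry.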

The information in a science textbook is (or should be) considered scientific because of the processes used to vet it. Absent this process, it's just conjecture.

I often wonder if this position is unpopular because of its implications for economics and climatology.

Macroeconomics? Sure, it's highly politicized, so in many cases I'll agree with that. But microeconomics is in many ways the study of how to rationally deal with scarcity. IMO, traditional micro assuming homo economicus is actually more interesting (and useful, outside of politics) than the behavioral stuff for this reason.

Is there overwhelming evidence on the safety (not efficacy) of vaccines somewhere, and I've just missed it?

The amount of evidence depends on the vaccine, but the US National Academy of Sciences published a long report [] on this in 2011.

I used to just trust the word of the experts, because I am not an expert and had no incentive to become one. I didn't have a lot of faith in such a politicized science, but reasoned it was probably better than anything I could come up with. I trusted the IPCC reports, but after reading about Climategate thought they were exaggerated a bit as a means to gain political power.

Recently I've started to consider investing in alternative energy. Given that most alternative energy (especially with the fracking and shale oil revolutions) is based on AGW being a ser... (read more)

Agreed. Powerful people (especially politicians) seem to hold plenty of irrational beliefs. Of course we can't really tell the difference between lying about irrational beliefs and hypocrisy, if there's a meaningful difference for the outside observer at all.

The problem is that the politician who honestly holds a popular irrational belief (assuming said belief isn't directly related to the mechanisms of election campaigns) is better able to signal it, and thus more likely to get elected, than the politician who merely falsely claims to hold it.

The quote refers to the (end) market and users, not the internal workings of a software development firm.

Networking protocols face similar challenges. I wonder if there's a rationality of conversation hidden somewhere in here?

Could you say more about that?
I've always held that humans should strive to abide by Postel's Law too.
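For anyone unfamiliar, the software version of Postel's Law is "be conservative in what you send, liberal in what you accept." A toy sketch (made-up example, not any particular protocol's rules):

```python
# Toy illustration of Postel's Law: accept many spellings of a boolean flag
# (liberal in what you accept), but always emit one canonical form
# (conservative in what you send).

TRUTHY = {"1", "true", "yes", "on", "y"}

def parse_flag(text):
    """Liberal parsing: tolerate case, whitespace, and common synonyms."""
    return text.strip().lower() in TRUTHY

def emit_flag(value):
    """Conservative output: exactly one canonical spelling."""
    return "true" if value else "false"
```

The conversational analogue would be interpreting others charitably while speaking as unambiguously as you can.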

This has always been my experience shopping at Florida Walmarts: the employees are horrible. Perhaps they could be making more money with a higher minimum wage, better unionizing or what have you, but I have always viewed Walmart's ability to make their employees productive as some sort of miracle of capitalism.

I can't think of another chain business I've experienced with the same or lower caliber of employee.

I haven't found that to be the case with personal gifts either. I spend a lot of time trying to pick out good gifts, and generally seem to fail. It just seems so very much easier for someone to pick out something they enjoy for themselves than it is for someone else to do it. I find most gifts given to me undesirable, but still have to look happy and grateful to receive them. The majority of the time I'd rather not have gotten or given any gifts at all.

I keep trying to get friends and family to forgo the normal gift-giving holidays in favor of giving to charity, with limited success.

True. Some sources indicate that some Japanese cities were left intact precisely so the American military could test the effects of a nuke!

That is no reason to drop the bomb on a city though; there are plenty of non-living targets that can be blown up to demonstrate destructive power. I suppose doing so wouldn't signal the will to use the atomic bomb, but in a time when hundreds of thousands died in air raids I would think such a thing would be assumed.

I suppose this highlights the fundamental problem of the era: the assumption that targeting civilians with bombs was the best course of action.

If you drop a nuke on a Japanese city you kill three birds with one stone: you get to test how it works for intended use (remember, it was the first real test so uncertainty was high); you get to intimidate Japan into surrender; and you get to hint to Stalin that he should behave himself or else.

If the bombing of Nagasaki contributed more to the end of the war than the bombing of Tokyo, then we could easily say it was morally superior. That is not to say there weren't better options of course.

We can debate endlessly the wisdom of bombing Hiroshima, but does anybody have a defence for bombing Nagasaki? Since this is the quotation thread, I'll quote Dave Barry: I'm seriously curious. (Reasonably rational arguments, of course.)
Many (most?) historians believe that the Soviet entry into the war induced the Japanese surrender. Some historians believe that American decision makers expected Japan to surrender soon and wanted to use atomic bombs before the end of the war, to demonstrate their power to the Soviets. Gaddis Smith: A very small number of historians believe that the atomic bomb on net cost American lives. Martin Sherwin:

Consider what "the cold war" might have been like if we hadn't of had nuclear weapons. It probably would have been less cold. Come to think of it, cold wars are the best kind of wars. We could use more of them.

Yes, nukes have done terrible things, could have done far worse, and still might. However, since their invention conventional weapons have still killed far, far more people. We've seen plenty of chances for countries to use nukes where they've not, so I think it's safe to say the existence of nukes isn't on average more dangerous than the existence of other weapons. The danger in them seems to come from the existential risk, which is not present when using conventional weapons.

Consider what the last big "hot war" would have been like if the atom bomb had been developed even a couple of years earlier, or by another side.
Indeed, I'm pretty sure that if not for nuclear weapons, some right-thinking Russian would have declared war over the phrase "hadn't of had". And very rightly so. The slaughter inflicted by mere armies of millions, with a few tens of thousands of tanks, would have been a small price to pay to rid the world of abominations like that one.

True, but it's not clear morals have saved us from this. Many of our morals emphasize loyalty to our in-groups (e.g. the USA) over our out-groups (e.g. the USSR), with less than ideal results. I think if I replaced "morality" with "benevolence" I'd find the quote more correct. I likely read it too literally.

Though the rest of it still doesn't make any sense to me.

These (nebulous) assertions seem unlikely on many levels. Psychopaths have few morals but continue to exist. I have no idea what "inner balance" even is.

He may be asserting that morals are necessary for the existence of humanity as a whole, in which case I'd point to many animals with few morals who continue to exist just fine.

I know of no animals other than humans who have nuclear weapons and the capacity to completely wipe themselves out on a whim.

We're required to wear helmets, nomex suits, gloves, socks and shoes (lots of fun in 90°F+ weather), head and neck restraint devices, and 5- or 6-point harnesses. However, keep in mind race cars do not have airbags, while it's becoming more and more common for passenger cars to have airbags galore. With airbags, the benefits of a helmet are much reduced.

As an amateur race car driver, I've got a few things to add here.

There's one very important tip I've never seen driver's ed courses mention concerning rain driving: the available traction on wet pavement varies wildly depending on the surface. Rougher surfaces tend to offer more grip; some feel nearly as good as driving in the dry. Smoother surfaces tend to offer less; some (the worst blacktop parking lots) feel as bad as driving on ice. Any paint (such as painted-on brick strips at some intersections) is going to be very slick, as is most concrete (as ... (read more)

A more direct approach might be: "no patches which frobnicate a beezlebib will be accepted".

There are many FOSS projects that don't use Linus's style and do work well. What's so special about Linux?

I would say the size (in terms of SLOC count), scope (everything from TVs to supercomputers), lack of an equivalent substitute (MySQL or Postgres? Apache or Nginx? Linux or... BSD?), importance of correctness (it's the kernel, stupid), and commercial involvement (Google, Oracle, etc.) make it very different from most FOSS projects. Mostly I'd say the... (read more)

Certainly he and his team are less likely to accept patches from people who they've had trouble with in the past? And people who have trouble getting patches accepted (for whatever reason) are probably not going to be paid to continue doing kernel development?

It would surprise me if he's never outright banned anyone.

Thanks for the correction, edited my comment above.

Which means that anyone who doesn't like his style is free to leave at any time without any consequences in the sense of salary, health insurance, etc. The fact that kernel development goes on and goes on pretty successfully is evidence that your concerns are overblown.

As of 2012-04-16, 75% of kernel development is paid. I would assume those developers would find their jobs in jeopardy if Linus removed them from development.

Um, Linux kernel doesn't work like that. Linus doesn't "add" anyone to development or "remove" anyone. And I don't know if companies who pay the developers would be likely to fire them if the developers' patches start to get rejected on a regular basis. Oh, and you misquoted your source. It's not 75% of developers, it's 75% of the share of kernel development and, of course, some developers are much more prolific than others.

It does assume that asset bubbles are made up of bad investments which are costly to undo. While this insight may have been originally Austrian, I didn't think it was at all contentious. The dot-com bubble is a clearer example, as the housing bubble was both an asset bubble and banking failure (and many of the dot-com investments were just off-the-wall crazy).

As Vernon Smith showed, asset bubbles happen even with derivatives whose value is objective (and without central banks). It's hard for me to see the bust as the problem in those cases.

Would a Keynesian say that any economic downturn can be averted in the face of any and all bad investments?

Doubtful. (I should make clear that I'm not a professional economist, and I couldn't talk math with a Keynesian without doing serious reading first.) To go off the same graph, it does identify the tech bubble in ~2000 as being above the projected line. My impression of the difference is that in the terms of a crude analogy, the Austrian prefers to rip the band-aid off, and the Keynesian prefers to slowly peel it back.

From the articles linked from Welcome to Less Wrong:


The title is descriptive and the text is short and to the point. Empirical support is present and clearly stated. Of course it could be shortened quite a bit more without losing any information, but I don't find it excessively verbose.


It's a long post, not trivial to follow, and when reading it's not clear how the effort will pay off. Perhaps this is evidence of a short attention span, ... (read more)

I think a better term might be 'meritocratic', and not 'democratic'. Unless mathematicians vote on mathematics?

Well, it is also democratic in the sense that what convinces the mathematical community is what matters, and there's no 'President of Mathematics' or 'Academie de la Mathematique' laying down the rules, but yes, 'meritocratic' is closer to what I meant.

Ditto, and downvoting b1shop's response since the quote did not mention any particular economic theory. Busts caused by widespread bad investments aren't necessarily the problem, the widespread bad investments are the problem. Blaming the bust in these cases may be shooting the messenger.

That's not to say all busts are largely caused by widespread bad investments, or anything about why these bad investments happen. It is, however, very clear in hindsight that many boom-phase investments are crazy.

I wouldn't recommend downvoting b1shop's response (I didn't), because they are correct that the basic reading of the quote relies on particular economic assumptions. There are economic theories that put the fault in the bust- if things were intelligently managed, you could keep the bubble inflated at just the right amount to prevent it from popping or inflating further, and never have to deal with the bust. For example, look at this graph [] that Krugman posted in 2010. The "projected real GDP" is from Mark Thoma, another economist, but where you choose to draw that line says a lot about your assumptions. The Austrian would basically draw it from trough to trough, claiming that all the reported GDP above that line was activity that could be recorded but didn't actually generate lasting wealth. In that view, the bubbles are clearly harmful; in Krugman's view, the busts are harmful. It's the difference between a trillion dollars that we can never get back, and a trillion dollars that was never there.
Rob Bensinger:
I'm not downvoting Eugine, because Vaniver's interpretation is interesting. But I am upvoting b1shop, because the quotation does sound like Austrianism on a bumper sticker. So it applause-lights a false fringe theory associated with an anti-empirical intellectual community, in addition to plausibly generating specific false beliefs about economics and/or ethics if taken on its face. (Busts, or more generally human misery, are the reason 'distortions' and 'not making sense' are a bad thing in the first place; economies aren't primarily maps.) It's interesting and revealing in subtle ways, but misleading in banal and obvious ways.

I'm sure the use of prediction markets to predict existential threats is difficult, but it seems like you could at least use them to predict the emergence of AI. I'd be surprised if this wasn't discussed here at some point.

It seems to me that while prediction markets may not need funding from a technical perspective, public and especially political opinion on them does need some nudging. I don't think I'm entering mind-killing territory by suggesting it'd be good if politics didn't get in their way so much. I'm certainly no expert, but long-running markets... (read more)

If existential risks are hugely important but we suck at predicting them, why not invest in schemes which improve our predictions?

Peter Wildeford:
I think such schemes are promising avenues for exploration. I don't currently know of any schemes that can demonstrate a track record of improving predictions, have room for more funding, and can make a case that marginal funding would yield a marginal benefit in making predictions.

To me "filled with falsehoods and errors" translates into more falsehoods than "some". Though I agree it's not a very good quote within the context of LW.

All true, but there are many booms which seem to produce crazy investments; the dot-com boom is the most obvious recent example. You don't need to accept ABCT to accept this, and I'd guess most people who do notice this don't accept ABCT.

"Only" was a gross exaggeration. I'm not sure why I typed it.

I think my examples are pretty typical though. Charitable people get lobbied by people who want charity. This occurs with both personal and extended charity. In my case it gets me roped into spending more time on other people's technical problems (e.g. open-source software projects) than I'd like.

I haven't contributed to many charities, but the ones I have seem to have put me on mailing and call lists. I also once contributed to a political candidate for his anti-war stance, and have been rewarded with political spam ever since. I'm not into politics at all, so it's rather unwelcome.

INTP male programmer here. I've never posted an article and rarely comment.

One thing which keeps me from doing so is actually HPMoR, and EY's posts and sequences. They're all really long, and seem to be considered required reading. I know it's EY's style; he seems to prefer narratives. Unfortunately I don't have a lot of time to read all that, and much prefer a terser Hansonian style.

A shorter "getting started" guide would help me. Would it help others?

Only a minority of respondents to the 2012 survey [] had read “about 75%” or “nearly all” of the sequences. So long as you've read the links in the welcome thread [] and you're prepared to be corrected, you should be fine.
I don't think HPMoR is required reading to learn rationality from LW and related places, but it is one of the few things making rationality general-interest at this point. I do agree that a short "getting started" guide would be helpful, though.
Rob Bensinger:
I've been thinking about this problem lately, and I agree it's a problem. I have some tentative ideas for starting to address it, which I'll post to Discussion next week. I'd like more data on where the stumbling blocks are, though.

1. Are there LW posts (by Eliezer or whoever) that you have found helpful, readable, concise, etc.? If so, what are some of the better examples? Would you say, for example, that Lukeprog and Hanson's styles work for you about equally well?
2. What are some examples of specific posts (or series of posts) you haven't gotten through? How much was a result of length, how much a result of content (e.g., too difficult or boring or mathy), and how much a result of style (e.g., too narrative or unstructured or jargony)?
3. What are specific ideas, perspectives, approaches, or terms you feel (or have been told) you're currently missing out on? The more examples of this the better.

ETA: I'd be interested in others' responses to this too.

I'm not very well informed on this topic, but isn't something like that always going to be the case in a society with a safety net? e.g., if we make sure everyone has at least $25k to live on, anyone making $8k a year isn't going to be any worse off than someone making $25k.

Of course I'm not sure how well America's arcane maze of benefits, tax deductions and whatnot fit into this simple abstraction.


Safety net should be a slope, not a cliff. Earning your first dollar shouldn't mean you get $1 less in benefits - there's actually a good argument for subsidizing the first $X of income - which is what the EITC is. Basically negative income tax.
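The cliff-versus-slope difference can be sketched with made-up numbers (purely hypothetical; the real US system phases benefits out program by program):

```python
# Hypothetical benefit schedules. A "cliff" tops everyone up to a floor, so
# earned dollars below the floor are worthless to the earner; a "slope"
# (EITC-style / negative income tax) phases the benefit out gradually.

FLOOR = 25_000      # assumed guaranteed minimum
PHASE_OUT = 0.5     # assumed rate: lose 50 cents of benefit per dollar earned

def net_income_cliff(earnings):
    return max(earnings, FLOOR)

def net_income_slope(earnings):
    benefit = max(0.0, FLOOR - PHASE_OUT * earnings)
    return earnings + benefit
```

Under the cliff, earning $8k leaves you exactly where earning nothing does; under the slope, it leaves you $4k ahead, so work always pays.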

Good article, thanks. The author does say the taste was quite different from chicken; you just can't tell when it's in a burrito, as the chicken is mostly used for texture. The producer's website is here.

Another idea, with potentially better returns than the above: invest in faux-meat producers. There appear to be plenty of them.

I agree that this is potentially a high-impact avenue. New Harvest [] is a charity which sponsors meat substitutes, both plant-based and tissue-engineered, if you are interested.

Roughly half of Americans don't owe anything to the IRS each year. Pre-recession I believe this figure was about 40%. They of course pay other taxes, such as payroll (social security, medicare, which most people consider taxes), state sales tax, property taxes, etc. It'd be nice if they at least didn't have to file tax returns.

You mean about half (actually 46%) of all American households did not pay any income tax (which is different from "not owing anything to the IRS") in 2011. 20% of all Americans don't pay income tax by virtue of being too young to work.
I thought they wouldn't need to file taxes, but I just completed a "tax assistant" wizard at the IRS website for a single, non-retirement-benefit-receiving individual with $20k in gross income ... and I was told they'd have to file a return.

The problem isn't just all those other taxes but phasing-out of benefits - this is what leads to the calculations and observations by which somebody making $25,000/year isn't much better off than someone getting $8,000/year.

Idea: if you're very interested in promoting veganism or vegetarianism, help make it taste better, or invest in or donate to those who are helping make it taste better. As my other much-downvoted comment showed, I am very skeptical that appeals to altruism will have nearly as much of an effect as appeals to self-interest, especially outside of this community. I believe most people eat meat because it just tastes better than their alternatives.

Grown crops are far more efficient to produce than livestock, so there are plenty of other good reasons to transiti... (read more)

When I envision a hypothetical future in which humans don't consume meat, I don't imagine everyone getting their protein from some kind of tank-grown super-tasty 'I Can't Believe It's Not McDonalds!' meat substitute. The meat-heavy diet of Western societies has no basis in evolutionary terms, and I don't see why we should seek to perpetuate this relatively modern obsession and dietary imbalance. Contrary to what many meat eaters think, a vegan diet can be incredibly varied and tasty once you get used to cooking with a wider variety of herbs, spices and ingredients which aren't currently mainstream in Western cuisine. I personally find things like smoked tofu, coconut oil and milk, and nuts like pistachios and cashews to be every bit as tasty as any meat product. The consumption of large quantities of red meat and animal-derived fats is cultural, not essential, and in terms of nutrition not even especially desirable. The massive over-consumption of bovine dairy products is particularly nonsensical when more efficient, more nutritious alternatives exist.
Data point: I do. This is probably low-status, but I do prefer the taste of meat even in the junk foods to most of the alternatives. In my experience, most of the alternatives are significantly improved by adding some meat to them. Most likely [], no. Otherwise we would already see them sold everywhere. Unless they were invented yesterday, or are extra expensive, or something like that.
Perhaps, but some preliminary findings show that online ads may be very effective (Peter posted about this on LW recently). Hopefully more research into effective outreach will be done in the future.
Well, as a meat-eater I've got to admit that meat substitutes have come a long way in the last few years. A couple days ago I ended up eating vegan burgers which would have passed muster as mediocre cow, and vegetarian sausage tends to be fairly acceptable as well. I can't say the same for anything made from chunks too big to stir-fry, though, and I've never eaten any vegetarian products passing as rare meat, which I tend to prefer.
It varies a lot by brand. The food columnist for the New York Times couldn't tell that Beyond Meat wasn't chicken [], for example.

Thank you for the explanation. I was trying to play the devil's advocate a bit and I didn't think my comment would be well-received. I'm glad to have gotten a thoughtful reply.

Thinking about it some more, I was not meaning to anthropomorphize evolution, just to point out Homo hypocritus. On any particular value of a person's, we have:

  • What they tell people about it.
  • How they act on it.
  • How they feel about it.

I feel bad about a lot of suffering (mostly that closest to me, of course). However, it's not clear to me that what I feel is any more "me" th... (read more)

I'm not sure how much truth there is in this generalisation. Countless environmental activists, conservationists and humanitarian workers across the globe willingly give their time and energy to causes that have little or nothing to do with satisfying their own local needs or wants. Whilst they may not be in the majority, there are nevertheless a significant minority. I doubt many of them would be happy to be told they are only 'signalling altruism' to appear better in the eyes of their peers. On the other hand, I suppose you could argue the case that such people have X-altruistic [] personalities and that perhaps that isn't a desirable quality in terms of creating a hypothetical perfect society.
Any examples?

I admit to being perplexed by this and some other pro-altruism posts on LW. If we're trying to be rationalists, shouldn't we come out and say: "I don't often care about others' suffering, especially of those people I don't know personally, but I do try and signal that I care because this signaling benefits me. Sometimes this signaling benefits others too, which is nice."

I agree everyone likely benefits from a society structured to reward altruism. We all might be in need of altruism one day. But there seems to be a disconnect between the prose of... (read more)

I don't often care about others' suffering, especially of those people I don't know personally, but I do try and signal that I care because this signaling benefits me

Remember the evolutionary-cognitive boundary. "We have evolved behaviors whose function is to signal altruism rather than to genuinely cause altruistic behavior" is not the same thing as "we act kind-of-altruistically because we consciously or unconsciously expect it to signal favorable things about us".

If you realize that evolution has programmed you to do something for so... (read more)

Peter Wildeford:
Sometimes, pleas for altruism are exactly what they seem. Not everything is a covert attempt at signaling. Trying to say that altruism is not serving self-interested reasons is kind of missing the point.

what I thought was the general rationalist belief that altruism in extended societies largely exists for signaling reasons.

That's, um, not a general rationalist belief.

It probably wouldn't stop political competition, but it very well may slow competition in political systems. If there was one world democratic or republican government, would it let something like futarchy develop? That isn't to say that futarchy would have an easy time coming into being anyway, but it seems like it might be harder under a single world government.

More generally, how often does political innovation occur without violence, or the threat of it? It took violence in the case of the American and French revolutions. Reforms in the UK seem to have... (read more)

I think there's widespread agreement that a probable problem would be stagnation or lack of progress - through a lack of competition. Hopefully, if such a system is ever realized, imagined challenges (the threat of extinction, the threat of future aliens) would keep it from becoming too lazy and complacent.

I'm not sure I'm following the logic here. The failure of science to raise money via voluntary means is evidence that it is too much of a non-ancestral problem?

Well, I'll agree that if we somehow had science as it exists now for a few hundred generations, we'd probably be better at funding it. But that's true of anything. Standard economics predicts that funding large-scale public goods is difficult via voluntary means, and public choice explains why it's difficult for governments too. If you believe Coase, this difficulty is a feature, not a bug, because it ... (read more)


there will be those who write with an utterly pure and virtuous love of the truthfinding process; they desire solely to give people more unfiltered evidence and to see evidence correctly added up, without a shred of attachment to their or anyone else's theory.
They're implicitly attached to the theory that this process really does find the truth, and they may be attached to the idea that it is the best or one of the best processes for doing so. On a slightly more abstract level, is there a difference between Informers and Persuaders?

For example, a ... (read more)


I'm not entirely sure how "they are offended by helpless victims being forced to suffer against their will and want to remove that" translates into "the SHs aren't nice in any sense of the word".
They aren't offended by suffering, but by the expression of it. They don't even understand human brains, and can't exchange experiences with them via sex, so how could they? Maybe the SHs are able to survive and thrive without processing certain stimuli as undesirable, but they never made an argument that humans could.

Eliezer, thanks. I mostly read OB for the bias posts and don't enjoy narratives or stories, but this one was excellent.

Tyrrell, we aren't told how many humans exist. There could be 15 trillion, so the death of one system may not even equal the number of people who would commit suicide if the SHs had their way.

I don't find the SHs to be "nice" in any sense of the word. In my reading, they aren't interested in making humans happy. They can't be - they don't even understand the human brain. I think they are a biological version of Eliezer's smiley f... (read more)

I'm mostly with Kaj; I don't see the problem. Designing a companion will often be a superior strategy to trying to acquire one largely through trial and error with existing people. Why would anyone want a "catgirl" when they could make a human who was perfectly suited for them?

If anything, I think problems may come from women, who will find themselves no longer able to acquire resources by virtue of their attractiveness. Of course, if we have enough technology to create companions, we could probably easily modify women (or men) to be as attractive as the "catgirls", and perhaps put women on equal footing with men in the engineering department (so they don't suffer economically).

But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band, because it's harder to talk to the tribal chief or (if that fails) leave unpleasant restrictions and start your own country. There is an opportunity for progress here.
I strongly disagree with this statement. A tribal tyrant likely has a much greater effect on someone's personal life than a president or legislator. It's probably harder to start your own country today, but it's not harder to leave your country (trib... (read more)


Understood; though I'd call fraud coercion, the use of the word is a side issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e., the improvement would be unlikely to alter its goal system.

Nick, that's why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.

If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.

I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient too (I'm not saying I'd still eat them, of course).

Anyway, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.

It's not going to make you more powerful than itself if that would limit its ability to make you more intelligent in the future. It will make sure it stays intelligent enough to convince you to accept the modifications it wants you to have, until it convinces you to accept the one that gives you its utility function.