All of Fredrik's Comments + Replies

I am trying to build a collaborative argumentation analysis platform. It sounds like we want the almost exact same thing. Who are you working with? What is your detailed vision?

Please join our FB group at or contact me at branstrom at

What if I were to try to create such a web app? Should I take 5 minutes every lunchbreak asking friends and colleagues to brainstorm for questions? Maybe write an LW post asking for questions? Maybe there could be a section of the site dedicated to collecting and curating good questions (crowdsourced or centrally moderated).

I guess I wasn't selected if I haven't received an email by now? Or are you staying up late sorting applications? Will you email just the selectees or all applicants?

I received an e-mail saying I wasn't selected a couple days ago. Maybe your spam folder?

Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...

Well, presumably Roko means we would be restricting the freedom of the irrational sticklers - possibly very efficiently due to our superior intelligence - rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).

I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?

The closest named ethical philosophy I've found to mine is something like Ethical Egoism. It's not close enough to what I believe that I'm comfortable self-identifying as an ethical egoist, however. I've posted quite a bit here in the past on the topic - a search for my user name and 'ethics' using the custom search will turn up quite a few posts. I've been thinking about writing up a more complete summary at some point but haven't done so yet.

Well, the AI would "presume to know" what's in everyone's best interests. How is that different? It's smarter than us, that's it. Self-governance isn't holy.

An AI that forced anything on humans 'for their own good' against their will would not count as friendly by my definition. A 'friendly AI' project that would be happy building such an AI would actually be an unfriendly AI project in my judgement and I would oppose it. I don't think that the SIAI is working towards such an AI but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.

Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.

It doesn't have to radically transform their lives, if they wouldn't want it to upon reflection. FAI ≠ enforced transhumanity.
I think that AI with greater than human intelligence will happen sooner or later and I'd prefer it to be friendly than not so yes, I'm for the Friendly AI project. In general I don't support attempting to restrict progress or change simply because some people are not comfortable with it. I don't put that in the same category as imposing compulsory intelligence enhancement on someone who doesn't want it.

I might be wrong in my beliefs about their best interests, but that is a separate issue.

Given the assumption that undergoing the treatment is in everyone's best interests, wouldn't it be rational to forgo autonomous choice? Can we agree that it would be?

It's not a separate issue, it's the issue. You want me to take as given the assumption that undergoing the treatment is in everyone's best interests but we're debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don't believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone's best interests, or you have to bite the bullet and say that you are right, they are wrong and as a result their opinions on the matter can just be ignored.

Well, the attention of those capable of solving FAI should be undivided. Those who aren't equipped to work on FAI and who could potentially make progress on intelligence enhancing therapies, should do so.

Culture has also produced radical Islam. Just look at to get a bit more pessimistic about the natural moral zeitgeist evolution in culture.

What fraction of the population, though? Some people are still cannibals. It doesn't mean there hasn't been moral progress. Update 2011-08-04 - the video link is now busted.

So individual autonomy is more important? I just don't get that. It's what's behind the wheels of the autonomous individuals that matters. It's a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to "way too fracking high".

It's everyone's happiness and progress that matters. If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?

The same that's not to like about forcing anything on someone against their will because despite their protestations you believe it's in their own best interests. You can justify an awful lot of evil with that line of argument. Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.

You don't have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.

But yes, I sympathize with you, I'm just like that myself actually. Some people wouldn't be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it's safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off to believing that or just weren't capable ...

If I was convinced of the safety and efficacy of an intelligent enhancing treatment I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.

Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection - perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.

I'm starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don't quote me on that. Or whatever, go ahead.

Gene therapy of the type we do at the moment always works through an engineered virus. But as the technique progresses you won't have to be a nation state anymore to do genetic engineering. A small group of super-empowered individuals might be able to do it.
Not just "rotten eggs" either. If there is one thing that I could nearly guarantee to bring on serious opposition from independent and extremely intelligent people, that is, to convince people with brains to become "criminals", it is mandating gov't meddling with their brains. I, for example, don't use alcohol or any other recreational drug, I don't use any painkiller stronger than ibuprofen without excruciating (shingles or major abscess level) pain, most of the more intelligent people I know feel to some extent the same, and I am a libertarian; do you really think I would let people I despise mess around with my mind?
I suspect that once most people have had themselves or their children cognitively enhanced, you are in much better shape for dealing with the 10% of sticklers in a firm but fair way.

X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I'd judge against putting all our eggs in the AI basket.

Who's doing that? Governments also use surveillance, intelligence, tactical invasions and other strategies to combat terrorism.
"We" aren't deciding where to put all our eggs. The question that matters is how to allocate marginal units of effort. I agree, though, that the answer isn't always "FAI research".

I wonder how many Swedish readers there are. A meetup in Stockholm or Gothenburg would be kind of nice.

I'm a Swedish reader. A meetup in Stockholm would be great!

So you haven't read his Sweet Dreams: Philosophical Obstacles to a Science of Consciousness?

I think Eliezer was just stating a fact? Or, impression.

"They're really trying to raise the intellectual level this year" sounds like music to my ears.

It sounded like sarcasm to mine.