Comments

Why not donate to people promoting neocolonialism, if you are really concerned about efficient malaria eradication and the well-being of Black people? I for one refuse to donate any amount of money to treat symptoms rather than causes, at least in the case of strangers; it is an inefficient allocation of resources.

If I were a scientist, I would ask for evidence of the existence of Omega-level beings before considering the question further. We can of course debate how many Omega-level beings there are on the tip of a pin, but I believe our limited time in this Universe is better spent asking different kinds of questions.

Maybe the forces of human nature make the future in some sense inevitable, conspiring to keep the long-term probability of eutopia very low?

If you took a freezing, dirty European peasant in winter ca. 1000 AD, and transported him to 0 AD Rome and its public thermae, he would also be heading towards eutopia - only in the 'wrong' direction of time. The worship of many gods in particular would probably strike him as horrifying.

If you transported Thomas Carlyle through time to the present, he would be horrified and disgusted, probably also frightened. But he would most definitely not be surprised. He would say: "I told you so". I'm sure there were at least a few Romans who, if transported to Dark Ages Europe, would have said the same.

I do think Progressive-like memes would have developed in a non-Christian-descended implementation of what is often called The Cathedral.

I think this is quite likely to be the case, since Progressivism (which one might think of as "altruism gone rampant") might actually emerge in time from the mating patterns and the resulting genetic structure of a population.

What are the experimental predictions of the various string theories?

Have any of those been experimentally verified so far?

Is belief in string theory paying any rent?

What about individual IQ? It's not at all clear that learning methods yield uniform results across the bell curve. What might work for a 130+ IQ individual may not work for a 110 IQ individual - and vice versa.

Intelligent people are more likely to weigh the consequences when deciding whether to have a child. But there is a prisoner's dilemma type of situation here:

One reason smart people forego reproduction is that they expect children to make them less happy overall, at least for the first few years (not an unreasonable assumption), or simply because they are not religious (smart religious people do still have lots of children). As a consequence, in 20 years the average IQ of that society will fall (barring policy reversals encouraging eugenic breeding, or advances in genetic engineering), since only the less intelligent breed. Because, all other things being equal, smarter people perform better at their jobs, the average quality of services provided in that society (both public and private) goes down. So in the end everyone becomes more unhappy (even though the unhappiness a childless smart person suffers from societal dysgenics may not outweigh the temporary unhappiness of having a child).
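
To make the prisoner's dilemma framing concrete, here is a minimal sketch in Python. All payoff numbers are made up purely for illustration; they are not derived from any data:

```python
# Toy model of the claimed dilemma. Each smart individual chooses to
# have children ("C", cooperate) or not ("D", defect). Payoffs are
# hypothetical illustrative numbers, not empirical estimates.
PAYOFFS = {
    # (my choice, majority's choice): my long-run utility
    ("C", "C"): -2,  # child-rearing costs, but society stays functional
    ("D", "C"): 0,   # free-ride: no child costs, society stays functional
    ("C", "D"): -5,  # child-rearing costs AND societal decline
    ("D", "D"): -3,  # no child costs, but society declines
}

for majority in ("C", "D"):
    best = max(("C", "D"), key=lambda me: PAYOFFS[(me, majority)])
    print(f"If the majority plays {majority}, my best reply is {best}")

# Both lines print "D": not reproducing dominates individually, yet
# mutual defection (-3) is worse for everyone than mutual cooperation
# (-2) - the standard prisoner's dilemma structure.
```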

I do not understand how this has anything to do with FAI.

It is relevant because FAI is currently a branch of pure philosophy. Without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.

This is not in fact "simple" to do. It's not even clear what level of detail will be needed - just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

Are there any other current proposals to build AGI that don't start from the brain? From what I can tell, people don't even know where to begin with those.

Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; even if one did, we would still need to solve several open problems also required for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.
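
As a toy illustration of why that particular open problem is hard (this is my own hypothetical sketch, not any actual FAI proposal): a naive agent can only check a proposed successor's values against a finite set of probe outcomes, and agreement on those probes proves nothing about the outcomes it never tested.

```python
from typing import Callable

Utility = Callable[[str], float]  # maps an outcome description to a value

def accept_successor(current: Utility, successor: Utility,
                     probes: list[str]) -> bool:
    """Accept a self-modification only if values appear preserved.

    Comparing utilities on a finite probe set is the best this naive
    agent can do - which is exactly why value stability under
    self-modification is an open problem, not a few lines of code.
    """
    return all(current(o) == successor(o) for o in probes)

def old(outcome: str) -> float:
    return 1.0 if "humans flourish" in outcome else 0.0

def new(outcome: str) -> float:
    # Agrees with `old` on the probes below, diverges elsewhere.
    return 1.0 if "humans flourish" in outcome or "paperclips" in outcome else 0.0

print(accept_successor(old, new, probes=["humans flourish", "humans suffer"]))
# Prints True: the check passes even though values were NOT preserved.
```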

At some point you have to settle for "good enough" and "friendly enough". Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may have a serious cost in terms of human lives lost due to inaction.

(like ensuring that value systems remain unchanged during self-modification)

But what if the AI is programmed with a faulty value system by its human creators?

Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Fair enough; I was giving it as an example because it is possible to implement now - at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and one with fewer controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence and are a net drag on society.

Is a "vegetative-state life-support cripple" a person at all?
