Long-Term Technological Forecasting

"When will AGI be created?"

I'm not sure this means very much. How would we be able to tell?

Computers are already far superior to humans for many tasks. I expect more of the same in the future, with computers being delegated increasingly complex tasks. I don't, however, see that any "singularity" is likely - rather a relatively smooth progression from what is possible today towards more difficult problems that can be solved in the future.

Even supposing computers were to advance to a state of "intelligence" where they could, say, invent interesting new mathematics, I'm not sure that this would have any profound consequences, any more than a chess-playing computer that can beat a human has any profound consequences.

It's possible to imagine that a very powerful "intelligent" computer could somehow run amok, but we are so far from such a possibility that it hardly seems worth worrying about now. I'd worry more about human dangers ( fascism, totalitarian regimes ) since they seem to appear and become dangerous quite frequently. For example, should we be worried about China?

Procedural knowledge gap: public key encryption

For email, the main problem is automating public key management. There is some hope here in the deployment of secure DNS (DNSSEC), which has the potential to automate the process so that everyone, by default, has a public key without taking any special action.

However progress is extremely slow and the incentives weak, so I would be surprised to see significant progress any time soon for email.

If you use Skype (and probably other proprietary systems, even your mobile phone) encryption will probably come as standard. There may however be back-doors, possibly allowing the provider, governments and law enforcement access. But it's better than nothing. The architecture of Skype, where the traffic passes through completely untrusted super-nodes (for example my computer!), pretty much demands the use of encryption.
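The basic property that makes per-user public keys worth automating can be illustrated with a deliberately tiny RSA toy. This is a sketch only - the primes are far too small to be secure and there is no padding - but it shows the asymmetry: anyone holding the public key can encrypt, while only the private key holder can decrypt.

```python
# Toy RSA demo: illustrative only, NOT secure (tiny primes, no padding).
p, q = 61, 53
n = p * q                # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m, pub_e, pub_n):
    """Anyone with the public key (pub_e, pub_n) can encrypt."""
    return pow(m, pub_e, pub_n)

def decrypt(c, priv_d, priv_n):
    """Only the private key holder (priv_d, priv_n) can decrypt."""
    return pow(c, priv_d, priv_n)

message = 42
ciphertext = encrypt(message, e, n)
assert decrypt(ciphertext, d, n) == message
```

What DNSSEC would add is a trustworthy, automatic way to fetch the `(e, n)` part for any email address, which is exactly the piece that is missing today.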

A variant on the trolley problem and babies as unit of currency

As posed, I'm not sure what I would do, or whether pressing the button is moral.

"You are offered a magical box. If you press the button on the box, one person somewhere in the world will die.."

Well I don't believe in real magic ( I do believe in magicians who do clever tricks ), so the question is immediately hypothetical.

But leaving that aside, there is the question of the mechanism by which the person dies. If it was simply that aid was diverted from that person to another person, then I would probably have no problem pressing the button ( I might question whether I deserve the cash for myself and refuse it ).

If instead the person is to be executed, then I would have a problem.

If the person is to die by means of magic, well, I don't believe the person who is telling me this, so it gets complicated.

An argument that animals don't really suffer

I'm aware of the theory of endorphins, but I'm a little doubtful that it is the correct explanation. I would instead attribute non-perception of pain mainly to the mind being able to shut out signals that are not the most important in a given situation. In fact, while I am out cycling, I am easily able to switch instantly between perceiving pain (what is hurting at the moment) and concentrating on something else (going faster, or navigating a difficult corner, say). So pain is often what we choose to perceive at a moment in time. In the case of my accident, if I had stopped and thought "what's hurting?", I'm fairly sure I would have felt pain, and been aware of it. But until I had cycled home, put my bike in the garage, and called the emergency services, I was concentrating on other things. Having done that, I certainly did immediately feel pain! I doubt that endorphins could explain such a rapid switch. Can endorphin production be consciously controlled? I doubt it.

I guess we agree here, except that I am attributing the lack of perception not to a conscious decision to meditate, but to an automatic stress response to concentrate fully on what needs to be done. That would suggest I was in level 2 pain (though it may have been reduced by endorphins), but I was nevertheless not aware of pain.

An argument that animals don't really suffer

I would agree with the basic idea that there are three levels of pain, and also that only great apes are aware that they are in pain.

In fact humans may be in pain, but not be aware of it. I recently had a moderately serious accident, and cut my thumb deeply ( the tip of the bone was sliced off, to give you an idea ). I then probably cycled home ( I don't remember that well due to concussion, of which I was completely unaware ), and was quite unaware that I was in pain. I did know that I had cut my thumb. You might argue that I wasn't even in pain; that's debatable.

I would also cite the example of young babies - they have very little self-awareness ( I don't recall the age at which it develops, but it is, I think, some time after birth ), but can you assert they do not suffer when experiencing pain?

Regardless, the big jump here is going from "animals (other than great apes) not being aware that they are in pain" to the title of your post which is "An argument that animals don't really suffer". Why is suffering related to awareness of being in pain? Isn't it enough just to be in pain?

The bias shield

"Any of these readers would have been willing to believe that Bill O'Reilly had written a bad book, if they did not believe that Bill O'Reilly was strongly biased."

Are you possibly confusing what these readers said with what they believe? I suspect many of these people had no well-founded opinion on the book, or may have privately thought it was a bad one; rather, they were seeking to defend the author for political reasons.

So in this book review example, it's just that people who have a strong affiliation with a well-known political figure will seek to defend him regardless. When we read messages like this, they tell us practically nothing about what the defenders really think about the book ( probably in this case they haven't even read it, so there is nothing to tell ).

[ My first comment on this site, so be gentle - I'm just getting acquainted with the furniture, so I may well be wide of the mark. Reading further down, it seems others are thinking on similar lines (TheOtherDave), this was my reaction before I read that. ]

Welcome to Less Wrong! (2012)

Hi, I'm 53 years old, from Gloucester, UK.

I work from home over the internet running IT systems.

I studied Maths for 2 years at Cambridge, then Computer Science in my 3rd year.

I came across this site after becoming interested in the trial of Amanda Knox and Raffaele Sollecito ( just subsequent to their acquittal in October 2011 ).

I made an analysis of the Massei report ( ) and concluded that the defence case was much more probable than the prosecution case.

I'm interested in a rational basis for assessing guilt in criminal cases. My idea ( as above ) is to compare the relative likelihood of each part of the defence and prosecution case, but this was perhaps not a good example, as I found that there was no credible, objective evidence against the defendants after looking closely at the evidence.
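The comparison I have in mind can be sketched numerically. Here is a minimal sketch in Python where each piece of evidence contributes a likelihood ratio - how probable it is under the prosecution case versus the defence case - and the ratios combine with the prior odds. The evidence labels and all the probabilities are entirely made up for illustration; the sketch also assumes the items are independent, which is itself a strong and often questionable modelling choice.

```python
from math import prod

# Hypothetical evidence items: (P(evidence | prosecution), P(evidence | defence)).
# All numbers are invented purely for illustration.
evidence = {
    "witness statement": (0.6, 0.3),    # twice as likely under prosecution
    "forensic trace":    (0.5, 0.25),   # twice as likely under prosecution
    "alibi detail":      (0.1, 0.8),    # much more likely under defence
}

prior_odds = 1.0  # start indifferent: prosecution vs defence at even odds

# Each item multiplies in its likelihood ratio (independence assumed).
likelihood_ratio = prod(p / d for p, d in evidence.values())

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"odds for prosecution case: {posterior_odds:.2f}")   # 0.50
print(f"P(prosecution case):       {posterior_prob:.2f}")   # 0.33
```

The point of laying it out this way is that one strongly discordant item (here the hypothetical alibi) can outweigh several mildly incriminating ones, which matches my experience with the Massei report: the overall comparison turned on a few decisive pieces.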

Maybe we could look at the recent conviction of Gary Dobson and David Norris. I would start from the position that they are probably guilty, but this is before examining the evidence against them in any detail, so it is based mainly on a general belief that UK courts do a fairly good job. The questions to be raised there would be whether we can really trust the forensic evidence, given that the police have powerful incentives to convict. And how do we eliminate prejudice against these unpleasant people ( both were clearly vile racists whether or not they committed the murder )?