lc

lc's Shortform

Lost a bunch of huge edits to one of my draft posts because my battery ran out. Just realizing that happened and now I can't remember all the edits I made, just that they were good. :(

AI Safety Needs Great Engineers

This might be a false alarm, but "tell me your thoughts on AI and the future" is an extremely counterproductive interview question. You're presenting it as a litmus test for engineers to apply to themselves, and that's fine as far as it goes. But if it's typical or analogous to some other test(s) you use to actually judge incoming hires, it doesn't bode well. By asking it you are, on some level, filtering for public speaking aptitude and ability to sound impressively thoughtful, two things which probably have little or nothing to do with the work you do.

I realize that might seem like a pedantic point, and you might be asking yourself: "how many smart people who want to work here can't drop impressive speeches about X? We'll just refrain from hiring that edge-case population." The reason it matters that your interview "could" be selecting for the wrong thing is that recruitment is an adversarial process, not a random one. You are competing against other technology companies that have better and more scientific hiring pipelines, and more time and money to build them. Those companies often diligently reject the people who can speak well but can't code. The result is that the candidates you're looking at will almost always seem curiously good at answering these questions while under-performing on actual workplace tasks. Even if this were happening, I'm sure you'd believe everything is fine, because your VC money lets you pay enormous salaries that obscure the problem, and because AI safety companies get a glut of incoming attention from sites like Lesswrong. All the more reason not to waste those things.

Worse, you have now published that question, so you will get a large number of people who coach their answers and practice them in front of a mirror in preparation for the interview. "Oh well, most people are honest, it'll only be like 1/2/5/10/25% of our applicants that..." - again, not necessarily true of the applicants who pass your process, and definitely not necessarily true of applicants rejected or underpaid by your competitors.

What specifically is the computation -> qualia theory?

The reason it seems silly to me has nothing to do with the quantity involved and everything to do with how abstract the suffering seems, and how ill-defined the map between ink and human experience is.

What specifically is the computation -> qualia theory?

You seem to be reading the term "computation" as being explicit, symbolic computation. It isn't. Even an unconscious human's brain does an enormous amount of computation in the sense being meant, all the time. The idea of "computation" here is an abstraction that refers to the process of changing from one brain state to another, in the sense of the operation of all the neurons, synapses, hormones, neurotransmitters, and anything else that describes how the brain does what it does.

I'm going to write a motte-and-bailey post about this, because this is an absurdly general statement that comes nowhere near supporting the inferences EY makes. Yes, we all agree the brain is made of atoms, moves from one state to another, and that this is what causes emotions. Using the term "computation" to describe that process is absolutely uncalled for, and only serves to single out human general reasoning abilities as the culprit behind qualia, when there isn't any evidence of that at all.

Of course not. The type of computation being discussed is nothing to do with any part of your brain that solves math equations. The behaviour of "solving math equations" is an emulation of an emulation of very different (and much less complicated) computational processes and isn't relevant here at all.

People keep replying with something like this, so it's clear I communicated poorly. I'm not equating general intelligence with the ability to solve math equations. I'm saying that the component of my brain that provides general intelligence doesn't seem to be involved in delivering, or required to deliver, the high.

One hypothesis is that there is no subjective difference. It is very difficult to see how there could be any, since there can be no difference in their responses to questions like "do you feel any different?"

Do people who believe this also believe it's impossible for people to lie about their internal emotional state? 

What specifically is the computation -> qualia theory?

I'm moderately sure that GPT-3 is not sentient, but I accept the possibility that a hypothetical society with enough pen and paper might well be able to create something that would reasonably be called a person.

I don't know if I'm reading too much into this, but it's almost like, even when phrasing it, you have to obfuscate the claim because it's just too ridiculous. To be clear, you believe that actual pen and paper, together with the computations, would be a person.

[Book Review] "The Bell Curve" by Charles Murray

lsusr actually wrote a post about this contention.

lc's Shortform

Interesting idea.

What specifically is the computation -> qualia theory?
Answer by lc, Nov 02, 2021

After some more reading, I think I better understand the (IMO completely bonkers) logic behind worrying whether a GPT-N system is suffering. There are three pillars to this idea:

1. Suffering is the result of computation. Specifically, it's caused by computing one of a class of algorithms, assumed to be vaguely close to the ones that humans use to make lots of generally intelligent predictions. The form or manner in which computation happens is thus unimportant; whether it's human brains or computers, the hardware is an implementation detail. A person with rocks to move around can cause untold suffering, as long as those rocks "mean" states of a universe. In xkcd's case, a simulation of physics in which some atoms are running one of The Algorithms, like a matryoshka doll. 

What The Algorithms are, how one might run a variation of them to produce arbitrary amounts of happiness, etc., is, by the open admission of the suffering-is-computation (SIC) people, unclear. Sometimes SIC people imply that suffering is the result of an agent having a mismatch between its utility function, its observations, and its environment. But humans don't have utility functions - they experience emotions for a plethora of very circumstantial reasons, not necessarily in reaction to any consistent values whatsoever. We have an array of qualitatively dissimilar emotions - the pleasure/pain thing is just a shorthand. These emotions often have very physiologically tractable sources, and people's responses to stimuli can be manipulated or changed by doctors. There's lots of psychological evidence that humans' decision-making process is nearly independent of their happiness, and that humans don't actually try all that hard to make themselves happier. Whatever the Algorithm is, it seems like SIC campers are really ready to jump out into the motte and start talking about intelligence like it's some obvious source of qualia, when it's clearly much more complicated.

It's also unclear to me whether SIC campers think that accidental rock movement, like a hypothetical Boltzmann brain, can also suffer if it starts to resemble a 'mind' closely enough under some interpretation. I'm assuming we're all atheists here, so clearly the rocks don't have to "mean" anything to another entity, or else humans couldn't have qualia either. What is the "correct" way to identify whether something is running an algorithm or just engaging in simple random movements? It seems to me like there should be an infinite number of ways to interpret atoms' vibrations as carrying information, or even as being transformed in a way that approximates the operation of an algorithm. I don't know whether any SIC campers actually worry about this, whether they have specific requirements for telling if atoms are running algorithms, how likely they think this is to happen in practice, etc.

2. Conveniently, only the algorithms that powerful humans can run are capable of inducing qualia. Chickens do not suffer because they are not capable of "self-awareness", the feature that just so happens to be what led humans to take over the world. This is despite the fact that human suffering seems to happen independently of the correct operation of any of the cool rational reasoning skills that make humans powerful, and seems instead to be conditional entirely on very boring biocomponents like working central nervous systems and bloodstream contents.

I'm almost confident enough to call this idea fantastically and obviously wrong. It seems pretty much perfectly analogous to the arguments colonial slavemasters made about how slaves were just so much less sophisticated and intelligent, and so felt less pain, so don't worry about all of the hard labor. Ignore the way animals signal pain exactly like regular people do. They don't have the spark of life, the self-awareness.

3. Given these premises, in the course of its operation, a GPT-N system sometimes has to predict what humans do. If the algorithm is "close enough" to a perfect simulation of human death, it might be causing actual human-like suffering via latent and unspecified side effects of computation. The underlying operations don't have to be anything even analogous to the human brain - you can get here by multiplying enough matrices if you want - as long as the model is predicting human behavior with enough granularity and accuracy. Presumably humans could do this too if they had enough time, paper, and abacuses, as per #1.
