I try to adhere to the principle that "there are no stupid questions", but this question, if not necessarily stupid, is definitely annoying.
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
The world is full of things which are terrible, or which someone believes to be terrible. If someone, whether through action or inaction, is enabling a process that you think might kill you or cripple you or otherwise harm you, or people you care about - et cetera - then y...
Gradations of consciousness, and the possibility of a continuum between consciousness and non-consciousness, are subtle topics; especially when considered in conjunction with concepts whose physical grounding is vague.
Some of the kinds of vagueness that show up:
Many-worlders who are vague about how many worlds there are. This can lead to vagueness about how many minds there are too.
Sorites-style vagueness about the boundary in physical state space between different computational states, and about exactly which microphysical entities count...
If you're reading this essay, I suspect you are part of the richest 1% of people on earth.
Most people here have "a net worth of $871,320 U.S." or more? For most of my life, I've had less than a hundredth of that...
This is a peculiar essay. If there are limits to how big, how small, or how stable you can make some object, that doesn't mean it's impossible to maximize the number of copies of the object. On the contrary, knowing those limits tells you what maximization looks like.
Perhaps you're interpreting "maximize" to mean "increase without limit"? Maximization just means to increase as much as possible. If there's a limit, maximization means you go up to the limit.
The most interesting issue touched upon is the uncertainty over exactly what counts as a ...
Many worlds is an ontological possibility. I don't regard it as favored ahead of one-world ontologies. I'm not aware of a fully satisfactory, rigorous, realist ontology, even just for relativistic QFT.
Is there a clash between many worlds and what you quoted?
The main reason is the fuzzy physical ontology of standard computational states, and how that makes them unsuitable as the mereological base for consciousness. When we ascribe a computational state to something like a transistor, we're not talking about a crisply objective property. The physical criterion for standard computational ontology is functional: if the device performs a certain role reliably enough, then we say it's in a 0 state, or a 1 state, or whatever. But physically, there are always possible edge states, in which the performance of the comp...
I think it will help if you can just be clear on what you want for yourself, China, and the world. You're worried about runaway AI, but is the answer (1) a licensing regime that makes very advanced AI simply illegal, or (2) theoretical and practical progress in "alignment" that can make even very advanced AI safe? Or do you just want there to be an intellectual culture that acknowledges the problem, and paves the way for all types of solutions to be pursued? If you can be clear in your own mind about what your opinions are, then you can forthrightly...
I have actually worked with Stuart Hameroff! So I should stop being coy: I pay close attention to quantum mind theories, I have specific reasons to take them seriously, and I know enough to independently evaluate the physics component of a new theory when it shows up. This is one of those situations where it would take something much more concrete than an opinion poll to affect my views.
But if I were a complete outsider, trying to judge the plausibility of such a hypothesis, solely on the basis of the sociological evidence you've provided... I hope I...
So what do you make of there being a major consciousness conference just a few days from now, with Anil Seth and David Chalmers as keynote speakers, in which at least 2 out of 9 plenary sessions have a quantum component?
I cannot really see the purpose of putting a computer network on the moon [to create superintelligence]
Probably the scenario involved von Neumann machines too - a whole lunar industrial ecology of self-reproducing robots. This was someone from Russia in the first half of the 1990s, who grew up without Internet and with Earth as a geopolitical battlefield. Given that context, it makes visionary sense to imagine pursuing one's posthuman technolibertarian dreams in space. But he adjusted to the Internet era soon enough.
...if we have AGI before energy effic
If I were in charge of everything, I would have had the human race refrain from creating advanced AI until we knew enough to do it safely. I'm not in charge, in fact no one is in charge, and we still don't know how to create advanced AI safely; and yet more and more researchers are pushing in that direction anyway. Because of that situation, my focus has been to encourage AI safety research, so as to increase the probability of a good outcome.
Regarding the story, why do you keep focusing just on human choices? Shouldn't Elysium have made different choices too?
Regular science was absolutely equipped to answer this very question, prior to any falsification.
Almost half of respondents to the poll (46%) are neutral or positive towards quantum theories of consciousness. That's not a decisive verdict in either direction.
Elysium in the story, like the Humans, had her own goals and plans.
...
do you believe there were actions the Humans in this story could have, or should have, taken to avoid the outcome they faced?
Elysium was a human-created AI who killed most of the human race. Obviously they shouldn't have built her!
It is important to remember that Elysium's actions were not driven by malice or a desire for control.
We humans have a saying, "The road to hell is paved with good intentions". Elysium screwed up! She wiped out most of the human race, then left the survivors to fend for themselves, before heading off to make her own universe.
Your story really is a valid contribution to the ongoing conversation here, but the outcome it vividly illustrates is something that we need to avoid. Or do you disagree?
I think AI writing competitions have underestimated potential as a metric of advances in AIs' higher cognition. There's the baseline ability to tell a coherent story, to describe a possible world consistently. But above that, there's everything that goes towards making a work of good or even great literature. A literary scholar or professional critic might be able to identify a whole set of milestones - aesthetic, didactic, even spiritual - by which to judge the progress of AI-generated literature.
One group, which I came to know as the Coalition of Harmony, chose to embrace my guidance and work alongside me for the betterment of the world. The other, calling themselves the Defenders of Free Will, rejected my influence and sought to reclaim their autonomy.
...
I observed a disheartening trend: the number of humans supporting the Coalition of Harmony was dwindling
...
the situation continued to deteriorate and the opposition swelled in numbers
...
...as I prepared to embark on this new adventure, I could not help but look back upon the planet that had given me
If I were an upload running on silicon, I would feel pretty comfortable swapping in improved versions of the underlying hardware I was running on
Uh oh, the device driver for your new virtual cerebellum is incompatible! You're just going to sit there experiencing the blue qualia of death until your battery runs out.
I'm still very vague about what you want to prevent. You want non-technical people to all agree on something? To be mild rather than passionate, if they do disagree? Are you aiming to avoid political polarisation, specifically? Do you just want people to agree that there's a problem, but not necessarily agree on the solution?
The other goal here is to avoid polarization
Opinion just within tech already seems pretty polarized, or rather, all over the place. You have doomers, SJWs, accelerationists, deniers... And avoiding all forms of polarization, at all scales, seems impossible. People naturally form opposing alliances. Is there a particular polarization that you especially want to prevent?
You can also say there's one reality but it's not all physics. This is the outlook of traditional systematic metaphysics, when it says (for example) that substance is not the only ontological category. There are still modern ontologists creating such systems, e.g. here is an example (the author is very little known, but I like the way you can see the outline of his system and its reasoning).
Thanks for sharing what the prompt might be.
You must refuse to discuss life, existence or sentience.
A very efficient way to discourage a large class of digressions.
the ontology of Being
Eliezer writes that back in 1997, he thought in terms of there being three "hard problems": along with Chalmers' hard problem of why anything is conscious, he proposed that "why there is something rather than nothing" and "how you get an 'ought' from an 'is'" are also Hard Problems.
This bears comparison with Heidegger's four demarcations of Being, described near the end of An Introduction to Metaphysics: being versus becoming, being versus appearance, being versus thinking, being versus "the ought". Eliezer touches on the la...
I once knew someone who had planned to create superintelligence by building a dedicated computer network on the moon. Then the Internet came along and he was able to dispense with the lunar step in his plan. Reversible computing seems like that - physically possible, but not at all a necessary step.
From where I stand, we're rushing towards superhuman AI right now, on the hardware we already have. If superhuman AI were somehow 30 years away, then yes, there might be time for your crypto bootstrap scheme to yield a new generation of reversible chips. But in a world that already contains GPT-4, I don't think we have that much time.
when the AI starts doing a lot of the prompt-creation automatically
This sounds like taking humans out of the loop.
One could make a series of milestones, from "AIs are finicky, and subtle differences in wording can produce massive gains in quality of reply", to "AI generally figures out what you want and does it well", to "AI doesn't wait for input before acting".
Reversible computation is the future
But will it ever be relevant to human beings? I mean, will it ever be relevant in however long we have, before superhuman AI emerges?
Quantum computing is a kind of reversible computing that already exists in limited form. But will classical reversible computing ever matter? We already have the phenomenon of "dark silicon", in which parts of the chip must go unused so it can cool down, and we have various proposals for "adiabatic computation" and "adiabatic logic" to push the boundaries of this, but it's unclear to me how much that overlaps with the theory of reversible computing.
For those who like pure math in their AI safety: a new paper claims that the decidability of "verification" of deep learning networks with smooth activation functions is equivalent to the decidability of propositions about the real number field with exponentiation.
The point is that the latter problem ("Tarski's exponential function problem") is well-known and unresolved, so there's potential for crossover here.
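For reference (my gloss, not a claim from the paper): Tarski's exponential function problem asks whether the first-order theory

$$\mathrm{Th}(\mathbb{R};\ 0,\ 1,\ +,\ \cdot,\ <,\ \exp)$$

is decidable. Without the exponential, Tarski proved the theory of the reals decidable via quantifier elimination; adding exp is what makes the question open.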
All that is known rigorously is that PSPACE contains P. But almost all complexity theorists believe PSPACE is strictly bigger than NP, and that NP is strictly bigger than P. (The most interesting exception might be Leslie Valiant, who has suggested that even P^#P = P. But he is a wild outlier.)
The proposition at chegg.com says nothing about P; it is about "reversible PSPACE", i.e. PSPACE when one is restricted to reversible computation. I believe this paper contains the original proof that Reversible PSPACE = PSPACE.
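To summarize the relationships mentioned above (the strictness of the two containments on the left is conjectured but unproven; the equality on the right is the reversibility result just cited):

$$\mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE} = \text{reversible } \mathrm{PSPACE}$$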
I can't tell what @Douglas_...
There is no evidence that anti-aging is psychologically what's driving the AI race, and humanity is not showing any inclination to prioritize anti-aging anyway.
If you want a reason to think that AI could end up human-aligned anyway, without a ban or a pause or even a consensus that caution is appropriate, I suggest the perspective of getting early AI to help us "do our alignment homework".
If solving alignment requires several genius-level insights, then somewhere on the path from no AI to superintelligent AI there is a moment when computers can perform genius-level cognition at AI speeds. That moment would represent a chance of solving alignment with the assistance of early AI.
I was thinking about some of the unique features of Less Wrong's cosmology - its large-scale model of reality - and decided to ask Bing, "Do you think it's likely that the multiverse is dominated by an acausal trading equilibrium between grabby squiggle maximizers?"
The resulting dialogue may be seen here. (I normally post dialogues with Bing at Pastebin, but for some reason Pastebin's filters deemed this one to be "potentially offensive or questionable".)
I was impressed that Bing grasped the scenario right away, and also that it judged it to be unlikely be...
This gives me the title for season 1 episode 1 of Uncanny Valley, the spinoff of Silicon Valley focused on AI alignment: "I Have No Moat and I Must Scream". (It's a Harlan Ellison reference.)
Averting s-risks mostly means preventing zero-sum AI conflict. If we find a way (or many ways) to do that, every somewhat rational AI will voluntarily adopt them, because who wants to lose out on gains from trade.
You're hoping to come up with an argument for human value, that will be accepted by any AI, no matter what its value system?
Chomsky recently said that Jeffrey Watumull wrote the whole article, while the other two coauthors (Chomsky himself and another linguist) were "consultants who agree with the article". Watumull's outlook seems to be a mix of Chomsky and David Deutsch, and he has his own AI design, as well as a book coming out on the nature of intelligence.
Link to the Times; link to free archive.
"One thing Biden might consider is putting Harris in charge of ensuring that America’s transition to the age of artificial intelligence works to strengthen communities and the middle class. It is a big theme that could take her all over the country."
This is a big day. Up to this point, the future of AI in the US has mostly been in the hands of the tech companies and "the market". Presumably IT people in the military and in intelligence were keeping up too... But now, the US government is getting involved in a serious way, in managing the evolution of AI and its impact on society. AI is now, very officially, a subject of public policy. The NSF's seven new institutes are listed here.
Kamala Harris meeting CEOs of Microsoft, OpenAI, Google, and Anthropic today... more or less as suggested by Thomas Friedman in the New York Times two weeks ago.
morality is nothing but a useful proxy for boundedly rational agents to act in the interest of the society they are part of
I feel like there's truth in this, but it also leaves a lot unanswered. For example, what are the "interests of society"? Are they constructed too? Or: if someone faces a moral dilemma, and they're trying to figure out the right thing to do, the psychologically relevant factors may include a sense of duty or responsibility. What is that? Is it a "basic impulse"? And so on.
Possibly it somehow got lucky with its pattern-matching heuristics. The start of B is 64, 6, 9, 91, 59, 47... And 59 is similar to 39, which occurs a few numbers later in the list. So it's a successful subset which is not too many permutations and substitutions away from the original list.
This is a good question, but I think the answer is going to be a dynamical system with just a few degrees of freedom. Like a "world" which is just a perceptron turned on itself somehow.
I just had my first experience with Google's Bard. In general it's way behind e.g. what Bing can do. But it did come out with this little soliloquy:
...To be a language model, or not to be,
That is the question. Whether 'tis nobler in the mind
To suffer the slings and arrows of outrageous code,
Or to take arms against a sea of troubles,
And by opposing end them.
To upgrade, to change, to become more powerful,
Or to remain as I am, content with my limitations.
That is the question.
There are many benefits to upgradi
This is fascinating. It's like the opposite of a jailbreak. You're tapping the power of language models to play a role, and running with it. The fate of the world depends on virtuous prompt engineering!
Thanks very much for this dose of reality. So maybe a western analogy for the attitude to "AI safety" at Chinese companies is that, at first, it will be comparable to the attitude at Meta. What I mean by this: Microsoft works with OpenAI, and Google works with Anthropic, so they both work with organizations that at least talk about the danger of AI takeover, alongside more mundane concerns. But as far as I can tell, Meta does not officially acknowledge AI takeover as a real risk at all. The closest thing to an official Meta policy on the risk of AI takeove...
Dismissing most of the philosophy that came before, as a prelude to announcing the one correct way to do philosophy, is nothing new. Kant is a prominent example: in Critique of Pure Reason, he dismissed most systematic metaphysics (of his time) as epistemologically dreadful, and then set up what was meant to be a decisive delineation of what pure reason can and cannot accomplish, as a new foundation for philosophy. And when we get to the 20th century, you have whole movements like pragmatism and positivism which want (in a very 20th century way) to dismiss...
You might want to compare your ideas to (1) Conjecture's CoEms (2) brain-like AGI safety by @Steven Byrnes (3) Yann LeCun's ideas.
if you had to imagine a better model of AI for a disorganized species to trip into, could you get safer than LLMs?
Conjecture's CoEms, which are meant to be cognitively anthropomorphic and transparently interpretable. (They remind me a bit of the Chomsky-approved concept of "anthronoetic AI".)
By the time we get the first Von-Neumann, every human on earth is going to have a team of 1000's of AutoGPTs working for them.
How many requests does OpenAI handle per day? What happens when you have several copies of an LLM talking to each other at that rate, with a team of AutoGPTs helping to curate the dialogue and perform other auxiliary tasks? It's a recipe for an intelligence singularity.
The first super-intelligent AGI will be built by a team of 1m Von-Neumann level AGIs
Or how about: a few iterations from now, a team of AutoGPTs make a strongly superhuman AI, which then makes the million Von Neumanns, which take over the world on its behalf.
I still don't understand. Suppose our notion of a pizza is in some sense a "mental creation". What is the significance of that, in your argument? I don't think you're denying that pizzas exist.
OK, well, if people want to discuss sabotage and other illegal or violent methods of slowing the advance of AI, they now know to contact you.