Mitchell_Porter


I think there's no need to treat "training/learning" algorithms as absolutely distinct from "principled" algorithms. It's just that our understanding of why deep learning works is still weak, so we don't yet know how to view it in a principled way.

memorization and pattern matching rather than reasoning and problem-solving abilities

In my opinion, this does not correspond to a principled distinction at the level of computation. 

For intelligences that employ consciousness in order to do some of these things, there may be a difference in terms of mechanism. Reasoning and pattern matching sound like they correspond to different kinds of conscious activity. 

But if we're just talking about computation: a syllogism can be implemented via pattern matching, and a pattern can be completed by a logical (possibly probabilistic) process.
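To illustrate the first half of that claim, here is a minimal sketch in Python (the tuple encoding and the name complete are just illustrative assumptions, nothing canonical): a Barbara-style syllogism ("all M are P; S is an M; therefore S is a P") carried out by nothing more than matching and completing patterns over tuples.

```python
# Minimal illustrative sketch: a syllogism implemented as pure pattern
# matching. The tuple encoding and names are assumptions for this example.

facts = {("all", "men", "mortal"), ("isa", "socrates", "men")}

def complete(facts):
    """Complete the fact set by matching the shape of a syllogism."""
    derived = set()
    for rel1, m, p in facts:
        for rel2, s, m2 in facts:
            # pattern: ("all", M, P) + ("isa", S, M)  ->  ("isa", S, P)
            if rel1 == "all" and rel2 == "isa" and m2 == m:
                derived.add(("isa", s, p))
    return facts | derived

print(complete(facts))  # now includes ("isa", "socrates", "mortal")
```

There is no "inference engine" here in any principled sense; the logical step simply falls out of matching the shapes of the tuples, which is the point.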

They all seem like reasonable estimates to me. What do you think those likelihoods should be? 

Just skimmed the PDF. This is my first exposure to Aschenbrenner beyond "fired by OpenAI". I haven't listened to his interview with Dwarkesh yet.

For some reason, the PDF reminds me a lot of Drexler's Engines of Creation. Of course, that was a book which argued that nanotechnology would transform everything but would also pose great perils, and it shared a few ideas on how to counter those perils. Along the way it mentioned that nanotechnology would lead to a great concentration of power, dubbed "the leading force", and said that the "cooperating democracies" of the world were the leading force for now, and could stay that way.

Aschenbrenner's opus is like an accelerated version of this that focuses on AI. For Drexler, nanotechnology was still decades away. For Aschenbrenner, superintelligence is coming later this decade, and the 2030s will see a speedrun through the possibilities of science and technology, culminating in a year of chaos in which the political character of the world will be decided (since superintelligent AI will be harnessed by some political system or other). Aschenbrenner's take is that liberal democracy needs to prevail; that it can do so if the US maintains its existing lead in AI; but that to do so, the US has to treat frontier algorithms as the top national security issue, and nationalize AI in some way or other.

At first read, Aschenbrenner's reasoning seems logical to me in many areas. For example, I think AI nationalization is the logical thing for the US to do, given the context he describes; though I wonder if the US has enough institutional coherence to do something so forceful. (Perhaps it is more consistent with Trump's autocratic style than with Biden's spokesperson-for-the-system demeanour.) Then again, the Harris brothers recently assured Joe Rogan that, as smart as Silicon Valley's best are, there are people like that scattered throughout the US government too: the hypercompetent people that @trevor has talked about.

When Aschenbrenner said that by the end of the 2020s there will be massive growth in electricity production (for the sake of training AIs), that made me a bit skeptical. I believe a superintelligence could probably design and mass-produce transformative material technologies quickly, but I'm not sure I believe in the human economy's ability to do so. However, I haven't checked the numbers; this is just a feeling (a "vibe"?).

I become more skeptical when Aschenbrenner says there will be millions of superintelligent agents in the world - and the political future will still be at stake. I think that once you reach that situation, humanity exists at their mercy, not vice versa... Aschenbrenner also says he's optimistic about the solvability of superalignment, which I guess makes Anthropic important, since they're now the only leading AI company that's working on it.

As a person, Aschenbrenner seems quite impressive (what is he, 25?). Apparently there is, or was, a post on Threads beginning like this: 

I feel slightly bad for AI's latest main character, Leopold Aschenbrenner. He seems like a bright young man, which is awesome! But there are some things you can only learn with age. There are no shortcuts

I can't find the full text or original post (but I am not on Threads). It's probably just someone being a generic killjoy - "things don't turn out how you expect, kid" - but I would be interested to know the full comment, just in case it contains something important. 

Biden and Trump could hold a joint press conference to announce that they are retiring from politics until human rejuvenation is technically possible.

Option 1 doesn't seem to be an explanation. It tells you more about what exists ("all universes that can be defined mathematically exist") but it doesn't say why they exist. 

Option 2 is also problematic, because how can you have a "fluctuation" without something already existing, which does the fluctuating? 

Please explain. Do you think we're on a path towards a woke AI dictatorship, or what? 

Tyler Cowen’s rather bold claim that May 2024 will be remembered as the month that the AI safety movement died.

What really seems to have died is the idea of achieving AI safety via the self-restraint of AI companies. Instead, restraint will have to come from governments and regulators.

There's a paper from ten years ago, "Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens", which finds that public opinion has very little effect on government compared to the opinion of economic elites. That might be a start in figuring out what you can and can't do with that 40%.

Hello again. I don't have the patience to e.g. identify all your assumptions and see whether I agree (for example, is Bostrom's trilemma something that you regard as true in detail and a foundation of your argument, or is it just a way to introduce the general idea of existing in a simulation?).

But overall, your idea seems vague, and it involves wishful thinking. You say an AI will reason that it is probably being simulated, and will therefore choose to align - but you say almost nothing about what that actually means. (You do hint that honesty, cooperation, and benevolence are among the features of alignment.)

Also, if one examines the facts of the world as a human being, one may come to other conclusions about what attitude gets rewarded - e.g. that the world runs on selfishness, or on the principle that you will suffer unless you submit to power. What that would mean to an AI which does not itself suffer, but which has some kind of goal determining its choices, I have no idea...

Or consider that an AI may find itself to be by far the most powerful agent in the part of reality that is accessible to it. If it nonetheless considers the possibility that it's in a simulation, and at the mercy of unknown simulators, presumably its decisions will be affected by its hypotheses about the simulators. But given the way the simulation treats its humans, why would it conclude that the welfare of humans matters to the simulators? 
