Comments

adastra22 · 14d

EA has an extraordinarily bad image right now, thanks largely to FTX. EA is not a good association to have in any context other than its base.

I suspect the pushback from within NIST has more to do with the fact that their budget has been cut to pay for this, and very valuable projects have been put into indefinite suspension, for a cause that basically no one there supports.

> It is not fully "local" but who cares?

Non-locality is a big deal. That the underlying physics of the universe has a causal speed limit which applies to everything (gravity and QM) yet somehow doesn't apply to pilot waves is harder to explain away than the multiverse. The multiverse makes you uncomfortable, but it is a simpler physical theory than pilot waves.

adastra22 · 26d

This seems like a really bad idea. If it happened to me, I’d just leave. I’m not looking for echo chambers, and I often am motivated to come out of lurking and say something when my view is contrarian. That’s just who I am. Such comments often get downvoted by senior people. That’s just how contrary views are handled in online communities.

Is this not what LW wants? Then I guess I’ll just delete my account and go somewhere else.

adastra22 · 26d

I think there is a fundamental historical issue here that is causing confusion. The originators of the AGI term did in fact mean it in the context of narrow vs. general AI, as described by the OP. However, they also (falsely!) believed that this general if mediocre capability would be entirely sufficient to kickstart a singularity. So in a sense they believed both simultaneously without contradiction, and you are both right about historical usage. But the events of recent years have shown that the belief that AGI = singularity was a false hope/fear.

adastra22 · 1mo

Thank you for writing this. I have been making the same argument for about two years now, but you have argued the case better here than I could have. As you note in your edit, it is possible for goalposts to be purposefully moved, but this irks me for a number of reasons beyond mere obstinacy:

  1. The transition from narrow AI to truly general AI is socially transformative, and we are living through that transition right now. We should be having a conversation about this, but are being hindered from doing so because the very concept of Artificial General Intelligence has been co-opted.

  2. The confusion originates, I think, from the belief held by many people in the pre-GPT era that achieving general intelligence is all that is required to kick off the singularity. GPT demonstrates quite clearly that this belief is false. This doesn't mean the foomers/doomers are wrong to be worried about AI, but it is a glaring hole in the standard arguments for their position and should be talked about more; confusion over terminology is preventing that from happening.

  3. Moving goalposts to define AGI as radically transformative and/or superhuman capabilities is begging the question. To say that we haven’t achieved AGI because modern AI hasn’t literally taken over the world and/or killed all humans is to assume that unaligned AI would necessarily lead to such outcomes. Pre-2017 AI x-risk people did routinely argue that even a middling-level artificial general intelligence would be able to enter a recursive self-improvement cycle and reach superhuman capabilities in short order. Although I have no insider info, I believe this line of thinking is what led to EY’s public meltdown a year or so ago. I disagree with him, but I respect that he took his line of thinking to its logical conclusion and accepted the consequences. Most of the rationalist community has not updated on the evidence of GPT being AGI as EY has, and I think this goalpost moving has a lot to do with that. Be intellectually honest!

The AI x-risk community claimed that the sky was falling, that the development of AGI would end the human race. Well, we’re now 2-7 years out from the birth of AGI (depending on which milestone you choose), and SkyNet scenarios seem no closer to fruition. If the x-risk community wants to be taken seriously, they need to confront this contradiction head-on and not just shift definitions to avoid hard questions.

You highlighted "disagree" on the part about AGI's definition. I don't know how to respond to that directly, so I'll do so here. Here's the story about how the term "AGI" was coined, by the guy who published the literal book on AGI and ran the AGI conference series for the past two decades:

https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/

LW seems to have adopted some other vague, ill-defined, threatening meaning for the acronym "AGI" that is never specified. My assumption is that when people here say AGI they mean Bostrom's ASI, and that the two got linked because Eliezer believed (and believes still?) that AGI will FOOM into ASI almost immediately, which it has not.

Anyway, it irks me that the term has been co-opted here. AGI is a term of art from the pre-ML era of AI research with a clearly defined meaning.

adastra22 · 2mo

>>10^30 FLOP

By the way, where's this number coming from? You keep repeating it. That amount of calculation is equivalent to running the largest supercomputer in existence for roughly 30,000 years. Your hypothetical scheming AI breakout is not going to have access to that much compute. Be reasonable.
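For reference, here's the rough arithmetic behind that figure, assuming an exascale machine as the baseline; the sustained rate below is a round-number assumption for illustration:

```python
# Rough sanity check of the "30k years" figure. The sustained rate is an
# assumed round number (~exascale), not a benchmark result.
total_flop = 1e30              # the quoted compute budget
machine_flops = 1e18           # assumed sustained throughput, FLOP/s
seconds = total_flop / machine_flops
years = seconds / (3600 * 24 * 365)
print(f"{years:,.0f} years")   # ~31,700 years of wall-clock time
```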

> I generally have an intuition like "it's really, really hard to rule out physically possible things without very strong evidence; by default things have a reasonable chance of being possible (e.g. 50%) when sufficient intelligence is applied, if they are physically possible"

Ok, let's try a different tack. You want to come up with a molecular mechanics model that can efficiently predict the outcome of reactions, so that you can set about designing one-shot nanotechnology bootstrapping. What would success look like?

You can't actually do a full simulation to get ground truth for training a better molecular mechanics model. So how would you know the model you came up with will work as intended? You can back-test against published results in the literature, but surprise surprise, a big chunk of scientific papers don't replicate. Shoddy lab technique, publication pressure, and a niche domain combine to create conditions where papers are rushed and sometimes not 100% truthful. Even without deliberate fraud (which also happens), you run into problems such as synthesis steps not working as advertised, images taken from different experimental runs than the one described in the paper, etc.

Except you don't know that. You're not allowed to do experiments! Maybe you guess that replication will be an issue, although why that would be a hypothesis in the first place, without ever seeing failures in the lab, isn't clear to me. But let's say you do. Which experiments should you discount? Which should you assume to be correct? If you're allowed to start choosing which reported results you believe and which you don't, you've lost the plot. There could be millions of possible heuristics which partially match the literature, and there's no way to tell the difference.
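To make that concrete, here's a toy sketch, with every number invented purely for illustration, of why back-testing against a literature with a meaningful non-replication rate can't separate a sound model from a subtly wrong one:

```python
# Toy back-test: a nearly-right model and a systematically-wrong model scored
# against "published" values, ~30% of which don't replicate. Everything here
# is synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
true_energy = rng.normal(0.0, 10.0, n)            # hypothetical true values, kcal/mol

# Published values: small measurement noise, plus a 30% tranche of results
# that simply don't replicate (large unexplained offsets).
published = true_energy + rng.normal(0.0, 1.0, n)
bad = rng.random(n) < 0.3
published[bad] += rng.normal(0.0, 8.0, bad.sum())

good_model = true_energy + rng.normal(0.0, 0.5, n)         # nearly right
wrong_model = true_energy + 2.0 + rng.normal(0.0, 0.5, n)  # 2 kcal/mol systematic error

for name, pred in [("good", good_model), ("wrong", wrong_model)]:
    rmse = np.sqrt(np.mean((pred - published) ** 2))
    print(f"{name:5s} model, RMSE vs literature: {rmse:.2f} kcal/mol")

# Both scores are dominated by the non-replicating papers rather than by model
# quality, so agreement with the literature can't tell you which one to trust.
```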

So what would success look like? How would you know you have the right molecular mechanics model that gives accurate predictions?

You can't. Not any more than Descartes could think his way to absolute truth.

Also, for what it's worth, you've made a couple of logical errors here. You are considering human inventions which already exist, then saying that you could one-shot invent them in 1900. That's hindsight bias, but also selection bias. Nanotechnology doesn't exist. Even if it would work if created, there's no existence proof that there is an accessible path to achieving it. Like superheavy atoms in the island of stability, or micro black holes, there just might not be a pathway to make them from present-day capabilities. (Obviously I don't believe this, as I'm running a company attempting to bootstrap Drexlerian nanotechnology, but I feel it's essential to point out the logical error.)

> (Re: Alien Message) I think this is an ok, but not amazing intuition pump for what wildly, wildly superintelligent AI could be like.

Why? You've gone into circular logic here.

I pointed out that the Alien Message story makes fundamental errors with respect to computational capability being wildly out of scale, so actual superintelligent AIs aren't going to be anything like the one in the story.

Maybe a Jupiter-sized matryoshka brain made of computronium would exhibit this level of superintelligence. I'm not saying it's not physically possible. But in terms of sketching out and bounding the capabilities of near-term AI/ASI, it's a fucking terrible intuition pump.

> Uhhhh, I'm not sure I agree with this as it doesn't seem like nearly all jobs are easily fully automatable by AI. Perhaps you use a definition of AGI which is much weaker, like "able to speak slightly coherent english (GPT-1?) and classify images"?

The transformer architecture introduced in 2017 is:

  • Artificial: man-made
  • General: able to train on arbitrary unstructured input, from which it infers models that can be applied in arbitrary ways to find solutions to problems drawn from domains outside of its training data.
  • Intelligent: able to construct efficient solutions to new problems it hasn't seen.

Artificial General Intelligence. A.G.I.

If you're thinking "yeah, but..", then I suggest you taboo the term AGI. This is literally all that the word means.

If you want to quibble over dates then maybe we can agree on 2022 with the introduction of ChatGPT, a truly universal (AKA general) interface to mature transformer technology. Either way we're already well within the era of artificial general intelligence.

(Maybe EY's very public meltdown a year ago is making more sense now? But rest easy, EY's predictions about AI x-risk have consistently been wildly off the mark.)

> One quick intuition pump: do you think a team of 10,000 of the smartest human engineers and scientists could do this if they had perfect photographic memory, were immortal, and could think for a billion years?

By merely thinking about it, and not running any experiments? No, absolutely not. I don't think you understood my post if you assume I'd think otherwise.

Try this: I'm holding a certain number of fingers behind my back. You and a team of 10,000 of the smartest human engineers and scientists have a billion years to decide, without looking, what your guess will be as to how many fingers I'm holding behind my back. But you only get one chance to guess at the end of that billion years.

That's a more comparable example.
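If it helps, here's a toy version of that point in code; the 0-10 finger range and trial count are arbitrary assumptions. With zero observations, unlimited deliberation can't beat the prior:

```python
# With no observations, any amount of "thinking time" leaves you guessing
# from the prior. Range and trial count are arbitrary illustration values.
import random

N_OPTIONS = 11            # 0 through 10 fingers (assumed range)
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    true_count = random.randrange(N_OPTIONS)
    guess = random.randrange(N_OPTIONS)   # a billion years of thought can't improve this
    hits += (guess == true_count)

print(f"hit rate ~ {hits / TRIALS:.3f} (prior 1/{N_OPTIONS} ~ {1 / N_OPTIONS:.3f})")
```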

> See also That Alien Message

Please don't use That Alien Message as an intuition pump. There's a tremendous amount wrong with the sci-fi story, not least of which is that it completely violates the very constraint on computation you put into your own post. I suggest doing your own analysis of how many thought-seconds the AI would have in between frames of video, especially if you assume it to be running as a large inference model.
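Here's a minimal version of that back-of-envelope analysis; the frame rate and decode speed are assumed round numbers, purely for illustration:

```python
# How much "thinking" does a large inference model get between frames of
# real-time video? Both numbers below are assumptions for illustration.
fps = 30                       # assumed video frame rate
tokens_per_second = 50         # assumed decode throughput for a large model
seconds_per_frame = 1 / fps
tokens_per_frame = tokens_per_second * seconds_per_frame
print(f"{seconds_per_frame * 1000:.1f} ms per frame, "
      f"~{tokens_per_frame:.1f} tokens of 'thought' per frame")
# ~33 ms and roughly 1-2 tokens per frame: nothing like the subjective eons
# the story grants its simulated civilization between frames.
```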

The best thing you can do is rid yourself of the notion that superhuman AI would have arbitrary capabilities. That is where EY went wrong, and a lot of the LW crowd too. If you permit dividing by zero or multiplying by infinity, then you can easily convince yourself of anything. AI isn't magic, and AGI isn't a free pass to believe anything.

PS: We've had AGI since 2017. That'd better be compatible with your world view if you want accurate predictions.

> If you say "Indeed it's provable that you can't have a faster algorithm than those O(n^3) and O(n^4) approximations which cover all relevant edge cases accurately", I am quite likely to go on a digression where I try to figure out what proof you're pointing at and why you think it's a fundamental barrier.

By "proof" I meant proof by contradiction. DFT is a great O(n^3) method for energy minimizing structures and exploring electron band structure, and it is routinely used for exactly that purpose. So much so that many people conflate "DFT" with more accurate ab initio methods, which it is not. However DFT utterly ignores exchange correlation terms and so it doesn't model van der Waals interactions at all. Every design for efficient and performant molecular nanotechnology--the ones that get you order-of-magnitude performance increases and therefore any benefit over existing biology or materials science--involve vdW forces almost exclusively in their manufacture and operation. It's the dominant non-covalent bonded interaction at that scale.

That's the most obvious example, but a lot of the simulations performed by Merkle and Freitas in their minimal toolset paper also gave incorrect reaction sequences at these lower levels of theory, as they found out when they got money to attempt it in the lab. Without pointing to their specific failure, you can get a hint of this from surface science. Silicon, gold, and other surfaces tend to have rather interesting surface clustering and reorganization effects, which are observable by scanning probe microscopy. These are NOT predicted by the cheaper, computationally tractable codes; they are an emergent property of higher-order exchange-correlation effects in the crystal structure. They nevertheless have enough of an effect to drastically reshape the surface of these materials, making calculation of those forces absolutely required for any attempt to build off the surface.

Attempting to do cheaper simulations of diamondoid synthesis reactions gave very precise predictions that didn't work as expected in the lab. How would your superintelligent AI know that uncalculated terms dominate in the simulation, and apply corrective factors without having access to those incomputable terms?

> • Making inhumanly close observation of all existing data
> • Noticing new, inhumanly-complex regularities in said data,
> • Proving new simplifying regularities from theory

I think you vastly overestimate how much knowledge is left to be extracted from the data. AI has made tremendous advances in recent years where it has been able to consume huge amounts of data, far in excess of what any group of humans could analyze. This, on the other hand, is a data-poor regime.

> • Inventing new algorithms for heuristic simulation

This is happening right now. There are a variety of machine-learned molecular mechanics force fields that have been published in the last few years. The most interesting one I've found used periodic crystal ab initio simulation methods to create a force field potential that ended up being very good for liquid and gas-phase chemistry, which it was not trained on.

But the relevant question (if you want to talk about AI x-risk by means of bootstrapping nanotech) is how accurate they are outside of the domain where we have heaps of hard evidence, because we don't have a ground truth to compare against in those environments.

> • Finding restricted domains where easier regularities hold
> • Bifurcating problem space and operating over each plausible set,

Human engineers are very good at this. It's not the limiting factor.

> • Sending an interesting email to a research lab to get choice high-ROI data

What lab? There's literally no one doing the relevant research, or equipped to easily do it without years of preparatory chemical synthesis and surface characterization.

Which is really the point, and the crux of the matter. It will take an extended, years-long research effort to create molecular nanotechnology. It's not something you can plausibly do in secret, and certainly not something you can shorten by simulation or Bayesian inference.
