mesaoptimizer

https://mesaoptimizer.com

 

learn math or hardware

Comments

As of right now, I expect we have at least a decade, perhaps two, until we get an AI that generalizes at the level of human intelligence (which is what I consider AGI). This is a controversial statement in these social circles, and I don't have the bandwidth or resources to write a concrete and detailed argument, so I'll simply state an overview here.

So the main questions for me, given my current model, are these:

  • How many OOMs of optimization power would you need for your search process to stumble upon a neural network model (or, more accurately, an algorithm) that is just general enough that it can start improving itself? (To be clear, I expect this level of generalization to be achieved when we create AI systems that can do ML experiments autonomously.)
  • How much more difficult will each additional OOM be to reach? (For example, if each OOM requires an exponential increase in the resources and time needed to build the infrastructure, that would offset the exponential increase in the optimization power it provides.)

Both questions are very hard to answer with the rigor I'd consider adequate given their importance. If pressed, however: my intuition is that we'd need at least three OOMs, and that the difficulty of each OOM increase would grow exponentially, which I approximate as a doubling of the time taken per OOM. Given that Epoch's historical trends imply it takes about two years for one OOM, I'd expect we have roughly 2 + 4 + 8 = 14 more years before the labs stumble upon a proto-Clippy.
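For concreteness, here's a minimal sketch of that arithmetic (the three-OOM threshold and the per-OOM doubling are my own guesses; only the ~2 years per OOM figure comes from Epoch's trend data):

```python
# Back-of-the-envelope timeline: each additional OOM of optimization power
# is assumed to take twice as long to reach as the previous one.
years_for_first_oom = 2   # rough historical trend (Epoch: ~2 years per OOM)
ooms_needed = 3           # guessed generalization threshold

total_years = sum(years_for_first_oom * 2**k for k in range(ooms_needed))
print(total_years)        # 2 + 4 + 8 = 14
```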

IDK how to understand your comment as referring to mine.

I'm familiar with how Eliezer uses the term. I was more pointing to the move of saying something like "You are [slipping sideways out of reality], and this is bad! Stop it!" I don't think this usually results in the person, especially a confused person, reflecting and trying to become more skilled at epistemology and communication.

In fact, there's a loopy thing here where you expect someone who is 'slipping sideways out of reality' to caveat their communications with an explicit disclaimer admitting that they are doing so. It seems very unlikely to me that we'll see such behavior. Either the person has confusion and uncertainty and is trying to honestly communicate that uncertainty (which is different from 'slipping sideways'), or the person would disagree that they are 'slipping sideways' and claim, implicitly or explicitly, that what they are doing is tractable and matters.

I think James was implicitly tracking the fact that takeoff speeds are a feature of reality and not something people can choose. I agree that he could have made it clearer, but I think he's made it clear enough given the following line:

I suspect that even if we have a bunch of good agent foundations research getting done, the result is that we just blast ahead with methods that are many times easier because they lean on slow takeoff, and if takeoff is slow we’re probably fine if it’s fast we die.

And as for your last sentence:

If you don’t, you’re spraying your [slipping sideways out of reality] on everyone else.

It depends on the intended audience of your communication. James here very likely modeled his audience, implicitly, as people who would comprehend what he was pointing at without him having to explicitly state the caveats you list.

I'd prefer you ask why people think the way they do instead of ranting to them about 'moral obligations' and insinuating that they are 'slipping sideways out of reality'.

It seems like most people believe (implicitly or explicitly) that empirical research is the only feasible path to building a somewhat aligned, generally intelligent AI scientist. This is an underspecified claim, and given certain fully specified instances of it, I'd agree.

But this belief leads to the following reasoning: (1) if we don't eat all this free energy in the form of researchers, compute, and funding, someone else will; (2) other people are clearly less trustworthy than us (Anthropic, in this hypothetical); (3) so let's do whatever it takes to maintain our lead and prevent other labs from gaining power, while using whatever resources we have to also do alignment research, preferably in ways that also help us maintain or strengthen our lead in this race.

I recommend messaging people who seem to have experience with this and asking to get on a call with them. I haven't found any useful online content on the topic, and everything I've learned about social skills and working with neurodivergent people, I learned by failing and debugging my failures.

I hope you've at least throttled them or temporarily IP-blocked them for being annoying. It is not that difficult to scrape a website while respecting its bandwidth and CPU limitations.
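For illustration, here's a minimal sketch of what "polite" scraping looks like in practice (the base URL, user-agent string, and delay are placeholders, not a reference to any particular site):

```python
# Minimal "polite" scraper: consult robots.txt and rate-limit requests.
import time
import urllib.robotparser

import requests

BASE = "https://example.com"   # placeholder target site
DELAY_SECONDS = 5              # conservative fixed delay between requests

robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

def fetch(path: str) -> str | None:
    """Fetch a page only if robots.txt allows it, then pause before returning."""
    url = BASE + path
    if not robots.can_fetch("*", url):
        return None  # respect disallowed paths
    response = requests.get(
        url,
        headers={"User-Agent": "polite-research-bot"},
        timeout=30,
    )
    response.raise_for_status()
    time.sleep(DELAY_SECONDS)  # throttle so the server isn't hammered
    return response.text
```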

I searched for one and found nothing. The Twitter conversation also seems to imply that no paper or technical report has been released yet.

Based on your link, it seems like nobody even submitted anything to the contest throughout the time it existed. Is that correct?
