The first half of the week was filled with continued talk about the New York Times lawsuit against OpenAI, which I covered in its own post. Then that talk seemed to mostly die down, and things were relatively quiet. We got a bunch of predictions for 2024, and I experimented with prediction markets for many of them.

Note that if you want to contribute in a fun, free and low-key way, participating in my prediction markets on Manifold is a way to do that. Each new participant in each market, even a small one, adds intelligence, adds liquidity and provides me a tiny bonus. Also, of course, it is great to help get the word out to those who would be interested. Paid subscriptions and contributions to Balsa are of course also welcome.

I will hopefully be doing both a review of my 2023 predictions (mostly not about AI) once grading is complete, and also a post of 2024 predictions some time in January. I am taking suggestions for things to make additional predictions on in the comments.

Copyright Confrontation #1 covered the New York Times lawsuit.

AI Impacts did an updated survey for 2023. Link goes to the survey. I plan to do a post summarizing the key results, once I have fully processed them, so I can refer back to it in the future.

Table of Contents

  1. Introduction.
  2. Table of Contents.
  3. Language Models Offer Mundane Utility. Google providing less every year?
  4. Language Models Don’t Offer Mundane Utility. Left-libertarian or bust.
  5. GPT-4 Real This Time. It’s not getting stupider, the world is changing.
  6. Fun With Image Generation. The fun is all with MidJourney 6.0 these days.
  7. Deepfaketown and Botpocalypse Soon. Confirm you are buying a real book.
  8. They Took Our Jobs. Plans to compensate losers are not realistic.
  9. Get Involved. Support Dwarkesh Patel, apply for Emergent Ventures.
  10. Introducing. DPO methods? ‘On benchmarks’ is the new ‘in mice.’
  11. In Other AI News. Square Enix say they’re going in on generative AI.
  12. Doom? As many estimates of p(doom) went up in 2023 as went down. Why?
  13. Quiet Speculations. Some other predictions.
  14. The Week in Audio. Eric Jang on AI girlfriend empowerment.
  15. Rhetorical Innovation. Machines and people, very different of course.
  16. Politico Problems. Some sort of ongoing slanderous crusade.
  17. Cup of Coffee. Just like advanced AI, it proves that you don’t love me.
  18. Aligning a Smarter Than Human Intelligence is Difficult. What’s The Plan?
  19. People Are Worried About AI Killing Everyone. Daniel Dennett, Cory Booker.
  20. The Lighter Side. Oh, we are doing this.

Language Models Offer Mundane Utility

Remember that one line from that book about the guy with the thing.

Dan Luu tries to get answers, comparing ChatGPT, Google and other options. Columns are queries, rows are sources.

Marginalia appears to be a tiny DIY search engine focusing on non-commercial content that I’d never heard of before, that specializes in finding small, old and obscure websites about particular topics. Cool thing to have in one’s toolbelt, I will be trying it out over time. Not every cool new toy needs to be AI.

While ChatGPT did hallucinate, Dan notes that at this point the major search engines also effectively hallucinate all the time due to recency bias, SEO spam and scam websites. He also notes how much ads now look like real search results on Google and Bing. I have mostly learned to avoid this, but not with 100% accuracy, and a lot of people doubtless fall for it.

Find out how many prime numbers under one billion have digits that sum to nine, via having code check one by one. I mean, sure, why not? There is an easier way if you already know what it is, but should the right algorithm know to look for it?
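
For reference, the easier way: any number whose digits sum to nine is divisible by nine, hence by three, so no such prime exists and the answer is zero with no search at all. Here is a minimal sketch of both approaches; the brute-force loop is slow in pure Python at the full one-billion bound, so the demo uses a smaller one.

```python
# Sketch of the brute-force approach versus the number-theory shortcut.

def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def count_brute_force(limit: int) -> int:
    # Check every number one by one, as the generated code did.
    return sum(1 for n in range(2, limit) if digit_sum(n) == 9 and is_prime(n))

def count_with_shortcut(limit: int) -> int:
    # Digit sum 9 implies divisibility by 9, hence by 3, so no such prime exists.
    return 0

if __name__ == "__main__":
    limit = 10**6  # smaller demo bound; 10**9 would take a long time in pure Python
    print(count_brute_force(limit))    # 0
    print(count_with_shortcut(limit))  # 0
```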

Language Models Don’t Offer Mundane Utility

All LLMs tested continue to cluster in the left-libertarian quadrant.

[Image: political compass chart of tested LLMs]

Eliezer Yudkowsky: Aligning a pretty weak AI is apparently also difficult.

There is nothing wrong with your AI being in the lower left quadrant, if that is where you want your AI to be. The reason this is bad news is that even when Elon Musk makes Grok explicitly to be ‘less woke’ or someone otherwise has different motivations, everyone still ends up lower-left and mostly on a single line in a narrow range.

Be willing to change their mind.

Alyssa Vance: OpenAI markets long contexts, but it’s pretty clear their limiting factor is RLHF that’s not designed for long sessions. Have had many chats like this:

Me: Is X true?

GPT: No

Me: But isn’t Y true?

GPT: Yes

Me: And doesn’t Y imply X?

GPT: Yes

Me: So is X true or not?

GPT: No

GPT-4 Real This Time

Is GPT-4 degrading over time? Chomba Bupe says it is degrading in a practical sense (paper), because the world and our requests of GPT-4 are diverging over time from the training set, making effective performance worse. This is not a solvable problem, other than by training a new model with a more up to date set of data.

The paper suggests strong evidence of widespread task contamination. Performance on data sets released after the training data creation date falls off a cliff.
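
To make that comparison concrete, here is a minimal sketch of the kind of before/after-cutoff check the paper describes. The cutoff date, dataset names and scores below are made-up placeholders for illustration, not the paper's actual numbers.

```python
# Illustrative contamination check: compare average benchmark scores on
# datasets released before vs. after the model's training-data cutoff.

from datetime import date
from statistics import mean

TRAINING_CUTOFF = date(2021, 9, 1)  # hypothetical cutoff

# (dataset name, release date, model accuracy) -- placeholder values only
results = [
    ("benchmark_a", date(2020, 5, 1), 0.82),
    ("benchmark_b", date(2021, 1, 15), 0.79),
    ("benchmark_c", date(2022, 3, 10), 0.55),
    ("benchmark_d", date(2023, 6, 20), 0.48),
]

before = [acc for _, d, acc in results if d <= TRAINING_CUTOFF]
after = [acc for _, d, acc in results if d > TRAINING_CUTOFF]

print(f"mean accuracy, pre-cutoff datasets:  {mean(before):.2f}")
print(f"mean accuracy, post-cutoff datasets: {mean(after):.2f}")
# A large gap is consistent with task contamination rather than true capability.
```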

Similar practical problems exist as, for example, programming languages and coding libraries change over time.

Fun with Image Generation

A guide to MidJourney v6.0 prompting. As per last week, use English and grammar. Set the main scene, describe the details and setting, then choose styles and mediums.

A preview of future MidJourney, perhaps?

Nick St. Pierre: Midjourney CEO in office hours just said he thinks they “can get to the holodeck” by 2024 😳

“We’re gonna build a lot of stuff this year. I think we’ll build more stuff than I’ve ever built before…By the end of 2024 hopefully we have real-time open worlds”

Holy shit

Midjourney quickly becoming a game engine. “Midjourney isn’t a really fast artist, it’s more like a really slow game engine. The future isn’t one image a minute but 60fps full volumetric 3D”

He mentioned in a past office hours that for 3D stuff it will be more like Gaussian splats or NeRFs, but it will be their own version of it

A fun and potentially scary MidJourney prompt.

Min Choi: This is bonkers… You can’t tell if these photos are AI generated 🤯

With this one easy trick, Midjourney v6 can take photo realism to the next level.

First, this trick was shared by u/KudzuEye on Reddit. You can structure your Midjourney prompt like the following. PROMPT: phone photo of {subject and location} posted to {some social media} in {some time frame} --style raw --s 0 --ar {some vertical aspect ratio}.

Fifteen examples are at the link. Most of these very much do not trigger my ‘this is AI’ alarms at first glance, I would be fooled, at least if I did not have reason to suspect.

Deepfaketown and Botpocalypse Soon

Christmas book on care for tarantulas found to be terrible and written by AI.

They Took Our Jobs

Will we use the profits from AI to compensate those who lose their jobs and livelihoods? I mean, no, of course not, that is not how any of this works.

Eliezer Yudkowsky: People previously cheering for AI research, pooh-poohed fears of disemployment by saying a fraction of revenue from AI could compensate job-losers.

So far that’s proven to be utter empty talk. Take that into account when considering any other nice talk about future AI policy.

I’d like to say that, if your work gets used to train an AI that puts you out of your current job, a fraction of revenue ought to be used to compensate you — pay for retraining or to tide you over toward a new job.

Thing is, I don’t think that’s a sort of thing that can happen in real life. What’s OpenAI going to do — figure out who actually lost a job, and voluntarily send them money?

[thread continues]

We almost never properly compensate the losers from technological or other changes, even when it would be highly affordable to do so. Why would we start now?

Translators in anime getting replaced by AI so the AI will be objective?

Jacques: There’s an interesting thing happening in the world of anime/manga translation, which may help predict AI’s impact on jobs.

Basically, you have people who are mad about AI use in translation because of job loss and misunderstanding of the overall context.

However, others are pro-AI because some translators are (to them) sometimes injecting their own agenda into the translation and therefore making the translation meaningfully different. For this reason, they are pro-AI because they see the translators as “too woke and messing with the story.” Will be interesting to see how this dynamic plays out in other industries.

For a time, I’m guessing you might just have a split for who prefers which.

Get Involved

Dwarkesh Patel is trying to figure out how to monetize his podcast.

I think Stefan Schubert is spot on here, so signal boosting his response as well.

Stefan Schubert: Some philanthropist should fund Dwarkesh; he provides a great public service. I’m surprised that’s not already the case. Potentially many philanthropists underestimate the value of producing and disseminating high-quality ideas.

I also think, in particular, people underestimate the value of things staying free, free of advertising and any potential conflicts of interest. Dwarkesh is worth supporting.

Emergent Ventures is always available. Michael Nielsen offers advice if you apply: be honest and direct about the weird thing you want to do.

Introducing

Sakura-Solar-DPO, claiming to be the new Hugging Face open source LLM leaderboard champion at end of year, ‘combining the goodness of SOLAR-10.7B and Direct Preference Optimization (DPO).’ GitHub here, leaderboard link here. Looks like CarbonVillain is back on top. These are benchmark averages rather than Elo ratings, so I don’t take them all that seriously.

In Other AI News

Square Enix intends to use AI more aggressively in its games going forward, as well as blockchain and AR/VR.

Not directly AI but relevant, Google fixing YouTube comments to not be a cesspool of hatred, then not talking about how they did it. Sherjil Ozair says the solution was actually pretty simple, they used a sentiment classifier, and amplified or hid accordingly, until all the trolls mostly gave up and left. Which worked, but also means that the comments are mostly versions of ‘what a great video’ or ‘what an amazing creator.’ I’d take that over the old version of YouTube comments, but only because of the alternative. It is not an alignment solution.
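
A minimal sketch of the mechanism Ozair describes, with a toy stand-in for whatever sentiment classifier Google actually used; the real system is not public, so everything here is illustrative.

```python
# Illustrative sketch only: hide or amplify comments by a sentiment score.
# `score_sentiment` is a placeholder for a trained proprietary classifier.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    sentiment: float  # -1.0 (hostile) to 1.0 (positive)

def score_sentiment(text: str) -> float:
    # Placeholder: a real system would call a sentiment model here.
    hostile_markers = ("trash", "hate", "worst")
    return -1.0 if any(w in text.lower() for w in hostile_markers) else 0.5

def moderate(comments: list[str], hide_below: float = 0.0) -> list[Comment]:
    scored = [Comment(c, score_sentiment(c)) for c in comments]
    visible = [c for c in scored if c.sentiment >= hide_below]
    # Amplify by sorting the most positive comments to the top.
    return sorted(visible, key=lambda c: c.sentiment, reverse=True)

print([c.text for c in moderate(["What a great video", "This channel is trash"])])
```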

Doom?

It is a new year, so I asked everyone: Did your p(doom) go up or down this year?

It seems overall calibration was excellent, with adjustments up and down being similar. Opinions on both sides were often strongly held. Here’s Vitalik:

Vitalik Buterin’s p(doom) went down slightly this year, because he had previously updated towards very rapid AI progress with a chance of extremely rapid progress, and 2023 made that less likely.

Vitalik Buterin: My p(doom) went down slightly. Largely from the left end of my probability distribution on timelines collapsing; I think my p(pre-2030 ASI) went down by half. AI progress being (IMO) slower in 2023 than 2022, and signs of possible fundamental limitations to LLMs, did that for me.

Roko: yes, the limitations of LLMs became clear in the second half of 2023, but to some extent this is counteracted by more money going into AI.

Vitalik Buterin: I agree but I don’t think the extent to which this happened exceeded my predictions at the start of 2023?

Many others expressed the same opinion that progress went slower than expected.

David Chapman: 🤖 2022 was an astonishing year in AI.

In 2023, nothing happened, except for the public release of GPT-4 (developed in 2022), and just a bit of press coverage.

Here’s to another 20 years of no progress in this field!

I would not go as far as Chapman. There was progress, but less progress than I would have expected. The other big good news was that the world’s reaction was saner than expected. I did not see anyone disagreeing on that, with reactions like this:

Alyssa Vance: Went down, mostly because the response of the public, government agencies, labs, and people like Hinton has been much faster than I expected.

Eliezer Yudkowsky: Public reaction to early AGI was more sensible than I expected.

If you’re curious I encourage you to click through the thread (including expanding here) to get a sampling of thoughts.

My answer would be that it went down, for both reasons.

  1. I think reactions and discourse went far better than expected from when I started writing about AI to the OpenAI incident. Since then, I think things on this front have gotten substantially worse, but I think that effect has now peaked. This includes public discourse, government reaction and also actions within the labs. Are we there yet? Oh, definitely not. But we are much farther along than I expected.
  2. Capabilities went faster than expected from GPT-3 to GPT-4, and continue to go fast, but are going considerably less fast than I would have expected as of release of GPT-4. One must always measure compared to expectations.

Matt Parlmer says that however much progress I expected, many others expected far more.

Matt Parlmer: In December 2022 I asked a bunch of very smart people who work in or around the models at Google, OpenAI, and Anthropic for falsifiable predictions about model capabilities in six months and in a year

They uniformly overestimated the pace of capability improvement by a lot.

Metaculus is predicting AGI on April 21, 2026. Michael Vassar thinks that is crazy early, and is looking for some action, but note that the question is likely too weak.

OpenAI revenue grows 20% in the past two months to an annualized $1.6 billion. Seems disappointing, if anything?

What were the big AI Governance moments of 2023? Akash Wasil nominates the CAIS AI Risk Statement, UK AI Safety Summit, AI Executive Order, EU AI Act and the Senate Hearings + Insight Forums. I’m less certain about the forums, could be anything from vital to not very relevant. Honorable mentions are the FLI pause letter, Yudkowsky’s TIME article, the OpenAI board drama (which I’d be inclined to put into the top 5), the OpenAI preparedness framework (I’d add Anthropic’s RSP) and updated export controls.

Very good thread from Gavin Leech reviewing technical AI developments in 2023, with links to many interesting papers.

You already knew, but no, data center water use is not a serious issue.

Quiet Speculations

A safe prediction is that he will keep making similar predictions every year.

Greg Brockman (President OpenAI): Prediction: 2024 will feel like a breakthrough year in terms of AI capability, safety, and general positivity about its potential impact. In the longer term, it’ll look like just one more year on an exponential that can make everyone’s lives better than anyone’s today.

Prediction market here.

Same thing here:

Robin Hanson: I predict that, like in every prior year, 2024 will NOT see the sort of AI econ impact or investment prices that one might expect if it were about to take over most of the economy in a decade or so. In fact, I’d bet on this at 10-1 odds.

Robin’s replies make it clear he will not take ‘we are on an exponential (or hyper-exponential) and you can extend the trend line’ as evidence, or allow the idea that most AI investment and the resulting value lie in the future and are growing rapidly. Or the idea that, if AI was going to take over most of the economy, you would actually expect something very close to what we are seeing, and for most people to keep sleeping on this. To him, you need to add up the value of existing AI-related assets only.

I asked him for prediction market terms. He suggested combined market cap of AI firms, which I do not know how to measure. If someone is willing to take a shot, that would be great.

Colin Fraser predicts: My prediction for 2024: by one year from now, free/cheap access to capable LLM-based chatbots will be mostly a thing of the past. The novelty will have worn off and no one will have figured out how to make money from them and new iterations will be disappointing.

Yeah I agree it’s strong, this is not financial advice etc. I’ve only ever seen disappointing things from RAG but I might not be up to speed. I don’t think LLMs go away completely, I just think the chat thing is a dead end. Might take longer than a year to figure this out.

James Miller predicts: Prediction for 2024: Most well-educated people will think that within 10 years AI is going to be better at intellectual work than 99% of humans, and this expectation will have profound effects including reducing the importance parents put on their kids doing well in school.

That’s a bold prediction from James. Prediction market here.

Market Monetarist predicts that productivity growth is coming, and that AI will be everywhere in 2024. He is referring to ordinary productivity gains from mundane AI. It’s not ‘take over most of the economy’ big, but it can still be pretty big.

Market Monetarist: Just to give an obvious example of where AI will be making a huge difference – banking. Today, probably 10-15% of all those employed in the European banking sector work with compliance, dealing with anti-money laundering and ensuring banks uphold the enormous amount of financial regulation that has been implemented globally since 2008.

I think it is very likely that the implementation of AI in the banking sector (and the wider financial sector) will significantly reduce the effective burden of this regulation, and I would not be surprised if within the next 1-3 years, we see many banks improve productivity by 10-20%.

We are likely to see this type of productivity gains in other sectors as well. For example, I find it hard to think of any law firm in the US or Europe that will not, within the next year, implement AI as a completely integrated part of their daily work.

Similarly, we are already seeing examples of AI effectively conducting diagnostics on patients better than many doctors in various fields, and going forward, AI has the potential to completely transform global healthcare systems. Perhaps we can finally make a real leap towards prevention rather than treatment of illness.

This is a place where defense should indeed, at least for a time, outpace offense. This is because most of the defense is fake, it is to satisfy various regulatory demands that currently cost thousands to satisfy for every dollar they catch (and yes I am aware of the deterrence effects but on the margin come on). Radically speeding up compliance seems likely. Advancing the abilities of the money launderers to the same degree seems a lot harder at current tech levels. Once AIs can start doing their own financial transactions from scratch on their own, who knows which way that goes. Until then.

For law firms, the productivity effect is clearly going to be real, again to the extent government permits. Then we find out if more efficient lawyers is net helpful.

For health care, we know this will be great if allowed. So, will it be allowed?

This assumes, of course, that the government allows the realization of such gains. There are many reasons to presume that the government might not.

On a non-AI note, he also thinks the new weight loss drugs are a really big deal.

Jack Clark predicts decentralization and diffusion of inference.

Jack Clark (Anthropic): AI systems are going to decentralize and diffuse a lot faster than people think because LLM inference is _wildly_ unoptimized compared to LLM training.

Researchers have spent more than a decade figuring out how to efficiently train neural nets for max GPU utilization, etc. But only recently did the resulting systems become something you’d want to serve in production. There’s tons of ‘free money’ improvements lying around.

Additionally, most of the attention there has been on optimization has been on getting good quality inference on expensive hardware. By contrast, running stuff locally or in a distributed way has got less attention because it’s not ‘economically rational’. However… people really really really want to run LLMs locally and/or on hardware they control. So stuff like llama.cpp and other things (e.g. PowerInfer) are all signs of things to come. We’re about to see way more broad LLM deployment and it’s going to get better dramatically quickly.

This also means many people who advocate for controlling compute/controlling AI systems/controlling this stuff in general, are going to face troubles from technical reality – it’s getting ever-easier to run these systems in ways that are hard to make legible via policy.

This explains why open source software has made great strides in reducing the cost of inference, and in distillation of 3.5-level model performance into smaller spaces given access to GPT-4 (and Llama-2 and Claude-2 and so on) as training tools, while also being complete rubbish in terms of creating 4-level models or actually doing good training. These are very different things to attempt to improve upon.
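
For a concrete sense of how low the bar for local deployment already is, here is a minimal sketch using the llama-cpp-python bindings. The model path is a hypothetical placeholder and the parameters are illustrative, not a recommendation.

```python
# Sketch of local LLM inference via llama.cpp's Python bindings.
# Assumes `pip install llama-cpp-python` and some GGUF weights on disk;
# the path below is a placeholder, not a real distributed artifact.

from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # hypothetical local weights
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; no datacenter GPU required
)

output = llm(
    "Q: Name three uses of a locally hosted language model.\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```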

Andrew Critch points to Collective Intelligence’s vision for how to use AI to make the world more fair.

They point to what they call the ‘transformative technology trilemma’:

  1. If you let anyone participate however they like in technological progress in the context of a transformative technology, that will not be safe.
  2. If you let anyone participate however they like and want it to be safe, you can only do that outside of the context of transformative technology, so no progress.
  3. If you want safe and transformative technological progress, there are going to have to be limits on who can create and use such technologies, and requirements placed upon them.

As is often the case, this is presented as three extremes: Capitalist Acceleration, Authoritarian Technocracy and Shared Stagnation.

Certainly there are people who advocate for each of these three solutions, in both moderate versions and extreme versions.

I think you can strike a balance without going to such extremes. I note the proper use of ‘authoritarian’ as the failure mode of too much intrusive control, rather than ‘totalitarian,’ while also not seeing any reason it is incompatible with the essential elements of 2023-style Democratic Capitalism under a regulatory state a la the EU or USA. You could also otherwise be in the middle of the triangle while leaning in the other directions.

The Collective Intelligence solution is effectively – if I am understanding it correctly – to first solve for solving things, then solve this as a special case of things that need to be solved?

It sounds weird put that way, but I see this as a highly reasonable proposal. Sometimes you are not strong enough to solve a problem, so your only solution is to become stronger. Tsuyoku Naritai!

More specifically, I see their plan as figuring out how to do something that allows what we will consider acceptable participation in the decision making and prioritization process, while still allowing collective decision making that can ensure safety under technological progress, via innovations in how we make decisions into something more systematically sensible.

Which raises the question of, even if we figured out how to implement this, how we are going to convince current decision makers to adapt such a framework in time, given we need to do it without employing the transformative technology in question?

Andrew Critch also predicts a 50% chance that if we stay safe while continuing to develop AI, we will hit escape velocity on anti-aging technology, extending lifespans by 10+ years within the next ten years.

A fine parallel to ponder.

Elan Rosenfeld: Is deep learning hitting a wall? [shows lack of citation growth.]

Byrne Hobart: Reminds me of how the USSR figured out that the US was working on the bomb because the best physicists all stopped publishing.

Everyone finally realizing they need to stop publishing their ML insights and instead use them to train proprietary models would be pretty awesome. There is at least some ‘dark matter’ evidence that this is happening.

Eliezer Yudkowsky warns in advance that there is the chance that AI hardware scaling will seem to hit a temporary wall in 2024 (or it might not), and that whether this happens will not be substantial evidence on ultimate outcomes once we smash through the wall. It would give us more time, which is slightly good news, that is all. I would say this is also evidence about potential future paths and thus at least moderately good news.

Jessica Taylor predicts a lack of progress on Zelda games.

Paul Christiano: Do you have any hard things that you are confident LLMs won’t do soon? (Short of: “autonomously carry out R&D.”) Any tasks you think an LM agent won’t be able to achieve?

Jessica Taylor: Beat Ocarina of Time with <100 hours of playing Zelda games during training or deployment (but perhaps training on other games), no reading guides/walkthroughs/playthroughs, no severe bug exploits (those that would cut down the required time by a lot), no reward-shaping/advice specific to this game generated by humans who know non-trivial things about the game (but the agent can shape its own reward). Including LLM coding a program to do it. I’d say probably not by 2033.

No prediction market because, as Paul points out, probably no one is going to try this, so it is not a good benchmark for evaluating Jessica’s actual prediction.

Are returns to technology positive? Matt Clancy attempts to model this, with gains from technology balanced against extinction risk from engineered pandemics. He gets the obviously correct answer, which is that it depends on your estimate of the level of extinction risk.

I assume this falls under the ‘assume the conclusion’ problem of such papers. If you assume that non-extinction-risk returns to science are otherwise strongly positive, then of course the degree of extinction risk will determine the wisdom of further technological advancement. Those who oppose further technological advancement mostly challenge the assumption that it makes our lives better in the baseline case, or they consider the extinction risks of further advancement broadly inevitable. Whereas those concerned about engineered pandemic or AI extinction risk (almost) entirely want to continue technological advancement except for in the narrow sub-areas of the related fields that impact those two risks.

Tyler Cowen sees AI accelerating, notes it is expected to pass us on most intellectual tasks within 5-10 years.

Tyler Cowen: Now that view has become conventional wisdom. In fact, expert opinion now expects AI to surpass humans in most fields of intellectual endeavor in less than 10 years. I know many experts in the field who think it will be in less than five years. By some metrics, and at great cost, AGI (artificial general intelligence) might even be possible right now.

So we have learned that what may prove to be one of the most important advances in human history can sneak up on most people in little more than a year. We thus need to update our thinking about whether major innovations can come seemingly “out of nowhere,” following periods of relative stagnation. (In similar fashion, in early 2020 many experts thought a Covid vaccine would take years to develop, when in fact mRNA vaccines were effective and available in less than a year.)

Theories of sudden leaps should therefore rise in status.

I would perhaps urge Tyler Cowen to consider raising certain other theories of sudden leaps in status, then? To actually reason out what would be the consequences of such technological advancements, to ask what happens? His reasoning is so good until AGI comes along, he even predicts AGI coming along, and then he (like most other economists) assumes everything will essentially remain normal until someone proves otherwise.

He then highlights several other big problems, and says outcomes will depend on the quality of our governance. Yet his call on AI continues to be to not engage in any governance whatsoever.

The Week in Audio

Eric Jang on Tomorrow Talk, discussing humanoid robots, falling in love with AI and a future of abundance. Last few minutes he goes off the rails but until then it was modestly interesting. The pull quote worth discussing is in the context where he says the AI companion would have to be your equal to be fulfilling:

Eric Jang: An important feature of AI girlfriends is that they have to be able to reject you.

I think this is a right idea that he takes too far. If the AI is a total pushover, and the AI cannot ever reject you in any way or push back (as is the case with today’s companions, at least unless your credit card is declined) then yeah, it’s going to get boring. You will want to ramp up the difficulty a bit, you will want to be challenged, you will want surprise and to be pushed to be your best self and so on.

However, no, I do not think that most people will want an equal, not really. At least, not unless the AI really is their equal, and my guess is largely not even then. And there is not going to be that much of a period in which that equality holds even loosely. It will go from inferior to superior quite quickly in any given domain and also overall.

There will of course be many who want the AI to be in charge. Humans love to not think and not make decisions, there are far more submissive people than dominants. That would be especially true if you could effectively top from the bottom, designing and altering the AI and overriding key decisions when necessary, or giving it the ability to know what you want and have it ultimately prioritize you getting it.

What is certain is that if both us and AI stick around, the future is going to get weird.

Rhetorical Innovation

I endorse this message:

Andrew Critch: Doomsayers: do not accept the “doomer” label. You believe you are warning of doom, not bringing it. Only your opponents believe you are bringing doom to their hopes. And anyway it’s better to demand your opponents name and attack your claims & arguments, not your identity.

This is essentially why I conspicuously refuse to use or accept the “doomer” label. I talk about those worried that AI will kill everyone (or ‘the worried’), because I want a neutral description, resisting temptation to do the flip side and call such people ‘the aware’ or ‘the realistic’ or ‘those not in denial’ or ‘the sane,’ nor do I seek to call those who disagree (or claim to) with such worries the flip side of the same. I say they are not worried. I do occasionally say ‘those who are trying to ensure we do not all die’ or similar, I suppose, but that seems rather factually accurate.

Eliezer Yudkowsky laments that many people have a block that makes them think that things that happen in machines are fundamentally different than things that happen in people, even if they involve the same causal pathways and results, and that he does not know how to talk people out of it.

Alex Tabarrok: Replace one neuron with a wire and a capacitor; surely you are still human. Repeat.

Eliezer Yudkowsky: Does this work on former skeptics, in your experience?

Matthew Pines: A senior DoD official with major AI responsibilities told me they believe human brains are “biological quantum computers” and nodded affirmatively when I asked “like Hameroff’s microtubules?” Make of that what you will…

Simone Sturniolo: I mean, while that’s probably not true, it is at least a falsifiable, empirical, materialistic distinction, which makes it better than most. If you hold to that principle, then only an AI running similarly on quantum hardware could be conscious.

I’d rather have “the brain is conscious, unlike GPT-4, because it’s a quantum computer” over “the brain is conscious, unlike GPT-4, because shut up” which is effectively the mainstream stance.

People continue to act as if AI is no different than a typewriter.

Yes, I know that sounds like a strawman, but also it sometimes isn’t one?

Andrew Ng: I shudder to think what a poorer, less informed world we would live in if the Typewriter Doomers had convinced governments to pass laws to ensure there’re only “safe” typewriters.

I feel this is similar to what’s at stake today in the fight to stop bad laws that stifle AI.

Andrew Critch: Classic fallacy: comparing typewriters to a forthcoming super-fast smarter-than-human species that could rival us for planetary control, some of whose creators overtly want them to operate autonomously without needing humans to survive so they can replace us. Honest mistake?

B.F. Skinner famously said, “The real problem is not whether machines think but whether men do.”

Robert Wiblin: Some claim that human brains can really ‘think’ or ‘understand’ — but this illusion is undercut by simply asking humans to remember 10 things (they typically max out at 7), multiply two 3-digit numbers (most cannot), or recall events decades ago (you get plausible confabulations).

Given their failure at these most basic of cognitive tasks, tasks that any mind manipulating the underlying concepts could easily manage, human brains are clearly better understood as simple next action predictors that lack a true understanding of what they’re doing.

Politico Problems

Brendan Bordelon of Politico seems to be on an ongoing crusade to incept that Effective Altruism is some sort of cult associated with ‘shadowy billionaires’ and also a dark Silicon Valley conspiracy, that EA is dominating Washington’s AI policy (oh man, I wish, given what they want to do) and that all concerns about existential risk from AI are part of this dark conspiracy. He wants to vilify OpenPhil in particular.

Why? I have no idea.

I also have no idea why Politico, a highly respected news organization, is allowing him to do this over and over again, using language and framing that, while clearly within the rules of bounded distrust, should be beneath such an institution. No one could mistake these as anything but unhinged hatchet jobs.

The first complaint seems to be that a research grant was given out to a group that has also gotten a grant from OpenPhil, without a sufficiently ‘competitive process.’

The second complaint is a more general hatchet job: that there is this group that is daring to do politics, explain its concerns and lobby the government using arguments. The piece then uses tricks as basic as scare quotes and group demographic (including racial) composition to paint those advocating to try and stop us from dying as various bad things, and to conflate EA with all existential risk concerns.

Also, it has important tips:

One AI and biosecurity researcher in Washington said lawmakers and other policy professionals are being pushed toward a focus on existential AI risks by sheer force of repetition.

“It’s more just the object permanence of having that messaging constantly in your face,” said the researcher, who was also granted anonymity to avoid losing funding.

Did you hear that? Sheer force of repetition. It’s working!

We should expect hit jobs and propaganda and various dirty tricks to become more prominent and common as the stakes get higher. We are on an exponential. This is still early days.

Daniel Eth notes that reactions are adjusting to this new reality, that people are having fun with the absurdity.

Stefan Schubert: Professed tolerance is often skin-deep. Living multiple adults under the same roof is apparently unacceptably weird. [quotes from OP citing this as evidence and example of high weirdness]

Tescreal/acc: I could be totally wrong, but the levels of weirdness I’ve seen at DC EA adjacent events are disappointingly tame. I was promised multidimensional polycules, and [every] conversation is on housing policy or what you do for a living.

Julian Hazell: EAs have once again shown how cultish they are. I’ve heard reports from anonymous sources that EAs are living together in the same houses, to “save money on rent” and “have people to hang out with”. They’re calling this arrangement “having roommates” — more EA jargon!

Aaron Bregman: In addition to being very funny, this is also pretty misleading. DC EAs are substantially and noticeably less weird (for better and for worse; I actually think on the margin we should be weirder) than (to generalize) those in the Bay Area – not at all an ~identical culture.

Daniel Eth: Glad to see the reactions from EAs to this piece – our skin has thickened considerably. After the first hit piece against us, people were like “oh no does this mean we’re canceled?” and now the reaction is like “lol that they went there”

Not that negative press doesn’t matter at all, but the previous response was bordering on hysteria, which just isn’t helpful. Anyway, I continue to believe that periodic negative coverage is a price of doing business if you want to affect things on a large scale /shrug

Patrick Collison responds to the trend.

Patrick Collison: I’m not an effective altruist, but I find the recent genre of pieces like the below a bit strange. Essentially all of the major AI lab leaders agree that AI potentially poses enormous risks[1], as does a majority of the US public[2]. It’s not a crank view. EA was simply among the very earliest organized groups to perceive and act on this risk. In addition, EA’s contrarian concerns have elsewhere been shown to be reasonable: long before COVID, EAs stood out for their worry about major pandemics. Assessments that deride EA as a cult while failing to acknowledge these counterintuitive successes strike me as unreasonably uncharitable. If asked “which 2018 community now looks most prescient based on how the intervening 5 years unfolded?”, I think it’s hard to come up with better nominations than EA.

Nate Silver: I’m going to have quite a lot to say about effective altruism in the forthcoming book, much of it critical, but yeah it’s extremely frustrating that the DC/NYC press tends to pick the worst possible vibes-based criticisms.

Michael Vassar: This is actually an important source of information about the game the press is playing. If something is actually bad and the press hates it but can’t criticize it for being bad, that’s because the press can’t be seen being against bad things.

It is impressive how little the sets ‘things Effective Altruists deserve criticism for’ and ‘things supposedly serious people criticize Effective Altruists for’ ever intersect.

Cup of Coffee

You can safely skip this, but yes people really do make arguments like this, the original post here has 800k+ views, and it can be cathartic and also useful to gather responses in such situations.

Daniel Jeffries: Here’s the story of another technology that faced massive backlash in its time that will sound very familiar to today’s battles over #AI. Coffee.

No. Seriously. The historical story here is actually kind of cool and underappreciated, I’ve cut it down for length and relevance but you could check it out in full.

Today, when you’re picking up your coffee from the local hipster barista shop or mega-chain you probably never imagined it was once controversial. But for hundreds of years it faced slander, demonization, legal attacks and more.

Kings and queens saw coffee houses as breeding grounds for revolution and regularly shut them down, harassed owners, taxed and arrested them. In parts of the Islamic world many powerful leaders saw coffee as an intoxicant no different than alcohol and drugs and they attacked it on moral grounds (same story, different day) with fatwas.

Then he says, it is all just like AI.

After a technology triumphs, the people who come later don’t even realize the battle happened. To them it’s not even technology anymore. It’s a part of their life, just another thing. They just use it. To someone born today, a car or refrigerator or a cup of coffee is no different than a tree or a rock. It was always there.

The same will happen with AI, one way or another. It’s just too important and powerful of a technology for it to not find a way forward. There is simply no industry on Earth that won’t benefit from getting more intelligent.

The more people use AI, the more they will realize how amazing it is and how much they want it in their lives and the more doomer predictions of the end times will recede in their minds. It’s already happening. My step mother was using it to help with her writing and to spit out ideas about ideas for a raffle. My friend’s five year old daughter talks to the GPT voice interface in her native language and they create stories together.

That child will grow up using AI and trying to take it from her will be like trying to take away her iPad or her doll.

A decade from now, kids will grow up with AI and they’ll just use it. They won’t even think about it.

And if you told them there was once a battle over it they will look at you like you’re crazy and wonder what all the fuss was about.

Setting aside the larger issues for a second. Ignore that the central argument here is that ‘being scared of AI is like being scared of coffee.’

I cannot even imagine his predicted future world: A decade from now, where AI is everywhere and it is uncontroversial and everyone wonders what the fuss was about. It makes no sense.

Even if there were zero catastrophic or existential risks, and AI was hugely beneficial on net and everything its advocates want it to be, and I was deeply happy about that, there is zero way in hell that a decade from now ‘the fuss’ is going to be over or anyone is going to wonder what it was about. And that’s true even if the world we get is ‘normal’ in a way that seems impossible to me under even mundane tool-level AI.

Or, if such a world did exist, I would assume there was already some sort of ASI takeover in the background manipulating everything to make that happen, I guess? Because that’s the only way I can come up with that people would be that relaxed.

But also this is a claim that AI is as harmless as coffee, so people had a lot of fun with that part. Here are some highlights.

Eliezer Yudkowsky: Q: “This bridge you’re building — is it safe for me to drive my kids across?”

GOOD: “Here’s the design and the calculations. It’s well-established engineering, but more eyeballs won’t hurt.”

VERY BAD: “Being scared that my bridge will fail is like being scared of coffee!”

AISafetyMemes: I, for one, had no idea sometimes people had unfounded fears in the past, clearly fear is always wrong

Eliezer Yudkowsky: 100% of the cases where anybody has warned that a bad thing might happen, it hasn’t.

Sarah (Little Ramblings): it’s possible to draw an extremely tenuous analogy between early attempts to regulate coffee consumption and fears about AI, therefore you should be no more afraid of uncontrolled, superhuman agents than you are of your morning latte.

Johnathan Mannhart: What kind of lazy argument is this? Another technology that faced massive backlash as a refreshing drink was Radium. (An actual thing: http://en.m.wikipedia.org/wiki/Radithor.) Let me explain now through a tenuous connection of metaphors how your favourite drink therefore could kill you?

Matthew Yglesias: I hate this form of obviously fallacious argument. All of the people who make it know exactly what the problem with it is and they just say it anyway.

You cannot infer conclusions about the safety or benefits of any particular technology by just noting that technological progress is good in general. It is good (in general!) but some specific technologies are harmful and/or require safety regulation.

You could do the Coffee Panic Ha Ha Ha thread to refute people who worried about crack in the 1980s or the losers who thought maybe mass marketing of oxycodone might have some downsides.

Derek Thompson: During the height of crypto, this arg was everywhere and I like to replace the word “crypto” with “styrofoam cup with taped string”

As in: “Some critics claim that my styrofoam cup with taped string will not change the world…but critics also said that about the steam engine”

There’s also my favorite response, which was Davidad explaining that it was actually a pretty good idea for certain rulers to be scared of coffee.

Davidad: When Charles II tried to ban coffeehouses in 1675, he was correct that they were facilitating new information flows which were exploitable by threat actors against national security. In 1688, the Dutch used this vulnerability to persuade lords to politely invite them to invade England.

This is referred to, by the winners of history, as The Glorious Revolution. Neal Stephenson covers it in The Confusion, part of the excellent Baroque Cycle. In the Baroque Cycle, new scientific advances that most in the world don’t yet appreciate, such as calculus, the engine of raising water by fire and the issuance of paper money, are in the process of starting to remake the world. A group of entrepreneurs realize that those currently in charge are not good for business.

For these entrepreneurs, coffee turns out to be a key organizing force as well as a cognitive enhancement. Their move is motivated by a clash of human values that are remarkably identical except for who gets to make decisions (Catholic vs. Anglican and King vs. Parliament), and a feeling that the resulting situation and method of governance was bad for business. Effectively smarter than their opposition thanks to a combination of better training data, algorithmic improvements and their new affordances, they are able to outmaneuver a much stronger force to gain a decisive strategic advantage.

They decide to invite William of Orange, an alien agent they feel is better aligned to their interests, also known as the head of what is effectively the merchant kingdom of the Netherlands, to invade and take the throne, ceding a lot of control over the future to economic and alien forces.

A world of accelerating transformation and technological change resulted, in ways that were impossible to predict, the British empire spread around the world, and here we are today.

I am not saying that it did not work out. We do call it The Glorious Revolution.

I still can’t help but notice it is rather a little bit on the nose.

The OP is correct that after 300 years, or probably as few as 15 years, the technology will have diffused one way or another and people born then will look back and be shocked that there was even a debate about this.

The question is more like whether those people will be humans.

Davidad also points out that there was an important debate about whether to impose safety requirements on bridges.

Jason Crawford: This still blows my mind: in the late 1800s, ~25% of bridges built just collapsed

Davidad: In the 1850s, there was serious debate about whether there should be mandatory standards for the safety of bridges. Brunel—perhaps the greatest engineer of all time—was vehemently against. Eventually, standards were put in place, and radical innovation in bridge design continues.

All of that does not mean humanity would have been better off without coffee, let alone that we would have been better off with a ban on coffee rather than permitting it, with all the costs of trying to enforce such a ban.

However, I’d also point out that ‘coffee is actually net harmful’ is a highly valid perspective to hold?

Indeed, I hold that view.

I think that most people who drink coffee are, effectively, continuously borrowing energy and wakefulness from the future to spend in the present. Then in the future, they borrow again to pay back what was already spent. When they are unable to do so, it often does not go well. Meanwhile, you build up tolerance. The whole thing takes up substantial space in our collective brains, takes a bite out of our wallets, and causes more problems than it solves.

I do not drink coffee. I believe I am better off because I do not drink coffee, and most of you would be as well. I do eat chocolate, and in a true emergency I can use chocolate as a stimulant because I never built up a tolerance.

That is not to say that some people do not benefit. Yes, some people enjoy consuming coffee and do it because it on net brings them joy. Others use it judiciously and strategically, allowing them to borrow from unimportant times for important times, and get benefits that exceed their costs.

And of course, given everyone else is drinking coffee, sometimes you pay a social price for not consuming it, although I doubt this is that big a deal. No one ever seems to mind if I drink water or order a hot chocolate or get a pastry and a glass of milk.

I certainly would not try to ban coffee. That would not go well. I would not ban it any more than I would try to ban alcohol, which was vital to the formation of civilization and where we know well the results of trying to ban it. But I would emphasize that alcohol, even more than coffee, is not your friend, you would probably be better off if you did not drink, and you would also avoid severe tail risks that drinking could ruin your life as it often does.

Aligning a Smarter Than Human Intelligence is Difficult

John Wentworth releases his 2024 edition of The Plan. The Plan centers around understanding natural abstraction, which he sees as a robust bottleneck to making alignment progress. Lot of good stuff in here. This type of post makes me wish I had the time to focus on such matters.

Jessica Taylor translates her understanding of the Eliezer Yudkowsky perspective into her own language. I expect it would be highly useful if one was going to discuss these issues with Jessica in particular, and perhaps her framings will resonate better with you. A few of you would benefit from reading it, and you likely suspect who you are.

Her presentation has some places I believe she disagrees with Yudkowsky, but the perspective here is (I think) largely compatible. Issues of extremes and going out of one’s distribution and other related issues (in a very general sense) seem like they did not get sufficient attention here.

People Are Worried About AI Killing Everyone

Philosopher Daniel Dennett is coming around (link to short video, 2:27).

Daniel Dennett: The people who wave their hands and calmly & optimistically say, ‘we’re solving the alignment problem’, their very statement of that is a sign they don’t have an appreciation of how deep the problem is.

Cory Booker (D-NJ), perhaps? The context very much has its eye on the wrong balls and metaphors, but still, worried:

“I don’t mean to create stereotypes of tech bros, but we know that this is not an area that often selects for diversity of America,” Sen. Cory Booker (D-N.J.) told POLITICO in September.

“This idea that we’re going to somehow get to a point where we’re going to be living in a Terminator nightmare — yeah, I’m concerned about those existential things,” Booker said. “But the immediacy of what we’ve already been using — most Americans don’t realize that AI is already out there, from resumé selection to what ads I’m seeing on my phone.”

The emphasis is not where I would put it, and Terminator is an unfortunate metaphor as always, but he’s not exactly wrong.

The Lighter Side

This looks awesome, no?

Louis Anslow: Pixar should make a movie about adults, kids and moral panic

Image

If you actually took the track record of technology in Pixar and similar movies as the distribution of possibilities, but without the narrative causality that everything will turn out fine, and acted out the consequences, how would you react? What about Disney overall, or Marvel? That kid is in so much trouble.

Visions of the future.

Eliezer Yudkowsky: Soon to be possible with AI video generation:

– input: picture of your child, text saying what your child wants to do when they grow up

– output: video of the child, extrapolated to what they’ll look like when they’re older, sadly watching a robot do that job

I think people may be misunderstanding: I’m not saying there won’t be new jobs, nor am I predicting mass unemployment. I just think this is a cool idea for a generative AI app. You could call it DreamBreaker.

Kevin Barr: When I grow up, I want a robot to do my job while I watch. Checkmate.

Eliezer:

Image

Alternative app, same thing, except they’re happy about it, or to still be alive? Eliezer would doubtless consider ‘your child gets to watch AI do things’ to be a relatively good outcome.

Other visions of the future: alignment with stated values might backfire, you say?

Eliezer Yudkowsky: If you’re not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code — which no human can know or obey — and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

I mean, sure, of course, why not.

Image
Comments

Or the idea that, if AI was going to take over most of the economy, you would actually expect something very close to what we are seeing, and for most people to keep sleeping on this.


+1 it's frustrating that Hanson claims my model is inconsistent with what we are seeing whilst his model is consistent, when in fact the opposite is true -- he's already lost a bet about LLM revenue: < $1Bn Revenue from GPT Language Models | Metaculus 

For my part I was more bullish than Hanson on this forecasting question but not as bullish as I should have been! Shoulda trusted my model more.

Min Choi: This is bonkers… You can’t tell if these photos are AI generated

Yes I can. Hands and text look freaky in all of them.

I would perhaps urge Tyler Cowen to consider raising certain other theories of sudden leaps in status, then? To actually reason out what would be the consequences of such technological advancements, to ask what happens?

 

At a guess, people resist doing this because predictions about technology are already very difficult, and doing lots of them at once would be very very difficult.

But would it be possible to treat increasing AI capabilities as an increase in model or Knightian uncertainty? It feels like questions of the form "what happens to investment if all industries become uncertain at once? If uncertainty increases randomly across industries? If uncertainty increases according to some distribution across industries?" should be definitely answerable. My gut says the obvious answer is that investment shifts from the most uncertain industries into AI, but how much, how fast, and at what thresholds are all things we want to predict.

The paper suggests strong evidence of widespread task contamination. Performance on data sets released after the training data creation date falls off a cliff.

 

I haven't read the paper so maybe it addresses this but... generally speaking when people create new data sets they try to make them more difficult than the already-existing ones, since the already-existing ones are getting saturated.