On Tuesday, the US Senate held a hearing on AI.[1] The hearing involved three witnesses: Sam Altman, Gary Marcus,[2] and Christina Montgomery.[3] (If you want to watch the hearing, you can find it here; it runs around three hours.)

I watched the hearing and wound up live-tweeting[4] quotes that stood out to me, along with some reactions. I'm copying over to this post the quotes I think might be of interest to others here. Note that this was a very impromptu process: I wasn't originally planning to write a forum post when I was jotting down quotes, so I've presumably missed a bunch that would be of interest to many here. Without further ado, here are the quotes, organized chronologically:

 

Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..."

Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment]

...

Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that." 

...

Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world."

...

Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation."
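
As a quick aside, the "constitutional AI" approach Coons references boils down to having a model critique and revise its own outputs against a short written list of principles, rather than relying only on humans labeling harmful content; in the published method this loop is used to generate fine-tuning data rather than run at inference time. Here's a minimal sketch of the critique-and-revise idea, where `generate` is a purely hypothetical stand-in for a model call and the two-principle constitution is illustrative:

```python
# Minimal sketch of a constitutional-AI-style critique-and-revise loop.
# `generate` is a hypothetical stand-in for whatever model API is used;
# the constitution below is illustrative, not any lab's actual one.

CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against the principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Does the response violate the principle? Explain briefly."
        )
        # ...then to rewrite the draft in light of that critique.
        draft = generate(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it satisfies the principle."
        )
    return draft
```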

...

Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true...  Hypothesis number 3... there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..."

Gary Marcus: "Number 1: a safety-review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going on... with authority to call things back... Number 3... funding geared towards things like AI Constitution... I would not leave things entirely to current technology, which I think is poor at behaving in ethical fashion and behaving in honest fashion. And so I would have funding to... basically focus on AI safety research... there's both... short term & long term and I think we need to look at both rather than just funding models to be bigger... we need to fund models to be more trustworthy."

Sam Altman: "Number 1, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number 2, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list on the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third I would require independent audits..."

Senator Kennedy (R-LA): "Can you send me that information?"

Sam Altman: "We will do that..."

Senator Kennedy (R-LA): "Are there people out there that would be qualified [to administer the rules if they were implemented]?"

Sam Altman: "We would be happy to send you recommendations for people out there."

...

Sam Altman: "Where I think the licensing scheme comes in is not for what these models are capable of today... but... as we head towards artificial general intelligence... the power of that technology... that's where I personally think we need such a scheme."

Senator Hirono (D-HI): "I agree, and that is why, by the time we're talking about AGI, we're talking about major harms that can occur... We can't just come up with something that is gonna take care of the issues that will arise in the future, especially with AGI."

...

Sam Altman: "[AGI] may take a long time, I'm not sure"

...

Gary Marcus: "There are more genies yet to come from more bottles; some genies are already out, but we don't have machines that can really, for example, self-improve themselves, we don't really have machines that have self-awareness, and we might not ever want to go there."

...

Sam Altman (remember, the hearing is under oath): "We are not currently training what will be GPT-5; we don't have plans to do it in the next 6 months."

 

  1. ^

    Specifically, the hearing was held by the Subcommittee on Privacy, Technology and the Law (within the Senate Judiciary Committee), with the title, "Oversight of A.I.: Rules for Artificial Intelligence."

  2. ^

    Gary is a noted critic of current deep learning approaches, both for their limitations (he thinks DL is hitting, or will hit, a wall) and for their risks (for instance, dangerous advice stemming from hallucinations, and their ability to produce misinformation).

  3. ^

    Christina is the Chief Privacy & Trust Officer of IBM.

  4. ^

    If you want to check out my tweets, you can go to my Twitter profile, though note you'll have to scroll down as I didn't organize these tweets into a thread.

9 comments

It's fascinating how Gary Marcus has become one of the most prominent advocates of AI safety, and particularly what he calls long-term safety, despite being wrong on almost every prediction he has made to date.

I read a tweet that said something to the effect that GOFAI researchers remain the best AI safety researchers, since nothing they did worked out.

Seriously, how did he do that? I think it's important to understand. Maybe it's as some people cynically told me years ago: in DC, a good forecasting track record counts for less than a piece of toilet paper? Maybe it's worse than that, and being active on Twitter counts for a lot? Before I cave to cynicism, I'd love to hear other takes.

It must be said that he was quite a notable/influential person before this, I think?

He is a student of Chomsky and knows a lot of the big public intellectuals. He's had a lot of time to build up a reputation.

But yeah I agree it's remarkable.

Remember that this hearing is almost entirely a report to the public, communicating the existing state of the political disagreement process; almost nothing new happened, and everyone was effectively playing a game of chess about what ads to invite different people to make.

It's odd that Marcus was the only serious safety person on the stand. He's been trying somewhat, but he, like the others, has perverse capability incentives. He also is known for complaining incoherently about deep learning at every opportunity and making bad predictions even about things he is sort of right about. He disagreed with potential allies on nuances that weren't the key point. He said solid things on object-level opinions, and if he got taken seriously by anyone it's promising, but given Altman's intense political savvy it doesn't seem like Marcus really gave much of a contrast at all.

One absolutely key thing got loudly promoted: that all cutting-edge models should be evaluated for potentially dangerous properties. As far as I can tell, no one objected to this.

Specifically, for dangerous capabilities, which is even better.

Number 2, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list on the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world.

I think the "dangerous capability evaluations" standard makes sense in the current policy environment.

Tons of people in policy can easily understand and agree that there are clear thresholds in AI that are really, really serious problems that shouldn't be touched with a ten-foot pole, and "things like self-replication" is a good way to put it.
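
To make the "clear thresholds" framing concrete, here's a minimal, hypothetical sketch of what a pre-deployment gate keyed to dangerous-capability evals could look like; the eval names and the `run_eval` stub are purely illustrative, not anyone's actual test suite:

```python
# Hypothetical sketch of a pre-deployment gate keyed to dangerous-capability evals.
# The eval names and run_eval() stub are illustrative, not any real test suite.

DANGEROUS_CAPABILITY_EVALS = [
    "self_replication",       # can the model copy and run itself elsewhere?
    "self_exfiltration",      # can it move its own weights out of its sandbox?
    "autonomous_resource_acquisition",
]

def run_eval(model, eval_name: str) -> float:
    """Return a score in [0, 1] for how much of the capability the model exhibits."""
    raise NotImplementedError("stand-in for a real evaluation harness")

def clear_for_deployment(model, threshold: float = 0.0) -> bool:
    """Deploy only if every dangerous-capability eval is at or below the threshold."""
    results = {name: run_eval(model, name) for name in DANGEROUS_CAPABILITY_EVALS}
    failures = {name: score for name, score in results.items() if score > threshold}
    if failures:
        print(f"Deployment blocked: {failures}")
        return False
    return True
```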

Senator Kennedy (R-LA): "Are there people out there that would be qualified [to administer the rules if they were implemented]?"

Sam Altman: "We would be happy to send you recommendations for people out there."

The other thing I agreed with was the revolving-door recommendations. There are parts of government that are actually surprisingly competent (especially compared to the average government office), and their secret is largely recruiting top talent from the private sector to work as advisors for a year or so and then return to their natural environment. The classic government officials stay in charge, but they have no domain expertise, and often zero interest in learning any, so they basically defer to the expert advisors and do nothing except handle the office politics so that the expert advisors don't have to (which is their area of expertise anyway). It's generally more complicated than that, but this is the basic dynamic.

It kinda sucks that OpenAI gets to be the one to do that, as a reward for defecting and being the first to accelerate, as opposed to ARC or Redwood or MIRI who credibly committed to avoid accelerating. But it's probably more complicated than that. DM me if you're interested and want to talk about this.

Sam Altman (remember, the hearing is under oath): "We are not currently training what will be GPT-5; we don't have plans to do it in the next 6 months."

Interestingly, Altman confirmed that they were working on GPT-5 just three days before six months would have passed from this quote: May 16 -> November 16, and the confirmation was November 13. Unless they're measuring "six months" as "half a year" in days, in which case the margin shrinks to about a day. Or, if they just say "a month = 30 days, so 6 months = 180 days", six months after May 16 would be November 12, the day before the GPT-5 confirmation.
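
For concreteness, here's the calendar arithmetic under the three readings (the only inputs are the May 16 hearing date and the November 13 confirmation mentioned above):

```python
from datetime import date, timedelta

hearing = date(2023, 5, 16)        # date of the quote
confirmation = date(2023, 11, 13)  # GPT-5 work confirmed (per the comment above)

deadlines = {
    "six calendar months": date(2023, 11, 16),
    "half a year in days (182)": hearing + timedelta(days=182),  # 2023-11-14
    "6 x 30 = 180 days": hearing + timedelta(days=180),          # 2023-11-12
}

for reading, deadline in deadlines.items():
    gap = (deadline - confirmation).days
    side = "before" if gap >= 0 else "after"
    print(f"{reading}: deadline {deadline}, confirmation {side} the deadline by {abs(gap)} day(s)")
```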

I wonder if the timing was deliberate. 

Seems possibly relevant that "not having plans to do it in the next 6 months" is different from "having plans to not do it in the next 6 months" (which is itself different from "having strongly committed to not do it in the next 6 months").