Sebastian Schmidt


Comments

I agree that international coordination seems very important. 
I'm currently unsure about how best to think about this. One way is to focus on bilateral coordination between the US and China, as it seems they're the only actors who can realistically build AGI in the coming decade(s).

Another way is to attempt something more inclusive by also focusing on actors such as the UK, the EU, India, etc.

An ambitious proposal is the Multinational AGI Consortium (MAGIC). It clearly misses many important components and considerations, but I appreciate the intention and underlying ambition.

Thanks for this - I wasn't aware. This also makes me more disappointed with the voluntary commitments at the recent AI Safety Summit. As far as I can tell (based on your blog post), the language used around external evaluations was also quite soft:
"They should also consider results from internal and external evaluations as appropriate, such as by independent third-party evaluators, their home governments[3], and other bodies their governments deem appropriate."

I wonder if it'd be possible to have much stronger language around this at next year's Summit.

The EU AI Act is the only regulatory framework that isn't voluntary (i.e., companies have to comply with it), and even it doesn't require external evaluations (the Act uses the wording "conformity assessment") for all high-risk systems.

Any takes on what our best bet is for making high-quality external evals mandatory for frontier models at this stage?

This is a valid concern, but I'm fairly certain Science (the journal - not the field) handled this well. Largely because they're somewhat incentivized to do so (otherwise it could have very bad optics for them) and must have experienced this several times before. I also happen to know one of the senior authors who is significantly above average in conscientiousness.

Good point. Just checked - here's the ranking of Chinese LLMs among the top 20:
7. Yi-Large (01 AI)
12. Qwen-Max-0428 (Alibaba)
15. GLM-4-0116 (Zhipu AI)
16. Qwen1.5-110B-Chat (Alibaba)
19. Qwen1.5-72B-chat (Alibaba)

Of the three companies, only Zhipu AI signed the agreement.

Thanks for this analysis - I found it quite helpful and mostly agree with your conclusions. 
Looking at the list of companies who signed the commitments, I wonder why Baidu isn't on there. Seems very important to have China be part of the discussion, which, AFAIK, mainly means Baidu.

I am honored to be part of enabling more people from around the world to contribute to the safe and responsible development of AI. 

Thanks for posting.

The hindrances you mention seem relevant but also closely related. In particular:
"Poorly-tested implementation: A disconnection between engaging educational content and its practical application, underscoring the importance of field-tested guidance that works in varied real-life situations."

I'm excited to see advice that has been tested by working closely with individuals. I'm reminded of a claim Scott Miller (who played a major role in evidence-based psychotherapy) made on a podcast (paraphrasing): "We have an implementation problem."

Great! I'd expect most people on there are. I know for sure that Paul Rohde and James Norris (the founder) are aware. My rate depends on the people I work with, but $200-$300 is standard.

Thank you. This is a really excellent post. I'd like to add a few resources and providers:
1. EA Mental Health Navigator: https://www.mentalhealthnavigator.co.uk/
2. Overview of providers on the EA Mental Health Navigator (not everyone there is familiar with alignment in significant ways): https://www.mentalhealthnavigator.co.uk/providers
3. Upgradable has some providers who are quite well-informed about alignment: https://www.upgradable.org/
4. If permissible, I'd like to add myself as a provider (coach), though I'm not taking on any coachees at present.

So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’

Re-reading this after four years. It is still so elegantly beautiful. I don't understand why this wasn't part of the Sequences Highlights. I would have put it under "thinking better on purpose" or "the laws governing beliefs".
