LESSWRONG

Sean_o_h

Comments
China proposes new global AI cooperation organisation
Sean_o_h · 1mo · 152

The text of the plan is here:
http://hk.ocmfa.gov.cn/eng/xjpzxzywshd/202507/t20250729_11679232.htm

Features a section on AI safety:
"Advancing the governance of AI safety. We need to conduct timely risk assessment of AI and propose targeted prevention and response measures to establish a widely recognized safety governance framework. We need to explore categorized and tiered management approaches, build a risk testing and evaluation system for AI, and promote the sharing of information as well as the development of emergency response of AI safety risks and threats. We need to improve data security and personal information protection standards, and strengthen the management of data security in processes such as the collection of training data and model generation. We need to increase investment in technological research and development, implement secure development standards, and enhance the interpretability, transparency, and safety of AI. We need to explore traceability management systems for AI services to prevent the misuse and abuse of AI technologies. We need to advocate for the establishment of open platforms to share best practices and promote international cooperation on AI safety governance worldwide."

The 4-Minute Mile Effect
Sean_o_h · 4mo · 50

For more colour, see this article, which shows the same trend on the same timeline for a bunch of other distances: steady progress until about 1940, a 10-15 year WW2 gap, then further steady progress from the mid-1950s on.
https://www.scienceofrunning.com/2017/05/the-roger-bannister-effect-the-myth-of-the-psychological-breakthrough.html?v=47e5dceea252

The 4-Minute Mile Effect
Sean_o_h · 4mo · 60

Sorry, the whole "impossibility of the 4-minute mile" / "4-minute mile effect" story is a myth.

Bannister made his (successful) attempt in May 1954 because he knew John Landy in particular, but also a few others, had set their sights on it and were getting close, and he thought (as Landy did too) that Landy would get it that year as soon as he got to Europe. They were both right: Landy did it six weeks later.

The reason the record had stayed just over 4 minutes for so long was WWII interrupting athletics: Hägg and Andersson had got it down to 4:01.4 pretty quickly between 1942 and 1945. Sports folks at the time knew it was going to go.

"The claim that a four-minute mile was once thought to be impossible by "informed" observers was and is a widely propagated myth created by sportswriters and debunked by Bannister himself in his memoir, The Four Minute Mile (1955)."
https://en.wikipedia.org/wiki/Roger_Bannister#Sub-4-minute_mile

Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
Sean_o_h · 2y · 105

(Disclaimer: I am one of the coauthors.) Also, none of the linked comments by the coauthors actually praises the paper as good and thoughtful. They all say much the same thing, which is "pleased to have contributed" plus a nice comment about the lead author (a fairly early-career scholar who did lots and lots of work and was good to work with). I called it "timely", as the topic of open-sourcing was very much live at the time.

 

(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I'm glad to see it).

[AN #90]: How search landscapes can contain self-reinforcing feedback loops
Sean_o_h · 5y · 10

I believe the working title is 'Intelligence Rising'.

Crisis and opportunity during coronavirus
Sean_o_h · 5y · 40

This is super awesome. Thank you for doing this.

Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos"
Sean_o_h · 6y · 60

Johnson was perhaps below average in his application to his studies, but it would be a mistake to think he is/was a pupil of below-average intelligence.

Another AI Winter?
Sean_o_h · 6y* · 40
"I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy."

With Mustafa Suleyman, the cofounder most focused on applied work (and leading DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being a primarily research company with fewer applied staff (an area that can soak up a lot of staff), resulting in a 20% reduction of staff, probably wouldn't provide a lot of evidence (and is probably not what Robin had in mind). A reduction of research staff, on the other hand, would be very interesting.

2018 AI Alignment Literature Review and Charity Comparison
Sean_o_h · 7y · 60

(Cross-posted to the EA forum.) (Disclosure: I am executive director of CSER.) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For completeness, I include below 14 additional publications authored or co-authored by CSER researchers in the relevant time period that are not covered above (plus one that falls just outside it but was not previously featured):

Global catastrophic risk:

Ó hÉigeartaigh. The State of Research in Existential Risk

Avin, Wintle, Weitzdorfer, O hEigeartaigh, Sutherland, Rees (all CSER). Classifying Global Catastrophic Risks

International governance and disaster governance:

Rhodes. Risks and Risk Management in Systems of International Governance.

Biorisk/bio-foresight:

Rhodes. Scientific freedom and responsibility in a biosecurity context.

Just missing the cutoff for this review, but not included last year and so possibly of interest, is our bioengineering horizon scan (published November 2017): Wintle et al (incl Rhodes, O hEigeartaigh, Sutherland). Point of View: A transatlantic perspective on 20 emerging issues in biological engineering.

Biodiversity loss risk:

Amano (CSER), Szekely… & Sutherland. Successful conservation of global waterbird populations depends on effective governance (Nature publication)

CSER researchers as coauthors:

(Environment) Balmford, Amano (CSER) et al. The environmental costs and benefits of high-yield farming

(Intelligence/AI) Bhatnagar et al (incl Avin, O hEigeartaigh, Price): Mapping Intelligence: Requirements and Possibilities

(Disaster governance): Horhager and Weitzdorfer (CSER): From Natural Hazard to Man-Made Disaster: The Protection of Disaster Victims in China and Japan

(AI) Martinez-Plumed, Avin (CSER), Brundage, Dafoe, O hEigeartaigh (CSER), Hernandez-Orallo: Accounting for the Neglected Dimensions of AI Progress

(Foresight/expert elicitation) Hanea… & Wintle The Value of Performance Weights and Discussion in Aggregated Expert Judgments

(Intelligence) Logan, Avin et al (incl Adrian Currie): Uncovering the Neural Correlates of Behavioral and Cognitive Specialization

(Intelligence) Montgomery, Currie et al (incl Avin). Ingredients for Understanding Brain and Behavioral Evolution: Ecology, Phylogeny, and Mechanism

(Biodiversity) Baynham-Herd, Amano (CSER), Sutherland (CSER), Donald. Governance explains variation in national responses to the biodiversity crisis

(Biodiversity) Evans et al (incl Amano). Does governance play a role in the distribution of invasive alien species?

Outside of the scope of the review, we produced on request a number of policy briefs for the United Kingdom House of Lords on future AI impacts; horizon-scanning and foresight in AI; and AI safety and existential risk, as well as a policy brief on the bioengineering horizon scan. Reports/papers from our 2018 workshops (on emerging risks in nuclear security relating to cyber; nuclear error and terror; and epistemic security) and our 2018 conference will be released in 2019.

Thanks again!

2018 AI Alignment Literature Review and Charity Comparison
Sean_o_h · 7y · 90
"It is possible they had timing issues whereby a substantial amount of work was done in earlier years but only released more recently. In any case they have published more in 2018 than in previous years."

(Disclosure: I am executive director of CSER.) Yes. As I described in relation to last year's review, CSER's first postdoc started in autumn 2015, and most started in mid-2016. The first rounds of research and papers began being completed throughout 2017, with most papers then going to peer-reviewed journals. 2018 is more indicative of run-rate output, although 2019 will be higher.

Throughout 2016-2017, considerable CSER leadership time (mine in particular) also went into getting http://lcfi.ac.uk/ up and running, which will increase our output on AI safety/strategy/governance (although CFI also separately works on near-term and non-AI-safety-related topics).

Thank you for another detailed review! (response cross-posted to EA forum too)

Posts

28 · New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan) · 10y · 1 comment
14 · New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK) · 10y · 3 comments
26 · Postdoctoral research positions at CSER (Cambridge, UK) · 10y · 8 comments
15 · Cambridge (England) lecture: Existential Risk: Surviving the 21st Century, 26th February · 12y · 5 comments
60 · Update on establishment of Cambridge's Centre for Study of Existential Risk · 12y · 15 comments
22 · Vacancy at the Future of Humanity Institute: Academic Project Manager · 12y · 0 comments