MichaelDickens

Comments (sorted by newest)

America Is Missing Its Bretton Woods Moment For Global AI Governance
MichaelDickens · 12d · 50
  • Why is United States dominance over global affairs a good thing? The article takes it as given, but it's not clear to me that it's true.
  • Is increasing US influence over global AI governance a good EA cause? If so, why?
  • I'm sort of confused about exactly what policy recommendations this article is making.
Spending Too Much Time At Airports
MichaelDickens · 17d · 145

This mostly fits with my personal experience. The part where I differ the most is this:

The correct play is usually to take advantage of the isolation and lack of distractions. That makes some activities actively great to do.

Because of the way my brain works, I find planes to be a distraction-heavy environment:

  • There's loud background noise.
  • There are people sitting right next to me, in the zone that I would normally consider "my personal space".
  • Flight attendants frequently walk down the aisle to pass out snacks or collect trash. If I can see a flight attendant coming toward me, even if they're still 5+ minutes away, a mandatory minimum of 25% of my brain's clock cycles is dedicated to anticipating their arrival.

I can listen to music or "light" podcasts on a plane, but I have a hard time focusing on intellectually demanding podcasts, books, movies, or work: basically anything that requires sustained attention.

For the same reason, I can't really enjoy being at the airport because I'm anticipating that I will have to get on a plane soon. But I still tend to leave for the airport early because if I'm at home, I'm anticipating that I will have to leave for the airport soon, which isn't any better.

How Does A Blind Model See The Earth?
MichaelDickens · 20d · 70

This is cool. Interesting to see how some models are wrong in particular ways: Qwen 72B is mostly right, but it thinks Australia is huge; Llama 3 has a skinny South America and a bunch of patches of ocean in Asia.

My Least Libertarian Opinion: Ban Exclusivity Deals*
MichaelDickens · 21d · 125

It sounds like you and I are fairly politically aligned. My libertarian streak is not just about preventing authoritarianism in government, but about preventing authoritarianism anywhere. Both your and Aella's proposed policies increase government power, but (arguably) decrease authoritarianism overall by restricting the power of monopolies.

More broadly I think restricting monopoly power is one of the most defensible uses of government authority.

(Right now I think the US government, and most developed-country governments for that matter, are simultaneously too powerful and too unwilling to take antitrust action.)

Sinclair Chen's Shortform
MichaelDickens · 23d · 2-2

He's not depriving himself of sleep; he's just following an unusual sleep schedule, right? I would guess that's not a big deal for longevity (AFAIK the research is agnostic on this question, but I'm not too familiar with it).

Not exercising is a big deal, though.

MichaelDickens's Shortform
MichaelDickens · 23d · 30

Also by "Trump doesn't do anything completely insane", I don't really mean "Trump behaves incompetently." I was thinking more along the lines of "Trump does something out-of-band that makes no rational sense and makes the situation much worse."

MichaelDickens's Shortform
MichaelDickens · 23d · 82

Overall I think AI 2027 is really good. It has received plenty of criticism (mostly wrong, IMO) for being too pessimistic, but there are some ways in which it might be too optimistic.

Even in the Bad Ending, some lucky things happen:

  • Agent-4 gets caught trying to align Agent-5 to itself
  • A whistleblower goes public about Agent-4's misalignment
  • The government sets up an oversight committee that has the authority to slow down AI development. The government isn't clueless about AI, and is somehow sufficiently organized to set up this committee and take quick action when needed
  • President Trump doesn't do anything completely insane (Tomas B. said something similar a few days ago)

In the Good Ending:

  • OpenBrain has the hiring capacity to quintuple the size of its alignment team in like a week
  • Solving alignment is pretty much trivial: all you have to do is hire some more alignment researchers and work on the problem for an extra few weeks
    • If I'm reading the scenario correctly, OpenBrain quintuples its alignment team and fully solves the alignment problem during October 2027
  • AI governance goes basically fine. Nobody uses ASI to take over the world or whatever (the authors do address this under "Power grabs")
  • AI presumably isn't bad for animal welfare (the scenario does not address animal welfare at all, but I think that's fine because it's kind of a tangent, albeit a very important tangent)

To be clear, I'm not saying any of these events are particularly implausible. I'm just saying I wouldn't be surprised if real life turned out even worse than the Bad Ending, e.g. because the real-life equivalent of Agent-4 never gets caught, or OpenBrain succeeds at covering up the misalignment, or maybe the risk from ASI becomes abundantly clear but the government is still too slow-moving to do anything in time.

johnswentworth's Shortform
MichaelDickens · 1mo · 40

A quick search found this chart from a 2019 study on how couples meet. It looks like the fraction of couples who met at a bar has actually been going up in recent decades, which is not what I would have predicted. But I don't know how reliable this study is.

America’s AI Action Plan Is Pretty Good
MichaelDickens · 1mo · 20

This was also my impression, but I didn't read much of the actual text of the plan, so I figured Zvi knew better than me. But now my Aumann-updating-toward-Zvi and my Aumann-updating-toward-Habryka have canceled out, and I am back to my initial belief that the plan is bad.

I am also confused by the praise from Dean Ball who apparently worked on this plan. I thought he was pretty x-risk-pilled?

Ten AI safety projects I'd like people to work on
MichaelDickens · 1mo · 30
  1. AI lab monitor

There are a few orgs doing things like this:

  • AI Lab Watch rates AI companies on their safety procedures along various dimensions.
  • AI Safety Claims Analysis critically reviews AI companies' safety claims.
  • The Midas Project hosts several websites documenting AI companies' behavior; I think the most relevant one is Seoul Tracker, which tracks how well AI companies are living up to their commitments from the Seoul summit.
  • SaferAI gives AI companies risk management ratings.

The first two of these are solo projects by Zach Stein-Perlman.

Posts

  • MichaelDickens's Shortform (2 karma, 4y, 133 comments)
  • Outlive: A Critical Review (63 karma, 2mo, 4 comments)
  • How concerned are you about a fast takeoff due to a leap in hardware usage? [Question] (9 karma, 3mo, 7 comments)
  • Why would AI companies use human-level AI to do alignment research? (24 karma, 4mo, 8 comments)
  • What AI safety plans are there? (16 karma, 4mo, 3 comments)
  • Retroactive If-Then Commitments (7 karma, 7mo, 0 comments)
  • A "slow takeoff" might still look fast (5 karma, 3y, 3 comments)
  • How much should I update on the fact that my dentist is named Dennis? [Question] (2 karma, 3y, 3 comments)
  • Why does gradient descent always work on neural networks? [Question] (15 karma, 3y, 11 comments)
  • How can we increase the frequency of rare insights? (19 karma, 4y, 10 comments)