LESSWRONG

Mikhail Samin

My name is Mikhail Samin (diminutive Misha, @Mihonarium on Twitter, @misha on Telegram). 

Humanity's future can be enormous and awesome; losing it would mean our lightcone (and maybe the universe) losing most of its potential value.

My research currently focuses on AI governance and on improving stakeholders' understanding of AI and AI risks. On technical AI notkilleveryoneism, I only have takes on what seems to me to be the obvious shallow stuff; still, many AI safety researchers have told me our conversations improved their understanding of the alignment problem.
 

I'm running two small nonprofits: the AI Governance and Safety Institute and the AI Safety and Governance Fund. Learn more about our results and donate: aisgf.us/fundraising


I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (why?).

In the past, I launched the most-funded crowdfunding campaign in the history of Russia (it was to print HPMOR! We printed 21,000 copies, i.e., 63,000 physical books) and founded audd.io, which allowed me to donate >$100k to EA causes, including >$60k to MIRI.

[Less important: I've also started a project to translate 80,000 Hours, a career guide that helps people find a fulfilling career that does good, into Russian. Impact and effectiveness aside, for a year I was the head of the Russian Pastafarian Church: a movement claiming to be a parody religion, with 200,000 members in Russia at the time, trying to increase the separation between religious organisations and the state. I was a political activist and a human rights advocate. I studied relevant Russian and international law and wrote appeals that won cases against the Russian government in courts; I was able to protect people from unlawful police action. I co-founded the Moscow branch of the "Vesna" democratic movement, coordinated election observers in a Moscow district, wrote dissenting opinions for members of electoral commissions, helped Navalny's Anti-Corruption Foundation, helped Telegram with internet censorship circumvention, and participated in and organized protests and campaigns. The large-scale goal was to build a civil society and turn Russia into a democracy through nonviolent resistance. That goal wasn't achieved, but some of the more local campaigns were successful. It felt important and was also mostly fun, except for being detained by the police. I think it's likely the Russian authorities would imprison me if I ever visit Russia.]

Posts


Wikitag Contributions

Comments

Mikhail Samin's Shortform (6 karma, 3y, 300 comments)
New Statement Calls For Not Building Superintelligence For Now
Mikhail Samin · now · 20

Here are some comments by signers:

Rep. Don Beyer


I don’t think he actually signed?

Mikhail Samin's Shortform
Mikhail Samin · 7d · 50

Some of those; and some people who talk to those.

Mikhail Samin's Shortform
Mikhail Samin · 7d · 20

Talking to many people.

Mikhail Samin's Shortform
Mikhail Samin · 7d* · 176

Horizon Institute for Public Service is not x-risk-pilled

Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled.

Edit: some people reached out to me to say that they've had different experiences (with a minority of Horizon people).

We’ve automated x-risk-pilling people
Mikhail Samin · 10d · 20

Hmm, what are you referring to?

Mikhail Samin's Shortform
Mikhail Samin · 11d · 20

Thanks, that’s helpful!

(Yep, it was me ranting about someone betraying my trust in a fairly sad way: someone I really didn't expect to do that, and who was very non-smart/weirdly scripted about doing it. It was very surprising until I learned that they hadn't read planecrash. I normally don't go around viewing anyone this way, and I dislike it when (very rarely! I can't recall any other situation like that!) I do feel this way about someone.)

Mikhail Samin's Shortform
Mikhail Samin · 11d · 20

I think many people around me would've made the same assumption that this particular person had read planecrash. I don't want to say more, as I probably shouldn't say that they specifically did that: I think their goals are still similar to mine, even if they're very mistaken and are doing some very counterproductive things, and I definitely want to err on the side of not harming someone's life/social status without a strong reason why it would be good for the community to know a fact about them.

The NPC-like behavior was mostly due to them doing what they seemed to see as simply their role, without willingness to really consider arguments; planecrash was just a thing that would've given them the argument for why you shouldn't take the specific actions they took. (Basic human decency and friendship would also suffice; but if someone had read planecrash and still did the thing, I would not want to deal with them in any way in the future, the way you wouldn't want to deal with someone who screws you over for no reason.)

"You didn't act like I think the fictional character Keltham would have" is not a reasonable criticism of anyone.

I agree; it was largely what they did, which has nothing to do with planecrash. There are just some norms that I expect it would be good for the community to have, and that one implicitly learns from planecrash.

Mikhail Samin's Shortform
Mikhail Samin · 12d* · 520

We're sending copies of the book to everyone with >5k followers!

If you have >5k followers on any platform (or know anyone who does), (ask them to) DM me the address for a physical copy of If Anyone Builds It, or an email address for a Kindle copy.

So far, I've sent 13 copies to people with 428k followers in total.

Mikhail Samin's Shortform
Mikhail Samin · 12d · 20

(I’m curious what caused two people to downvote this to -18.)

We’ve automated x-risk-pilling people (51 karma, 22d, 33 comments)
OpenAI Claims IMO Gold Medal (77 karma, 3mo, 74 comments)
No, Futarchy Doesn’t Have This EDT Flaw (33 karma, 4mo, 28 comments)
Superintelligence's goals are likely to be random (6 karma, 7mo, 6 comments)
No one has the ball on 1500 Russian olympiad winners who've received HPMOR (81 karma, 9mo, 21 comments)
How to Give in to Threats (without incentivizing them) (72 karma, 1y, 34 comments)
[Question] Can agents coordinate on randomness without outside sources? (11 karma, 1y, 16 comments)
Claude 3 claims it's conscious, doesn't want to die or be modified (76 karma, 2y, 118 comments)
FTX expects to return all customer money; clawbacks may go away (33 karma, 2y, 1 comment)
The Tree of AI Alignment on Arbital (3 months ago, +37676)
Decision theory (7 months ago, +142)
Functional Decision Theory (7 months ago, +242)
Translations Into Other Languages (3 years ago, +84/-60)