Matrice Jacobine

Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments

Sorted by Newest
Jacob_Hilton's Shortform
Matrice Jacobine · 6d · 1, -2

I think one can make a stronger claim: by the Curry-Howard isomorphism, a superhuman (constructive?) mathematician would near-definitionally be a superhuman (functional?) programmer as well.
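To make the correspondence concrete, here is a minimal Lean 4 sketch (an illustration added here, not part of the original comment): under Curry-Howard, a proof term for a proposition is literally a functional program, so proving theorems and writing programs are the same activity.

```lean
-- Curry-Howard: a proof of P → (P → Q) → Q is a higher-order function.
theorem modus_ponens (P Q : Prop) : P → (P → Q) → Q :=
  fun hp hpq => hpq hp

-- The same term read at ordinary data types is reverse function application:
def pipe (α β : Type) : α → (α → β) → β :=
  fun a f => f a
```

The proof term `fun hp hpq => hpq hp` is identical in both definitions; only the universe (Prop vs Type) differs.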

Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment
Matrice Jacobine · 7d · 1, 0

Trying to outline the cruxes:

  • If you think AI safety requires safety research, differential acceleration, etc., and you trust AI companies to deliver them, your best-bet political affiliation will be with tech-industry-friendly bipartisan centrists.
  • If you think AI safety requires safety research, differential acceleration, etc., and you don't trust AI companies to deliver them, your best-bet political affiliation will be with tech-friendly progressives.
  • If you think AI safety requires pausing or stopping all AI research as soon as possible through an international agreement, your best-bet political affiliation will be with anti-tech progressives, as anti-tech conservatives will recoil at the "international agreement" aspect.
  • If you think AI safety requires pausing or stopping all AI research as soon as possible, and no international agreement is needed because every country should independently realize that AGI will kill them all, your best-bet political affiliation will be with anti-tech people in general, whether progressives or conservatives, and probably more with anti-tech conservatives if you expect them to hold more political power within AGI timelines.
The Toxoplasma of AGI Doom and Capabilities?
Matrice Jacobine · 7d · 1, 0

He was a commenter on Overcoming Bias as @Shane_Legg, received a monetary prize from SIAI for his work, commented on SIAI's strategy on his blog, and took part in the 2010 Singularity Summit, where he and Hassabis were introduced to Thiel, who became the first major VC funder of DeepMind (as recounted both by Altman in the tweet mentioned in OP and in IABIED). I'm not sure this is "being influenced by early LessWrong" so much as originating in the same memetic milieu – Shane Legg was the one who popularized the term "AGI" and wrote papers like this with Hutter, for example.

A Review of Nina Panickssery’s Review of Scott Alexander’s Review of “If Anyone Builds It, Everyone Dies”
Matrice Jacobine · 7d · 1, 0

IIRC Aella and Grimes got copies in advance and AFAIK haven't written book reviews (at least not in the sense Scott or the press did).

A Review of Nina Panickssery’s Review of Scott Alexander’s Review of “If Anyone Builds It, Everyone Dies”
Matrice Jacobine · 8d · 3, 0

https://en.wikipedia.org/wiki/If_Anyone_Builds_It,_Everyone_Dies#Critical_reception

My talk on AI risks at the National Conservatism conference last week
Matrice Jacobine · 9d (edited) · 1, 0

Your scenario above was that most of the 8 billion people in the world would come to believe with high likelihood that ASI would cause human extinction. I think it's very reasonable to believe that this would make it considerably easier to coordinate on making alternatives to MAGMA products more usable, as network effects and economies of scale are largely the bottleneck here.

The Rise of Parasitic AI
Matrice Jacobine · 10d · 4, 1

Relevant.

The Rise of Parasitic AI
Matrice Jacobine · 10d · 1, 0

Is insider trading allowed on Manifold?

My talk on AI risks at the National Conservatism conference last week
Matrice Jacobine · 10d · 1, -1

As a reality check, "any company which funds research into AGI" here would mean all the big tech companies (MAGMA). Far more people use those products than personally know any AGI developer. It is a much easier ask to switch to a different browser/search engine/operating system, install an ad blocker, etc., than to ask for social ostracism. Those companies' revenues collapsing would end the AI race overnight, whereas leaving AGI developers with a social circle composed only of techno-optimists wouldn't.

My talk on AI risks at the National Conservatism conference last week
Matrice Jacobine · 10d · 4, 0

Ok, let's say we get most of the 8 billion people in the world to 'come to an accurate understanding of the risks associated with AI', such as the high likelihood that ASI would cause human extinction.

Then, what should those people actually do with that knowledge? 

Boycotting any company which funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than "ostracizing" people whose social environment is largely dominated by... other AI developers and like-minded SV techno-optimists.

Posts

Sorted by New

3 · Nvidia Comes Out Swinging as Congress Weighs Limits on China Chip Sales · 14d · 0 comments
84 · China proposes new global AI cooperation organisation · 2mo · 8 comments
42 · Bernie Sanders (I-VT) mentions AI loss of control risk in Gizmodo interview · 2mo · 2 comments
8 · Lead, Own, Share: Sovereign Wealth Funds for Transformative AI · 2mo · 0 comments
7 · Energy-Based Transformers are Scalable Learners and Thinkers · 3mo · 5 comments
9 · NYT article about the Zizians including quotes from Eliezer, Anna, Ozy, Jessica, Zvi · 3mo · 3 comments
24 · Hydra · 3mo · 0 comments
11 · The Decreasing Value of Chain of Thought in Prompting · 4mo · 0 comments
33 · Priming effects are fake, but framing effects are real · 4mo · 0 comments
6 · Absolute Zero: Reinforced Self-play Reasoning with Zero Data · 4mo · 4 comments