
Tao Lin

Posts

77 · Send us example gnarly bugs · Ω · 2y · 10 comments
34 · Causal scrubbing: results on induction heads · Ω · 3y · 1 comment
34 · Causal scrubbing: results on a paren balance checker · Ω · 3y · 2 comments
2 · Tao Lin's Shortform · 4y · 6 comments

Comments

Safety researchers should take a public stance
Tao Lin · 23d

I support a magically enforced 10+ year AGI ban. It's hard for me to concretely imagine a ban enforced by governments, because it's hard to disentangle what that counterfactual government would be like, but I support a well-executed government-enforced AGI slowdown. I do like it when people shout doom from the rooftops, though, because it's better for my beliefs to be closer to the global average, and the global discourse is extremely far from overshooting doominess.

Buck's Shortform
Tao Lin · 23d

Yeah, it goes out of its way to say the opposite, but if you know Nate and Eliezer, the book gives the impression that their p(doom)s are still extremely high. Responding to the authors' beliefs even when those aren't exactly the same as the text is sometimes correct, although not really in this case.

Max Harms's Shortform
Tao Lin · 1mo

If you have a lump of 7,000 neurons, they can each connect to every other neuron, and you can spherical-cow approximate that as a 7000x7000 matrix multiplication. That matrix multiplication all happens within O(1) spikes, about 1/100 of a second. That's ~700 GFLOP. An H100 GPU takes ~1 millisecond, or ~1M cycles, to do that operation, i.e. to approximate one brain spike cycle! And the GPU has 70B or whatever transistors, so it's more like 10M transistors per neuron!
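
A rough back-of-envelope sketch of that arithmetic. It treats one spike step as a dense 7000x7000 matrix-matrix multiply, and the H100 figures (throughput, clock rate, transistor count) are assumed round numbers, not exact specs:

```python
# Back-of-envelope numbers for the comparison above (all H100 figures are
# assumed round numbers: ~1e15 FLOP/s, ~1.8 GHz clock, ~8e10 transistors).
n = 7_000                       # neurons, all-to-all connectivity
flops = 2 * n ** 3              # one step as a dense n x n matrix-matrix multiply
print(f"{flops / 1e9:.0f} GFLOP per step")                      # ~686 GFLOP

gpu_flops_per_s = 1e15          # assumed peak throughput
step_time_s = flops / gpu_flops_per_s
print(f"{step_time_s * 1e3:.2f} ms per step on the GPU")        # ~0.7 ms

clock_hz = 1.8e9                # assumed clock rate
print(f"{step_time_s * clock_hz / 1e6:.1f}M cycles per step")   # ~1.2M cycles

transistors = 8e10              # assumed transistor count
print(f"{transistors / n / 1e6:.0f}M transistors per neuron")   # ~11M
```

With these assumed specs the result is roughly the same ~1 ms, ~1M cycles, and ~10M transistors per neuron as quoted above.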

[This comment is no longer endorsed by its author]
My AI Model Delta Compared To Christiano
Tao Lin · 1mo

Note that since Paul started working for the US government a few years ago, he has withdrawn from public discussion of AI safety to avoid PR issues and conflicts of interest, so his public writings are significantly behind his current beliefs.

Generative AI is not causing YCombinator companies to grow more quickly than usual (yet)
Tao Lin · 1mo

YC batches have grown 3x since 2016. I expect a significant market saturation / low-hanging-fruit effect, reducing the customer base of each startup compared to when there were only 200 companies per year. See "On the last decade of Y Combinator" by Jared Heyman on Medium.

Yudkowsky on "Don't use p(doom)"
Tao Lin · 2mo

I'm surprised that's the question. I would guess that's not what Eliezer means, because he says Dath Ilan is responding sufficiently to AI risk but also hints that Dath Ilan still spends a significant fraction of its resources on AI safety (I've only read a fraction of the work here, so maybe I'm wrong). I have a background belief that the largest problems don't change that much, that it's rare for a problem to go from the #1 problem to not-in-the-top-10, and that most things have diminishing returns such that it's not worthwhile to solve them so thoroughly. An alternative definition that's spiritually similar, and that I like more, is: "What policy could governments implement such that improving AI x-risk policy would no longer be the #1 priority, if the governments were wise?" This isolates AI / puts it in the context of other global problems, so that the AI solution doesn't need to prevent governments from changing their minds over the next 100 years, or whatever else needs to happen for the next 100 years to go well.

Thomas Kwa's Shortform
Tao Lin · 2mo

I would expect aerodynamically maneuvering MIRVs to work and not be prohibitively expensive. The closest deployed version appears to be the Pershing II (https://en.wikipedia.org/wiki/Pershing_II), which has 4 large fins. You likely don't need that much steering force.

Consider chilling out in 2028
Tao Lin · 2mo

I really struggle to think of problems where you want to wait 2.5 years to solve them: when you identify a problem, you usually want to start working on it within the month. Just update most of the way now, plus a tiny bit over time as evidence comes in. As others commented, no doom by 2028 is very little evidence.
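
A minimal Bayes-rule sketch of the "very little evidence" point, with made-up numbers for the prior and for how likely doom would be to have already shown up by 2028 if doom happens eventually:

```python
# Toy Bayes update with made-up numbers, illustrating why "no doom by 2028"
# barely moves a long-horizon p(doom).
prior = 0.5                    # assumed P(doom eventually)
p_early_given_doom = 0.15      # assumed P(doom visible by 2028 | doom eventually)
p_early_given_safe = 0.0       # doom can't show up by 2028 if it never happens

# P(no doom by 2028), by the law of total probability
p_no_doom_by_2028 = (1 - p_early_given_doom) * prior + (1 - p_early_given_safe) * (1 - prior)

# Posterior P(doom eventually | no doom by 2028), by Bayes' rule
posterior = (1 - p_early_given_doom) * prior / p_no_doom_by_2028
print(f"prior {prior:.2f} -> posterior {posterior:.2f}")   # 0.50 -> 0.46
```

With these assumed numbers, observing no doom by 2028 only moves p(doom) from 0.50 to about 0.46.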

How Fast Can Algorithms Advance Capabilities? | Epoch Gradient Update
Tao Lin · 2mo

I heard some rumors that GPT-4.5 got good pretraining loss but bad downstream performance. If that's true, the loss scaling laws may have worked correctly. If not, yeah, a lot of things can go wrong and something did, whether that's hardware issues, software bugs, machine learning problems, or problems with their earlier experiments.

OpenAI Claims IMO Gold Medal
Tao Lin · 3mo

This is OpenAI's CoT style. You can see it in the original o1 blog post: https://openai.com/index/learning-to-reason-with-llms/
