Josh Snider

Comments

An epistemic advantage of working as a moderate
Josh Snider · 12d · 2 karma · 0 agreement

Avoiding what you suggested is why private conversations are an advantage. I think you misunderstood the essay, unless I'm misunderstanding your response.

Underdog bias rules everything around me
Josh Snider · 13d · 2 karma · 3 agreement

> I think it's cultural and goes back to the rise of Christianity.

This seems testable with a cross-cultural analysis. Beyond the pre-Christian Greek stories that Garrett mentioned, Chinese, Japanese, Indian, and Middle Eastern cultures should have plenty of non-Christian stories to check.

How Does A Blind Model See The Earth?
Josh Snider · 21d · 7 karma · -1 agreement

This is pretty cool. As for Opus, could you just use it for "free" by running it in Claude Code and drawing on your account's built-in usage limits?

Edit: That might also work for gemini-cli and 2.5 Pro.
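
A minimal sketch of what I mean, assuming the `claude` CLI's non-interactive `-p` flag and `--model` flag behave as documented (the land/water prompt is just illustrative, echoing the post's probe):

```python
# Sketch: query Opus through the Claude Code CLI rather than the metered API,
# so usage draws on a subscription's built-in limits. Assumes the `claude`
# CLI is installed and logged in; `-p` runs one prompt and prints the reply.
import subprocess

def ask_opus(prompt: str) -> str:
    result = subprocess.run(
        ["claude", "--model", "opus", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Illustrative probe in the spirit of the original post:
print(ask_opus("Is latitude 40, longitude -30 land or water? Answer in one word."))
```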

The King and the Golem - The Animation
Josh Snider · 24d · 1 karma · 0 agreement

This is a great story, and the animation is excellent too. Good work, everyone!

The Problem
Josh Snider · 26d · 1 karma · 0 agreement

Oh yeah, I also find that annoying.

The Problem
Josh Snider · 26d · 2 karma · 0 agreement

> If we were to put a number on how likely extinction is in the absence of an aggressive near-term policy response, MIRI’s research leadership would give one upward of 90%.

This is the passage I read as implying p(doom) > 90%, but it's clearly a misreading to assume that someone advocating for "an aggressive near-term policy response" believes that response has a ~0% chance of happening.
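
To spell out the arithmetic (numbers illustrative, not MIRI's), decompose over whether the response $R$ happens:

$$P(\text{doom}) = P(\text{doom} \mid \neg R)\,P(\neg R) + P(\text{doom} \mid R)\,P(R)$$

With, say, $P(\text{doom} \mid \neg R) = 0.9$, $P(R) = 0.5$, and $P(\text{doom} \mid R) = 0.2$, that gives $0.9 \cdot 0.5 + 0.2 \cdot 0.5 = 0.55$, well below 90%.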

I am in camp 2, but will try to refine my argument more before writing it down.

The Problem
Josh Snider · 1mo · 4 karma · 0 agreement

This is great. I recognize that it's almost certainly related to the book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", which I have preordered, but as a standalone piece I feel that estimating p(doom) > 90% and dismissing alignment-by-default without an argument is too aggressive.

Comp Sci in 2027 (Short story by Eliezer Yudkowsky)
Josh Snider · 1mo · 1 karma · 0 agreement

Yeah, I don't think I read this when it came out, but I'm happy to read it now.

I am worried about near-term non-LLM AI developments
Josh Snider · 1mo · 1 karma · 0 agreement

https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement (and its sequel) seems highly related.

A night-watchman ASI as a first step toward a great future
Josh Snider · 1mo · 2 karma · -1 agreement

If you had a solution to alignment, building a Night-Watchman ASI would be decent, but that is a massive thing to assume. At the point where you could build this, it might be better to just build an ASI with the goal of maximizing flourishing.
