Eli Tyre

Comments (sorted by newest)

Christian homeschoolers in the year 3000
Eli Tyre · 18h

Is this that bad?

I think most Christians probably live pretty happy, humane lives. And the ways in which their lives are not happy seem likely to be improved a lot by trustworthy superintelligence.

Like, if a guy is gay, growing up trapped in an intensely Christian environment that is intent on indoctrinating him that homosexuality is sinful seems pretty bad. But in the year 3000, it seems like the Christian superintelligence will have effective techniques for removing his homosexual urges.

It does seem bad if you're trapped in an equilibrium where everyone knows that being gay is sinful, and also that removing homosexual urges is sinful, and there are enormous superintelligence resources propping up those beliefs, such that it's not plausible to escape the memetic traps. Is that what you anticipate?

The Case Against AI Control Research
Eli Tyre · 18h

Their alignment team gets busy using the early transformative AI to solve the alignment problems of superintelligence. The early transformative AI spits out some slop, as AI does. Alas, one of the core challenges of slop is that it looks fine at first glance, and one of the core problems of aligning superintelligence is that it’s hard to verify;

Ok, but wouldn't we also be testing our AIs on problems that are easy to verify?

Like, when the cutting-edge AIs are releasing papers that elegantly solve long-standing puzzles in physics or biology, and making surprising testable predictions along the way, we'll know that they're capable of producing more than slop.

Are you proposing that...

  1. The AIs won't be producing legible-and-verifiable breakthroughs in other fields, but they will be spitting out some ideas for AI alignment / control that seem promising to lab researchers, who decide to go with it?
  2. The AIs will be producing legible-and-verifiable breakthroughs in other fields, but those same AIs will be producing slop in the case of alignment (perhaps because tricking the humans is the path of least resistance with alignment, but not with physics)?

...or something else?

The title is reasonable
Eli Tyre · 19h

Fuck yeah. This is inspiring. It makes me feel proud and want to get to work.

JDP Reviews IABIED
Eli Tyre · 2d

Yeah, I saw.

JDP Reviews IABIED
Eli Tyre · 2d*

I have 60% probability that you intentionally structured the post to feel like the pattern of how you felt reading the book

I'll take that bet. 1:1, $100?

[This comment is no longer endorsed by its author]
don't_wanna_be_stupid_any_more's Shortform
Eli Tyre · 3d

is the media attention of publishing a book through standard publishers worth putting the authors' motives in question?

Yes. It's approximately the whole point. The authors have already produced massive amounts of free online content raising the alarm about AI risk. Those materials have had substantial impact, persuading the type of person who tends to read, and be interested in, long blog posts of that kind. But that is a limited audience.

The point of publishing a proper book is precisely to reach a larger audience, and to shift the Overton window of which views are known to be respectable.

don't_wanna_be_stupid_any_more's Shortform
Eli Tyre · 3d

Books released by standard publishers, sold at bookstores, get much more media attention and readership than free e-books. 

AnnaSalamon's Shortform
Eli Tyre · 5d

I'd pay at least $100 to someone who could tell me where to buy a mask like that, or how to easily assemble the pieces.

jdp's Shortform
Eli Tyre · 5d

I found an advance copy. :)

How? I thought MIRI was trying to be very careful with copies getting around before the launch day.

adamzerner's Shortform
Eli Tyre · 6d

Getting more experience that might inform what you want sounds like a generally sound idea, but isn't the "baby" stage only like 5% of the whole process of raising a child? If you don't like taking care of babies, that doesn't mean you don't want kids overall, right?

Wikitag Contributions

Center For AI Policy (2 years ago)
Blame Avoidance (3 years ago)
Hyperbolic Discounting (3 years ago)
Posts

Eli's shortform feed (29 karma, 6y, 324 comments)
Evolution did a surprising good job at aligning humans...to social status (23 karma, 2y, 37 comments)
On the lethality of biased human reward ratings (48 karma, 2y, 10 comments)
Smart Sessions - Finally a (kinda) window-centric session manager (14 karma, 2y, 3 comments)
Unpacking the dynamics of AGI conflict that suggest the necessity of a premptive pivotal act (63 karma, 2y, 2 comments)
Briefly thinking through some analogs of debate (20 karma, 3y, 3 comments)
Public beliefs vs. Private beliefs (146 karma, 3y, 30 comments)
Twitter thread on postrationalists (144 karma, 4y, 33 comments)
[Question] What are some good pieces on civilizational decay / civilizational collapse / weakening of societal fabric? (22 karma, 4y, 8 comments)
[Question] What are some triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model? (38 karma, 4y, 16 comments)
I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction (42 karma, 4y, 29 comments)