Comments

Great summary, and really happy that this helped you!

I'd also recommend that anyone interested read Rick Hanson's paper on HEAL: https://rickhanson.net/wp-content/uploads/2021/12/LLPE-paper-final2.pdf

Does it make sense to put any money into a pension given your outlook on AGI?

I really like the way it handles headlines and bullet point lists!

In an ideal world, I'd like the voice to sound less robotic – something like https://elevenlabs.io/ or https://www.descript.com/overdub. How much I enjoy listening to text-to-speech content depends a lot on how grating I find the voice after long periods of listening.

Honestly, no plans at the moment. Writing these was a covid lockdown hobby. It's vaguely possible I'll finish it one day but I wouldn't hold your breath. Sorry.

But I rarely see anyone touch on the idea of "what if we only make something as smart as us?"

 

But why would intelligence reach human level and then halt there? There's no reason to think there's some kind of barrier or upper limit at that exact point.

Even in the weird case where that were true, aren't computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself, and modify its own brain. That's already a superintelligence, isn't it?

A helpful way of thinking about (2) is to imagine something less intelligent than humans trying to predict how humans will overpower it.

You could imagine a gorilla thinking "there's no way a human could overpower us. I would just punch it if it came into my territory." 

The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...).

The AI in the AI takeover scenario is that same kind of jump in intelligence and creativity above us. There's no way a puny human brain could predict what tactics it would use; I'd imagine it almost certainly involves inventing new branches of science.

I think that's true of people like Steven Pinker and Neil deGrasse Tyson. They're intelligent, but they clearly haven't engaged with the core arguments, because they're saying stuff like "just unplug it" and "why would it be evil?"

But there are also people like...

Robin Hanson. I don't really agree with him, but he is engaging with the AI risk arguments, has thought about them a lot, and is a clever guy.

Will MacAskill. One of the most thoughtful thinkers I know of, who I'm pretty confident will have engaged seriously with the AI Risk arguments. His p(doom) is far lower than Eliezer's. I think he says 3% in What We Owe The Future.

Other AI Alignment experts who are optimistic about our chances of solving alignment and put p(doom) lower (I don't know enough about the field to name people).

And I guess I am reserving some small amount of probability for "most of the world's most intelligent computer scientists, physicists, and mathematicians aren't worried about AI Risk; could I be missing something?" My intuition from playing around on prediction markets is that you have to adjust your bets slightly for those kinds of considerations.

I find Eliezer and Nate's arguments compelling, but I do downgrade my p(doom) somewhat (-30% maybe?) because there are intelligent people (inside and outside of LW/EA) who disagree with them.

I had some issues with the quote:

Will continue to exist regardless of how well you criticize any one part of it.

I'd say LW folk are unusually open to criticism. I think strong arguments really would change people's minds here, especially arguments that focus on one small part at a time.

But have there been strong arguments? I'd love to read them.

 

There's basically little reason to engage with it. These are all also evidence that there's something epistemically off with what is going on in the field.

For me, the most convincing evidence that LW is doing something right epistemically is how much better they did than basically everyone else on Covid. Granted, that's not the Alignment Forum, but it was some of the same people and the same weird epistemic culture at work.

For me, the core of it feels less like "satisfying the values you think you should have, while neglecting the values you actually have" and more like having a hostile orientation toward certain values I have.

I might be sitting at my desk working on my EA project while the parts of me asking to play video games, watch arthouse movies, take the day off to go hiking, or find a girlfriend are like yapping dogs that won't shut up. I'll respond to their complaints once I've finished saving the world.

Through CFAR workshops, lots of goal factoring, journaling, and Focusing, I'm getting some traction on changing that pattern.

I've realised that values (or perhaps 'needs' fits better) are immutable facts about myself, like my height or hair colour. Getting annoyed at them for not being different makes about as much sense as shouting at the sky for raining.

The part of me that wants to maximize impact has accepted that moving to the Bay Area and working 80 hours a week at an EA org is a fabricated option. A realistic plan takes into account my values that constrain me to want to live near my family, have lots of autonomy over my schedule, and work independently on projects I control. Since realising that, my motivation, productivity, and sense of agency (and, ironically, expected impact) have improved. The future feels a lot brighter – probably because a whole load of internal conflict I wasn't acknowledging has been resolved.

You are however only counting one side here

 

In that comment I was only offering plausible counter-arguments to "the amount of people that were hurt by FTX blowing up is a rounding error."

How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation without running any numbers seems to me unjustified. 

I think we basically agree here.

I'm in favour of more complicated models that include more indirect effects, not fewer.

Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term. 

The fact that I can't predict and quantify ahead of time all the possible harms that result from fraud doesn't convince me that those concerns are unjustified.

We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it. 

Apart from anything else, I don't think money is necessarily the most important bottleneck.
