Eh, I think it's possible that worms do have feelings. But even if they don't, my mind is able to trick me into having empathy for them in a way it doesn't for video game characters. Are worms like people? *shrugs* I'm not arguing whether they are or aren't. All I can say for sure is that something about them triggers my protective and benevolent instincts, and I have chosen to lean into that instead of rationalizing it away.
As for 'type of place', I guess what I mean is that I want the world to have more people with bountiful empathy. And I can't very well be optimistic about achieving that if I don't practice it myself. It's not a utilitarian view; it's an admittedly non-rationalized desire. It's also very true that practicing empathy in this way makes me feel good about myself. I don't know if I am looking for a little dopamine hit, or if it's just in my nature to care about critters, but it makes me feel like a good person.
Thank you for pressing me on this to get me to do some introspection.
I'll register my prediction here as well. I largely agree with your projection, although my median case looks a little bit more advanced. Also, note that I am not vouching for your arguments.
75% - likely we live in a world that feels pretty normal. That is, similar to what you described or a bit more advanced, as mentioned.
Here are some places I differ from your predictions which might give insight into what I mean by "a little bit more advanced":
- In general, I anticipate more progress, both in terms of tech, and its integration into our world.
- AI might have transformed some major industries and careers, even without providing novel research or human level insights. It's still not enough to cause an unprecedented crisis or anything. It's still in the range of historical economic transitions.
- It's also possible that AI has come up with some valuable scientific insights, just not often enough to be considered TAI or to completely disrupt the world/economy/society.
- AI might be able to replace more coders than you've described, as well as other knowledge workers.
- AI will be able to tell genuinely funny jokes.
- Self-driving cars of the type you've described are possible, although I think 2029 would be a safer bet.
- There will be real advances, but overall Christmas 2027 will still feel like Christmas 2024. My grandparents (who have never used a smartphone or a laptop) won't have noticed at all.
~ 8% or less on us living in a world like AI2027, or one with advances at least as fast and transformative. Foom lives here.
~ 8% goes to different 'weird' futures. For instance, what if robotics absolutely explodes, and we start seeing robots everywhere, but AI itself is still pretty bland? Or what if specialist systems take over the economy, but you still can't really have a conversation with an AI without it falling apart quickly? Or there is a completely new paradigm that is more generally smart than LLMs, but is slow and lacks knowledge. Or there is AGI, but it is extremely expensive, or it's sealed in a lab. Or etc. etc. etc. This category includes industrial-revolution-magnitude changes that aren't just 'LLMs get better and we have to deal with a new, often superior intelligence'. It also includes major advances in AI that don't cause grand transformations. Eh, it's kind of odd to lump these together, I suppose. But the point of this category was to be a catch-all for unpredictable sci-fi scenarios.
~ 8% goes to a complete AI bust, where it's generally accepted that it was a mistake to invest so much in AI and to integrate it into our economy. An AI winter is imminent and not very controversial. Undramatic AI plateaus do NOT live here.
This is all based on not having any major disruptions to the world. For instance, I'm not considering the implications of a global war, or another pandemic.
I should also note that while this puts my odds of 2027 Foom and Doom in the single digits or lower... that's still an awfully high figure for the end of all humanity. Flip a coin 7-9 times. If it comes up heads each time, then every one of us will be dead in 3 years.
There is no causal link. It's about 1) practicing empathy, and 2) making the world the type of place you'd like it to be.
When I see a worm drowning in a puddle, or stuck on hot pavement, I rescue it. Is it important? Probably not. But I'd like to think that if anyone ever saw me drowning in a puddle, they would rescue me.
Thank you for writing this post.
As a native English speaker, that seems pretty unnatural to me. But your choice of course!
Calling out a small typo, as this is clearly meant as a persuasive reference point:
"On* our view, the international community’s top immediate priority should be creating an “off switch” for frontier AI development"
Presumably, "On" here should be "In"
Another factor to consider when asking whether 'salt reduces cooking time': if the boiling temperature is higher, some extra time will be needed to raise the water to that temperature in the first place. So your 2.6 seconds becomes (2.6s - extraPreBoilingTime), and it's even feasible that this turns your value negative. That, combined with the time needed to grab the salt and clean up any spilled salt (let's say 1 in 5 times), almost certainly ends up with you losing time, on average.
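The accounting above can be sketched in a few lines. All of the numbers below other than the 2.6 seconds and the 1-in-5 spill rate are made up for illustration; the point is just that the expected overheads swamp the boiling-time savings:

```python
def net_time_saved(boil_time_saved, extra_preboil_time,
                   salt_handling_time, spill_prob, cleanup_time):
    """Expected seconds saved by salting the water; negative means you lost time."""
    expected_cleanup = spill_prob * cleanup_time
    return (boil_time_saved - extra_preboil_time
            - salt_handling_time - expected_cleanup)

# Hypothetical values: 2.6 s saved once boiling, 1.5 s extra to reach the
# slightly higher boiling point, 10 s to grab the salt, and a 1-in-5 chance
# of a spill that takes 30 s to clean up.
print(net_time_saved(2.6, 1.5, 10.0, 0.2, 30.0))  # roughly -14.9 s
```

With these (admittedly invented) inputs, salting costs you about fifteen seconds on average rather than saving any time.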
So your friend isn't even technically correct. Bob is wrong on both counts.
I'd advise dumping him as a friend. Save your starchy water though.
No need to pay me for this. It's just an anecdote.
I live near a farm where there are chickens and a donkey. The chickens routinely sit on, and poop on, the donkey. I imagine the same happens with cows when they cohabitate with birds.
I appreciate you writing this up. I have been a bit starved for this community's opinions on the current political situation. I felt very strongly that electing Trump was a huge mistake on almost every level. But I think there is something we have overlooked which has become more obvious to me over the last 9 months.
Advocacy has been extremely diluted. I want to protest against the acceleration of AI. I want to invest my time and emotional energy toward what I think is the largest existential risk to humanity. I want to call my representatives monthly to let them know that I am a single issue voter on AI safety.
What I actually have done is protest the Trump administration. What I have actually done is join grassroots movements to halt the slide of democratic and social institutions. I know I'm not the only one. Even if I personally wasn't invested in the crisis of the current moment, I know other people would be. There is only so much bandwidth to go around, and even if I was focusing on AI, my audience would certainly be less receptive.