nem · 2267550
Comments

Not stepping on bugs
nem · 11h · 20

Eh, I think it's possible that worms do have feelings. But even if they don't, my mind is able to trick me into having empathy for them in a way it doesn't for video game characters. Are worms like people? Shrugs. I'm not arguing whether they are or aren't. All I can say for sure is that something about them triggers my protective and benevolent instincts, and I have chosen to lean into that instead of rationalizing it away.
 

As for 'type of place', I guess what I mean is that I want the world to have more people with bountiful empathy. And I can't very well be optimistic about achieving that if I don't practice it myself. It's not a utilitarian view; it's an admittedly non-rationalized desire. It's also very true that practicing empathy in this way makes me feel good about myself. I don't know if I am looking for a little dopamine hit, or if it's just in my nature to care about critters, but it makes me feel like a good person. 

Thank you for pressing me on this to get me to do some introspection.

My AI Predictions for 2027
nem · 2d · 10

I'll register my prediction here as well. I largely agree with your projection, although my median case looks a little bit more advanced. Also, note that I am not vouching for your arguments. 

75%: we likely live in a world that feels pretty normal. That is, similar to what you described, or a bit more advanced, as mentioned.

Here are some places I differ from your predictions which might give insight into what I mean by "a little bit more advanced":
- In general, I anticipate more progress, both in terms of tech, and its integration into our world. 
- AI might have transformed some major industries and careers, even without providing novel research or human-level insights. Still, it's not enough to cause an unprecedented crisis or anything; it remains within the range of historical economic transitions.
- It's also possible that AI has come up with some valuable scientific insights, just not often enough to be considered TAI or to completely disrupt the world/economy/society.
- AI might be able to replace more coders than you've described, as well as other knowledge workers. 
- AI will be able to tell genuinely funny jokes. 
- Self-driving cars of the type you've described are possible, although I think 2029 would be a safer bet.
- There will be real advances, but overall Christmas 2027 will still feel like Christmas 2024. My grandparents (who have never used a smartphone or a laptop) won't have noticed at all.

~8% or less on us living in a world like AI 2027, or one with advances at least as fast and transformative. Foom lives here.

~8% goes to different 'weird' futures. For instance: what if robotics absolutely explodes and we start seeing robots everywhere, but AI itself is still pretty bland? What if specialist systems take over the economy, but you still can't really have a conversation with an AI that doesn't fall apart quickly? What if a completely new paradigm emerges that is more generally smart than LLMs, but is slow and lacks knowledge? What if there is AGI, but it is extremely expensive, or it's sealed in a lab? Etc. This category includes industrial-revolution-magnitude changes that aren't just 'LLMs get better and we have to deal with a new, often superior intelligence'. It also includes major advances in AI that don't cause grand transformations. It's kind of odd to lump these together, I suppose, but the point of this category is to be a catch-all for unpredictable sci-fi scenarios.

~8% goes to a complete AI bust, where it's generally accepted that it was a mistake to invest so much in AI and to integrate it into our economy. An AI winter is imminent and not very controversial. Undramatic AI plateaus do NOT live here.


This is all based on not having any major disruptions to the world. For instance, I'm not considering the implications of a global war or another pandemic.

I should also note that while this puts my odds of 2027 Foom and Doom in the single digits or lower... that's still an awfully high figure for the end of all humanity. Flip a coin 7-9 times. If it comes up heads every time, then every one of us will be dead in 3 years.
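The coin-flip arithmetic can be made explicit. A quick sketch, using only the flip counts from the sentence above:

```python
# Probability that a fair coin lands heads on every one of n flips: 0.5 ** n.
for n in (7, 8, 9):
    p = 0.5 ** n
    print(f"{n} flips all heads: {p:.2%}")
# -> 7 flips all heads: 0.78%
# -> 8 flips all heads: 0.39%
# -> 9 flips all heads: 0.20%
```

(All three land below 1%, i.e. on the low end of "single digits or lower".)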

Not stepping on bugs
nem · 2d · 10

There is no causal link. It's about 1) practicing empathy, and 2) making the world the type of place you'd like it to be.

Not stepping on bugs
nem · 6d · 41

When I see a worm drowning in a puddle, or stuck on hot pavement, I rescue it. Is it important? Probably not. But I'd like to think that if anyone ever saw me drowning in a puddle, they would rescue me. 

Thank you for writing this post.

The Problem
nem · 1mo · 158

As a native English speaker, that seems pretty unnatural to me. But your choice of course!

The Problem
nem · 1mo · 71

Calling out a small typo, as this is clearly meant as a persuasive reference point:

"On* our view, the international community’s top immediate priority should be creating an “off switch” for frontier AI development"

Presumably, "On" here should be "In"

The salt in pasta water fallacy
nem · 1mo · 10

Another factor to consider when asking whether 'salt reduces cooking time': presumably, if the boiling temperature is higher, there will be some extra amount of time needed to raise the water temperature that much. So your 2.6 seconds becomes (2.6s - extraPreBoilingTime). It's even feasible that this turns your value negative. That, combined with the amount of time needed to grab the salt and clean up any spilled salt (let's say 1 in 5 times), almost certainly ends up with you losing time, on average.
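To make the bookkeeping concrete, here is a toy expected-value sketch. The 2.6 seconds and the 1-in-5 spill rate come from the discussion; every other number is an invented placeholder, not a measurement:

```python
# Toy expected-time model for salting pasta water.
boil_time_saved = 2.6       # seconds saved at a rolling boil (from the post)
extra_preboil_time = 1.5    # assumed: extra seconds to reach the higher boiling point
grab_salt_time = 4.0        # assumed: seconds to fetch and pour the salt
spill_prob = 0.2            # "let's say 1 in 5 times"
spill_cleanup_time = 30.0   # assumed: seconds to clean up a spill

net_saving = (boil_time_saved
              - extra_preboil_time
              - grab_salt_time
              - spill_prob * spill_cleanup_time)
print(f"expected net time saved: {net_saving:.1f} s")
# -> expected net time saved: -8.9 s  (negative means you lose time)
```

With any remotely plausible handling costs, the expected net saving goes negative.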

So your friend isn't even technically correct. Bob is wrong on both counts.

I'd advise dumping him as a friend. Save your starchy water though.

Correct my H5N1 research
nem · 9mo · 60

No need to pay me for this. It's just an anecdote. 

I live near a farm where there are chickens and a donkey. The chickens routinely sit on, and poop on, the donkey. I imagine the same happens with cows when they cohabit with birds.

The Information: OpenAI shows 'Strawberry' to feds, races to launch it
nem · 1y · 1 · -2

Should we be worried about the alignment of Strawberry itself?
If it is misaligned, and is providing training data for their next gen, then it can poison the well, even if Strawberry itself is nowhere near TAI.

Please tell me that they have considered this...
Or that I am wrong and it's not a valid concern. 

My hour of memoryless lucidity
nem · 1y · 101

Anecdote: the first time I went under anesthesia, I was told by a nurse that I would not remember her talking to me. I took it as a challenge. I told her to give me a word to remember. When I finally sobered up, I was able to remember that word, but pretty much nothing else at all from my experience.

This leads me to suspect that Drake's achievement had more to do with concerted effort and holding it in RAM than it did with storing the thought in long term memory. 

Posts

8 · Seeking advice on careers in AI Safety · 4mo · 2
10 · Just How Good Are Modern Chess Computers? · 1y · 1
2 · The Anthropic Principle Tells Us That AGI Will Not Be Conscious · 2y · 8
2 · What does pulling the fire alarm look like? [Q] · 2y · 0
24 · Is it a bad idea to pay for GPT-4? · 2y · 8
11 · The Pinnacle · 2y · 0
8 · I Believe we are in a Hardware Overhang · 3y · 0