Amalthea

Comments

Sorted by Newest
leogao's Shortform
Amalthea · 43m · 10

I'm not super happy with my phrasing, and Ben's "glory", mentioned in a reply, indeed seems to capture it better.

The point you make about theoretical research agrees with what I'm pointing at: whether you perceive a problem as interesting or not is often related to the social context and the potential payoff.
What I'm specifically suggesting is that if you took this factor out of ML, it wouldn't be much more interesting than many other fields with a similar balance of empirical and theoretical components.

The Doomers Were Right
Amalthea · 2d · 10

What you're pointing at applies if AI merely makes most work obsolete without otherwise significantly disturbing the social order, but you're not considering (also historically common) replacement/displacement scenarios. It is clearly bad from my perspective if, e.g., either:
1) Controllable strong AI gets used to take over the world and, in time, replace the human population with the dictator's offspring.
2) Humans get displaced by AIs.

In either case, the surviving parties may well look back on the current state of affairs and consider their world much improved, but it's likely that we, on reflection, wouldn't.

leogao's Shortform
Amalthea · 4d · 2 · -1

From my perspective, the interesting parts are "getting computers to think and do stuff" and getting exciting results, which hinges on the possible payoff rather than on whether the problem itself is technically interesting. As such, the problems seem to be a mix of empirical research and math, maybe with some inspiration from neuroscience, and it seems unlikely to me that they're substantially different, intellectually, from other fields with a similar profile. (I'm not a professional AI researcher, so maybe the substance of the problems changes once you reach a high enough level, in ways I can't fathom.)

leogao's Shortform
Amalthea · 4d · 161

Aren't these mostly "works on capabilities because of status + power"?

(E.g. if you only care about challenging technical problems, you'll just go do math.)

leogao's Shortform
Amalthea · 9d · 10

Fwiw, I've read a number of Smil's books, and my impression was that he strongly expressed that same opinion about sigmoids; the example mentioned might have been precisely an attempt to illustrate how you can show anything by fitting the right sigmoid. (But it's been a while since I read his books.)

Raemon's Shortform
Amalthea · 1mo · 52

I'm actually not sure what this refers to. E.g. when Boaz Barak's posts spark discussion, it seems pretty civil and centered on the issues. The main disagreements don't necessarily get resolved, but at least they get identified, and I didn't see any serious signs of tribalism.
But maybe that's because I skip over the offending comments (I tend to ignore things that don't feel intellectually interesting), or this is not an example of the dynamic you're referring to?

Saying Goodbye
Amalthea · 3mo · 32

This is not an obvious solution, since (as you are probably aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree that this is an issue, but that would at least need to be argued.

Sodium's Shortform
Amalthea · 3mo · 35

I also think it is intellectually and morally serious for people who are sitting on $20 trillion of capital

This should also be "unserious"; it seems the transcript is wrong here.

OpenAI Claims IMO Gold Medal
Amalthea · 3mo · 30

Got it! I'm generally more inclined to expect that various half-decent ideas may unlock surprising advances (for no particular reason), so I'm less skeptical that this may be true.
Also, while math is of course easy to verify, if they haven't significantly used verification in the training process, that makes their claims more reasonable.
OpenAI Claims IMO Gold Medal
Amalthea · 3mo · 2 · -1

Sure, math is not an example of a hard-to-verify task, but I think you're getting unnecessarily hung up on these things. It does sound like it may be a new and, in a narrow sense, unexpected technical development, and it's unclear how significant it is. I wouldn't try to read much more into their communications.
