All Posts

Sorted by Magic (New & Upvoted)

Wednesday, June 7th 2023

Shortform
7 · Mitchell_Porter · 9d
Eliezer recently tweeted that most people can't think, even most people here [https://twitter.com/ESYudkowsky/status/1665165312247975937], but at least this is a place where some of the people who can think can also meet each other [https://twitter.com/ESYudkowsky/status/1665439386089955330].

This inspired me to read Heidegger's 1954 book What is Called Thinking? [https://en.wikipedia.org/wiki/What_Is_Called_Thinking%3F] (pdf [https://www.sas.upenn.edu/~cavitch/pdf-library/Heidegger_What_Is_Called_Thinking.pdf]), in which Heidegger also declares that despite everything, "we are still not thinking."

Of course, their reasons are somewhat different. Eliezer presumably means that most people can't think critically, or effectively, or something. For Heidegger, we're not thinking because we've forgotten about Being, and true thinking starts with Being.

Heidegger also writes, "Western logic finally becomes logistics, whose irresistible development has meanwhile brought forth the electronic brain." So of course I had to bring Bing into the discussion. Bing told me what Heidegger would think of Yudkowsky [https://pastebin.com/XccznywE], then what Yudkowsky would think of Heidegger [https://pastebin.com/EeS9qMMg], and finally we had a more general discussion about Heidegger and deep learning [https://pastebin.com/LPryEh0E] (warning, contains a David Lynch spoiler). Bing introduced me to Yuk Hui [https://en.wikipedia.org/wiki/Yuk_Hui], a contemporary Heideggerian who started out as a computer scientist, so that was interesting.

But the most poignant moment came when I broached the idea that perhaps language models can produce philosophical essays without actually thinking. Bing defended its own sentience, and even creatively disputed the Lynchian metaphor, arguing that its "road of thought" is not a "lost highway", just a "different highway". (See part 17, line 254.)
6 · O O · 9d
If alignment is difficult, it is likely inductively difficult (difficult regardless of your base intelligence), and an ASI will be cautious about creating a misaligned successor or upgrading itself in a way that risks misalignment. You may argue it's easier for an AI to upgrade itself, but if the process is hardware-bound or requires radical algorithmic changes, the ASI will need to create an aligned successor, since preferences and values may not transfer directly to new architectures or hardware. If alignment is easy, we will likely solve it with superhuman narrow intelligences and aligned near-peak-human-level AGIs. I think the first case is an argument against FOOM, unless the alignment problem is solvable, but only at higher-than-human-level intelligence ("human" here meaning the intellectual prowess of an entire civilization equipped with narrow superhuman AI). That would be a strange but possible world.
4 · Writer · 9d
Rational Animations has a subreddit: https://www.reddit.com/r/RationalAnimations/ I hadn't advertised it until now because I had to find someone to help moderate it. I want people here to be among the first to join, since I expect having LessWrong users early on would help foster a good epistemic culture.
2 · lc · 9d
The Greatest Generation imo deserves its name, and we should be grateful to live on its political, military, and scientific achievements.
2 · O O · 9d
The fact that this was completely ignored is a little disappointing. This is a very important question that would help put upper bounds on value drift, but it seems that answering it limits the imagination when it comes to ASI. Has there ever been an answer to it? I have a feeling larger brains have a greater coordination problem between their subcomponents, especially when you hit information transfer limits. This would put some hard limits on how much you can scale intelligence, but I may be wrong. A Fermi estimate of the upper bounds of intelligence may rule out some of the problem classes that alignment arguments tend to include.
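As a rough illustration of the kind of Fermi estimate the comment gestures at (not part of the original comment), here is a minimal back-of-the-envelope sketch in Python of how signal-transit latency could limit naive scaling of a brain-like system. All figures are order-of-magnitude assumptions (~100 m/s for fast myelinated axons, ~5 ms local integration time, ~15 cm brain diameter), and the linear-scaling model itself is an assumption for illustration only.

```python
# Fermi estimate: how cross-structure signalling latency grows with "brain" size.
# All constants are rough, order-of-magnitude assumptions, not measurements.

AXON_VELOCITY_M_PER_S = 100.0    # fast myelinated axons, roughly ~100 m/s
LOCAL_INTEGRATION_S = 0.005      # ~5 ms for a neuron to integrate and fire
HUMAN_BRAIN_DIAMETER_M = 0.15    # ~15 cm across


def crossing_time_s(diameter_m: float,
                    velocity_m_per_s: float = AXON_VELOCITY_M_PER_S) -> float:
    """One-way time for a signal to traverse a structure of the given diameter."""
    return diameter_m / velocity_m_per_s


def latency_ratio(scale_factor: float) -> float:
    """How many local integration steps elapse while one signal crosses a brain
    scaled up by `scale_factor` in linear size."""
    t_cross = crossing_time_s(HUMAN_BRAIN_DIAMETER_M * scale_factor)
    return t_cross / LOCAL_INTEGRATION_S


if __name__ == "__main__":
    for scale in (1, 10, 100, 1000):
        t_ms = crossing_time_s(HUMAN_BRAIN_DIAMETER_M * scale) * 1000
        print(f"{scale:>5}x linear size: cross-brain latency ~ {t_ms:.1f} ms, "
              f"~ {latency_ratio(scale):.0f} local integration steps")
```

Under these assumptions, cross-structure latency overtakes local computation once the structure is scaled up by roughly an order of magnitude, which is one way to make the "information transfer limits" point concrete for a biological substrate; a substrate with much faster signalling would shift the numbers considerably.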
Wiki/Tag Page Edits and Discussion