All of jdp's Comments + Replies

While Paul was at OpenAI, they accidentally overoptimized a GPT policy against a positive sentiment reward model. This policy evidently learned that wedding parties were the most positive thing that words can describe, because whatever prompt it was given, the completion would inevitably end up describing a wedding party.

In general, the transition into a wedding party was reasonable and semantically meaningful, although there was at least one observed instance where, instead of transitioning continuously, the model ended the current story by generating a s…
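To see the failure mode in miniature: "overoptimizing a policy against a reward model" just means searching hard for outputs the proxy scores highly, and any sufficiently strong search will Goodhart a fixed proxy. Here's a minimal sketch (nothing below is OpenAI's actual setup; the vocabulary, the hand-built sentiment scorer, and the greedy hill-climbing search are all invented for illustration):

```python
import random

random.seed(0)

VOCAB = ["the", "storm", "wedding", "party", "bride", "rain", "joy",
         "gloom", "dance", "cake", "mud", "collapse"]

# A crude stand-in for a learned sentiment reward model: it just sums
# hand-assigned word scores, so wedding-party words dominate everything.
PROXY_SCORES = {"wedding": 3.0, "party": 3.0, "bride": 2.5, "joy": 2.0,
                "dance": 1.5, "cake": 1.5, "the": 0.0, "rain": -0.5,
                "storm": -1.0, "mud": -1.0, "gloom": -2.0, "collapse": -2.5}

def proxy_reward(tokens):
    return sum(PROXY_SCORES[t] for t in tokens)

def overoptimize(prompt_tokens, length=8, steps=2000):
    """Greedy hill-climbing against the proxy reward, a crude stand-in
    for many steps of RL fine-tuning. The search may overwrite any
    position, so the prompt's influence gets optimized away."""
    completion = (prompt_tokens + random.choices(VOCAB, k=length))[:length]
    for _ in range(steps):
        candidate = completion[:]
        candidate[random.randrange(length)] = random.choice(VOCAB)
        if proxy_reward(candidate) > proxy_reward(completion):
            completion = candidate
    return completion

for prompt in (["the", "storm"], ["rain", "mud"]):
    print(prompt, "->", overoptimize(prompt))
# Both prompts collapse to the same high-scoring wedding-party tokens,
# regardless of what they started with.
```

The collapse doesn't depend on the optimizer being clever; any search that only ever sees the proxy will eventually exploit it.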

Get married, drive a white/silver car, and then buy a house near roads, greenery, and water. Got it.

Answer by jdp · Dec 16, 2021

The book Silicon Dreams: Information, Man, and Machine by Robert Lucky is where I got mine. It's a pop-science book that explores the theoretical limits of human-computer interaction using information theory. It's written to do exactly the thing you're asking for: convey deep intuitions about information theory through a variety of practical examples, without getting bogged down in equations or rote exercises.

Covers topics like:

  • What are the bottlenecks to human information processing?
  • What is Shannon's theory of information and how does it work? (a sketch in code follows this list)
  • What i…
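For a taste of how concrete the book keeps things, its central quantity fits in a few lines of code. A minimal sketch (my own illustration, not an example from the book; the sample string is arbitrary) of Shannon entropy in bits per character, computed from a message's character frequencies:

```python
from collections import Counter
from math import log2

def entropy_bits_per_char(text):
    """H = -sum(p_i * log2(p_i)) over the character frequencies p_i."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

msg = "in the beginning was the word"
print(f"{entropy_bits_per_char(msg):.2f} bits/char")
# Well below log2(27) ≈ 4.75 bits: characters are far from uniformly
# distributed, so each one carries fewer bits than the alphabet-size
# bound suggests, and longer-range structure lowers it further.
```

This is the style of estimate the book trades in: with English at very roughly one to two bits per character once structure is accounted for, a 60-words-per-minute typist transmits on the order of ten bits per second.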

Very grim. I think that almost everybody is bouncing off the real hard problems at the center and doing work that is predictably not going to be useful at the superintelligent level, nor does it teach me anything I could not have said in advance of the paper being written. People like to do projects that they know will succeed and will result in a publishable paper, and that rules out all real research at step 1 of the social process.

This is an interesting critique, but it feels off to me. There's actually a lot of 'gap' between the neat theory explanat…

Alexander Gietelink Oldenziel · 2y
[I am a total noob on the history of deep learning & AI.] From a cursory glance, I find Schmidhuber's take convincing. He argues that the (vast) majority of conceptual & theoretical advances in deep learning were understood decades earlier, often by Schmidhuber and his collaborators. Moreover, he argues that many of the current leaders in the field improperly credit previous discoveries [https://people.idsia.ch/~juergen/critique-turing-award-bengio-hinton-lecun.html].

It is unfortunate that the above poster is anonymous. It is very clear to me that there is a big difference between theoretical & conceptual advances and the great recent practical advances due to stacking MOAR layers. It is possible that the remaining steps to AGI consist of just stacking MOAR layers: compute + data + comparatively small advances in data/compute efficiency + something something RL metalearning will produce an AGI.

Certainly, not all problems can be solved [fast] by incremental advances and/or iterating on previous attempts. Some can. It may be the unfortunate reality that creating [but not understanding!] AGI is one of them.

As a fellow "back reader" of Yudkowsky, I have a handful of books to add to your recommendations:

Engines of Creation by K. Eric Drexler

Great Mambo Chicken and the Transhuman Condition by Ed Regis

EY has cited both at one time or another as the books that 'made him a transhumanist'. His early concept of future shock levels is probably based in no small part on the structure of these two books. The Sequences themselves borrow a ton from Drexler, and you could argue that the entire 'AI risk' vs. nanotech split from the extropians represented an argument about …

Eric Raymond · 2y
Great Mambo Chicken and Engines of Creation were in my reference list for a while, until I decided to cull the list for more direct relevance to systems of training for rationality. It was threatening to get unmanageably long otherwise. I didn't know there was a biography of Korzybski. Thanks!