All of Joe Kwon's Comments + Replies

The Intense World Theory of Autism

Very interesting post! 

1) I wonder what your thoughts are on how "disentangled" having a "dim world" perspective and being psychopathic are (completely "entangled" being: all psychopaths experience dim world and all who experience dim world are psychopathic).  Maybe I'm also packing too many different ideas/connotations into the term "psychopathy". 

2) Also, the variability in humans' local neuronal connection and "long-range" neuronal connections seems really interesting to me. My very unsupported, weak suspicion is that perhaps there is a c... (read more)

Less Wrong is a text-based forum. It has no audio. Video is rare. It barely even has any pictures. I would be surprised if the userbase wasn't skewed toward people with lower thresholds for stimulation.

How should my timelines influence my career choice?

Just a small note that your ability to contribute via research doesn't go from 0 now to 1 after you complete a PhD! That is, you can still contribute to AI Safety with research during a PhD.

Internal Information Cascades

Thanks for posting this! I was wondering if you might share more about your "isolation-induced unusual internal information cascades" hypothesis/musings! Really interested in how you think this might relate to low-chance occurrences of breakthroughs/productivity.

JenniferRM (reply): So, I think Thomas Kuhn can be controversial to talk about, but I feel like maybe "science" isn't even "really recognizable science" maybe until AFTER it becomes riddled with prestige-related information cascades?

Kuhn noticed, descriptively, that when you look at actual people trying to make progress in various now-well-defined "scientific fields" all the way back at the beginnings, you find heterogeneity of vocabulary, re-invention of wheels, arguments about epistemology, and so on. This is "pre-science" in some sense. The books are aimed at a general audience. Everyone starts from scratch. There is no community that considers itself able to ignore the wider world and just geek out together, but instead there is just a bunch of boring argumentative Tesla-caliber geniuses doing weird stuff that isn't much copied or understood by others.

THEN, a Classic arises. Historically almost always a book. Perhaps a mere monograph. There have been TWO of them named Principia Mathematica already! It sweeps through a large body of people, and everyone who reads it can't help but feel like conversations with people who haven't read it are boring retreads of old ideas. The classic lays out a few key ideas, a few key experiments, and a general approach that implies a bunch of almost-certainly-tractable open problems.

Then people solve those almost-certainly-tractable problems like puzzles, one after another, and write to each other about it, thereby "making progress" with durable logs of the progress in the form of the publications. That "puzzle and publish" dynamic is "science as usual". Subtract the classic, and you don't have a science... and it isn't that you don't necessarily have something fun or interesting or geeky or gadgety or mechanistic or relevant to the effecting of all things possible... (read more)
Time & Memory

"To me, it feels viscerally like I have the whole argument in mind, but when I look closely, it's obviously not the case. I'm just boldly going on and putting faith in my memory system to provide the next pieces when I need them. And usually it works out."

This closely relates to the kind of experience that makes me think of language as post hoc symbolic-logic fitting to the neural computations of the brain. That, in turn, inspired the hypothesis that a language model trained on a distinct neural net would be similar to how humans experience consciousness (and gives the illusion of free will).

Joe Kwon (reply): My original idea (and great points against the intuition by Rohin):
Partial-Consciousness as semantic/symbolic representational language model trained on NN

So, I thought it would be a neat proof of concept if GPT-3 served as a bridge between something like a chess engine's actions and verbal/semantic-level explanations of its goals (so that the actions are interpretable by humans). E.g., bishop to g5: this develops a piece and pins the knight to the king, so you can add additional pressure to the pawn on d5 (or something like this).
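A minimal sketch of the bridging idea above: a thin prompting layer that packages an engine's position and chosen move into a natural-language request for explanation. The function name, prompt format, and FEN example here are all hypothetical illustrations, not anything from the original comment, and the actual call out to a language model is left as a comment.

```python
# Hedged sketch: build a prompt asking a language model to explain a chess
# engine's chosen move in plain language. All names and the prompt format
# are hypothetical illustrations of the bridging idea, not a real pipeline.

def build_explanation_prompt(fen: str, move_san: str) -> str:
    """Assemble a natural-language prompt from an engine's position and move."""
    return (
        "You are annotating a chess game for a human reader.\n"
        f"Position (FEN): {fen}\n"
        f"Engine move: {move_san}\n"
        "Explain in one or two sentences what this move accomplishes:"
    )

prompt = build_explanation_prompt(
    fen="rnbqkb1r/ppp2ppp/5n2/3p4/3P4/2N5/PPP2PPP/R1BQKBNR w KQkq - 0 5",
    move_san="Bg5",
)
print(prompt)
# A real system would now send `prompt` to a language model via an API call
# and surface the returned explanation text alongside the engine's move.
```

Note that this is exactly the "two separate agents" setup Rohin describes below the fold: the language model only ever sees a textual description of the engine's output, so nothing couples their internals.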

In response, Reiichiro Nakano shared this paper: 
which kinda shows it's possible to have agent state/action representations in natural language... (read more)

rohinmshah (reply): (I've only read the abstract of the linked paper.) If you did something like this with GPT-3, you'd essentially have GPT-3 try to rationalize the actions of the chess engine the way a human would. This feels more like having two separate agents with a particular mode of interaction, rather than a single agent with a connection between symbolic and subsymbolic representations. (One intuition pump: notice that there isn't any point where a gradient affects both the GPT-3 weights and the chess engine weights.)
Value of building an online "knowledge web"

Thanks, I hadn't thought about those limitations.

Value of building an online "knowledge web"

For the basic features, I got used to navigating everything within an hour. I'll be on the lookout for improvements to Roam or other note-taking programs like this.