All Posts

Sorted by Magic (New & Upvoted)

Sunday, May 31st 2020

Frontpage Posts
2lsusr19h[BOOK REVIEW] SURFING UNCERTAINTY Surfing Uncertainty is about predictive coding, the theory in neuroscience that each part of your brain attempts to predict its own inputs. Predictive coding has lots of potential consequences. It could resolve the problem of top-down vs bottom-up processing. It cleanly unifies lots of ideas in psychology. It even has implications for the continuum with autism on one end and schizophrenia on the other. The most promising thing about predictive coding is how it could provide a mathematical formulation for how the human brain works. Mathematical formulations are great because they let you do things like make falsifiable predictions and simulate things on computers. But while Surfing Uncertainty goes into many of the potential implications of predictive coding, the author never hammers out exactly what "prediction error" means in quantifiable material terms on the neuronal level. This book is a reiteration of the scientific consensus[1]. Judging by the total absence of mathematical equations on the Wikipedia page for predictive coding, I suspect the book never defines "prediction error" in mathematically precise terms because no such definition exists. There is no scientific consensus. Perhaps I was disappointed with this book because my expectations were too high. If we could write equations for how the human brain performs predictive processing then we would be significantly closer to building an AGI than we are right now. -------------------------------------------------------------------------------- 1. The book contains 47 pages of high-quality scientific citations. ↩︎
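The book never pins "prediction error" down, but one common mathematical reading (this sketch is an assumption, not the book's definition) treats it as the residual between a top-down prediction and the bottom-up input, with that residual driving learning:

```python
# Minimal one-layer predictive-coding sketch (illustrative assumption, not
# the book's definition). A layer predicts its input from a latent cause;
# the "prediction error" is the residual, and learning is gradient descent
# on the squared residual.

def predictive_step(weight, latent, observed, lr=0.1):
    prediction = weight * latent      # top-down prediction of the input
    error = observed - prediction     # "prediction error" = residual
    weight += lr * error * latent     # nudge the generative weight to shrink the error
    return weight, error

w = 0.0
err = None
for _ in range(100):
    w, err = predictive_step(w, latent=1.0, observed=2.0)
# After repeated updates the prediction error shrinks toward zero.
```

Under this reading, "minimizing prediction error" is just iterative error-driven adjustment of a generative model; the open question the review raises is whether real neurons implement anything this precise.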

Saturday, May 30th 2020

8Ariel Kwiatkowski1dHas anyone tried to work with neural networks predicting the weights of other neural networks? I'm thinking about that in the context of something like subsystem alignment, e.g. in an RL setting where an agent first learns about the environment, and then creates the subagent (by outputting the weights or some embedding of its policy) who actually obtains some reward
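The idea described above (sometimes called a "hypernetwork") can be sketched in miniature; everything here is illustrative, with linear maps standing in for real networks:

```python
# Toy sketch of one network outputting the weights of another (a
# "hypernetwork"-style setup). All names and the linear forms are
# illustrative assumptions, not any specific published architecture.

def hypernet(context, theta):
    # Outer network: maps what the agent learned about the environment
    # ("context") to the weight vector of a subagent policy.
    return [t * context for t in theta]

def subagent_policy(weights, state):
    # The generated subagent: a linear policy over state features,
    # which would then go collect reward in the RL setting described.
    return sum(w * s for w, s in zip(weights, state))

theta = [0.5, -1.0]                                  # outer network's parameters
generated = hypernet(context=2.0, theta=theta)       # emitted subagent weights
action = subagent_policy(generated, state=[3.0, 1.0])
```

The subsystem-alignment worry then becomes concrete: gradients flow through `theta`, but the behavior that actually earns reward lives in `generated`, one level removed.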

Friday, May 29th 2020

3Draconarius2dHilbert’s Hotel improvement This hotel is 2-star at best: imagine having to pack up your stuff every time the hotel receives a new guest. I’ve decided to fix that. The hotel still has infinite rooms and guests, but this time every other room is unoccupied, which prepares the hotel for an infinite number of new visitors without inconveniencing the current residents.
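The room-assignment rule implicit in this improvement can be written out (a sketch of the construction, with function names of my own choosing): residents permanently hold the even rooms, so any infinite batch of newcomers fits into the odd rooms with nobody repacking.

```python
# Room assignment for the "improved" Hilbert's Hotel (illustrative sketch).
# Residents occupy even rooms forever; newcomers slot into the odd rooms.

def resident_room(n):
    return 2 * n          # resident n lives in even room 2n, permanently

def newcomer_room(k):
    return 2 * k - 1      # newcomer k takes odd room 2k - 1

# Evens and odds never collide, so no one ever has to move.
rooms = ({resident_room(n) for n in range(1, 100)}
         | {newcomer_room(k) for k in range(1, 100)})
```

The trade-off versus the classic construction: here half the rooms sit permanently empty, which is the price of never inconveniencing a resident.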
1__nobody3dObservation: It should generally be safe to forbid non-termination when searching for programs/algorithms. In practice, all useful algorithms terminate: If you know that you're dealing with a semi-decidable thing and doing serious work, you'll either (a) add a hard cutoff, or (b) structure the algorithm into a bounded step function and a controller that decides whether or not to run for another step. That transformation is not adding significant overhead size-wise, so you're bound to find a terminating algorithm "near" a non-terminating one! Sure, that slightly changes the interface – it's now allowed to abort with "don't know", but that's a transformation that you likely would have applied anyway. Even if you consider that a drawback, not having to deal with potentially non-terminating programs / being able to use a description format that cannot represent non-terminating forms should more than make up for that. (I just noticed this while thinking about how to best write something in Coq (and deciding on termination by "fuel limit"), after AABoyles' shortform on logical causal isolation with its tragically simple bit-flip search had recently made me think about program enumeration again…)
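The step-function-plus-controller transformation can be sketched concretely (the Collatz iteration below is just a stand-in for any possibly-nonterminating search):

```python
# Sketch of the transformation described above: a possibly-nonterminating
# computation rewritten as a bounded step function plus a controller that
# aborts with "don't know" when its fuel runs out. Collatz is a stand-in.

def collatz_step(n):
    # Bounded step function: one guaranteed-terminating unit of work.
    return n // 2 if n % 2 == 0 else 3 * n + 1

def reaches_one(n, fuel=1000):
    # Controller: decide after each step whether to keep running.
    for _ in range(fuel):
        if n == 1:
            return True
        n = collatz_step(n)
    return None   # fuel exhausted: "don't know", not non-termination

result_small = reaches_one(27)              # 27 reaches 1 well within budget
result_tiny_fuel = reaches_one(27, fuel=5)  # aborts with "don't know"
```

Note the interface change the post mentions: the return type gains a third value (`None`) alongside the honest answers, which is exactly the "fuel limit" idiom used to satisfy Coq's termination checker.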

Thursday, May 28th 2020

17Raemon4dI had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!) Disclaimer: I am not making much effort to not ramble in this post. A couple takeaways: 1. Working Memory Limits One key problem is that I introduce so many points, subpoints, and subthreads, that I overwhelm people's working memory (where human working memory limits are roughly "4-7 chunks"). It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress. So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4." (Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about) ... 2. I just don't finish sente I frequently don't finish sentences, whether in person or in text (like emails). I've known this for a while, although I kinda forgot recently. I switch abruptly to a new sentence when I realize the current sentence isn't going to accomplish the thing I want, and I have a Much Shinier Sentence Over Here that seems much more promising. But, people don't understand why I'm making the leap from one half-finished thought to another. So, another simple habit is "make sure to finish my god damn sentences, even if I become disappointed in them halfway through" ... 3. Use Mindful Cognition Tuning to train on *what is easy for people to follow*, as well as to improve the creativity/usefulness of my thoughts. I've always been rambly. But a thing that I think has made me EVEN MORE rambly in the past 2 years is a mindful-thinking-technique, where you notice all of your thoughts on the less-than-a-second level, so that you can notice which tho
7Paul Crowley3dFor the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.

Tuesday, May 26th 2020

9AABoyles5dAnything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes. The performance of AlphaGo got me thinking about algorithms we can't access. In the case of AlphaGo, we implemented the algorithm (AlphaGo) which discovered some strategies we could never have created. (Go Master Ke Jie famously said "I would go as far as to say not a single human has touched the edge of the truth of Go.") Perhaps we can imagine a sort of "logical causal isolation." An algorithm is logically causally isolated from us if we cannot discover it (e.g. in the case of the Go strategies that AlphaGo used) and we cannot specify an algorithm to discover it (except by random accident) given finite computation over a finite time horizon (i.e. in the lifetime of the observable universe). Importantly, we can devise algorithms which search the entire space of algorithms (e.g. generate all possible strings of bits of length less than n as n approaches infinity), but there's little reason to expect that such a strategy will result in any useful outputs of some finite length (there appear to be enough atoms in the universe (10^80) to represent all possible algorithms of length up to log2(10^80) ≈ 265). There's one important weakness in LCI (that doesn't exist in Physical Causal Isolation). We can randomly jump to algorithms of arbitrary lengths. This stipulation gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with the expectation of arriving at a useful algorithm. (There's an interesting question, about which I haven't yet thought, concerning the distribution of useful algorithms of a given length.) Hence we must add the caveat to our definit
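The bit-length bound quoted above is a one-line calculation: if ~10^80 atoms can each label one distinct program, the longest exhaustively-coverable bitstrings are about log2(10^80) bits.

```python
# The arithmetic behind the bound in the post: ~10^80 atoms in the
# observable universe can represent at most that many distinct bitstrings,
# i.e. programs of length up to log2(10^80) bits.

import math

max_length = math.log2(10 ** 80)   # ≈ 265.75 bits
```

So even a perfect physical enumeration tops out around 265-bit programs, which is why randomly jumping to longer algorithms can't be expected to land on useful ones.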
6TurnTrout5dSentences spoken aloud are a latent space embedding of our thoughts; when trying to move a thought from our mind to another's, our thoughts are encoded with the aim of minimizing the other person's decoder error.
5Mary Chernyshenko5dSome other people who play to win It's a crowd I'd come into contact with as a manager of an online bookshop (and most of the reason I quit). Usually, I can pretend they don't exist, but... we all know how it goes... and now that they don't make my blood boil every weekend, I can afford to speak about them. "Some other people" will play to win - say, a facebook lottery with a book for a prize - and they will mean it. If they don't win, they will say the lottery was rigged. Public righteous indignation on every player's behalf is a weapon (and for the manager, a potent vaccine against righteously indignant polemics of many other kinds). Private appeals to the manager's pity; commenting on the rules' exploitable/exploited loopholes - after the winner is announced; repeating questions which have already been answered elsewhere in the thread. I don't include 'filing a complaint' here, because it's frankly too straightforward for most of them, most of the time; the bookshop would likely send them a book with an eloquent blessing/apology, just to get them to shut up and earn good PR points for "owning up to mistakes". But in practice, it still matters too much to be the actual winner, and the brain of the trophy-gatherer works like other brains don't. At least not for a while. I'm not unusually out-of-touch with customers; I was recommended for the job after two years in an offline shop. And this was... entirely different. I'd never encountered people with whole profiles dedicated to reposting online lotteries - living people I had to call on the phone. It is another world. When I read about (simple) "pure" game theoretical problems, in which the players "care only about winning", I cannot reconcile the image of Worthy Rivals the author has in mind with the actual Really-Want-This-Whatever Whiners who seek out such contests. 
Get it, not the passively allowing themselves to be drawn into a strategic game kind of players, but the self-sorting to exploit as many offers as

Sunday, May 24th 2020

12Raemon8dThere's a problem at parties where there'll be a good, high-context conversation happening, and then one-too-many-people join, and then the conversation suddenly dies. Sometimes this is fine, but other times it's quite sad. Things I think might help: * If you're an existing conversation participant: * Actively try to keep the conversation small. The upper limit is 5, 3-4 is better. If someone looks like they want to join, smile warmly and say "hey, sorry we're kinda in a high context conversation right now. Listening is fine but probably don't join." * If you do want to let a newcomer join in, don't try to get them up to speed (I don't know if I've ever seen that actually work). Instead, say "this is high context so we're not gonna repeat the earlier bits, maybe wait to join in until you've listened enough to understand the overall context", and then quickly get back to the conversation before you lose the Flow. * If you want to join a conversation: * If there are already 5 people, sorry, it's probably too late. Listen if you find it interesting, but if you actively join you'll probably just kill the conversation. * Give them the opportunity to gracefully keep the conversation small if they choose. (say something like "hey can I join? It sounds like maybe a high context conversation, no worries if you wanna keep it small.") * Listen for longer before joining. Don't just wait till you understand the current topic – try to understand the overall vibe, and what previous topics might be informing the current one. Try to get a sense of what each current participant is getting out of the conversation. When you do join, do so in a small way that gives them affordance to shift back to an earlier topic if your new contribution turned out to be not-actually-on-topic.
3ryan wong7dThere are two kinds of pleasurable feelings. The first one is a self-reinforcing loop, where the in-the-moment pleasure leads to craving for more pleasure, such as mindlessly scrolling through social media, or eating highly-processed, highly-palatable food. The second is pleasure gained through either thoughtfully consuming good content, like listening to good music or reading good books, or the fulfillment of a task that's meaningful, such as getting good grades or getting a promotion for sustained conscientious effort. The first is pleasure for its own sake, without any "real world rewards" that come with it, i.e., distractions. The second isn't as "addictive" as the first, nor does it cause the same spikes in pleasure, but it comes with real world tangible rewards. There is no way to completely eliminate the human need for the first pleasure. But the need can be reduced. The ratio of second-to-first pleasure is the degree to which a person is able to achieve his goals, the degree to which a person is successful.

Saturday, May 23rd 2020

6ESRogs8dI'm looking for an old post where Eliezer makes the basic point that we should be able to do better than intellectual figures of the past, because we have the "unfair" advantage of knowing all the scientific results that have been discovered since then. I think he cites in particular the heuristics and biases literature as something that thinkers wouldn't have known about 100 years ago. I don't remember if this was the main point of the post it was in, or just an aside, but I'm pretty confident he made a point like this at least once, and in particular commented on how the advantage we have is "unfair" or something like that, so that we shouldn't feel at all sheepish about declaring old thinkers wrong. Anybody know what post I'm thinking of?
3William_Darwin9dI've been thinking about people's mindset as it relates to spending their free time. Specifically, when you go to do something 'productive' like learn about a new topic, work through exercises in a textbook, or go through an online course, do you feel that you have to intentionally decide not to play video games, watch Netflix, etc. and forego short-term happiness? Or do you feel that this decision is straightforward because that's what you would prefer to be doing and you don't feel like you sacrifice anything?

Friday, May 22nd 2020

29mingyuan9dI get sick of people saying things that imply that rationality has no practical, tangible benefit (e.g. "I moved to the Bay and am no better off" etc). Lots of discussion about this topic talks about physical fitness, career success, or investing. But since this is my shortform and I can say whatever I want, I want to talk about a concept that I've personally found helpful: the idea of fire alarms (as talked about in There’s No Fire Alarm for Artificial General Intelligence and Sunset at Noon), which is sort of like just another concept handle for noticing confusion. When I was eleven, my family spent Thanksgiving at my grandparents' house. That weekend, as we were getting ready to make the several-hour drive back home, the adults were doing odd jobs around the house and my sister and I were hanging out with our cousin in the den, maybe watching a movie or something. At some point, my cousin looked up and asked, "Is someone screaming?" My priors on someone screaming were extremely low, and I didn't hear what she heard, so I said, "Nah, it's just a machine," and turned back to what I was doing. Turned out it was someone screaming, and my decision to ignore that possibility could very well have meant the difference between my dad living and dying. (The details aren't important, but for anyone who's worried, he lived.) Another anecdote. In late 2017, shortly after the release of both There’s No Fire Alarm for Artificial General Intelligence and Sunset at Noon, I was standing with Habryka in a dimly lit and dusty room, setting up A/V for a show. In a moment of stillness, Habryka looked over and said, "Is that smoke?" Due to the poor lighting and my low prior on things randomly catching on fire, my knee-jerk response was still "Nah." But luckily Habryka wasn't so dismissive, and he went over and unplugged the definitely-actually-smoking cord before a proper fire could start. 
I've similarly witnessed the entire LessWrong team insisting on investigating every time they
14mingyuan9dWhen I was in high school, I once had a conversation with a classmate that went something like this (except that it was longer and I was less eloquent): Him: "German is a Scandinavian language." Me: "No, it's not. German and the Scandinavian languages both fall under the umbrella of Germanic languages, but 'Scandinavian languages' refers to a narrower category that doesn't include German." Him: "Well that's your opinion." Me: "No??? That's not what an opinion is???" Him: "Look, it's your opinion that German isn't a Scandinavian language, and it's my opinion that it is. We can agree to disagree." Me: ??????????????????!!!!!!!!!????!?!?!?!?! *punches self in face* ---- When I was taking a required intro biology course in college, I had already read a bunch of LW and SSC, notably including That Chocolate Study []. So when the professor put Bohannon's results and methodology up on the projector, I was ready as heck to talk about all of the atrocities therein. The professor asked us to pair up with the person next to us to discuss whether we believed Bohannon's results, and I decided to give the freshman next to me the chance to speak first before I absolutely demolished everything. The girl turned to me with wide eyes and a confident, creaky-voice drawl, and said, verbatim: "I think it's true, because chocolate is known to be a superfood." I was floored. How could this be happening in real life? I was at an elite college with a sub-10% acceptance rate, and this person next to me had just said "known to be" and "superfood" like they explained anything – like they meant anything. I will never forget those words. Looking back, that may have been the day I decided to move to the Bay after graduating. No regrets.
