A beautifully designed collection of books, each small enough to fit in your pocket. The book set contains over forty chapters by more than twenty authors including Eliezer Yudkowsky and Scott Alexander. This is a collection of opinionated essays exploring argument, aesthetics, game theory, artificial intelligence, introspection, markets, and more, as part of LessWrong's mission to understand the laws that govern reasoning and decision-making, and build a map that reflects the territory.
I'm an infovore. The failure mode for an infovore is spending too much time reading and not enough time doing. I was bitten especially hard by this a few days ago - I hadn't caught up on LW posts in a while and had 100+ in my RSS reader. I wanted to read them all, but I knew I shouldn't.
I had the same issue with Hacker News for many years. I broke the cycle by signing up for a daily "best of HN" feed, but even this proved to be too much content. I replaced it with a weekly best-of digest, and that's been working well for me - the digest includes the top 50 posts by votes, so that's the maximum number of enticing nuggets...
“We wanted flying cars, instead we got 140 characters,” says Peter Thiel’s Founders Fund, expressing a sort of jaded disappointment with technological progress. (The fact that the 140 characters have become 280, a 100% increase, does not seem to have impressed him.)
Thiel, along with economists such as Tyler Cowen (The Great Stagnation) and Robert Gordon (The Rise and Fall of American Growth), promotes a “stagnation hypothesis”: that there has been a significant slowdown in scientific, technological, and economic progress in recent decades—say, for a round number, since about 1970, or the last ~50 years.
When I first heard the stagnation hypothesis, I was skeptical. The arguments weren’t convincing to me. But as I studied the history of progress (and looked at the numbers), I slowly came around, and...
Something missing from the top-level post: why stagnation.
I'll just point out that one of the tiny things that most gave me a sense of "fuck" in relation to stagnation was reading an essay written in 1972 that was lamenting the "publish or perish" phenomenon. I had previously assumed that that term was way more recent, and that people were trying to fix it but it would just take a few years. To realize it was 50 years old was kinda crushing, honestly.
Here's Google Ngrams showing how common the phrase "publish or perish" was in books over the last 200 years...
What is up with spirituality? I mean, from an atheistic perspective?
In my experience, atheists tend to focus on the empirical question of whether there is an all-powerful supernatural creature behind all that we observe. And yeah, there probably isn’t.
But having won that point, what does one make of the extreme popularity of religion? I think the usual answer given is something like ‘well, we used to be very ignorant and not have good explanations of natural phenomena, plus we tend to see agents in everything because our agent detection software is oversensitive’.
Which might explain the question ‘Why would people think a supernatural agent controls things?’ But that seems like only a corner of religion.
Another big part of religion—and a thing that also occurs outside religion—seems to be...
From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence that it can increase feelings of connection to entities larger than the self, increase feelings of love and trust toward others, and promote feelings of belonging in groups.
The emotion of elevation, which appears to be linked to oxytocin, is most often caused by witnessing other people perform altruistic or morally agreeable actions. This may explain the tendency for man...
Pardon me while I make my way to the rooftops.
So I’m sure it’s not that simple especially because of regulatory issues, but… did you hear the one where humanity could have produced enough mRNA vaccine for the entire world by early this year, and could still decide to do it by the end of this year, but decided we would rather save between four and twelve billion dollars?
If not, there’s a section on that.
Meanwhile, we also can’t figure out how to put the vaccine doses we already have into people’s arms in any reasonable fashion. New policies are helping with that, and we are seeing signs that things are accelerating, but wow is this a huge disaster.
We took some steps this week towards sane policy. Everyone over...
Prompted by your comment, when I wrote more stuff last night, I made it standalone:
🇨🇦 People liked my Canada comment on Zvi's post on Jan 14th, so here's another update as a top-level post. I thought I wouldn't have much to say but apparently I wrote some stuff!
(I want to underscore that this is a rambly summary from someone who does not have the same thorough researchy energy or rigorous models as Zvi or many other LWers in many situations. If you have major decisions to make, use this summary as at most a jumping off point. Slightly BC-heavy because I moved to BC a few months ago and have been getting more news here. Also some of my rambles involve info that is probably common-knowledge to most Canadians who are informed whatsoever, I guess because I'm imagining people from other...
Appreciate you chiming in. That's a great point about how different rural communities are faring differently. I kind of had the impression some rural areas in the prairies were doing badly, but I didn't have an off-hand sense of where or why. Your rough sketch with vague notions is helpful on that front.
I drove across the country on the way out to BC a couple months ago, and it's indeed hard to imagine the farming areas in the south half of the prairies having much covid spread, whereas it makes sense that resource-extraction areas would for the 2 reasons you describe. That plus exponentials/nonlinearities seems sufficient to explain most of the discrepancy, maybe.
Reminder of the rules of Stag Hunt:
From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit.
How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit?
If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is...
I haven't seen a strong argument that "stag hunt" is a good model for reality. If you need seven people to hunt stag, the answer isn't to have seven totally committed people who never get ill, never have other things to do, and never just don't feel like it. I'd rather have ten people who are 90% committed, and be ready to switch to rabbit on the few days when only six show up.
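The arithmetic behind this point is easy to check. Treating attendance as independent coin flips (my simplification, not the commenter's), seven hunters who each show up 90% of the time all show up on under half of days, while ten 90%-committed hunters clear the seven-person threshold almost every day:

```python
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n hunters show up,
    if each shows up independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(7, 7, 0.9))   # ~0.478: seven 90%-reliable hunters fail most days
print(p_at_least(7, 10, 0.9))  # ~0.987: ten 90%-reliable hunters almost always succeed
```

So redundancy beats demanding perfect commitment, at least under this independence assumption.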
[Epistemic status: Strong opinions lightly held, this time with a cool graph.]
I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable.
In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.
In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the...
They define "one data point" as "one token," which is fine. But it seems equally defensible to define "one data point" as "what the model can process in one forward pass," which is ~1e3 tokens. If the authors had chosen that definition in their paper, I would be showing you a picture that looked identical except with different numbers on the data axis, and you would conclude from the picture that the brain should have around 1e12 data points to match its 1e15 params!
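As a back-of-envelope sketch of that rescaling (the 1e15-data-point figure under the per-token definition is my assumption, chosen to be consistent with the 1e12 figure the comment reports under the per-forward-pass definition):

```python
TOKENS_PER_FORWARD_PASS = 1e3  # from the comment: one forward pass processes ~1e3 tokens

# Under "one data point = one token", suppose the picture suggests the brain's
# 1e15 params call for ~1e15 data points (assumption for illustration).
data_points_token_def = 1e15

# Redefining "one data point = one forward pass" just divides the data axis:
data_points_fp_def = data_points_token_def / TOKENS_PER_FORWARD_PASS
print(f"{data_points_fp_def:.0e}")  # 1e+12
```

Nothing about the model changes; only the units on the data axis do.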
Holy shit, mind blown! Then... how ar...
Epistemic status: highly confident (99%+) this is an issue for optimal play with human consequentialist judges. Thoughts on practical implications are more speculative, and involve much hand-waving (70% sure I’m not overlooking a trivial fix, and that this can’t be safely ignored).
Note: I fully expect some readers to find the core of this post almost trivially obvious. If you’re such a reader, please read as “I think [obvious thing] is important”, rather than “I’ve discovered [obvious thing]!!”.
In broad terms, this post concerns human-approval-directed systems generally: there’s a tension between [human approves of solving narrow task X] and [human approves of many other short-term things], such that we can’t say much about what an approval-directed system will do about X, even if you think you’re training an X...
I am unsure what the judge's incentive is to select the more useful result, given that they still have access to both answers. Is it just that the judge wants to be the kind of judge the debaters expect to select the useful answer, so that the debaters will provide useful answers, and therefore chooses the useful answer?
If that's the reason, I don't think you would need a committed deontologist to get them to choose a correct answer over a useful answer; you could instead just pick someone who doesn't think very hard about cert...
This is post 5 of 10 in my cryonics signup guide, and the second of five posts on life insurance.
In this post, I'll cover the different types of life insurance policies you might want to use to fund your cryopreservation. This is the most complicated part of this entire sequence and it's taken me many, many hours of confusion to reach even the tenuous understanding I'm presenting here. Please bear with me and let me know if you spot any errors or have any questions.
Note that in addition to being labyrinthine, the life insurance landscape changes fairly often, such that the options that were available to you when you signed up for cryonics ten years ago might no longer be offered. They're always adding new types of...
Buying a cheap term policy makes sense if you expect to be able to self-fund later.
Three may keep a secret, if two of them are dead.
— Benjamin Franklin
First you say, “Someone needs to hang for this” as a turn of phrase, and of course you don’t mean that literally. That would be horrific; it’s just a turn of phrase. Indeed you are genuinely horrified. Next it becomes “I wish we could just shoot him”, but of course you weren’t serious and you’d never actually do it. Again, you completely believe this. But before you know it, the palace is in flames and you’re getting ready to string up the king in his pajamas, and despite the illumination you’re still blind to your tendency to deceive yourself.
I've started a (free) Substack, in case anyone is interested.
Similar to Hacker Newsletter, there is a weekly digest of LessWrong posts at Rational Newsletter.