Recent Discussion

I appreciate Zoe Curzi's revelations about her experience with Leverage.  I know how hard it is to speak up when few or no others do, and when people are trying to keep things under wraps.

I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid.  Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when

...

Psilocybin-based psychedelics are indeed considered low-risk both in terms of addiction and overdose. This chart sums things up nicely, and is a good thing to 'pin on your mental fridge':

https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Drug_danger_and_dependence.svg/1920px-Drug_danger_and_dependence.svg.png

ChristianKl: You would likely hire someone who's traditionally trained, credentialed, and has work experience, instead of doing a bunch of your own psych-experiments — likely someone in a tradition like Gestalt therapy that focuses on being non-manipulative.
ChristianKl: It would probably have been better if you had focused on your own experience and dropped the talk about Zoe from this post. That would make it easier for the reader to take the information value from your experience. I think your post is still valuable information, but the added narrative layer makes it harder to interact with than it would have been had it focused more on your experience.
Eli Tyre: Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal. My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting.

There's an idea in computer science (Amdahl's law) whereby the maximum theoretical speedup attainable with an arbitrary number of processors is limited by the fraction of the program that can be parallelized. If we have two segments of code that take the same amount of time to execute on one CPU core, where the first segment can't be parallelized at all and the second is perfectly parallelizable, we can only ever run the program twice as fast, no matter how many CPU cores we have.
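As a worked version of that two-segment example: let $p$ be the fraction of the runtime that can be parallelized and $N$ the number of cores. Amdahl's law gives

$$S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.$$

With two equal segments, $p = 1/2$, so even with infinitely many cores the speedup tops out at $1/(1 - 1/2) = 2$.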

There's a similar idea in economics. It seems like the most powerful and civilizationally relevant feature controlling the medium to long term change in the price of goods is the extent to which the production of that good can be decoupled...

This is an essay about one of those "once you see it, you will see it everywhere" phenomena.  It is a psychological and interpersonal dynamic roughly as common, and almost as destructive, as motte-and-bailey, and at least in my own personal experience it's been quite valuable to have it reified, so that I can quickly recognize the commonality between what I had previously thought of as completely unrelated situations.

The original quote referenced in the title is "There are three kinds of lies: lies, damned lies, and statistics."


Background 1: Gyroscopes

Gyroscopes are weird.

Except they're not.  They're quite normal and mundane and straightforward.  The weirdness of gyroscopes is a map-territory confusion—gyroscopes seem weird because my map is poorly made, and predicts that they will do something other than their normal,...

I think the point is that we first point at some "splashy stuff" that turns out later to have details, such as being made out of hydrogen. "Splashy stuff" is not conceptually connected to hydrogen, because if those details were found to be different it would still be "that splashy stuff".

Water is pretty figured out, but dark matter, for example, could be a number of different things. If one thinks of neutronic dark matter or axionic dark matter, they would be different and not equivalent to each other, but they would still succeed in being dark matter. And this kind of thing... (read more)

Slider: The more sophisticated views are not that relevant to the fact that the naive view is false/fragile. Even in the more complicated case, if there is anti-gouging it just means you have to price the good for the totality of history rather than at a spot-time price. This means that in calm times the price will be a bit higher. Any seller who priced lower, taking only calm-time realities into account, would suffer unmitigated shocks from rare events. With gouging allowed, the financial hit from rare events is borne mainly by the populace.

For example, with the military one could have a reasonable expectation of being defended from invasions. Say the government runs out of troops and hires private mercenaries to provide the defence. Since this fulfils a government duty, it makes sense for the government to carry the burden and foot the bill. One could imagine the very same people being deployed, but with the people being defended paying the bills instead of the government. In this arrangement the people organise the defence themselves and don't enjoy protection by the state.

One could have a "strategic snowstorm reserve" where the government does the preparing and, upon declaration of an emergency such as a snowstorm, floods shovels out, outside of the market mechanics. The catch is that those reserves are not a freebie source of shovels in calm times. What tends to happen instead is that existing logistical lines are repurposed or dual-purposed as such alternate distribution channels. If you want shovels in people's hands, shovel stores are at least an okay delivery vector. You could nationalise them for the duration of the exceptional circumstances, or make less drastic adjustments, as long as the result works as an effective delivery vector. If people's inability to cough up big cash up front stops shovels from being delivered, then it becomes ineffective. One could even do stuff like letting everybody be gouged but in
AllAmericanBreakfast: This is a great point. When I feel frustrated with a faulty conversation, I often start by projecting my own motivations onto the other person, or by assuming that their explicitly stated goals are their real goals. Even if I know that this is wrong, I try to act as if it were so. "You say you're doing this for the good of humanity? Then I'm going to respond as if that's really what you cared about, and that you're going about it badly, even if I know deep down that you're lying about your motives."

It's this perverse form of "bravery." There's a class of shallow, incoherent lies that we're all supposed to know are shallow lies, and yet act as if they were deep, coherent truths. It can feel like bravery to "expose" the lie by taking it literally and showing how incoherence is the result.

An alternative is to focus not on bravely confronting the lie in order to expose the object-level truth, but on cannily understanding the motive and function of the lie itself. "Did you lie to me just now, and what's the truth of the matter?" is a very different question from "I think you just lied to me, so why did you do that?" "You just pretended to care about price gouging, so why did you do that?" seems like a good way to confront such statements, at least some of the time.
Duncan_Sabien: More like "better lawmakers require actual unusual effort; they won't just happen by default."

[Epistemic status: The authors of this book make many factual claims that I'm not equipped to conclusively verify. Much of the publicly available information on the food industry comes from agribusinesses themselves or from activists who bitterly oppose them. In this review I've tried to summarize the authors' claims as they've presented them, with the occasional corroborating link, but as a layman I can't offer a much more complex perspective on these topics beyond what I learned from this book. The value judgments expressed in this review are my attempt to capture the authors' point of view, except where otherwise noted. I've absorbed many convincing arguments against factory farming from Effective Altruists over the years though, and as of this writing I've drastically cut back my meat...

(Content warning: self-harm, parts of this post may be actively counterproductive for readers with certain mental illnesses or idiosyncrasies.)

What doesn't kill you makes you stronger. ~ Kelly Clarkson.

No pain, no gain. ~ Exercise motto.

The more bitterness you swallow, the higher you'll go. ~ Chinese proverb.

I noticed recently that, at least in my social bubble, pain is the unit of effort. In other words, how hard you are trying is explicitly measured by how much suffering you put yourself through. In this post, I will share some anecdotes of how damaging and pervasive this belief is, and propose some counterbalancing ideas that might help rectify this problem.

I. Anecdotes

1. As a child, I spent most of my evenings studying mathematics under some amount of supervision from my mother. While...

Pain, like money, is a measurable metric, whereas skill level is a much more abstract set of metrics to measure. We generally use tests and competitions to measure skill level, but during periods of personal growth when those options aren't accessible, people tend to use suffering from learning as the measure of progress. Like you said, it's not very reliable, since there is really no correlation between pain and skill level. Also, your speed of learning can change how much time/enjoyment/suffering you go through as you learn, but ulti... (read more)

This is a crosspost from my personal website. Inspired by: Naval, If Sapiens Were a Blogpost and Brett Hall’s podcast.

Many people have recommended the book The Beginning of Infinity: Explanations that Transform the World by David Deutsch to me. I don’t know how, because I can’t imagine any of them actually finished it. Previously on my blog I’ve reviewed books and been critical of aspects of them. But this post is more of a summary of The Beginning of Infinity. I decided to write it this way because this book is very complicated, reasonably long and frequently misunderstood. Deutsch is a physicist at Oxford and a pioneer of quantum computing, but his interests are wide-ranging.

All progress comes from good explanations

In this book I argue that all progress,

...

"Our ancestors followed many practices which work, but for which they had no explanation."

That would be very surprising for a species that reflexively attempts to explain things.

Also, in the book, he specifies that he's explaining the unprecedented rate of consistent progress from the scientific revolution onward.


In [Prediction] We are in an Algorithmic Overhang I made technical predictions without much explanation. In this post I explain my reasoning. This prediction is contingent on there not being a WWIII or equivalent disaster disrupting semiconductor fabrication.


I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.

Superintelligence is possible. The laws of physics demand it. If superintelligence is possible, then it is inevitable. Why haven't we built one yet? There are four[1] candidate limitations:

  • Data. We lack sufficient training data.
  • Hardware.
...
Donald Hobson: Possibly GPT-3 x 100. Or RL of similar scale. Very likely evolution (with enough compute, but you might need a lot of compute). AIXI (you will need a lot of compute). I was kind of referring to the disjunction.
Donald Hobson: The set of designs that look like "human brains + BCI + reinforcement learning" is large. There is almost certainly something superintelligent in that design space, and a lot of things that aren't. Finding a superintelligence in this design space is not obviously much easier than finding a superintelligence in the space of all computer programs. I am unsure how this bypasses the algorithmic-complexity and hardware issues. I would not expect human brains to be totally plug-and-play compatible. It may be that the results of wiring 100 human brains together (with little external compute) are no better than the same 100 people just talking. It may be that you need difficult algorithms and/or lots of hardware as well as BCIs.

I think using AI + BCI + human brains will be easier than straight AI for the same reason that it’s easier to finetune pretrained models for a specific task than it is to create a pretrained model. The brain must have pretty general information processing structure, and I expect it’s easier to learn the interface / input encoding for such structures than it is to build human level AI.

Part of that intuition comes from how adaptable the brain is to injury, new sensory modalities, controlling robotic limbs, etc. Another part of the intuition comes from how mu... (read more)
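The pretraining/finetuning analogy here is easy to make concrete. Below is a minimal sketch (PyTorch, with made-up layer sizes standing in for "the brain" and "the interface"): the general-purpose base stays frozen, and only a small input encoding and readout are trained, which is the sense in which learning an interface to an existing general information processor is cheaper than building one from scratch.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained general information processor (the analogue of
# the brain in the comment above). Its parameters are frozen.
base = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
for param in base.parameters():
    param.requires_grad = False

# The only trainable parts: a small input encoding ("the interface") and a
# readout head. These are a tiny fraction of the total parameter count.
interface = nn.Linear(32, 512)
head = nn.Linear(512, 10)
optimizer = torch.optim.Adam(
    list(interface.parameters()) + list(head.parameters()), lr=1e-3
)

# One illustrative training step on random stand-in data: gradients flow
# through the frozen base but only update the interface and head.
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
loss = nn.functional.cross_entropy(head(base(interface(x))), y)
loss.backward()
optimizer.step()
```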

Quintin Pope: I'm actually working on an AI progress timeline / alignment failure story where the big risk comes from BCI-enabled coordination tech (I've sent you the draft, if you're interested). I.e., instead of developing superintelligence, the timeline develops models that can manipulate mood/behavior through a BCI, initially as a cure for depression, then gradually spreading through society as a general mood booster / productivity enhancer, and finally being used to enhance coordination (e.g., make everyone super dedicated to improving company profits without destructive internal politics). The end result is that coordination models are trained via reinforcement learning to maximize profits or other simple metrics, and gradually remove non-optimal behaviors in pursuit of those metrics.

This timeline makes the case that AI doesn't need to be superhuman to pose a risk. The behavior-modifying models manipulate brains through BCIs with far fewer electrodes than the brain has neurons, and are much less generally capable than human brains. We already have a proof of concept [https://www.nature.com/articles/s41591-021-01480-w] that a similar approach can cure depression, so I think more complex modifications like loyalty/motivation enhancement are possible in the not-too-distant future.

You may also find interesting the section of my timeline addressing trends in AI progress: I think there's a reasonable case that AI progress will continue at approximately the same trajectory as it has over the last ~50 years.

Introduction

Yesterday* I talked about a potential treatment for Long Covid, and referenced an informal study I'd analyzed that tried to test it, which had seemed promising but was ultimately a letdown. That analysis was too long for its own post, so it's going here instead.

Gez Medinger ran an excellent-for-its-type study of interventions for long covid, with a focus on niacin, the center of the stack I took. I want to emphasize both how very good for its type this study was, and how limited the type is. Surveys of people in support groups who chose their own interventions are not a great way to determine anything. But really rigorous information will take a long time, and some of us have to make decisions now, so I...

Daniel_Eth: "The protocol I analyze later requires a specific form of niacin." What's the form? Also, do you know what sort of dosage is used here? If niacin is helpful for long covid, I wonder if taking it decreases the chances of getting long covid to begin with. Given how well tolerated it is, it might be worth taking just in case.

The original protocol is here (which specifies the niacin form, a suggested dose, and some support vitamins), and my analysis of it is here. There's a comment here on a study that maybe found niacin useful for acute covid, although I haven't investigated and have low confidence by default.

I think there's merit to taking nutrition seriously and stocking up on many things, but I'm in general wary of treating vitamin-taking as a free action. Even a daily pill consumes attention, and it can be very hard to notice negative long-term effects or changes in optima... (read more)

tl;dr: The LessWrong team is re-organizing as Lightcone Infrastructure. LessWrong is one of several projects we are working on to ensure the future of humanity goes well. We are looking to hire software engineers as well as generalist entrepreneurs in Berkeley who are excited to build infrastructure to ensure a good future.

I founded the LessWrong 2.0 team in 2017, with the goal of reviving LessWrong.com and reinvigorating the intellectual culture of the rationality community. I believed the community had great potential for affecting the long term future, but that the failing website was a key bottleneck to community health and growth.

Four years later, the website still seems very important. But when I step back and ask “what are the key bottlenecks for improving the longterm future?”, just...

makeswell: I would love to work on this. I applied through your website. Commenting here in case you get a huge flood of random resumes; maybe my comment will help me stand out. Here's my LinkedIn: https://www.linkedin.com/in/max-pietsch-1ba12ba7/

We're reading them all. Please don't also leave a comment just to stand out; that's not a good race to the bottom. (Thanks for your application!)