leogao
it's quite plausible (40% if I had to make up a number, but I stress this is completely made up) that someday there will be an AI winter or other slowdown, and the general vibe will snap from "AGI in 3 years" to "AGI in 50 years". when this happens it will become deeply unfashionable to continue believing that AGI is probably happening soonish (10-15 years), in the same way that suggesting there might be a winter/slowdown is unfashionable today. however, I believe in these timelines roughly because I expect the road to AGI to involve both fast periods and slow, bumpy periods. so unless there is some super surprising new evidence, I will probably only update moderately on timelines if/when this winter happens.
sapphire
Don't induce psychosis intentionally. Don't take psychedelics while someone probes your beliefs. Don't let anyone associated with Michael Vassar anywhere near you during an altered state.

Edit: here is a different report from three years ago with the same person administering the methods.

Michael Vassar's followers practice intentionally inducing psychosis via psychedelic drugs. "Inducing psychosis" is a verbatim self-report of what they are doing. I would say they practice drug-induced brainwashing. To be clear, they would dispute the term "brainwashing" and probably would not like the term "followers", but I think the terms are accurate and they are certainly his intellectual descendants.

Several people have had quite severe adverse reactions (as observed by me), for example rapidly developing serious, literal schizophrenia: schizophrenia in the very literal sense of paranoid delusions and conspiratorial interpretations of other people's behavior. The local Vassarite who did the 'therapy'/'brainwashing' seems completely unbothered by this literal schizophrenia.

As you can imagine, this behavior can cause substantial social disruption, especially since the Vassarites don't exactly believe in social harmony. It has all precipitated serious mental health events in many other parties, though those are less obviously serious than "they are clinically schizophrenic now". But that is a high bar.

I have been very critical of cover-ups on LessWrong. I'm not going to name names, and maybe you don't trust me. But I have observed all of this directly. If you let people toy with your brain while you are under the influence of psychedelics, you should expect high odds of severe consequences. And your friends' mental health might suffer as well.

Edit: these are recent events, to my knowledge never referenced on LessWrong.
leogao
people often say that limitations of an artistic medium breed creativity. part of this could be the fact that when it is costly to do things, the only things done will be higher-effort ones.
leogao
a take I've expressed a bunch irl but haven't written up yet: feature sparsity might be fundamentally the wrong thing for disentangling superposition; circuit sparsity might be more correct to optimize for. in particular, circuit sparsity doesn't have problems with feature splitting/absorption.
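One way to cash out the distinction, as a minimal sketch under one possible reading of the terms (an L1 penalty on SAE feature activations vs. an L1 penalty on feature-to-feature edges between two dictionaries); this is a hypothetical illustration, not leogao's actual formulation:

```python
# Hypothetical illustration; not leogao's actual setup.
import torch
import torch.nn as nn

class SAE(nn.Module):
    """Toy sparse autoencoder over a d_model-dim activation space."""
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model)

    def forward(self, x):
        z = torch.relu(self.enc(x))  # feature activations
        return self.dec(z), z

def feature_sparsity_loss(sae, x, l1=1e-3):
    # "Feature sparsity": penalize how many features fire per input.
    x_hat, z = sae(x)
    return (x - x_hat).pow(2).mean() + l1 * z.abs().mean()

def circuit_sparsity_loss(sae_a, sae_b, W, x_a, x_b, l1=1e-3):
    # "Circuit sparsity": penalize how many feature-to-feature edges
    # (entries of W mapping layer-A features to layer-B features) are
    # used at all, rather than how many features fire per input.
    _, z_a = sae_a(x_a)
    _, z_b = sae_b(x_b)
    z_b_pred = z_a @ W
    return (z_b - z_b_pred).pow(2).mean() + l1 * W.abs().sum()
```

Under this reading, a split or absorbed feature is penalized only insofar as it adds edges to the circuit, which is one way to gloss the claimed robustness to feature splitting/absorption.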


Recent Discussion

TLDR:

With access to a 3D printer and some lack of regard for aesthetics, you can build a Levoit 300 (a popular air filter) clone for roughly 25% of the price. 

  • And against the cheapest possible competition, it is roughly competitive or better on price per volume of air, while being much quieter, modular, and not soft vendor-locked to proprietary filters (a rough back-of-the-envelope sketch follows below).
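To make the "price per volume of air" comparison concrete, here is a small sketch with hypothetical numbers; only the "roughly 25% of the price" ratio comes from the post, and the Levoit price and airflow figures below are assumptions for illustration:

```python
# Hypothetical figures for illustration; only the "roughly 25% of
# the price" ratio is from the post itself.
levoit_price_usd = 100.0   # assumed retail price of a Levoit 300
levoit_cadr_cfm = 140.0    # assumed clean-air delivery rate (CFM)

clone_price_usd = 0.25 * levoit_price_usd  # "roughly 25% of the price"
clone_cadr_cfm = 140.0     # assume the clone moves similar air

print(f"Levoit: ${levoit_price_usd / levoit_cadr_cfm:.2f} per CFM")
print(f"Clone:  ${clone_price_usd / clone_cadr_cfm:.2f} per CFM")
# At equal airflow, a 4x price reduction means 4x better $/CFM.
```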

Why not just buy n many X filters?

If there is one thing I associate the LW community with, it would be ~~the undying love for Eliezer Yudkowsky and his role as the rightful caliph[1]~~ Lumenators and Air Filters.

Ever since my allergy diagnosis, I've thought of various schemes to cover my house with the latter. The obvious decision is to acquire as many IKEA/Levoit/CheapInc. filters as fast and as cheaply per...

quila

upvoted, i think this article would be better with a comparison to the recommendations in thomas kwa's shortform about air filters.

In light of reading through Raemon's shortform feed, I'm making my own. Here will be smaller ideas that are on my mind.

Hazard
For anyone curious about what the sPoOkY and mYsTeRiOuS Michael Vassar actually thinks about various shit, many of his friends have blogs and write about what they chat about, and he's also been on several long-form podcasts.
https://naturalhazard.xyz/ben_jess_sarah_starter_pack
https://open.spotify.com/episode/1lJY2HJNttkwwmwIn3kyIA?si=em0lqkPaRzeZ-ctQx_hfmA
https://open.spotify.com/episode/01z3WDSIHPDAOuVp1ZYUoN?si=VOtoDpw9T_CahF31WEhZXQ
https://open.spotify.com/episode/2RzlQDSwxGbjloRKqCh1xg?si=XuFZB1CtSt-FbCweHtTnUA
https://open.spotify.com/episode/33nrhLwrNJJtbZolZsTUGN?si=Sd0dZTANTpy8FS-RFhr4cQ
niplav
Thank you for collecting those links :-) I've listened to two or three of the interviews (and ~three other talks from a long time ago), and I still have no clue what the central claims are, what the reasoning supporting them is, &c. (I understand it most for Zvi Mowshowitz and Sarah Constantin, less for Jessica Taylor, and least for Benjamin Hoffman & Vassar). I also don't know of anyone who became convinced of or even understood any of Michael Vassar's views/stances through his writing/podcasts alone; it appears to almost always happen through in-person interaction.
Hazard

Does Jessica's "Anti-Normativity" post, or Ben's "Can Crimes Be Discussed Literally" and "Guilt, Shame, Depravity" posts, make sense to you? If there are specific posts you want to talk about not making sense / not being clear what the point is, I'm down to chat about them.

AprilSR
I want to say I have to an extent (for all three), though I guess there's been second-hand in-person interaction, which maybe counts. I dunno if there's any sort of central thesis I could summarize, but if you pointed me at any more specific topics I could take a shot at translating. (Though I'd maybe prefer to avoid the topic for a little while.) In general, I think an actual analysis of the ideas involved and their merits/drawbacks would've been a lot more helpful for me than just... people having a spooky reputation.
Ben Pace
Thanks for answering; good to hear that you don't think you've had any severe or long-lasting consequences (though it sounds like one time LSD was a contributor to your episode of bad mental health). I guess here's another question that seems natural: it's been said that some people take LSD either on the personal advice of Michael Vassar, or otherwise as a result of reading/discussing his ideas. Are either of those true for you?
AprilSR
Nope. I've never directly interacted with Vassar at all, and I haven't made any particular decisions due to his ideas. Like, I've become more familiar with his work over the past several months, but it was one thing of many. I spent a lot of time thinking about ontology and anthropics and religion and stuff... mostly I think the reason weird stuff happened to me at the same time as I learned more about Vassar is just that I started rethinking rather a lot of things at once, where "are Vassar's ideas worth considering?" was just one of many specific questions that came up. (Plausibly the expectation that Vassar's ideas might be dangerous turned slightly into a self-fulfilling prophecy by making it more likely for me to expand on them in weirder directions or something.)

Thanks again. 

I am currently holding a rough hypothesis of "when someone is interested in exploring psychosis and psychedelics, they become more interested in Michael Vassar's ideas", in that the former causes the latter, rather than the other way around.

habryka
I think "psychosis is underrated" and/or "psychosis is often the sign of a good kind of cognitive processing" are things I have heard from at least people very close to Michael (I think @jessicata made some arguments in this direction):  (To be clear, I don't think "jessicata is in favor of psychosis" is at all a reasonable gloss here, but I do think there is an attitude towards things like psychosis that I disagree with that is common in the relevant circles)

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially...

I did some more thinking, and realized particles are the irreps (irreducible representations) of the Poincaré group. I wrote up some more here, though this isn't complete yet:

https://www.lesswrong.com/posts/LpcEstrPpPkygzkqd/fractals-to-quasiparticles
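For context, the relevant standard result is Wigner's classification (my gloss, not part of the original comment): the unitary irreducible representations of the Poincaré group are labeled by the eigenvalues of its two Casimir invariants,

$$P^\mu P_\mu = m^2, \qquad W^\mu W_\mu = -m^2\, s(s+1),$$

where $P^\mu$ is the four-momentum and $W^\mu = \tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma} P_\nu M_{\rho\sigma}$ is the Pauli–Lubanski vector. For a massive particle the labels are exactly its mass $m$ and spin $s$, which is the sense in which "a particle is an irrep".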

One of the first things they teach you in algebra is that the letters you use to signify variables are arbitrary, and you can use whatever you want[1]. Like most of the 'first things' students are taught, this is almost entirely a lie: every letter has implicit connotations, and if (for example) you use "n" for a non-integer variable, it'll confuse someone reading your work. More importantly, if you don't know what symbol choices imply, it'll be harder for you to understand what an equation is implicitly communicating, making it even more difficult to grasp the concepts that are actually being laid out.

So I've decided to go through the English alphabet and explicitly explain the connotations of each character as they might be used by a [unusually-bright-highschooler|reasonably-clever-college-student]-level...

gjm

I'm confused by what you say about italics. Mathematical variables are almost always italicized, so how would italicizing something help to clarify that it isn't a variable?

cubefox
Related: In the equation y=ax+b, the values of all four variables are unknown, but x and y seem to be more unknown (more variable?) than a and b. It's not clear what the difference is exactly.
Coafos
Usually, k is not just an arbitrary real number but an integer, as in cos[(2k+1)π] = −1. For an arbitrary constant to multiply by, I think λ (lambda, the Greek letter) is used.
abstractapplic
You're right. I'll delete that aside.

In an attempt to get myself to write more, here is my own shortform feed. Ideally I would write something daily, but we will see how it goes.

habryka
Reputation is lazily evaluated

When evaluating the reputation of your organization, community, or project, many people flock to surveys in which you ask randomly selected people what they think of your thing, or what their attitudes towards your organization, community, or project are.

If you do this, you will very reliably get back data that looks like people are indifferent to you and your projects, and your results will probably be dominated by extremely shallow things like "do the words in your name invoke positive or negative associations".

People largely only form opinions of you or your projects when they have some reason to do that, like trying to figure out whether to buy your product, or join your social movement, or vote for you in an election. You basically never care about what people think about you while engaging in activities completely unrelated to you; you care about what people will do when they have to take any action that is related to your goals. But the former is exactly what you are measuring in attitude surveys.

As an example of this (used here for illustrative purposes, and what caused me to form strong opinions on this, but not intended as the central point of this post): Many leaders in the Effective Altruism community ran various surveys after the collapse of FTX trying to understand what the reputation of "Effective Altruism" is. The results were basically always the same: People mostly didn't know what EA was, and had vaguely positive associations with the term when asked. The people who had recently become familiar with it (which weren't that many) did lower their opinions of EA, but the vast majority of people did not (because they mostly didn't know what it was).

As far as I can tell, these surveys left most EA leaders thinking that the reputational effects of FTX were limited. After all, most people never heard about EA in the context of FTX, and seemed to mostly have positive associations with the term, and the average like
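To make the title's metaphor concrete, here is a minimal sketch (my illustration, not part of the original shortform): the considered opinion is only computed when some decision actually forces it, so polling people who have never faced such a decision returns the cheap default rather than the value the evaluation would produce.

```python
import functools

def evaluate_evidence(org: str) -> str:
    # Expensive evaluation: read news coverage, ask friends,
    # recall scandals. Only happens when genuinely needed.
    return f"considered opinion about {org}"

@functools.lru_cache(maxsize=None)
def opinion_of(org: str) -> str:
    # Lazily evaluated, then cached: computed the first time someone
    # must actually decide whether to donate, join, or vote.
    return evaluate_evidence(org)

def survey_response(org: str, has_ever_decided: bool) -> str:
    if not has_ever_decided:
        # Most respondents: the thunk has never been forced, so the
        # survey measures shallow name associations instead.
        return "vaguely positive / indifferent"
    return opinion_of(org)
```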

"practically all metrics of the EA community's health and growth have sharply declined, and the extremely large and negative reputational effects have become clear."

I want more evidence on your claim that FTX had a major effect on EA reputation. Or: why do you believe it?


Edit: relevant thing habryka said that I didn't quote above:

For the EA surveys, these indicators looked very bleak: 

"Results demonstrated that FTX had decreased satisfaction by 0.5-1 points on a 10-point scale within the EA community"

"Among those aware of EA, attitudes remain positive a

Guive
This is good. Please consider making it a top-level post.

Previously: Sadly, FTX

I doubted whether it would be a good use of time to read Michael Lewis’s new book Going Infinite about Sam Bankman-Fried (hereafter SBF or Sam). What would I learn that I did not already know? Was Michael Lewis so far in the tank for SBF that the book was filled with nonsense and not to be trusted?

I set up a prediction market, which somehow attracted over a hundred traders. Opinions were mixed. That, combined with Matt Levine clearly reporting having fun, felt good enough to give the book a try.

I need not have worried.

Going Infinite is awesome. I would have been happy with my decision on the basis of any one of the following:

The details I learned or clarified about the psychology of SBF...

Democritus
I strongly disagree that Will MacAskill or EA generally are responsible for "misaligning" Sam Bankman-Fried. I don't see much evidence that either effective altruism or "Effective Altruism" did much to cause his life to play out as it did. Sam is, as you can see, someone who is able and willing to lie and mislead others. You should approach his comments regarding his motivations with the same skepticism you apply to the other things he says.

Is that a claim of this post? It's a long post, so I might be forgetting a place where Zvi writes that, but I think most of the relevant parts of this book review are about how MacAskill and EAs are partly responsible for empowering Sam Bankman-Fried: supplying him with great talent, trust with funders, and a positive public image.

O O

o1’s release has made me think Yann LeCun’s AGI timelines are probably more correct than shorter ones.

plex
Depends what they do with it. If they use it to do the natural and obvious capabilities research, like they currently are (mixed with a little hodgepodge alignment to keep it roughly on track), I think we just basically for sure die. If they pivot hard to solving alignment in a very different paradigm and... no, this hypothetical doesn't imply the AI can discover or switch to other paradigms. I think doom is almost certain in this scenario.

If we could trust OpenAI to handle this scenario responsibly, our odds would definitely seem better to me.

Noosphere89
I'd say that if powerful AI comes, we'd have a 70-80% chance of going through the next decade without causing a billion deaths.
eggsyntax
I realize that asking about p(doom) is utterly 2023, but I'm interested to see whether there's a rough consensus in the community about how it would go if it happened now, and then it's possible to consider how that shifts as the assumed arrival date moves further out.