All of steven0461's Comments + Replies

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

"Problematic dynamics happened at Leverage" and "Leverage influenced EA Summit/Global" don't imply "Problematic dynamics at Leverage influenced EA Summit/Global" if EA Summit/Global had their own filters against problematic influences. (If such filters failed, it should be possible to point out where.)

[Book Review] "The Bell Curve" by Charles Murray

Your posts seem to be about what happens if you filter out considerations that don't go your way. Obviously, yes, that way you can get distortion without saying anything false. But the proposal here is to avoid certain topics and be fully honest about which topics are being avoided. This doesn't create even a single bit of distortion. A blank canvas is not a distorted map. People can get their maps elsewhere, as they already do on many subjects, and as they will keep having to do regardless, simply because some filtering is inevitable beneath the eye of Sa... (read more)

[Book Review] "The Bell Curve" by Charles Murray

due to the mechanisms described in "Entangled Truths, Contagious Lies" and "Dark Side Epistemology"

I'm not advocating lying. I'm advocating locally preferring to avoid subjects that force people to either lie or alienate people into preferring lies, or both. In the possible world where The Bell Curve is mostly true, not talking about it on LessWrong will not create a trail of false claims that have to be rationalized. It will create a trail of no claims. LessWrongers might fill their opinion vacuum with false claims from elsewhere, or with true claims, ... (read more)

I'm not advocating lying.

I understand that. I cited a Sequences post that has the word "lies" in the title, but I'm claiming that the mechanism described in the cited posts—that distortions on one topic can spread to both adjacent topics, and to people's understanding of what reasoning looks like—can apply more generally to distortions that aren't direct lies.

Omitting information can be a distortion when the information would otherwise be relevant. In "A Rational Argument", Yudkowsky gives the example of an election campaign manager publishing survey re... (read more)

[Book Review] "The Bell Curve" by Charles Murray

"Offensive things" isn't a category determined primarily by the interaction of LessWrong and people of the sneer. These groups exist in a wider society that they're signaling to. It sounds like your reasoning is "if we don't post about the Bell Curve, they'll just start taking offense to technological forecasting, and we'll be back where we started but with a more restricted topic space". But doing so would make the sneerers look stupid, because society, for better or worse, considers The Bell Curve to be offensive and does not consider technological forecasting to be offensive.

Said Achmiz (22d, +1): I’m sorry, but this is a fantasy. It may seem reasonable to you that the world should work like this, but it does not. To suggest that “the sneerers” would “look stupid” is to posit someone—a relevant someone, who has the power to determine how people and things are treated, and what is acceptable, and what is beyond the pale—for them to “look stupid” to. But in fact “the sneerers” simply are “wider society”, for all practical purposes.

“Society” considers offensive whatever it is told to consider offensive. Today, that might not include “technological forecasting”. Tomorrow, you may wake up to find that’s changed. If you point out that what we do here wasn’t “offensive” yesterday, and so why should it be offensive today, and in any case, surely we’re not guilty of anything, are we, since it’s not like we could’ve known, yesterday, that our discussions here would suddenly become “offensive”… right? … well, I wouldn’t give two cents for your chances, in the court of public opinion (Twitter division). And if you try to protest that anyone who gets offended at technological forecasting is just stupid… then may God have mercy on your soul—because “the sneerers” surely won’t.
[Book Review] "The Bell Curve" by Charles Murray

You'd have to use a broad sense of "political" to make this true (maybe amounting to "controversial"). Nobody is advocating blanket avoidance of controversial opinions, only blanket avoidance of narrow-sense politics, and even then with a strong exception of "if you can make a case that it's genuinely important to the fate of humanity in the way that AI alignment is important to the fate of humanity, go ahead". At no point could anyone have used the proposed norms to prevent discussion of AI alignment.

[Book Review] "The Bell Curve" by Charles Murray

Another way this matters: Offense takers largely get their intuitions about "will taking offense achieve my goals" from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate "will taking offense specifically against LessWrong achieve my goals", but most actors simply aren't paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead ... (read more)

Zack_M_Davis (22d, +8): I agree that offense-takers are calibrated against Society-in-general, not particular targets. As a less-political problem with similar structure, consider ransomware [https://en.wikipedia.org/wiki/Ransomware] attacks. If an attacker encrypts your business's files and will sell you the encryption key for 10 Bitcoins, do you pay (in order to get your files back, as common sense and causal decision theory agree), or do you not-pay (as a galaxy-brained updateless-decision-theory play to timelessly make writing ransomware less profitable, even though that doesn't help the copy of you in this timeline)? It's a tough call! If your business's files are sufficiently important, then I can definitely see why you'd want to pay! But if someone were to try to portray the act of paying as pro-social, that would be pretty weird. If your Society knew how, law-abiding citizens would prefer to coordinate not to pay attackers, which is why the U.S. Treasury Department is cracking down on facilitating ransomware payments [https://home.treasury.gov/policy-issues/financial-sanctions/recent-actions/20201001]. But if that's not an option ...

If coordinating to resist extortion isn't an option, that makes me very interested in trying to minimize the extent to which there is a collective "us". "We" should be emphasizing that rationality is a subject matter that anyone can study, rather than trying to get people to join our robot cult and be subject to the commands and PR concerns of our leaders. Hopefully that way, people playing a sneaky consequentialist image-management strategy and people playing a Just Get The Goddamned Right Answer strategy can at least avoid being at each other's throats fighting over who owns the "rationalist" brand name.
[Book Review] "The Bell Curve" by Charles Murray

I think simplifying all this to a game with one setting and two players with human psychologies obscures a lot of what's actually going on. If you look at people of the sneer, it's not at all clear that saying offensive things thwarts their goals. They're pretty happy to see offensive things being said, because it gives them opportunities to define themselves against the offensive things and look like vigilant guardians against evil. Being less offensive, while paying other costs to avoid having beliefs be distorted by political pressure (e.g. taking it elsewhere, taking pains to remember that politically pressured inferences aren't reliable), arguably de-energizes such people more than it emboldens them.

Said Achmiz (22d, +9): This logic would fall down entirely if it turned out that “offensive things” isn’t a natural kind, or a pre-existing category of any sort, but is instead a label attached by the “people of the sneer” themselves to anything they happen to want to mock or vilify (which is always going to be something, since—as you say—said people in fact have a goal of mocking and/or vilifying things, in general). Inconveniently, that is precisely what turns out to be the case…
steven0461 (22d, +4): Another way this matters: Offense takers largely get their intuitions about "will taking offense achieve my goals" from experience in a wide variety of settings and not from LessWrong specifically. Yes, theoretically, the optimal strategy is for them to estimate "will taking offense specifically against LessWrong achieve my goals", but most actors simply aren't paying enough attention to form a target-by-target estimate. Viewing this as a simple game theory textbook problem might lead you to think that adjusting our behavior to avoid punishment would lead to an equal number of future threats of punishment against us and is therefore pointless, when actually it would instead lead to future threats of punishment against some other entity that we shouldn't care much about, like, I don't know, fricking Sargon of Akkad.
[Book Review] "The Bell Curve" by Charles Murray

My claim was:

if this model is partially true, then something more nuanced than an absolutist "don't give them an inch" approach is warranted

It's obvious to everyone in the discussion that the model is partially false and there's also a strategic component to people's emotions, so repeating this is not responsive.

[Book Review] "The Bell Curve" by Charles Murray

I think an important cause of our disagreement is you model the relevant actors as rational strategic consequentialists trying to prevent certain kinds of speech, whereas I think they're at least as much like a Godzilla that reflexively rages in pain and flattens some buildings whenever he's presented with an idea that's noxious to him. You can keep irritating Godzilla until he learns that flattening buildings doesn't help him achieve his goals, but he'll flatten buildings anyway because that's just the kind of monster he is, and in this way, you and Godzi... (read more)

The relevant actors aren't consciously being strategic about it, but I think their emotions are sensitive to whether the threat of being offended seems to be working. That's what the emotions are for, evolutionarily speaking. People are innately very good at this! When I babysit a friend's unruly 6-year-old child who doesn't want to put on her shoes, or talk to my mother who wishes I would call more often, or introspect on my own rage at the abject cowardice of so-called "rationalists", the functionality of emotions as a negotiating tactic is very clear to... (read more)

Said Achmiz (23d, +1): But of course there’s an alternative. There’s a very obvious alternative, which also happens to be the obviously and only correct action: Kill Godzilla.
[Book Review] "The Bell Curve" by Charles Murray

standing up to all kinds of political entryism seems to me obviously desirable for its own sake

I agree it's desirable for its own sake, but meant to give an additional argument why even those people who don't agree it's desirable for its own sake should be on board with it.

if for some reason left-wing political entryism is fundamentally worse than right-wing political entryism then surely that makes it not necessarily hypocritical to take a stronger stand against the former than against the latter

Not necessarily objectively hypocritical, but hypocritical in the eyes of a lot of relevant "neutral" observers.

[Book Review] "The Bell Curve" by Charles Murray

"Stand up to X by not doing anything X would be offended by" is not what I proposed. I was temporarily defining "right wing" as "the political side that the left wing is offended by" so I could refer to posts like the OP as "right wing" without setting off a debate about how actually the OP thinks of it more as centrist that's irrelevant to the point I was making, which is that "don't make LessWrong either about left wing politics or about right wing politics" is a pretty easy to understand criterion and that invoking this criterion to keep LW from being a... (read more)

[Book Review] "The Bell Curve" by Charles Murray

Some more points I want to make:

  • I don't care about moderation decisions for this particular post, I'm just dismayed by how eager LessWrongers seem to be to rationalize shooting themselves in the foot, which is also my foot and humanity's foot, for the short term satisfaction of getting to think of themselves as aligned with the forces of truth in a falsely constructed dichotomy against the forces of falsehood.
  • On any sufficiently controversial subject, responsible members of groups with vulnerable reputations will censor themselves if they have sufficien
... (read more)

It would be really nice to be able to stand up to left wing political entryism, and the only principled way to do this is to be very conscientious about standing up to right wing political entryism, where in this case “right wing” means any politics sufficiently offensive to the left wing, regardless of whether it thinks of itself as right wing.

"Stand up to X by not doing anything X would be offended by" is obviously an unworkable strategy, it's taking a negotiating stance that is maximally yielding in the ultimatum game, so should expect to receive as ... (read more)

[Book Review] "The Bell Curve" by Charles Murray

I agree that LW shouldn't be a zero-risk space, that some people will always hate us, and that this is unavoidable and only finitely bad. I'm not persuaded by reasons 2 and 3 from your comment at all in the particular case of whether people should talk about Murray. A norm of "don't bring up highly inflammatory topics unless they're crucial to the site's core interests" wouldn't stop Hanson from posting about ems, or grabby aliens, or farmers and foragers, or construal level theory, or Aumann's theorem, and anyway, having him post on his own blog works fin... (read more)

Vaniver (22d, +9): Not within the mainstream politics, but within academic / corporate CS and AI departments.
[Book Review] "The Bell Curve" by Charles Murray

My observation tells me that our culture is currently in need of marginally more risky spaces, even if the number of safe spaces remains the same.

Our culture is desperately in need of spaces that are correct about the most important technical issues, and insisting that the few such spaces that exist have to also become politically risky spaces jeopardizes their ability to function for no good reason given that the internet lets you build as many separate spaces as you want elsewhere.

Our culture is desperately in need of spaces that are correct about the most important technical issues

I also care a lot about this; I think there are three important things to track.

First is that people might have reputations to protect or purity to maintain, and so want to be careful about what they associate with. (This is one of the reasons behind the separate Alignment Forum URL; users who wouldn't want to post something to Less Wrong can post someplace classier.)

Second is that people might not be willing to pay costs to follow taboos. The more a spac... (read more)

I’m going to be a little nitpicky here. LW is not “becoming,” but rather already is a politically risky space, and has been for a long time. There are several good reasons, which I and others have discussed elsewhere here. They may not be persuasive to you, and that’s OK, but they do exist as reasons. Finally, the internet may let you build a separate forum elsewhere and try to attract participants, but that is a non-trivial ask.

My position is that accepting intellectual risk is part and parcel of creating an intellectual environment capable of maintaining... (read more)

[Book Review] "The Bell Curve" by Charles Murray

And so you need to make a pitch not just "this pays for itself now" but instead something like "this will pay for itself for the whole trajectory that we care about, or it will be obvious when we should change our policy and it no longer pays for itself."

I don't think it will be obvious, but I think we'll be able to make an imperfect estimate of when to change the policy that's still better than giving up on future evaluation of such tradeoffs and committing reputational murder-suicide immediately. (I for one like free speech and will be happy to advocate for it on LW when conditions change enough to make it seem anything other than pointlessly self-destructive.)

[Book Review] "The Bell Curve" by Charles Murray

I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not)

A global ban on political discussion by rationalists might be a big sacrifice, but it seems to me there are no major costs to asking people to take it elsewhere.

(I just edited "would be a big sacrifice" to "might be a big sacrifice", because the same forces that cause a ban to seem like a good idea will still distort discussions even in the absence of a ban, and perhaps make them worse than useless because they encourage the false belief that a rational discussion is being had.)

[Book Review] "The Bell Curve" by Charles Murray

This could be through any number of mechanisms like

A story I'm worried about goes something like:

  • LW correctly comes to believe that for an AI to be aligned, its cognitive turboencabulator needs a base plate of prefabulated amulite
  • the leader of an AI project tries to make the base plate out of unprefabulated amulite
  • another member of the project mentions off-hand one time that some people think it should be prefabulated
  • the project leader thinks, "prefabulation, wasn't that one of the pet issues of those Bell Curve bros? well, whatever, let's just go
... (read more)
2020 PhilPapers Survey Results

Taking the second box is greedy and greed is a vice. This might also explain one-boxing by Marxists.

2020 PhilPapers Survey Results

I also wonder if anyone has argued that you-the-atoms should two-box, you-the-algorithm should one-box, and which entity "you" refers to is just a semantic issue.

2020 PhilPapers Survey Results

With Newcomb's Problem, I always wonder how much the issue is confounded by formulations like "Omega predicted correctly in 99% of past cases", where given some normally reasonable assumptions (even really good predictors probably aren't running a literal copy of your mind), it's easy to conclude you're being reflective enough about the decision to be in a small minority of unpredictable people. I would be interested in seeing statistics on a version of Newcomb's Problem that explicitly said Omega predicts correctly all of the time because it runs an identical copy of you and your environment.
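To make the confound concrete, here is a toy expected-value calculation, a minimal sketch only: the payoff amounts use the usual $1,000/$1,000,000 framing, and the lower "I'm in the unpredictable minority" credence is an illustrative assumption, not part of the problem statement.

```python
# Toy expected-value comparison for Newcomb's Problem.
# Assumptions (illustrative, not from the original comment): $1,000 in the
# transparent box, $1,000,000 in the opaque box, and that the decider's
# credence in "Omega predicts *me* correctly" may differ from the advertised
# 99% base rate.

SMALL = 1_000        # transparent box
BIG = 1_000_000      # opaque box, filled iff Omega predicted one-boxing

def expected_value(one_box: bool, p_correct: float) -> float:
    """Expected payoff given the probability that Omega's prediction matches
    the action actually taken."""
    if one_box:
        # Opaque box is full iff Omega correctly predicted one-boxing.
        return p_correct * BIG
    else:
        # Opaque box is full only if Omega *incorrectly* predicted one-boxing.
        return SMALL + (1 - p_correct) * BIG

for p in (0.99, 0.5):  # advertised accuracy vs. "I'm too reflective to predict"
    print(f"p={p}: one-box EV={expected_value(True, p):,.0f}, "
          f"two-box EV={expected_value(False, p):,.0f}")
```

The point being: the advertised 99% only settles the question if you believe it applies to you. The "identical copy of you and your environment" formulation pins p near 1 for everyone and closes that loophole.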

Tell the Truth

Obviously the idea is not to never risk making enemies, but the future is to some extent a hostage negotiation, and, airy rhetoric aside, it's a bad idea to insult a hostage taker's mother, causing him to murder lots of hostages, even if she's genuinely a bad person who deserves to be called out.

Tell the Truth

Even in the complete absence of personal consequences, expressing unpopular opinions still brings disrepute on other opinions that are logically related or held by the same people. E.g., if hypothetically there were a surprisingly strong argument for murdering puppies, I would keep it to myself, because only people who care about surprisingly strong arguments would accept it, and others would hate them for it, impeding their ability to do all the less horrible and more important things that there are surprisingly strong arguments for.

Vladimir_Nesov (20d, +4): In this case the principle that leaves the state of evidence undisturbed is to keep any argument for not murdering puppies to yourself as well, for otherwise you in expectation would create filtered evidence in favor of not murdering puppies. This is analogous to trial preregistration: you just do the preregistration like an updateless agent, committing to act as if you've preregistered to speak publicly on any topic on which you are about to speak, regardless of what it turns out you have to say on it. This either prompts you to say a socially costly thing (if you judge the preregistration a good deal) or to stay silent on a socially neutral or approved thing (if the preregistration doesn't look like a good deal).
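A minimal sketch of the decision rule being described, as I understand it; the utility numbers and function names are mine, purely illustrative.

```python
# Toy sketch of the "preregister like an updateless agent" rule described above:
# evaluate the deal *before* looking at which way your argument comes out, and
# commit to publishing whichever conclusion you find if the deal is worth it.
# The utility values are placeholders for illustration.

def evaluate_preregistration(p_conclusion_a: float,
                             value_of_sharing: float,
                             social_cost_a: float,
                             social_cost_b: float) -> bool:
    """Return True if committing in advance to publish whatever you conclude
    has positive expected value, averaging over both possible conclusions."""
    expected_cost = p_conclusion_a * social_cost_a + (1 - p_conclusion_a) * social_cost_b
    return value_of_sharing > expected_cost

# If the commitment is accepted, you speak regardless of which conclusion you
# later reach; if not, you stay silent regardless, so whether you speak never
# depends on *which* answer you found, and the evidence you emit stays unfiltered.
commit = evaluate_preregistration(0.5, value_of_sharing=10.0,
                                  social_cost_a=2.0, social_cost_b=30.0)
print("commit to publish either way" if commit else "stay silent either way")
```

The key property is that the publish/stay-silent choice is made before seeing which conclusion you reach, so whatever you do emit isn't filtered by its social valence.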
lsusr (1mo, +5): If people would think badly upon my community for acting righteously then I welcome their transient [https://blog.samaltman.com/the-strength-of-being-misunderstood] disdain as proof that I am saying something worth saying.
EI (1mo, +2): You don't talk about it because you want others to accept your position. You talk about it so others have a chance to convince you to abandon that position, either for you to take theirs or something entirely different. How do you know that you've read enough to take up your position if you don't bother giving others who have put their own time and thoughts into this a chance to present their arguments? But at the end of the day, we just gotta do what we gotta do that makes us happy.
Voting for people harms people

The harms described in these articles mostly arise from politicization associated with voting rather than from the act of voting itself. If you focused on that politicization, without asking people to give up their direct influence on which candidates were elected, I think there'd be much less unwillingness to discuss.

steven0461's Shortform Feed

There's still a big gap between Betfair/Smarkets (22% chance Trump becomes president) and Predictit/FTX (29-30%). I assume it's not the kind of thing that can be just arbitraged away.
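For a sense of why a gap like that can persist, here is a rough sketch of the arbitrage arithmetic; the fee parameters are illustrative placeholders, not the venues' actual fee schedules.

```python
# Rough arbitrage check for a binary contract priced differently on two venues.
# Example prices from the comment (YES ~0.22 on Betfair/Smarkets, ~0.30 on
# PredictIt/FTX); the fee parameters are illustrative placeholders.

def locked_return(yes_price_cheap: float, yes_price_rich: float,
                  fee_on_profit: float = 0.10, withdrawal_fee: float = 0.05) -> float:
    """Return per dollar of cost from buying YES on the cheap venue and NO on
    the rich venue, after simple fees, ignoring capital lockup and slippage."""
    cost = yes_price_cheap + (1 - yes_price_rich)  # cost to guarantee $1 payout
    gross_profit = 1 - cost
    net_profit = gross_profit * (1 - fee_on_profit)
    # Withdrawal fee modeled as applying to the whole returned balance.
    net_profit -= withdrawal_fee * 1.0
    return net_profit / cost

print(f"{locked_return(0.22, 0.30):.1%} return on capital locked until the contract resolves")
```

With plausible fees, plus capital locked up until the contract resolves, position limits, and counterparty risk, an 8-point price gap can shrink to an unattractive return, which would explain why it isn't arbitraged away.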

steven0461's Shortform Feed

Another thing I feel like I see a lot on LW is disagreements where there's a heavy thumb of popularity or reputational costs on one side of the scale, but nobody talks about the thumb. That makes it hard to tell whether people are internally trying to correct for the thumb or just substituting the thumb for whatever parts of their reasoning or intuition they're not explicitly talking about, and a lot of what looks like disagreement about the object-level arguments being presented may actually be disagreement about the thumb. For example, in the case of the parent comment, maybe such a thumb is driving judgments of the relative values of oranges and pears.

Vladimir_Nesov (1mo, +4): Together with my interpretation [https://www.lesswrong.com/posts/8xomBzAcwZ6WTC8QB/steven0461-s-shortform-feed-1?commentId=NwDZCBXzJXma2yQzg] of the preceding example, this suggests an analogy between individual/reference-class charity and filtered evidence. The analogy is interesting as a means of transferring understanding of errors in ordinary charity to the general setting where the salient structure in the sources of evidence could have any nature.

So what usually goes wrong with charity is that the hypotheses about possible kinds of thinking behind an action/claim are not deliberatively considered (or consciously noticed), so the implicit assumption is intuitive, and can occasionally be comically wrong (or at least overconfident) in a way that would be immediately recognized if considered deliberatively. This becomes much worse if failure of charity is a habit, because then the training data for intuition can become systematically bad, dragging down the intuition itself to a point where it starts actively preventing deliberative consideration from being able to work correctly, so the error persists even in the face of being pointed out. If this branches out into the anti-epistemology territory, particularly via memes circulating in a group that justify the wrong intuitions about thinking of members of another group, we get a popular error with a reliably trained cognitive infrastructure for resisting correction.

But indeed this could happen for any kind of working with evidence that needs some Bayes and reasonable hypotheses to stay sane! So a habit of not considering obvious possibilities about the origin of evidence risks training systematically wrong intuitions that make noticing their wrongness more difficult. In a group setting, this gets amplified by echo chamber/epistemic bubble effects, which draw their power from the very same error of not getting deliberatively considered as significant forces that shape available evidence.
steven0461's Shortform Feed

What's the name of the proto-fallacy that goes like "you should exchange your oranges for pears because then you'll have more pears", suggesting that the question can be resolved, or has already been resolved, without ever considering the relative value of oranges and pears? I feel like I see it everywhere a lot, including on LW.

Vladimir_Nesov (1mo, +4): Sounds like failing at charity, not trying to figure out what thinking produced a claim/question/behavior and misinterpreting it as a result. In your example, there is an implication of difficulty with noticing the obvious, when the correct explanation is most likely having a different objective, which should be clear if the question is given half a thought. In some cases, running with the literal meaning of a claim as stated is actually a misinterpretation, since it differs from the intended meaning.
steven0461's Shortform Feed

Suppose you have an AI powered world stabilization regime. Suppose somebody makes a reasonable moral argument about how humanity's reflection should proceed, like "it's unfair for me to have less influence just because I hate posting on Facebook". Does the world stabilization regime now add a Facebook compensation factor to the set of restrictions it enforces? If it does things like this all the time, doesn't the long reflection just amount to a stage performance of CEV with human actors? If it doesn't do things like this all the time, doesn't that create a serious risk of the long term future being stolen by some undesirable dynamic?

Petrov Day Retrospective: 2021

If Petrov pressing the button would have led to a decent chance of him being incinerated by American nukes, and if he valued his life much more than he valued avoiding the consequences he could expect to face for not pressing, then he had no reason to press the button even from a purely selfish perspective, and pressing it would have been a purely destructive act, like in past LW Petrov Days, or maybe a kind of Russian roulette.

Raemon (1mo, +2): lol
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high IQ systematizers is if I drain any normative valence from "psychotic," and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is co... (read more)

jessicata (1mo, +2): I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here; apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be abl... (read more)

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. ... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

It sounds like they meant they used to work at CFAR, not that they currently do.

The interpretation of "I'm a CFAR employee commenting anonymously to avoid retribution" as "I'm not a CFAR employee, but used to be one" seems to me to be sufficiently strained and non-obvious that we should infer from the commenter's choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they're a current CFAR employee.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Or maybe you should move out of the Bay Area, a.s.a.p. (Like, half seriously, I wonder how much of this epistemic swamp is geographically determined. Not having the everyday experience, I don't know.)

I wonder what the rationalist community would be like if, instead of having been forced to shape itself around risks of future superintelligent AI in the Bay Area, it had been artificial computing superhardware in Taiwan, or artificial superfracking in North Dakota, or artificial shipping supercontainers in Singapore, or something. (Hypothetically, let's sa... (read more)

How to think about and deal with OpenAI

Hmm, I was imagining that in Anna's view, it's not just about what concrete social media or other venues exist, but about some social dynamic that makes even the informal benevolent conspiracy part impossible or undesirable.

How to think about and deal with OpenAI

a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways

functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations

What's standing in the way of these being created?

Raemon (1mo, +8): Mostly time and attention. This has been on the list of things the LessWrong team has considered working on and there's just a lot of competing priorities.
What role should LW play in AI Safety?

By being the community out of which MIRI arose

I would say the LW community arose out of MIRI.

Chris_Leong (2mo, +4): Thanks for pointing this out.
Great Power Conflict

Preemptive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks preemptively.

FWIW, many distinguish between preemptive and preventive war, where the scenario you described falls under "preventive", and "preemptive" implies an imminent attack from the other side.

Zach Stein-Perlman (2mo, +1): Ha, I took intro IR last semester so I should have caught this. Fixed, thanks.
A simulation basilisk

Agents using simulations to influence other simulations seems less likely than agents using simulations to influence reality, which after all is causally upstream of all the simulations.

Why didn't we find katas for rationality?

People like having superpowers and don't like obeying duties, so those who try to spread rationality are pressured to present it as a superpower instead of a duty.

Why didn't we find katas for rationality?

Why aren't there katas for diet? Because diet is about not caving to strong temptations inherent in human nature, and it's hard to practice not doing something. Maybe rationality is the same, but instead of eating bad foods, the temptation is allowing non-truth-tracking factors to influence your beliefs.

Viliam (2mo, +2): I could imagine an exercise consisting of walking amidst hundreds of delicious cakes, where you know that if you start eating them, no one will stop you... A more cruel exercise would be having dozens of unknown meals (for example, spheres of blended matter, colored by random food colors) that you are supposed to taste, eat the healthy ones, and spit out the unhealthy ones. (To make it simple, the healthy ones are vegetables, dairy, and meat; the unhealthy ones all contain lots of sugar and/or salt.) The idea in both cases is that if you repeatedly succeed in the arena, you will become more resistant against temptations in real life, perhaps to the point that you will stop perceiving cakes as food options, and even if you accidentally taste one, you will automatically spit it out.

But even this is ultimately a mindless activity. Rationality is about judgment, seeing things in perspective, etc. Though some parts of it could be made automatic, such as probability estimates (like the calibration game, except that you are interrupted by phone at random moments of your day and asked to quickly estimate a probability of something), or murphy-jitsu (again being somehow interrupted and asked to do this in real-life situations). Maybe a browser plugin that would detect you writing a general statement, and open a popup window asking you to provide three specific examples?

EDIT: An interesting and immediately useful exercise would be to establish a daily routine where in the morning you list your most important short-term and long-term tasks, and provide a probability estimate that you will do that thing today (the entire short-term thing, or the specified "next step" of the long-term thing). This should help you focus on the important stuff, and also become realistic about your abilities to do it.
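A minimal sketch of the kind of daily calibration log the EDIT describes; the data structure and the choice of Brier scoring are illustrative assumptions, not from the comment.

```python
# Minimal sketch of a calibration log for the daily-routine exercise described
# above: each morning, record tasks with a probability of completion, mark
# outcomes in the evening, and track a Brier score over time.

from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str
    probability: float  # stated in the morning
    came_true: bool     # filled in during the evening review

def brier_score(predictions: list[Prediction]) -> float:
    """Mean squared error of stated probabilities; lower is better, and 0.25
    is what always saying 50% would get you."""
    return sum((p.probability - p.came_true) ** 2 for p in predictions) / len(predictions)

log = [
    Prediction("finish the report today", 0.8, True),
    Prediction("go to the gym", 0.6, False),
    Prediction("reply to that email thread", 0.9, True),
]
print(f"Brier score: {brier_score(log):.3f}")
```

Tracking the score over weeks is what turns the routine into feedback: if it never improves on the always-say-50% baseline, the morning estimates aren't adding information.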


wunan's Shortform

There was some previous discussion here.

Why would they want the state of the universe to be unnatural on Earth but natural outside the solar system?

edit: I think aliens that wanted to prevent us from colonizing the universe would either destroy us, or (if they cared about us) help us, or (if they had a specific weird kind of moral scruples) openly ask/force us not to colonize, or (if they had a specific weird kind of moral scruples and cared about being undetected or not disturbing the experiment) undetectably guide us away from colonization. Sending a very restricted ambiguous signal seems to require a further unlikely motivation.

steven0461's Shortform Feed

According to electionbettingodds.com, this morning, Trump president 2024 contracts went up from about 0.18 to 0.31 on FTX but not elsewhere. Not sure what's going on there or if people can make money on it.

MikkW (3mo, +1): Huh. I am curious to hear explanations if anyone has one.

perhaps the aliens are like human environmentalists who like to keep everything in its natural state

Surely if they were showing themselves to the military then that would put us in an unnatural state.

James_Miller (3mo, +2): Yes, good point. They might be doing this to set up a situation where they tell us not to build Dyson spheres. If we accept that aliens are visiting us and observe that the universe is otherwise in a natural state, we might infer that the aliens don't want us to disturb this state outside of our solar system.
Could you have stopped Chernobyl?

Preventing a one-off disastrous experiment like Chernobyl isn't analogous to the coming problem of ensuring the safety of a whole field whose progress is going to continue to be seen as crucial for economic, humanitarian, military, etc. reasons. It's not even like there's a global AI control room where one could imagine panicky measures making a difference. The only way to make things work out in the long term is to create a consensus about safety in the field. If experts feel like safety advocates are riling up mobs against them, it will just harden their view of the situation as nothing more than a conflict between calm, reasonable scientists and e.g. over-excitable doomsday enthusiasts unbalanced by fictional narratives.

Carlos Ramirez (3mo, +1): I highlight later on that Beirut is a much more pertinent situation. No control room there either, just failures of coordination and initiative. Also, experts are not omnipotent. At this point, I don't think there are arguments that will convince the ones who are deniers, which is not all of them. It is now a matter of reining that, and other, field(s) in.
What 2026 looks like

Is it naive to imagine AI-based anti-propaganda would also be significant? E.g. "we generated AI propaganda for 1000 true and 1000 false claims and trained a neural net to distinguish between the two, and this text looks much more like propaganda for a false claim".
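A minimal sketch of what that question imagines: the dataset, labels, and the TF-IDF/logistic-regression model are placeholder assumptions, and a real 2026-era system would presumably use something far stronger.

```python
# Sketch of the "anti-propaganda classifier" idea: generate persuasive text for
# claims with known truth values, then train a classifier to score new text by
# how much it resembles advocacy for false claims. Data and model choices here
# are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# texts: propaganda generated for 1000 true and 1000 false claims (placeholder data)
texts = ["...generated advocacy for a true claim...",
         "...generated advocacy for a false claim..."]
labels = [0, 1]  # 0 = written for a true claim, 1 = written for a false claim

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

score = model.predict_proba(["some new persuasive article"])[0][1]
print(f"looks like propaganda for a false claim with probability {score:.2f}")
```

Daniel's reply below points at the obvious limitation: propaganda that works by selective emphasis rather than checkable falsehood wouldn't be caught by a truth-label classifier like this.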

What does GDP growth look like in this world?

Another reason the hype fades is that a stereotype develops of the naive basement-dweller whose only friend is a chatbot and who thinks it’s conscious and intelligent.

Things like this go somewhat against my prior for how long it takes for culture ... (read more)

AllAmericanBreakfast (4mo, +6): As a sort-of example, sleuths against scientific fraud [https://www.nature.com/articles/d41586-021-02134-0] are already using GPT-detecting AI tools [https://arxiv.org/pdf/2107.06751.pdf] to detect AI-generated or -translated papers, even if the generating tool wasn't GPT.

Thanks for the critique!

Propaganda usually isn't false, at least not false in a nonpartisan-verifiable way. It's more about what facts you choose to emphasize and how you present them. So yeah, each ideology/faction will be training "anti-propaganda AIs" that will filter out the propaganda and the "propaganda" produced by other ideologies/factions.

In my vignette so far, nothing interesting has happened to GDP growth yet.

I think stereotypes can develop quickly. I'm not saying it's super widespread and culturally significant, just that it blunts the hype a ... (read more)

steven0461's Shortform Feed

It's complicated. Searching the article for "structural uncertainty" gives 10 results about ways they've tried to deal with it. I'm not super confident that they've dealt with it adequately.

steven0461's Shortform Feed

There's a meme in EA that climate change is particularly bad because of a nontrivial probability that sensitivity to doubled CO2 is in the extreme upper tail. As far as I can tell, that's mostly not real. This paper seems like a very thorough Bayesian assessment that gives 4.7 K as a 95% upper bound, with values for temperature rise by 2089 quite tightly constrained (Fig 23). I'd guess this is an overestimate based on conservative choices represented by Figs 11, 14, and 18. The 5.7 K 95% upper bound after robustness tests comes from changing the joint prio... (read more)

ChristianKl (5mo, +2): Additionally, I think it's not real because if there were such warming through feedback effects, there would be enough time to do heavy geoengineering. Geoengineering has its own risks, but it's doable in "runaway warming" scenarios.
Steven Byrnes (5mo, +4): How do they deal with model uncertainty (unknown unknowns)?
steven0461's Shortform Feed

Thinking out loud about some arguments about AI takeoff continuity:

If a discontinuous takeoff is more likely to be local to a particular agent or closely related set of agents with particular goals, and a continuous takeoff is more likely to be global, that seems like it incentivizes the first agent capable of creating a takeoff to make sure that that takeoff is discontinuous, so that it can reap the benefits of the takeoff being local to that agent. This seems like an argument for expecting a discontinuous takeoff and an important difference with other al... (read more)

steven0461's Shortform Feed

Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus)

Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.

Florian Habermacher (6mo, +3): Nordhaus does seem to miss the point here. He does statistics purely on historical macroeconomic data, and in those data there could not be even a hint of the singularity we're talking about here (and that he also seems to refer to in his abstract, imho). The core singularity effect of self-accelerating, nearly infinitely fast intelligence improvement once a threshold is crossed is almost by definition invisible in present data: only after the singularity do we expect things to get weird and visible in economic data. A bit sad to see the paper as is. Nordhaus has written seminal contributions in integrated environmental-economic modelling in the resources/climate domain, and for singularity questions too, good economic analysis that explicitly models substitutability between different types of productive capital, resources, labor, and information processing could be insightful, I believe; at least I have not yet stumbled upon much in that regard. It is difficult to imagine a post-singularity world at all, but interesting scenarios could probably be created by trying to formalize the more casual discussions.