What would be a better framing?

I talk about something related in self and no-self; the outward-flowing 'attempt to control' and the inward-flowing 'attempt to perceive' are simultaneously in conflict (something being still makes it easier to see where it is, but also makes it harder to move it to where it should be) and mutually reinforcing (being able to tell where something is makes it easier to move it precisely where it needs to be).

Similarly, you can make an argument that control without understanding is impossible, that getting AI systems to do what we want is one task instead of two. I think I agree the "two progress bars" frame is incorrect, but I think the typical AGI developer at a lab is not grappling with the philosophical problems behind alignment difficulties, and is trying to make something that 'works at all' instead of 'works understandably' in the sort of way that would actually lead to the understanding that would enable control.

Spoiler-free Dune review, followed by spoilery thoughts: Dune part 1 was a great movie; Dune part 2 was a good movie. (The core strengths of the first movie were 1) fantastic art and 2) fidelity to the book; the second movie doesn't have enough new art to carry its runtime and is stuck in a less interesting part of the plot, IMO, and one where the limitations of being a movie are more significant.)

Dune-the-book is about a lot of things, and I read it as a child, so it holds extra weight in my mind compared to other scifi that I came across once I was fully formed. One of the ways I feel sort-of-betrayed by Dune is that a lot of the things in it are fake or bad on purpose: the sandworms are biologically implausible; the ecology of Dune (one of the things it's often lauded for!) is a cruel trick played on the Fremen (see if you can figure it out, or check the next spoiler block for why); the faith-based power of the Fremen warriors is a mirage; the Voice seems implausible; and so on.

The sandworms, the sole spice-factories in the universe (itself a crazy setting detail, but w/e), are killed by water, and so can only operate in deserts. In order to increase spice production, more of Dune has to be turned into a desert. How is that achieved? By having human caretakers of the planet who believe in a mercantilist approach to water--the more water you have locked away in reservoirs underground, the richer you are. As they accumulate water, the planet dries out, the deserts expand, and the process continues. And even if some enterprising smuggler decides to trade water for spice, the Fremen will just bury the water instead of using it to green the planet.

But anyway, one of the things that Dune-the-book got right is that a lot of the action is mental, and that a lot of what differentiates people is perceptual abilities. Some of those abilities are supernatural--the foresight enabled by spice being the main example--but even those are exaggerations of real abilities. It is possible to predict things about the world, and Dune depicts the predictions as, like, possibilities seen from a hill, with other hills and mountains blocking the view, in a way that seems pretty reminiscent of Monte Carlo tree search. This is very hard to translate to a movie! They don't do any better a job of depicting Paul searching thru futures than Marvel did of Doctor Strange searching thru futures, and the climactic fight is a knife battle between a partial precog and a full precog, which is worse than the fistfight in Sherlock Holmes (2009).

And I think this led them to cut one of my favorite things from the book, which was sort of load-bearing to the plot: Hasimir Fenring, a minor character introduced earlier who has a pivotal moment in the final showdown between Paul and the Emperor. (The movie just doesn't have that moment.)

Why do I think he's so important? (For those who haven't read the book recently, he's the emperor's friend, from one of the bloodlines the Bene Gesserit are cultivating for the Kwisatz Haderach, and the 'mild-mannered accountant' sort of assassin.)

The movie does successfully convey that the Bene Gesserit have options. Not everything is riding on Paul. They hint that Paul being there means that the others are close; Feyd talks about his visions, for example.

But I think there's, like, a point maybe familiar from thinking about AI takeoff speeds / conquest risk, which is: when the first AGI shows up, how sophisticated will the rest of the system be? Will it be running on near-AGI software systems, or legacy systems that are easy to disrupt and replace?

In Dune, with regard to the Kwisatz Haderach, it's near-AGI. Hasimir Fenring could kill Paul if he wanted to, even after Paul awakes as KH, even after Paul's army beats the Sardaukar and he reaches the emperor! Paul gets this, Paul gets Hasimir's lonely position and sterility, and Paul is empathetic towards him; Hasimir can sense Paul's empathy and they have, like, an acausal bonding moment, and so Hasimir refuses the Emperor's request to kill Paul. Paul is, in some shared sense, the son he wanted but couldn't have.

One of the other subtler things here is--why is Paul so constrained? The plot involves literal wormriding, I think in part as a metaphor for riding historical movements. Paul can get the worship of the Fremen--but they decide what that means, not him, and they decide it means holy war across the galaxy. Paul wishes it could be anything else, but doesn't see how to change it. I think one of the things preventing him from changing it is the presence of other powerful opposition, where any attempt to soften his movement will be exploited.

Jumping back to a review of the movie (instead of just their choices about the story shared by movie and book), the way it handles the young skeptic vs. old believer Fremen dynamic seems... clumsy? Like "well, we're making this movie in 2024, we have to cater to audience sensibilities". Paul mansplains sandwalking to Chani, in a moment that seems totally out of place and intended to reinforce the "this is a white guy where he doesn't belong" narrative that clashes with the rest of the story. (Like, it only makes sense as him trolling his girlfriend, which I think is not what it's supposed to be / how it's supposed to be interpreted?) He insists that he's there to learn from the Fremen / the planet is theirs, but whether this is a cynical bid for their loyalty or his true feeling is unclear. (Given that he's sad about the holy war bit, you'd think that sadness might bleed over into what the Fremen want from him more generally.) Chani is generally opposed to viewing him as a prophet / his more power-seeking moves, and is hopefully intended as a sort of audience stand-in: rooting for Paul but worried about what he's becoming. But the movie is about the events that make up Paul's campaign against the Harkonnen, not the philosophy or how anyone feels about it at more than a surface level.

Relatedly, Paul blames Jessica for fanning the flames of fanaticism, but this doesn't engage with the fact that this is what works on them, or that it's part of the overall narrow-path-thru. In general, Paul seems to do a lot of "being sad about doing the harmful thing, but not in a way that stops him from doing the harmful thing", which... self-awareness is not an excuse?

I think open source AI development is bad for humanity, and think one of the good things about the OpenAI team is that they seem to have realized this (tho perhaps for the wrong reasons).


I am curious about the counterfactual where the original team had realized being open was a mistake from the beginning (let's call that hypothetical project WindfallAI, or whatever, after their charter clause). Would Elon not have funded it? Would some founders (or early employees) have decided not to join?

It doesn't present or consider any evidence for the alternatives. 

So, in the current version of the post (which is edited from the original), Roko goes thru the basic estimate of "probability of this type of virus, location, and timing" given spillover and lab leak, and discounts other evidence in this paragraph:

These arguments are fairly robust to details about specific minor pieces of evidence or analyses. Whatever happens with all the minor arguments about enzymes and raccoon dogs and geospatial clustering, you still have to explain how the virus found its way to the place that got the first BSL-4 lab and the top Google hits for "Coronavirus China", and did so in slightly less than 2 years after the lifting of the moratorium on gain-of-function research. And I don't see how you can explain that other than that covid-19 escaped from WIV or a related facility in Wuhan.

I don't think that counts as presenting it, but I do think that counts as considering it. I think it's fine to question whether or not the arguments are robust to those details--I think they generally are, and I have not been impressed by any particular argument in favor of zoonosis that I've seen, mostly because I don't think they properly estimate the probability under both hypotheses[1]--but I don't think it's the case that Roko is clearly making procedural errors here. [It seems to me like you're arguing he's making procedural errors instead of just coming to the wrong conclusion / using the wrong numbers, and so I'm focusing on that as the more important point.]

If it's not a lot of evidence

This is what numbers are for. Is "1000-1" a lot? Is it tremendous? Who cares about fuzzy words when the number 1000 is right there. (I happen to think 1000-1 is a lot but is not tremendous.)
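For calibration, here is a minimal sketch of what 1000:1 cashes out to as a probability, assuming even 1:1 prior odds (my simplification, not anything from the post):

```python
# Posterior probability implied by 1000:1 odds, starting from even prior odds.
odds = 1000
posterior = odds / (odds + 1)
print(posterior)  # 0.999000... i.e. about 99.9%
```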


  1. ^

    For example, the spatial clustering analysis suggests that the first major transmission event was at the market. But does their model explicitly consider both "transfer from animal to many humans at the market" and "transfer from infected lab worker to many humans at the market" and estimate probabilities for both? I don't think so, and I think that means it's not yet in a state where it can be plugged into the full Bayesian analysis. I think you need to multiply the probability that it came from the lab by the probability of the first lab-worker superspreader event happening at the market, and compare that to the probability that it came from an animal times the probability of the first animal-human superspreader event happening at the market; then you actually have some useful numbers to compare.
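    As a minimal sketch of that comparison (all of these numbers are made-up placeholders of mine, not estimates from the post or this thread):

    ```python
    # Compare P(lab) * P(first superspreader event at market | lab)
    # against P(animal) * P(first superspreader event at market | animal).
    prior_lab, prior_animal = 0.5, 0.5   # hypothetical priors
    p_market_given_lab = 0.05            # e.g. an infected lab worker visits the market
    p_market_given_animal = 0.5          # e.g. an infected animal is sold at the market

    odds_lab_vs_animal = (prior_lab * p_market_given_lab) / (
        prior_animal * p_market_given_animal
    )
    print(odds_lab_vs_animal)  # 0.1 -- this one datum alone favors animal origin 10:1
    ```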

"I already tried this and it didn't work."

This post expresses a tremendous amount of certainty, and the mere fact that debate was stifled cannot possibly demonstrate that the stifled side is actually correct.

Agreed on the second half, and disagreed on the first. Looking at the version history, the first version of this post clearly identifies its core claims as Roko's beliefs and identifies the lab as the "likely" origin, and those sections seem unchanged as of today. I don't think that counts as tremendous certainty. Later, Roko estimates the ratio of likelihoods between the two hypotheses as 1000:1, but this is really not a tremendous amount either.

What do you wish he had said instead of what he actually said?

It was terrible, and likely backfired, but that isn't "the crime of the century" being referenced, that would be the millions of dead people. 

As I clarify in a comment elsewhere, I think we should treat them as being roughly equally terrible. If we would execute someone for accidentally killing millions of people, I think we should also execute them for destroying evidence that they accidentally killed millions of people, even if it turns out they didn't do it.

My weak guess is Roko is operating under a similar strategy and not being clear enough on the distinction between the two halves of "they likely did it and definitely covered it up". Like, the post title begins with "Brute Force Manufactured Consensus", which he feels strongly about in this case because of the size of the underlying problem, but I think it's also pretty clear he is highly opposed to the methodology.

There are two ways I can read this.

I mean a third way, which is that covering up or destroying evidence of X should have a penalty of roughly the same severity as X. (Like, you shouldn't assume they covered it up, you should require evidence that they covered it up.)

I feel like this is jumping to the conclusion that they're gullible

I think you're pushing my statement further than it goes. Not everyone in a group has to be gullible for the social consensus of the group to be driven by gullibility, and manufactured consensus itself doesn't require gullibility. (My guess is that more people are complicit than gullible, and more people are refusing-to-acknowledge ego-harmful possibilities than clear-mindedly setting out to deceive the public.)

To elaborate on my "courtier's reply" comment, and maybe shine some light on 'gullibility', it seems to me like most religions maintain motive force thru manufactured consensus. If someone points that out--"our prior should be that this religion is false and propped up by motivated cognition and dysfunctional epistemic social dynamics"--and someone else replies with "ah, but you haven't engaged with all of the theological work done by thinkers about that religion", I think the second reply does not engage with the question of what our prior should be. I think we should assume religions are false by default, while being open to evidence.

I think similarly the naive case is that lab leak is substantially more likely than zoonosis, but not so overwhelmingly that there couldn't be enough evidence to swing things back in favor of zoonosis. If that was the way the social epistemology had gone--people thought it was the lab, there was a real investigation and the lab was cleared--then I would basically believe the consensus and think the underlying process was valid.

So, from my perspective there are two different issues, one epistemic and one game-theoretic.

From the epistemic perspective, I would like to know (as part of a general interest in truth) what the true source of the pandemic was.

From the game-theoretic perspective, I think we have sufficiently convincing evidence that someone attempted to cover up the possibility that they were the source of the pandemic. (I think Roko's post doesn't include as much evidence as it could: he points to the Lancet article but not the part of it that calls lab leak a conspiracy theory, he doesn't point to the released email discussions, etc.) I think the right strategy is to assume guilt in the presence of a coverup, because then someone who is genuinely uncertain as to whether or not they caused the issue is incentivized to cooperate with investigations instead of obstructing them.

That is, even if further investigation shows that COVID did not originate from WIV, I still think it's a colossal crime to have dismissed the possibility of a lab leak and to have fudged the evidence (or, at the very least, compromised the investigations).

I think it's also pretty obvious that the social consensus is against lab leak not because all the experts have watched the 17 hour rootclaim debate, but because it was manufactured, which makes me pretty unsympathetic to the "researching and addressing counter-arguments" claim; it reminds me of the courtier's reply.

If "they already tried it and it didn't work" they're real into that [Ray interpretation: as an excuse not to try more].

I think I've had this narrative in a bunch of situations. My guess is I have it too much, and it's like fear-of-rejection where it's worth running the numbers and going out on a limb more than people would do by default. But also it really does seem like lots of people overestimate how easy problems are to solve, or how many 'standard' solutions people have tried, or so on. [And I think there's a similar overconfidence thing going on for the advice-giver, which generates some of the resistance.]

It's also not that obvious what the correct update is. Like, if you try a medication for problem X and it fails, it feels like that should decrease your probability that any sort of medication will solve the problem. But this is sort of like the sock drawer problem,[1] where it's probably easy to overestimate how much to update.

  1. ^

    Suppose you have a chest of drawers with six drawers in it, and you think there's a 60% chance the socks are in the chest, and they're not in the first five drawers you look in. What's the chance they're in the last drawer?
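    A minimal sketch of the arithmetic, assuming the socks are equally likely to be in each drawer:

    ```python
    # Prior: 60% the socks are in the chest, spread evenly over 6 drawers.
    p_chest = 0.6
    n_drawers = 6
    p_each = p_chest / n_drawers           # 0.1 per drawer

    # Evidence: the first five drawers are empty.
    p_not_in_first_five = 1 - 5 * p_each   # 0.5
    p_last_drawer = p_each / p_not_in_first_five
    print(p_last_drawer)                   # 0.2 -- only a 20% chance
    ```

    Ruling out five of six drawers only moves the last drawer from 10% to 20%, which is the sense in which the naive "I've nearly searched everywhere" update is an overestimate.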

When people found out about ACDC's previous ruling on Brent, many were appalled that ACDC had seen the evidence laid out in the Medium posts and ruled that it was okay for Brent to continue on like that

As I recall, the ACDC had in fact not seen the evidence laid out in the Medium posts. (One of the panelists sent an email saying that they had, but this turned out to be incorrect--there was new information, just not in the section he had read when he sent the email, and prematurely sending that email was viewed as one of the ACDC's big mistakes, in addition to their earlier ruling.)
