Comments

I'm highly skeptical that it's even possible to create omnicidal machines. Can you point empirically to a single omnicidal machine that's been created? What specifically would an OAL-4 machine look like? Whatever it is, just don't do that. To the extent you do develop anything OAL-4, we should be fine so long as certain safeguards are in place and you encourage others not to develop the same machines. Godspeed.

Post hoc probability calculations like these are a Sisyphean task. There are infinite variables to consider, most of which can't be properly measured or even ballparked.

On (1), pandemics are arguably more likely to originate in large cities because population density facilitates spread, large wildlife markets are more likely, and they serve as major travel hubs. I'm confused why the denominator is China's population for (1) but all the world's BSL-4 labs in (3). I don't understand the calculation for (2)... that seems the opposite of "fairly easy to get a ballpark figure for." Ditto for (4).

Rootclaim sold the debate as a public good that would enhance knowledge but ultimately shirked responsibility to competently argue for one side of the debate, so it was a very one-sided affair that left viewers (and judges) to conclude this was probably natural origin. Several people on LW say the debate has strongly swayed them against lab leak. 

The winning argument (as I saw it) came down to Peter's presentation of case mapping data (location and chronology) suggesting an undeniable tie to the seafood market. Saar did little to undercut this, which was disappointing because the Worobey paper and WHO report have no shortage of issues. Meanwhile, Peter did his homework on basically every source Saar cited (even engaging with some authors on Twitter to better understand the source) and was quick to show any errors in the weak ones, leaving viewers with the impression that Rootclaim's case stood on a house of cards.

Peter was just infinitely more prepared for the debates, had counterpoints for anything Saar said, and seemingly made 10 logical arguments in the time it took Saar to form a coherent sentence. It wasn't exactly like watching Mayweather fight Michael Cera, but it wasn't not that. Didn't seem a fair fight. 

Zoonotic will win this debate because Peter outclassed Saar on all fronts, from research and preparation to intelligibly engaging with counterclaims and judges' questions.

Saar seemed too focused on talking his book and presenting slides with conditional probability calculations. He was not well-versed enough in the debate topics to defend anything when Peter undercut a slide's assumptions, nor was he able to poke sufficient holes in Peter's main arguments. Peter relied heavily on case mapping data, and Saar failed to demonstrate the ascertainment bias inherent to that data. He even admitted he did no follow-up research after the initial presentation. 

I get the sense Saar either thought lab leak was so self-evident that showing the judges his probability spreadsheet would be a wrap, or he was happy to pony up $100k just to advertise Rootclaim. Maybe both.

For those reasons the Rootclaim verdict doesn't seem like a proper referendum on the truth of the matter. But I would be more sympathetic to people updating toward zoonotic on the basis of having watched that debate, rather than on the basis of these survey results.

Yes, by virtue of the alliance with the "top virologists".

In February 2020, Anthony Fauci convened a group of virologists to assess SARS-CoV-2 origins. The group's initial take (revealed in private Slack messages obtained via FOIA requests in 2023) was that the virus was likely engineered. In the view of Kristian Andersen of Scripps Research, it was "so friggin likely because they were already doing this work."

The same month, Fauci held an off-the-record call with the group. After that, everyone's tune changed, and shortly after (in a matter of weeks) we got the Proximal Origins paper, with Kristian Andersen doing a 180 as lead author. The paper posits that there is "strong evidence that SARS-CoV-2 is not the product of purposeful manipulation." I encourage you to read the paper to determine its merits. Their evidence, as I understand it, is (a) the structure of the spike protein is not what a computer would have generated as optimally viral, and (b) pangolins. Pangolins were ruled out as carriers shortly after the paper's release. And (a) can be dismissed -- or at least mitigated -- by the fact that serial passage can naturally develop what a computer may not. Andersen's Scripps Research coincidentally got a multi-million dollar grant shortly after publishing Proximal Origins, but again, that's merely coincidence.

Some in the comments seem stuck on the fact that this virus could have been obtained in the wild, and thus is zoonotic in origin. That ignores the substantial work the Wuhan lab undertook to take natural viruses and create chimeric viruses optimized for human contagiousness.

Fauci took the Proximal Origins paper on his circuit of 60 Minutes interviews, NYTimes podcasts, and Congressional testimonies, declaring, "the leading virologists say this was most likely of natural origin". 

This rhetoric undoubtedly has a massive chilling effect on any "experts" who would otherwise posit that this could be lab origin. The High Minister of Science has declared it was zoonotic, definitive proof will probably never be established either way, so you better be on the side of the High Minister of Science.

If there were an omniscient arbiter of truth that could make markets on this issue, I would take lab leak >50% in a heartbeat, and have it closer to 85%. Alas, there never will be such an arbiter, and we'll have to rely on experts who are heavily reliant on government research grants, invested in gain-of-function research as the way of the future, and generally disinclined to rock the boat.

Answer by followthesilence, Jan 11, 2024

The Metaculus point scoring system incentivizes* middling predictions that would earn you points no matter the outcome (or at least provide upside in one direction with no point downside if you're wrong), which encourages participants with no opinion or knowledge on the matter to blindly predict in the middle.

Harder to explain with real-money markets, but Peter's explanation is a good one. Also, for contracts closing several months or years out where the outcome is basically known, they will still trade at a discount to $0.99 because of the time value of money and the opportunity cost of tying up capital in a contract that has very low prospective ROI.
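To make the opportunity-cost point concrete, here's a minimal sketch (with hypothetical numbers I chose for illustration) of the annualized return from buying a near-certain contract below face value and waiting for it to resolve:

```python
def annualized_return(price: float, payout: float, months_to_resolution: float) -> float:
    """Annualized return from buying at `price` and collecting `payout` at resolution."""
    total_return = payout / price - 1.0
    years = months_to_resolution / 12.0
    return (1.0 + total_return) ** (1.0 / years) - 1.0

# A contract that is "basically known" to pay $1.00, trading at $0.97,
# resolving in 9 months:
r = annualized_return(price=0.97, payout=1.00, months_to_resolution=9)
print(f"{r:.2%}")  # roughly 4% annualized
```

At ~4% annualized, a trader who can earn more elsewhere (even in risk-free instruments, depending on rates) has little reason to bid the contract up to $0.99, so the market price understates the true probability.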

*Haven't been on the site in a while but this was at least true as of a few months ago.

Good post, thank you. I imagine to go undefeated, you must excel at things beyond the dark arts described (in my experience, some judges refuse to buy an argument no matter how poorly opponents respond)? How much of your success do you attribute to 1) your general rhetorical skills or eloquence, and 2) ability to read judges to gauge which dark arts they seem most susceptible to?

"Want" seems ill-defined in this discussion. To the extent it is defined in the OP, it seems to be "able to pursue long-term goals", at which point tautologies are inevitable. The discussion gives me strong stochastic parrot / "it's just predicting next tokens not really thinking" vibes, where want/think are je ne sais quoi words to describe the human experience and provide comfort (or at least a shorthand explanation) for why LLMs aren't exhibiting advanced human behaviors. I have little doubt many are trying to optimize for long-term planning and that AI systems will exhibit increasingly better long-term planning capabilities over time, but have no confidence whether that will coincide with increases in "want", mainly because I don't know what that means. Just my $0.02, as someone with no technical or linguistics background.

Not sure if this page is broken or I'm technically inept, but I can't figure out how to reply to qualiia's comment directly:

Primarily #5 and #7 was my gut reaction, but qualiia's post articulates the rationale better than I could.

One useful piece of information that would influence my weights: what was OAI's general hiring criteria? If they sought solely "best and brightest" on technical skills and enticed talent primarily with premiere pay packages, I'd lean #5 harder. If they sought cultural/mission fits in some meaningful way I might update lower on #5/7 and higher on others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that's not necessarily incompatible with mission fit.

Well done on the list overall, seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).
