All of Jackson Wagner's Comments + Replies

(Cross-posting my comment from the EA Forum):

Considering how awesome your video on Prediction Markets is, I think it could be a great idea to make videos about some other institutional innovation ideas that are popular in the rationalist world -- charter cities / network states, alternate voting systems like approval voting and liquid democracy, and so forth.  (If you want to take things in an even more political direction, you could produce animated versions of the Bryan Caplan arguments for open borders or YIMBYism.)

For some more traditionally ratio... (read more)

the gears to ascenscion, 12d:
Political content is popular, especially if you can focus on policy suggestions that error-check as honorably friendly even to people hatewatching. Keep in mind that a lot of people expect most people to be lying when political topics come up. You aren't going to get through to those people, but you can try to suggest things that they can tell aren't dangerous for their friends to hear -- merely things they disagree with.

Matt Yglesias makes a very similar case for more holidays here, including some interesting references to the literature about the benefits of weekends and so on:

Basically unrelated, but I really enjoy the niche rationalist holidays of the Solstices, and to a lesser extent Petrov Day, Giving Tuesday, etc. We ought to come up with more of these!

Some thoughts:

  • In your subsequent post, "Actually, All Nuclear Famine Papers are Bunk", you talk about the impressive, year-plus grain stores that farmers rack up each fall.  How much does this vary throughout the year?  Presumably a nuclear war that struck at the worst possible time (perhaps that May 1 that the Nature paper strategically chose?) would leave us with much reduced food stores.
  • The Nature paper seems to imply that protein would be the toughest thing to scrounge up in a nuclear winter scenario, rather than raw calories.  This is p
... (read more)

Maybe back during the Cold War, when the Soviets were seriously considering ground-bursting thousands of warheads across the American corn belt to knock out missile silos, but that's not the world we live in anymore. With modern C&C, they simply can't realistically expect to destroy those sites before launch.


Can you link me to any sources or other analysis backing up the idea that silos wouldn't be targeted in a modern-day US/Russia nuclear exchange?  (I would suspect that, even if the optimal strategy may have changed, outdated Russian nuc... (read more)

40 miles away, the only way the explosions can badly hurt you is via fallout. It is relatively easy for informed people with, say, 72 hours of warning of the attack to protect themselves from fallout. They will be spending most of those 72 hours digging a trench, topping the trench with plywood or felled trees or such, heaping about 18 inches (maybe make it 24 inches for people in your situation) of dirt on top of that wood, then putting a sheet of plastic on top of that so that rain cannot wash the fallout into the trench.

If the fallout is very heavy, you... (read more)

There is some ambient support for Phil-Tetlock-style forecasting stuff like Metaculus, and some ambient support for prediction markets, definitely. But the vision here tends to be limited, mostly focused on "let's get better forecasting done on EA relevant questions/topics", not "scale up prediction markets until they are the primary way that society answers important questions in many fields".

There isn't huge effort going into future generations bills from within EA (the most notable post is complaining about them, not advocating them! https://forum.effe... (read more)

I totally sympathize with and share the despair that many people feel about our governments' inadequacy to make the right decisions on AI, or even far easier issues like covid-19.

What I don't understand is why this isn't paired with a greater enthusiasm for supporting governance innovation/experimentation, in the hopes of finding better institutional structures that COULD have a fighting chance to make good decisions about AI.

Obviously "fix governance" is a long-term project and AI might be a near-term problem. But I still think the idea of improving inst... (read more)

Metaculus was created by EAs. Manifold Markets was also partly funded by EA money. What EA money currently goes into 'passing weaksauce "future generations" climate bills'?

Which countries will go to war with whom?  It doesn't strike me as plausible that, e.g., individual random countries in the tropics would literally declare war on much-richer countries far away.

I think you are confusing the interests of citizens in the tropics (who might be motivated to immigrate from eg the Middle East to Europe, or from Indonesia to New Zealand, or from Venezuela to Uruguay, just as the poor are always motivated to move to more prosperous lands) with diplomacy -- why would the leaders of places like Indonesia declare war on places like New Zealand?  We don't see countries in Central America trying to declare war on the USA today.

I never said it would be countries declaring wars. The reality will be a lot messier. People will get desperate and just go where they can survive. Imagine Mexicans trying to cross the US border, multiplied by 1000 times or more, all over the world. Can Canada accept tens of millions of refugees from the US? Can Russia deal with untold millions of Chinese, Indians and others trying to escape to the North?

None other than Peter Thiel wrote a huge essay about investing while under anthropic shadow, and I wrote a post analyzing said essay!  It is interesting, although pretty abstract in a way that probably makes it more relevant to organizations like OpenPhilanthropy than to most private individuals.  Some quotes from Thiel's essay:

Apocalyptic thinking appears to have no place in the world of money. For if the doomsday predictions are fulfilled and the world does come to an end, then all the money in the world — even if it be in the form of gold coin

... (read more)

For some more detail on what this plan might look like: my 2nd-place-winning entry in the Future of Life Institute's "A.I. world-building" competition was all about how humanity uses prediction markets and other new institutional designs to increase its level of civilizational adequacy, becoming strong/wise enough to manage the safe development of transformative AI.

See my lesswrong post here (which focuses on the details of how AI development is controlled in my team's fictional scenario), or the whole entry here (which includes two great short stories by... (read more)

You are in luck; it would appear that Elizabeth has already produced some significant long-covid analysis of exactly this nature!

I was hoping more for something based on the current strains. I know that's a lot to ask for though, since they change so frequently.
Thank you Elizabeth!

You say:

[Under georgism,] there will be more pressure to use [land] in an economically viable way.

And then later you say:

If you want to reduce rents, all the usual methods apply – remove restrictions on land use, encourage higher density housing, and all that jazz.

I think that in my mind (and that of many Georgism advocates), one of the many benefits of Georgism would be that the increased pressure to use land in economically optimal ways will probably create increased incentives to build higher-density housing and increased political motivation to remove ec... (read more)

I'm not sure about that, as I think a lot of the land value is driven entirely by network effects related to the density of economic and social activity. Why wouldn't Georgism drive more "NIMBY" to keep the rents down -- and so possibly drive up various externalities (e.g., factories locating in low-rent areas, resulting in long travel for both workers and products)? [Edited to put the scare quotes in -- I don't think Georgism would be a strong driver of NIMBY activism by locals, but it could create incentives that produce a pattern of activity that looks very similar.]

An AI "warning shot" plays an important role in my finalist entry to the FLI's $100K AI worldbuilding contest; but civilization only has a good response to the crisis because my story posits that other mechanisms (like wide adoption of "futarchy"-inspired governance) had already raised the ambient wisdom & competence level of civilization.

I think a warning shot in the real world would probably push out timelines a bit by squashing the most advanced projects, but then eventually more projects would come along (perhaps in other countries, or in secret) a... (read more)

"We could simulate a bunch of human-level scientists trying to build nanobots."
This idea seems far-fetched:

  • If it was easy to create nanotechnology by just hiring a bunch of human-level scientists, we could just do that directly, without using AI at all.
  • Perhaps we could simulate thousands and thousands of human-level intelligences (although of course these would not be remotely human-like intelligences; they would be part of a deeply alien AI system) at accelerated speeds.  But this seems like it would probably be more hardware-intensive than just turn
... (read more)

I was interested to know if Matrix 4 (especially with its tech-company San Francisco setting) would be offering an updated perspective on some AI issues, but alas, in the end the movie seemed to be even less about AI than the original Matrix films.  But I nevertheless thought the movie was thematically interesting; see my essay about the film here.

Ostensibly, the plot of The Matrix 4 is about Neo breaking out of a prison of illusion and rediscovering the true reality. But the structure of the movie is the opposite of this! It starts out asking real-wo

... (read more)

2022 update: I too was interested to know if Matrix 4 (especially with its tech-company San Francisco setting) would be offering an updated perspective on some AI issues, but alas, in the end the movie was even less about AI than the original Matrix films.  And not really a very good movie either.  But interesting; see my essay about the film here.

Ostensibly, the plot of The Matrix 4 is about Neo breaking out of a prison of illusion and rediscovering the true reality. But the structure of the movie is the opposite of this! It starts out asking re

... (read more)

Your #2 motivation goes pretty far, so this is actually a much bigger exception to your bullet-bite than you might think.  The idea of "respecting the will of past generations to boost the chances that future generations will respect your will" goes far beyond sentimental deathbed wishes and touches big parts of how cultural & financial influence is maintained beyond death.  See my comment here.

As I wrote in a commentary on Gwern's "The Narrowing Circle", respect for ancestors is probably justified in some sort of acausal-negotiation sense, even if (as commenters Richard Kennaway and Dagon feel) we don't actually care about their values:

A drop in respect for ancestors might also directly cause a drop in concern for descendants -- it might be logical to disregard the lives of future generations if we assume that they (just like us) will ignore the wishes of their ancestors!

Consider: it's certainly important that we somewhat respect the financial w... (read more)

I think this reasoning is not truly acausal; if my kids weren't watching my behavior, I wouldn't expect how I treat my elders to affect my descendants' behavior towards me at all.

I follow the logic but also find myself amused by the thought that "simulate every possible unfriendly AI", which sounds like literally the worst civilizational policy choice ever (no matter how safe we think our containment plan might be), could possibly be considered a good idea.

If we already have a powerful friendly AI, say of Galactic size, it could easily simulate millions of UFAI designs at early stages, and replace the human simulations with NPCs, so there would be no suffering.

As a fellow fan of Morris's book, you might enjoy my alternate-history Song Dynasty parody post, "Why the East Rules".

If you like entertaining reads and don't mind just-so stories, you yourself might enjoy my Song Dynasty alternate-history parody post in the style of Why the West Rules (I did enjoy the actual book).  Here's "Why the East Rules".

I agree that one of democracy's core features is that it reduces conflict, and any future system of government should be designed with that in mind. Nevertheless, this post strikes me as foolish for several reasons:

  • Democracy rewards the majority of the people. But is the side with more raw numbers (even if just 51%) really the group best positioned to win a coup? I propose a new form of government, Money-Democracy, where you get votes equivalent to LN(your_net_worth). Now the rich and powerful are properly weighted as being more influential. Wouldn

... (read more)
First: these are a lot of good points, some of which I did not think of as an intellectual test of my ideas beforehand. Any responses that I come up with now are a little suspect, because I had not anticipated several of them.

The majority coalition organizes conditions to defend itself from illegal coups, but that doesn't mean it's the group most capable of mounting one itself. The point is that by passing civil liberties, the majority coalition ensures the system is stable. Under a highly libertarian democracy, numbers serve as a good enough proxy for power that the majority tends to be able to reliably maintain order as the current incumbent -- though not always!

I agree that it's an imperfect abstraction to say that the more your system puts powerful people in charge, the more stable things are, because obviously some more overt ways of putting the powerful person in charge galvanize opposition by offending people's moral or ideological sympathies. However, it's IMO the most important deciding component, and the other components tend to be technically surmountable by the ruling faction once power is achieved (via propaganda, lying, etc.). They simply don't tend to be as much of a binding constraint on behavior in practice.

I have no response to this. It actually just occurred to me that Futarchy can be packaged as a more effective way to enforce majority-coalition values, even if it doesn't immediately enforce their beliefs about the necessary actions. In that sense it, and jury voting, can be seen as a little analogous to ranked voting. It's not necessarily true that you have to give literal power to the majority coalition to placate it. This throws the entire post's thesis into question.

This ignores the implementation details of acting against the Supreme Court overtly, from the perspective of a coalition. You of course still need some institution to resolve legal disputes about the laws passed by Congress. What will that institution be? A designated party man, like Joe Biden? N

The EA movement does indeed have the ear of people like Musk, and especially people like Dustin Moskovitz, Sam Bankman-Fried, etc! I agree that getting billionaires on board with EA is a key objective, but at this point we've already done that a bit, and the best way to get further billionaires on board is probably some combination of:

  • Doing great object-level stuff to show we're worth funding.
  • Thinking up scalable uses of additional funds that could absorb literal billions, while retaining at least a pretty strong level of effectiveness.
  • Trying to
... (read more)

Instead of the current, AI-based system of content moderation, Twitter could experiment with different methods of community governance and judicial review.

Imagine a system where AI auto-censorship decisions could be appealed by staking some karma-points on the odds that a community moderator would support the appeal if they reviewed it.  Others could then stake their own karma points for or against, depending on how they thought the community moderator would rule.  An actual community moderator would only have to be brought in for the most conten... (read more)
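To make the mechanism described above concrete, here is a minimal sketch of how such an appeal might resolve. Everything in it -- the 90% auto-resolve threshold, the pro-rata payout rule, the function and variable names -- is invented for illustration; it is not a description of any real moderation system:

```python
# Hypothetical sketch of the karma-staking appeal flow described above.
# All names, thresholds, and payout rules here are invented for illustration.

def resolve_appeal(stakes_for, stakes_against, moderator_rules=None, threshold=0.9):
    """stakes_for / stakes_against: {user: karma_staked}.
    If one side holds >= `threshold` of total stake, resolve without a human;
    otherwise a moderator's ruling (True = uphold appeal) decides."""
    total_for, total_against = sum(stakes_for.values()), sum(stakes_against.values())
    total = total_for + total_against
    if total_for >= threshold * total:
        upheld = True
    elif total_against >= threshold * total:
        upheld = False
    else:
        if moderator_rules is None:
            return None  # contentious: must escalate to a human moderator
        upheld = moderator_rules
    winners, losers = (stakes_for, stakes_against) if upheld else (stakes_against, stakes_for)
    pot = sum(losers.values())
    # Winners split the losing side's stake pro rata to what they risked.
    payouts = {u: s + pot * s / sum(winners.values()) for u, s in winners.items()}
    return upheld, payouts

# A lopsided appeal resolves automatically; an even split escalates to a human.
print(resolve_appeal({"alice": 95}, {"bob": 5}))   # -> (True, {'alice': 100.0})
print(resolve_appeal({"alice": 50}, {"bob": 50}))  # -> None
```

The design choice to auto-resolve lopsided cases is what keeps scarce moderator attention focused on genuinely contentious appeals.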

Kleros provides a good model for how you can do community-based courts.

Twitter could implement a play-money prediction market just like Metaculus or Manifold Markets -- they could even consider buying one of these teams.  Ideally, starting or voting on a prediction market would be as easy as running a Twitter poll.  (Reddit recently did something similar.)  Having large, Metaculus-style prediction markets on newsworthy events might directly help important online conversations become more productive, more reality-based, and less polarized.  And in the long run, familiarizing people with how prediction marke... (read more)

A key thing that distinguishes Metaculus from PredictionBook before it is that it's harder, not easier, to start predictions. A good prediction market should have well-defined questions. I would expect that having curated markets with well-defined questions would still be good. Twitter could hire a handful of people to write and score the predictions for the most important issues of the day. Then you could tune the system so that people who actually make predictions get their related tweets surfaced to the top more often than the average person.
Metaculus (unlike Manifold) is not a market and does not use play money except in the same sense that Tetris score is play money.
Yes, great idea. An epistemic golden age will result when it's possible to dunk on people by making better predictions than them.

Twitter could create an easy-to-use, secure voting infrastructure for use by student groups, nonprofits, small businesses, unions, and other relatively low-stakes situations where you mostly just want to get a reasonably trustworthy voting system up and running easily.  Twitter could use this platform to advertise the merits of designs like approval voting and quadratic voting, boosting interest in those types of voting and building legitimacy for them to be adopted in higher-stakes contexts.
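For readers unfamiliar with the quadratic voting design mentioned above, the core rule is simply that casting v votes on an option costs v² credits, so expressing a strong preference is possible but increasingly expensive. A toy sketch (the budget size and ballots are made-up examples, not part of any real platform):

```python
# Toy illustration of the quadratic-voting cost rule: casting v votes on an
# option costs v**2 credits. Budget and ballots below are invented examples.

def qv_tally(ballots, budget=100):
    """ballots: {voter: {option: votes}}. Raises if a voter overspends."""
    totals = {}
    for voter, votes in ballots.items():
        spent = sum(v * v for v in votes.values())
        if spent > budget:
            raise ValueError(f"{voter} spent {spent} > budget {budget}")
        for option, v in votes.items():
            totals[option] = totals.get(option, 0) + v
    return totals

# Nine votes for one option costs 81 credits; spreading 5+5 costs only 50.
print(qv_tally({"ann": {"park": 9}, "bo": {"park": 5, "library": 5}}))
# -> {'park': 14, 'library': 5}
```

The quadratic cost is what lets the tally capture intensity of preference, not just direction -- the property that makes the design interesting for the low-stakes elections discussed above.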

(Cross-posting my EA Forum comment out of the Goodness of my Heart)

All investments go to zero in the case of existential risk, so it's hard to price it correctly... I thought that the article was great, but I would have appreciated a more comprehensive treatment.

You may be interested in reading my long-form (albeit meandering) thoughts on this exact issue!  Here is the post, wherein I analyze and respond to an essay by Peter Thiel on this same subject of the interaction between markets and apocalyptic risks that would end those very markets.  Amo... (read more)

Thanks Jackson!

For starters, why aren't we already offering the most basic version of this strategy as a workplace health benefit within the rationality / EA community?  For example, on their workplace benefits page, OpenPhil says:

We offer a family forming benefit that supports employees and their partners with expenses related to family forming, such as fertility treatment, surrogacy, or adoption. This benefit is available to all eligible employees, regardless of age, sex, sexual orientation, or gender identity.

Seems a small step from there to making "we cover IVF ... (read more)

I totally second this.  A couple facts about my own routine:

  • I've been using a Quest 2 for regular (2x-3x weekly) brief exercise sessions for about a year.  In combination with occasional (0.5x-1x weekly) traditional strength training routine and some jogging around the park, this is the most fun I've ever had exercising and the most consistent that I've ever been about it, although I still wish I was doing more.
  • I use this basic pair of weighted gloves when I play VR, which makes games like Beat Saber much more of a workout!  I was initially
... (read more)
Be careful with Hyperbolica -- that game can suck you in! (The PC version, at least.) Accounting Plus is also really good as a short surrealist story-based game (I think it's by the same folks who made Rick & Morty, which is a good proxy for whether you'll enjoy its style of humor or not).
Yonatan Cale, 8mo:
Thank you! I added those gloves to my cart :)

Incurring debt for negative votes is a hilarious image:  "Fool!  Your muddled, meandering post has damaged our community's norm of high-quality discussion and polluted the precious epistemic commons of the LessWrong front page -- now you must PAY for your transgression!!!"

It really isn't funny in the slightest. It means a clique of users can ostracize someone with real-world consequences.

On the subject of "maybe we should tolerate a little bit of Goodharting in the name of encouraging people to post", the EA Forum allows authors to view readership statistics for their posts.  I think this is a cool feature and it would be nice if LessWrong also adopted it.

Writing on LessWrong, I find myself missing the feature for a couple reasons:

  • While the Good Heart Project continues, clearly the number of posts being published is higher than average.  But are there also a higher than average number of readers?  Knowing if I'm getting more
... (read more)
As a counterpoint, knowing that the EA forums expose this significantly disincentivizes me, at the very least, from ever looking at or recommending the EA forums. There is no way to track these statistics that isn't inaccurate in adversarial scenarios, leaks far too much user information, or both. And there tends to be a certain cat-and-mouse game:

  1. Initially there's something absolutely basic, like a hit counter.
  2. Someone writes a script that hammers a page from a single IP, to boost the seeming engagement.
  3. A set cardinality estimator is added, to e.g. filter to only a single hit per IP.
  4. Someone writes a script that hammers a page from many IPs, to boost the seeming engagement.
  5. The hit counter is modified to e.g. only work if Javascript is enabled.
  6. The script is ported to use a JS interpreter, or to directly poke the backend.
  7. The hit counter is modified to e.g. also fingerprint what browser is being used.
  8. The script is ported to use headless Chrome or somesuch.
  9. The hit counter is modified to e.g. only capture views from logged-in visitors.
  10. The script is modified to automatically create accounts and use them.
  11. Account creation is modified to include a CAPTCHA or similar.
  12. The script is modified to include a tool to bypass CAPTCHAs[1]
  13. Etc.

Note that every one of these back-and-forths (a) also drops or distorts data, or otherwise makes life harder, for legitimate users, and (b) leaks more and more information about visitors. I would not have too much of a problem with readership statistics if the resulting entropy was explicitly calculated, and if the forum precommitted to not making future changes that continued the ratchet; without these I have serious concerns.

[1] Be it "feeding audio captchas to a speech-to-text program", or "just use Mechanical Turk".
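For what it's worth, the "set cardinality estimator" mentioned above (filtering to a single hit per IP) can be sketched in a few lines. This is a hypothetical minimal version, not any forum's actual implementation; production systems typically replace the exact set with a probabilistic estimator such as HyperLogLog to bound memory -- which illustrates the tension the comment describes, since even this "basic" version is already storing per-visitor data:

```python
# Minimal sketch of a deduplicating hit counter: count each visitor key
# (e.g. an IP or hashed fingerprint) at most once per page. Hashing the key
# avoids storing raw IPs, but the per-visitor record still exists.
import hashlib

class UniqueHitCounter:
    def __init__(self):
        self.seen = {}  # page -> set of hashed visitor keys

    def record(self, page, visitor_key):
        digest = hashlib.sha256(visitor_key.encode()).hexdigest()
        self.seen.setdefault(page, set()).add(digest)

    def count(self, page):
        return len(self.seen.get(page, set()))

counter = UniqueHitCounter()
for ip in ["1.2.3.4", "1.2.3.4", "5.6.7.8"]:  # repeat hits from one IP
    counter.record("/post/123", ip)
print(counter.count("/post/123"))  # -> 2
```

Each later step in the escalation above (Javascript checks, fingerprinting, login gating) amounts to swapping in a harder-to-forge `visitor_key` -- and correspondingly more information about the visitor.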

(heavy spoilers for ending of HPMOR):

HPMOR takes place in the 1990s, and importantly takes place before most people realized that the mysterious Quirrell was actually none other than the all-powerful nefarious amoral supergenius behind Lord Voldemort. Presumably, the exchange value of Quirrell points fluctuated over time -- low during periods when they only seemed useful for getting favors from an eccentric Defense Professor, high as the Defense Professor became increasingly well-known for his extreme competence and mysterious proximity to important events

... (read more)
Ben Pace, 8mo:
I have added spoiler-tags to your comment. Do not spoil the end of HPMOR on LW for unsuspecting readers!

The rule for the next round of the contest is:

  1. The most-upvoted suggestion will become the rules for the next round of the contest, subject to the constraint that:
  2. The winning suggestion for the next round must describe a contest that an ordinary person could reasonably enter with less than a day's effort; e.g., asking people to write and submit a Google Doc, rather than asking contestants to create a 1000-acre wildlife preserve or implement a set of rules that is clearly paradoxical/impossible.

Hello Guzey!  Your blog and your new organization are a big inspiration to me.  I greatly enjoyed this post; here is a grab-bag of thoughts which hopefully contains some useful info for you:

  • You might be interested to learn that some corners of Buddhism sometimes seem to have a strong anti-sleepiness agenda:
    • The Buddha himself warned against sleeping in excessively luxurious beds -- it is the last of a series of eight moral commandments that good monks should uphold.  But it's unclear if this teaching was intended to prevent people from oversl
... (read more)

The fasting analogy is interesting, as is the analogy with exercise -- some kinds of activities are beneficial in the long-run even when they are damaging/unpleasant in the short run.  But surely these are exceptions to the general rule, right?

  • Besides exercise, it's not good to repeatedly injure yourself and then have the wounds heal.  (Exercise is essentially the small, specific subtype of "injury" which is actually good for the body in the long term.)
  • Getting sick with a cold or flu is good at building immunity to that kind of virus when it come
... (read more)

I am much more sold on "variety is good for humans, and mild-moderate deprivation and excess is variety" than "humans should permanently run on much less sleep than they think they need" or "sleepiness is a lie".

One tricky thing here is that humans aren't actually guaranteed to have a Pareto optimum, or to have a path that gets all good things. It seems really plausible that childhood illness damages development and IQ, and that lack of childhood illness causes allergies and immune vulnerability later (I think the hygiene/old-friends hypothesis is largely correct... (read more)

Martin Randall, 10mo:
In the US, pregnant women get a glucose tolerance test to check for gestational diabetes, which involves drinking 100g of glucose on an empty stomach and seeing how quickly it gets processed. But that's once or twice per pregnancy.

I don't think this particular idea is much of a concern (people could already profit from assassinating CEOs just by shorting the company's stock on the ordinary market, yet they do not...), but I would be interested to see some Metaculus questions (or just community discussion, as here) about some of the key cruxes I'm wondering about.  Like "how much would liquidity increase/decrease if funds were stored in an S&P 500 index instead of just sitting in cash?", or perhaps "Which country will be first to see more than $100 million of daily volume (or whatever) on legal prediction platforms?"

Cross-posting my long comment from the EA Forum:

I always appreciate your newsletter, and agree with your grim assessment of prediction markets' long-suffering history.  Here is what I am left wondering after reading this edition:

Okay, so the USA has mostly dropped the ball on this for forty years.  But what about every other country?  China seems pretty ambitious and willing to make things happen in order to secure their place on the world stage -- where is the CCP-subsidized market hive-mind driving all the crucial central planning decision... (read more)

+1 for reworking this as a top-level post.
Copying my answer from the forum:

HYPOTHESES FOR THE GLOBAL LACK OF ADOPTION OF PREDICTION MARKETS/PROBABILISTIC METHODS

From the hypotheses you outline, the ones that sound the most plausible, or most like they hit the nail on the head, are:

  • Null hypothesis! Governments aren't that competent. I have some thoughts on how "strong optimizers", e.g., a Machiavelli, a Bismarck, just aren't that common, and are becoming less common.
    • We can see this happening in Britain, where Dominic Cummings pushed for prediction markets/forecasting tournaments for governmental decision-making, and this got translated into a totally inoffensive forecasting platform with totally milquetoast questions which don't affect decisions.
  • Prediction markets not being ready for prime time.

But, to some extent, all your hypotheses have something to them. It's also not clear how one would go about differentiating between the different hypotheses. "Good judgment", sure, but we still don't really have the tools for thought to reliably distinguish between these kinds of hypotheses, and I dislike punditry.

PREDICTION MARKET EXPERIMENTS AND OTHER COOL THINGS

More on this to come in the next edition of the newsletter!

  • Yes, but I haven't really done the math to compare to other interventions.
  • Ditto. Also, last time I checked I think Hanson was still excited about it. I guess I'd be more excited about, e.g., prediction markets on topics of great importance to OpenPhil/EA, but that might be a bit myopic (?)
  • Yes, I just asked that yesterday to their head of engineering, and he seemed pretty receptive. No stock markets, though, and they still have to get their respective licenses, at least in the US.
  • I think tha
Perhaps we should also have a meta-prediction market, where people would guess which perverse incentives will be created by which prediction. (For example, one could bet against a CEO, then give him some non-lethal poison.)
This comment is glorious. I'll take some time to answer, though.

You might be interested in this post on the EA Forum advocating for the potential of free/open-source software as an EA cause or EA-adjacent ally movement (see also my more skeptical take in the comments).

I also thought this other EA Forum post was a good overview of the general idea of altruistic software development, some of the key obstacles & opportunities, etc. It's mostly focused on near-term projects for creating software tool... (read more)

Eliezer's point is well-taken, but the future might have lots of different kinds of software! This post seemed to be mostly talking about software that we'd use for brain-computer interfaces, or for uploaded simulations of human minds, not about AGI. Paul Christiano talks about exactly these kinds of software security concerns for uploaded minds here:

In the traditional view a person is free. He is autonomous in the sense that his behavior is uncaused.

That view, together with its associated practices, must be re-examined when a scientific analysis reveals unexpected controlling relations between behavior and environment.  By questioning the control exercised by autonomous man and demonstrating the control exercised by the environment, a science of behavior also seems to question dignity or worth.  

A person is responsible for his behavior, not only in the sense that he may be justly blamed or p

... (read more)

You might be interested in this 80,000 Hours podcast about the extreme moral uncertainty created by our complex world, and the (tongue-in-cheek) "moral case against ever leaving the house". I agree that it can be dizzying to think about how our deep uncertainty about the future (which philosopher Hilary Greaves calls "moral cluelessness") seems to potentially undermine all our efforts -- not just our altruistic endeavors, but what we seek to accomplish in our jobs, in our personal relationships, etc.

But the logic of expected value maximization tells us t... (read more)

wonderful - thanks so much for the references! "moral case against leaving the house" is a nice example to have in the back pocket :)

It is a pretty big ask of individuals (who perhaps are making a blog post with a list of yearly predictions, in the style of Slate Star Codex, theZvi, Matt Yglesias, or others) to do all this math in order to generate and evaluate Normal Predictions of continuous variables. I think your post almost makes more sense as a software feature request -- more prediction markets and other platforms should offer Metaculus-style tools for tweaking a distribution as a way of helping people generate a prediction:

  • It would be awesome if a prediction market let me bet
... (read more)
4Jan Christian Refsgaard1y
The big ask is making normal predictions; calibrating them can be done automatically. Here is a quick example using Google Sheets: here is an example []. I totally agree with both your points. This comment [] from a Metaculus user has some good objections to "us" :)
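As a sketch of the kind of software feature being requested above, here is roughly what a platform could do automatically once a user submits a Normal prediction (a mean and a standard deviation): score it with the log density at the realized outcome, and calibration-check a whole batch of predictions via their CDF values (which should look uniform on [0, 1] if the forecaster is well calibrated). The `predictions` data below is purely illustrative, and this is a minimal stdlib-only sketch, not any particular platform's actual scoring rule:

```python
import math

def log_score(mu: float, sigma: float, actual: float) -> float:
    """Log density of a Normal(mu, sigma) prediction at the realized value.

    Higher (less negative) is better; this is a proper scoring rule."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (actual - mu) ** 2 / (2 * sigma**2)

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """CDF of Normal(mu, sigma), via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical predictions: (predicted mean, predicted sd, realized outcome)
predictions = [
    (100.0, 10.0, 104.0),
    (50.0, 5.0, 43.0),
    (0.2, 0.1, 0.25),
]

# Probability-integral-transform values: for a calibrated forecaster,
# these should be roughly uniform on [0, 1] across many predictions.
pits = [normal_cdf(actual, mu, sd) for mu, sd, actual in predictions]

scores = [log_score(mu, sd, actual) for mu, sd, actual in predictions]
```

A platform could then plot the PIT values as a histogram (too many values near 0.5 means overly wide intervals; too many near 0 or 1 means overconfidence), sparing individual forecasters from doing this math by hand.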

"Suggestion is to use saliva for rapid tests to get that time back, but by the time we do that, I’m assuming the wave will already be over."

I'm confused; is this saying:

  • That we should use saliva-based PCR tests instead of nasal-swab rapid tests? (But PCRs are expensive and their turnaround time is often terrible?)
  • That we should manufacture new types of rapid tests which can use saliva?
  • That existing rapid tests can already use saliva, and we should just start popping rapid tests in our mouths like lollipops instead of sticking them up our noses??

I'm guessing the correct answer is #2, even though #3 would be by far the most fun and convenient thing to be true.

6Daniel V1y
Yeah, the "ethical rules" linked tweet asks, since tests are available in the UK, what if we just had Londoners take two - one in the nose and one in the throat - to see if they work? (so a non-confident version of #3)

It's more complicated too, not just #2 of developing tests we know work with saliva. From the linked preprint, viral loads are somewhat higher in saliva than nasal earlier, but higher in nasal than saliva later (low sample size for this inference though). And those data are a bit sad: they show that regardless of the saliva/nasal viral loads, antigen struggles even for high viral loads until day 3 post-PCR-positive. PCR can detect pre-symptomatic cases (so day 0 PCR-positive can be... fudging a bit for the shorter serial interval, maybe day -3 of symptoms for omicron), which implies that taking an antigen test when you develop symptoms is right on day 3 post-PCR-positive. Maybe the FDA was getting signals of this (additional citations also included in the linked preprint) when they issued their cryptic statement about lower antigen sensitivity for omicron.

It feels like we had more wiggle room on an informative antigen testing window for delta than we do for omicron. Antigen testing was weakly informative pre-symptoms for delta, but, based on this preprint, it seems antigen testing is wholly uninformative pre-symptoms for omicron.

How can it be decades away if a couple of random "transhumanist" couples are already doing it? Mass adoption might be decades away, but lesswrongers are weird people who are often interested in early-adopting new technologies (like cryptocurrency, cryonics, etc).

Shouldn't it be possible (with effort) to have polygenically-selected children right now? Companies like LifeView are already open for business. Yes, it requires expensive IVF, and you have to compute the intelligence scores yourself, since LifeView is ostensibly only about health scoring. But neither of those friction points seems likely to change much in the next 5 years. So I think the answer might be: rather than wait until it gets easy and everyone does it, if you want polygenic selection you should put in the work now and get the competitive ben... (read more)

See the similar comment here.

Personally, I think that we can do better than starting a nuclear war (which, after all, just delays the problem, and probably leaves civilization in an even WORSE place to solve alignment when the problem eventually rears its head again -- although your idea about disaster-proofing MIRI and other AI safety orgs is interesting), as I said in a reply to that comment. Trying to reduce Earth's supply of compute (including through military means), and do other things to slow down the field of AI (up to and including the kind of stu... (read more)

Trying to reduce the amount of compute risks increasing hardware overhang once that compute is rebuilt. I think trying to slow down capabilities research (e.g. by getting a job at an AI lab and being obstructive) is probably better. edit: meh idk. Whether or not this improves things depends on how much compute you can destroy & for how long, ml scaling, politics, etc etc. But the current world of "only big labs with lots of compute budget can achieve SOTA" (arguable, but possibly more true in the future) and less easy stuff to do to get better performance (scaling) both seem good.

I think there are some ways of flipping tables that offer some hope (albeit a longshot) of actually getting us into a better position to solve the problem, rather than just delaying the issue. Basically, strategies for suppressing or controlling Earth's supply of compute, while pressing for differential tech development on things like BCIs, brain emulation, human intelligence enhancement, etc, plus (if you can really buy lots of time) searching for alternate, easier-to-align AGI paradigms, and making improvements to social technology / institutional decis... (read more)

If discussed privately, that can also lead to pretty horrific scenarios where a small group of people do something incredibly dumb/dangerous without having outside voices to pull them away from such actions if sufficiently risky. Not sure if there is any “good” way to discuss such topics, though…

Agreed that in the long run, these kind of slow-rolling dysgenic effects are no big deal:

  • Polygenic selection and other genetic tech are already powerful enough to counter dysgenic effects, and will only become stronger with time.
  • Even if there was no ability to genetically fix dysgenic effects, our society is probably improving in other ways at a fast enough clip to overcome the decay (i.e., medical tech advancing faster than our health declines; education & information technology more than making up for declines in intelligence, etc).
  • More generally,
... (read more)

I appreciate the structured, concise, almost fully bullet-point format of this post. Bullet points are underutilized as a viable writing style for presenting finished work!

Zvi, what are your thoughts on covid in the USA during the winter?

On the one hand:

  • The delta wave is ending, and there are no new variants on the horizon.
  • Vaccinations rates are high and slowly rising.
  • The overall rate of immunity (from vaccines + natural infection) is high and probably rising (although this is a fight between fading vaccine effectiveness vs natural infections & vaccine booster shots).

On the other hand, winter is traditionally the worst time for colds and flus, including the monster covid wave of 2020. It seems hard to believe we'l... (read more)

By late 2022, and much more probably by 2023, mRNA flu vaccines will be available, and they will be just as effective as covid vaccines.
I actually think we should be mandating flu shots in addition to covid shots. Biden administration: "Mandates work! Get a covid shot or else!" Also Biden administration: "Pretty please, get your flu shot" (Seriously though, get your flu shot)