Does the company present the statistical uncertainty, or do you have to calculate it yourself?
the remaining third is split exactly in half on whether preventing AI x-risk feels like a Democratic or Republican issue
I expect this to change soon: there's a very large difference between the parties regarding trust in experts in general and in academia specifically (and we know academia and industry hold different opinions on AI risks).
And do you think you could poll on the other AI risks you identified? I expect there to be a party difference there as well.
Also, maybe you could poll respondents for their political affiliation before asking the questions.
The most common reason people stop counting as participating in the labor force is that they grow old, and living off savings, passive income, a pension, and/or social benefits becomes preferable to continuing to work: that is, they retire. With the global graying of the population, 50% of formerly working people will eventually become permanently unemployable in this sense even without AI progress.
Also, note that Finland has a ~10% unemployment rate and is doing quite OK because of its social safety net. If AI were to be heavily taxed and these funds ...
I realized I'm not sure how you define "50% of people permanently unemployable". Surely it isn't about the global population? Is it about the global labor force (which is ~45% of the global population), or about developed countries only?
As of 2019, about a quarter of the global labor force worked in primary agricultural production (mostly smallholder farmers, who might only be impacted by AI indirectly, e.g. natural gas going to data centers instead of fertilizer plants), and half as many were employed in "off-farm segments of agrifood systems". Surely people need to eat ...
the pace of conceptual work on AI algorithms is like >100x faster
In such a case I expect these AI researchers to pick all the low- and medium-hanging fruit at the then-current compute level/hardware technology, after which algorithmic progress saturates until new-gen chips are produced in quantity. Check this: https://www.lesswrong.com/posts/sGNFtWbXiLJg2hLzK
Why can’t Lockheed and Raytheon simply make way more of them?
The problem is not technological, it's political and economic. We know how to scale the production (it's really 20th-century tech); Congress just doesn't allocate the funds. Half a billion dollars for a new plant is not really that large a figure for a country that spends over a trillion dollars on defense annually, but the priorities are not there (or maybe I should have said "the lobbyists are not there", but I don't want to go deep into politics).
E.g., Raytheon claims to have the capacity to ...
Well, there are plenty of long takes on X that are obviously based on the authors' own ideas but are LLM-generated (even before one runs them through a detector) and still get pretty popular, with the audience not smelling an LLM. Do you count that as good or bad writing? I honestly don't enjoy reading them for some reason, even when I agree the underlying ideas make sense; on the other hand, these authors reached a wider audience than they presumably would have without an LLM.
“dynamite” (no relation)
Really? I had always thought your nickname was a pun on this word!
Check this: https://www.lesswrong.com/posts/PiD8eS33umRrvGcMe/david-james-s-shortform?commentId=k4jpWmksetk3M9xdK
As long as there are only a few nuclear states, the absence of nuclear wars doesn't seem unusual or unexpected, but if the non-proliferation paradigm were to fall apart and multiple new states got bombs within a decade or two, the situation would likely worsen significantly.
If a company mines crypto on scale and gets caught, what would be the punishment, if any?
A Manifold market: https://manifold.markets/MaxHarms/did-alibabas-rome-ai-try-to-break-f
Note that cryptocurrency mining is prohibited in China, although I was unable to find the legal details (presumably it's punishable by fines proportional to scale).
See also https://www.astralcodexten.com/p/sakana-strawberry-and-scary-ai from 2024
entity, person or corporation, listed as owning the property with the tax
Why can't the land be owned by tax-exempt organizations such as churches, charities, or universities and then rented to rich people? It seems to me your suggestion is as loophole-prone as the other ones proposed in the past.
I agree that models served to civilian customers over an API can't realistically be secured from state adversaries, but if we are speaking about advanced AI R&D in the future, as in AI 2027, then it looks feasible to conduct it on protected servers. Maybe I misunderstood the author's opinion.
US investors
I think the essay could have been significantly shorter if you had concentrated on this issue alone. US VC investment reached $340B in 2025 (about 60% of global capacity), while it was only $58B in Europe according to Crunchbase, and the visible part of the Chinese VC market is even smaller.
Lots of ink has been spilled on the reasons why, but suffice it to say, it's nowhere near enough to train at scale in the second half of 2026, and European taxpayers don't want state-funded AI programs either.
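The gap implied by these figures can be made explicit with a quick back-of-the-envelope check; a minimal sketch, assuming the $340B, ~60%, and $58B numbers quoted above are accurate:

```python
# Rough check of the VC figures quoted above (Crunchbase, 2025).
# Assumes the US total ($340B) really is ~60% of the global total.
us_vc = 340e9
europe_vc = 58e9

implied_global = us_vc / 0.60        # implied worldwide VC pool
europe_share = europe_vc / implied_global

print(f"Implied global VC: ${implied_global / 1e9:.0f}B")
print(f"Europe's share:    {europe_share:.0%}")
```

So Europe comes out at roughly a tenth of the implied global pool, which is the disparity the comment is pointing at.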
I believe these things are connected: if the server and the software system in general are safe enough to handle lots of classified information on a regular basis, they're safe enough to store the weights as well.
First of all, if the share of Ls in the deck is higher than usual, you can always consult the table about what to do before taking your turn.
If you are a liberal president and you drew two Ls and an F, it's better to pass LF at the beginning of the game, and in rare situations later in the game when you urgently need to find a liberal player. In this case the information on your chancellor that you and the team get is likely more valuable than the risk of a fascist policy being adopted. If the chancellor chooses to discard an L, which is actually usually optimal for a regula...
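The odds behind this advice are a simple hypergeometric calculation; a sketch, assuming the standard Secret Hitler deck of 6 Liberal and 11 Fascist policies (the deck composition is my assumption, not stated in the comment):

```python
from math import comb

def draw_prob(l_in_deck: int, f_in_deck: int, l_drawn: int, f_drawn: int) -> float:
    """Probability of drawing exactly this L/F split from the top of the deck."""
    deck = l_in_deck + f_in_deck
    hand = l_drawn + f_drawn
    return comb(l_in_deck, l_drawn) * comb(f_in_deck, f_drawn) / comb(deck, hand)

# Chance the president's 3-card draw from a full deck is LLF, as in the example:
p_llf = draw_prob(6, 11, 2, 1)
print(f"P(LLF) = {p_llf:.1%}")
```

With a full deck this split comes up in roughly a quarter of first draws, so the pass-LF decision is not an edge case.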
After thinking about the recent viral Citrini thought experiment and doing a bit more research, I think I was able to sharpen my thesis a bit!
Transaction costs were divided into three broad categories by Dahlman in 1979:
robustness to state-backed hacking programs was unachievable
How do you reconcile that with the fact that Claude has recently been used by the US government to process classified information? Presumably they have a special version on special servers for that, but still, this looks like some degree of robustness that can be achieved with a model not served to a wide audience.
Post-9/11 laws give the NSA legal authority to access the messages of foreign users on American servers (see, for example, https://www.aclu.org/issues/national-security/privacy-and-surveillance/nsa-surveillance), so that shouldn't come as a surprise. But domestic surveillance was limited by the courts around the 1960s, IIRC.
Is there a statistically significant difference in how Democrats, Independents and Republicans rank different risks from AI?