Canada also uses FPTP, so this is not the example you should be using for examining alternatives.
Proportional representation, which is common in continental Europe, does result in a diversity of parties in practice.
There’s no bigger narrative than the one AI industry leaders have been pushing since before the boom: AGI will soon be able to do just about anything a human can do, and will usher in an age of superpowerful technology the likes of which we can only begin to imagine. Jobs will be automated, industries transformed, cancer cured, climate change solved; AI will do quite literally everything.
Unfortunately, the article does not seriously consider the possibility that AGI could automate most jobs within a few years. In that case, the large investments in AI would be justified even if current revenue is small! I think this is an important difference from past bubbles.
OpenAI, Anthropic, and the AI-embracing tech giants are burning through billions, inference costs haven’t fallen (those companies still lose money on nearly every user query), and the long-term viability of their enterprise programs is a big question mark at best.
The part about inference costs seems false, unless they mean the total inference cost summed across all their instances (rather than the cost per query).
Most[1] problems with unbounded utility functions go away if you restrict yourself to summable utility functions[2]. Summable utility functions can still be unbounded.
For example, if each planet in the universe gives you 1 utility, and $P(\text{there are at least } n \text{ planets}) \le 2^{-n}$ for every $n$, then your utility function is unbounded but summable. In such a universe it would be very unlikely for a casino to hand out a large number of planets.
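One quick way to see the summability (assuming the tail bound above): writing $U$ for the number of planets, so your utility is $U$,

$$\mathbb{E}[U] = \sum_{n=1}^{\infty} P(U \ge n) \le \sum_{n=1}^{\infty} 2^{-n} = 1 < \infty,$$

so expected utility is finite even though $U$ itself is unbounded, and the probability that a casino could pay out $N$ or more planets is at most $2^{-N}$.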
Your proof relies on the assumption "assuming that the casino has unbounded utility to hand out", and this assumption would be wrong in my example.
Note that GWWC is shutting down their donor lottery, among other things: https://forum.effectivealtruism.org/posts/f7yQFP3ZhtfDkD7pr/gwwc-is-retiring-10-initiatives
Mid-2027 seems too late to me for such a candidate to start an official campaign.
For the 2020 presidential election, many Democratic candidates announced their campaigns in early 2019, and Yang announced as early as 2017. Debates were already happening in June 2019. As a likely unknown candidate, you would probably need a longer lead time to build up some name recognition.
Also Musk's regulatory plan is polling well
What plan are you referring to? Is this something AI safety specific?
I wouldn't say so; I don't think his campaign has made UBI advocacy more difficult.
But an AI notkilleveryoneism campaign seems riskier. It could end up making the worries look silly, for example.
Their platform would be whatever version and framing of AI notkilleveryoneism the candidates personally endorse, plus maybe some other, smaller things. They should be open that they consider potential human disempowerment or extinction to be the main problem of our time.
As for concrete policy proposals, I am not sure. The focus could be on international treaties, or on banning or heavily regulating AI models that were trained with more than a trillion quadrillion (10^27) operations. (Not sure I understand the intent behind your question.)
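For a rough sense of scale, here is a minimal sketch of how such a threshold could be checked, using the common rule of thumb that training compute is roughly 6 × parameters × training tokens; the parameter and token counts below are hypothetical, not claims about any actual system.

```python
# Rough check of a training run against a 1e27-operation threshold.
# Uses the common rule of thumb: training compute (FLOP) ≈ 6 * parameters * tokens.
# The parameter and token counts below are hypothetical examples.

THRESHOLD_OPS = 1e27


def training_compute(parameters: float, tokens: float) -> float:
    """Approximate total training operations for a dense model."""
    return 6 * parameters * tokens


hypothetical_runs = {
    "1e11 parameters, 1e13 tokens": training_compute(1e11, 1e13),  # ~6e24
    "2e12 parameters, 1e14 tokens": training_compute(2e12, 1e14),  # ~1.2e27
}

for name, ops in hypothetical_runs.items():
    side = "above" if ops > THRESHOLD_OPS else "below"
    print(f"{name}: ~{ops:.1e} operations ({side} the 1e27 threshold)")
```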
A potentially impactful thing: someone competent runs as a candidate for the 2028 election on an AI notkilleveryoneism[1] platform. Maybe even two people should run, one in the Democratic primary and one in the Republican primary. While getting the nomination is rather unlikely, there could be lots of benefits even if you fail to gain the nomination (like other presidential candidates becoming sympathetic to AI notkilleveryoneism, or AI notkilleveryoneism becoming more popular with the general public, etc.).
On the other hand, attempting a presidential run can easily backfire.
A relevant precedent for this kind of approach is Andrew Yang's 2020 campaign, which focused on universal basic income (and the downsides of automation). While the campaign attracted some attention, it seems like it didn't succeed in making UBI a popular policy among Democrats.
Not necessarily using that name. ↩︎
I learned that undersea data centers are possible. Microsoft had Project Natick, but it looks like they abandoned it. There is also a Chinese project, as well as a Western startup. The main benefit seems to be reduced cooling costs.