Of course it leaves things on the table. Presumably the authors judged that it'd be infeasible to directly attack party authority. If you're going to have party primaries (never mind first past the post), it makes some sense for them to have consequences. Approval/freedom voting ought to allow for dispensing with primaries altogether, but I totally respect that that is likely too big a bite right now.

For my future reference: I don't see the rules you cite in the section of the Missouri constitution that the initiative would amend https://www.sos.mo.gov/CMSImages/Publications/CurrentMissouriConstitution.pdf?v=202212#page=124 so they must be in the statutes. I see the 10k signature requirement (for statewide offices) for independent candidates in https://revisor.mo.gov/main/OneSection.aspx?section=115.321&bid=6088 and the rule that the winner of a party primary is that party's only candidate in the general in https://revisor.mo.gov/main/OneSection.aspx?section=115.343&bid=6100 and I'm not sure I see the requirement to register as an independent before the primary ends, but likely I'm just overlooking it, or it is implied by the various timing requirements? https://revisor.mo.gov/main/PageSearch.aspx?tb1=independent%20candidate&op=and&tb2=&idx=2&chapter=115
I guess you're commenting on https://www.sos.mo.gov/cmsimages/Elections/Petitions/2024-110.pdf
It seems like a very useful improvement over what I presume is current election law in Missouri. If there's any existing mechanism for a primary loser to get on the general election ballot (as an independent?) I don't see how this initiative overrides that. What am I missing?
Great cause, godspeed!

Interestingly, it appears Show Me Integrity and Missouri Agrees have rebranded approval voting as freedom voting.
Or if the rebranding already existed elsewhere, I'm idly curious about its origin. No mention yet in https://en.wikipedia.org/wiki/Approval_voting
In my model, we got AI in large part because we couldn’t get flying cars, or build houses where people want to live, or cure diseases
Would enjoy reading more about that. How does it work? Is it that the talent and investment now in AI would instead be doing other cool things, if those cool things were allowed, more so than that the talent and investment in AI would have grown as a result of more people doing more cool things?
I enjoyed it, almost want to go for the ubercop idea. A serious alternative way to increase emergency vehicle utilization (by weight, size, cost or similar) is to make them smaller. They're unnecessarily monster sized, in the US anyway. #DefundPoliceSUVs
Same. Aging is bad, don't expect it to be solved (escape velocity reached) in my lifetime.
I also agree that nearish AGI excites, whether deemed good (I'd welcome it; a bet worth taking, though scary) or doom, while far AGI is relatively boring, and that may psychologically contribute to people holding shorter timelines.
Third "fact" at the top of the original post "We've made enormous progress towards solving intelligence in the last few years" is somewhat refuted by the rest: if it's a math-like problem, we don't know how much progress toward AGI we've made in the last few years. (I don't know enough to hold a strong opinion on this, but I hope we have! Increase variance, the animal-human experience to date is disgustingly miserable, brutal, and stupid, do better or die trying.)
I know this is an estimate for imminent global nuclear war (for which I'd give a lower estimate, though even if it were 100x lower, 0.17% -- and it isn't -- that would be wholly unacceptable), but I don't want global nuclear war at any point in my lifetime.

So it's necessary to also consider what effect various outcomes of the current war may have on the likelihood of global nuclear war in the coming decades. In this vein, the best argument for ensuring that Ukraine wins in spite of Russian nuclear threats is that allowing Russia to achieve a relatively favorable outcome on the basis of its nuclear threats incentivizes proliferation -- of nukes, and of nuke threats. That would surely increase the likelihood of global nuclear war in the coming decades, whether through error or escalation.
The best outcome of the war in my opinion would be a restart of global nuclear weapon elimination. It's even conceivable (though I'm not wishing for it!) that escalation to use of nukes in Ukraine is the insult that the world needs to eliminate nukes, and any leader who makes any move toward getting them.
Security strikes me as a better word than safety, ethics, or responsibility. More prestigious, and the knowledge and methods of the existing security research field are probably more relevant to addressing AI x-risks than are those of the existing safety, ethics, etc. fields.
a new, updated booster...won’t make much difference even to those who take advantage of it at this point
I'm planning to take advantage, but would enjoy reading an analysis of how much/little difference it can be expected to make. I guess this has been done by many people, pointers idly wanted/appreciated.
I wonder if using the title research software engineer, posting on RSE job boards, networking in their communities, and adhering to their norms might help with matching. Idle/uninformed speculation; I've only become aware of the self-identified field/title in the last year, and gather it is only about 10 years old.