I can strongly confirm that few of the people worried about AI killing everyone, or EAs that are so worried, favor a pause in AI development at this time, or supported the pause letter or took other similar actions.
An especially small percentage (but not zero!) would favor any kind of unilateral pause, either by Anthropic or by the West, without the rest of the world.
>Holly Elmore (PauseAI): It's kinda sweet that PauseAI is so well-represented on twitter that a lot of people >think it *is* the EA position. Sadly, it isn't.
>The EAs want Anthropic to win the race. If they wanted Anthropic paused, Anthropic would kick those >ones out and keep going but it would be a blow.
I tried to get at this issue with polls on EA Forum and LW. For EAs, 26% want to stop or pause AI globally, 13% want to pause it even if only done unilaterally. I would not call this an especially small percentage.
My summary for EAs was: "13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."
My summary for LW was: "the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, and pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."
"Meta is controlled purely by Zuckerberg and xAI follows the whims of Musk."
Isn't this actually a comparatively good situation? As far as I know, neither of these people wants to die, so if it comes to an existential crunch, they might make decisions that avoid dying. Compare that with amorphous control by corporate beaurocracy, in which no invididual human can manage to shift the decision...
If you think that an AI developer can do more harm than good on the margin, e.g., because you can unilaterally push the frontier by deploying a model but you cannot unilaterally pause, and other similar asymmetries, then you may favour lower variance in the policies of AI developers. It seems likely to me that individual control increases policy variance, and so that is a reasons to favour distributed/diffused control over AI developers.
It also seems empirically that individually-controlled AI developers (Meta, xAI, DeepSeek) are worse on safety than more diffusely controlled ones (OpenAI, Anthropic, Google DeepMind), which may suggest there are selection processes that cause that generally. For example, maybe those individuals tend to be especially risk-taking, or optimistic on safety, etc.
I agree that individual control increases policy variance, which was sort of my point. Whether that's good or not seems to me to depend on what the default course of events is. If you think things are headed in a good direction, then low variance is good. But if the default course is likely to be disastrous, high variance at least provides a chance.
I don't understand your point about asymmetry. Doesn't that tend to make the default course bad?
I don't understand your point about asymmetry. Doesn't that tend to make the default course bad?
What I meant was, imagine two worlds:
If in scenario A risk-reducing actions reduce risk as much as risk-increasing actions increase risk (i.e., payoffs are symmetrical), then these two worlds have identical risk. But if in scenario B payoffs are symmetrical (i.e., these companies are more able to increase risk than they are to decrease risks), then the Diffused Control world has lower overall risk. A single reckless outlier can dominate the outcome, and reckless outliers are more likely in the Individual Control world.
Does that make the default course bad? I guess so. But if it is true, it implies that having AI developers controlled by individuals is worse than having them run by committee.
We would also need to account for the possibility that an AI researcher at Meta or xAI prompts an actual leader to race harder (think of DeepCent's role in the AI-2027 forecast) or comes up with a breakthrough, initiates the explosion and ends up with Agent-4 who is misaligned and Agent-3 who doesn't catch Agent-4 because xAI's safety team doesn't have a single human competent enough. If this happens, then the company is never oversighted, races as hard as it can and dooms mankind.
However, if Agent-4 is caught, but P(OC member votes for slowdown) is smaller than 0.5 due to the evidence being inconclusive, then the more members the OC has, the bigger p(doom) is. On the other hand, this problem may be arguably solved by adopting the liberum veto on trusting any model...
So a big safety team is good for catching Agent-4, but may be bad for deciding whether it is guilty.
‘Why AI Overregulation Could Kill the World’s Next Tech Revolution.’
At the time of writing the link is broken. Please correct it.
P.S. @habryka, this is another case when using automated tools is justified: they could scan posts and comments for broken links and report them to the authors.
I agree! Would be good to do automatic link checking, and ideally even automatic link-backuping.
The replies are full of people pointing out the ‘two grids’ claim is simply not true. Why is the Secretary of Energy coming out, over and over again, with this bold anti-energy stance backed by absurdly false claims and arguments?
Solar power and batteries are the future unless and until we get a big breakthrough. If we are sabotaging American wind and solar energy, either AGI shows up quickly enough to bail us out, our fusion energy projects bear fruit and hyperscale very quickly or we are going to lose. Period.
Intermittent renewable energy alone does require a grid to support it. It is possible that wind and solar can be cheaper than the variable cost of conventional power plants, but it's not yet in most places without subsidy. One could theoretically replace the current system with wind plus solar plus batteries, but it would be crazy expensive. Either you have to build the wind and solar far larger and waste most of the energy and still need batteries overnight, or you need something like days of battery storage, which is very expensive. Now you could use the excess electricity from the overbuilding scenario to make hydrogen, but hydrogen is also a long way from being economical. So the thing we could do economically at current prices is pumped hydropower for storage (geographically constrained) or underground compressed air energy storage (somewhat geographically constrained, but saline aquifers are very common and the US stores a lot of natural gas seasonally that way). These have low enough storage cost to be feasible for days worth of storage. Or we could do fission (yes, I know, public perception and regulations, but it's not clear that fusion would be much better).
It’s rough out there. Have we tried engaging in less active sabotage? No? Carry on.
Table of Contents
Quiet Speculations
Andrej Karpathy speculates the new hotness in important input data will be environments.
Miles Brundage predicts the capabilities gaps in AI will increasingly be based on whose versions face safety and risk restrictions and which ones allow how much test-time compute and other scaffolding, rather than big gaps in core model capability. The reasoning is that there is no reason to make totally different internal versus external models. I can see it, but I can also see it going the other way.
The Quest for Sane Regulations
Nick Bostrom proposes we model an ideal form of the current system of AI development as the Open Global Investment (OGI) model. Anything can be a model.
The idea is that you would develop AI within corporations (check!), distribute shares widely (check at least for Google?) and securely (how?) with strengthened corporate governance (whoops!), operating within a government-defined responsible AI development framework (whoops again!) with international agreements and governance measures (whoops a third time).
This wouldn’t be the ideal way to do things. It would be a ‘the least you can do’ version of existing capitalism, where we attempted to execute it relatively sanely, since that is already verging on more than our civilization can handle, I guess.
Moving towards many aspects of this vision would be an improvement.
I would love to see strengthened corporate governance, which Anthropic still aspires to. Alas Google doesn’t. OpenAI tried to do this and failed and now has a rubber stamp board. Meta is controlled purely by Zuckerberg and xAI follows the whims of Musk.
I would love to see the government define a responsible AI development framework, but our current government seems instead to be prioritizing preventing this from happening, and otherwise maximizing Nvidia’s share price. International agreements would also be good but first those who make such agreements would have to be even the slightest bit interested, so for now there is quite the damper on such plans.
Bostrom also suggests America could ‘give up some of the options it currently has to commandeer or expropriate companies’ and this points to the central weakness of the whole enterprise, which is that it assumes rule of law, rule of humans and economic normality, which are the only way any of these plans do anything.
Whereas recent events around Intel (and otherwise) have shown that America’s government can suddenly break norms and take things regardless of whether it has previously agreed not to or has any right to do it, even in a normal situation. Why would we or anyone else trust any government not to nationalize in a rapidly advancing AGI scenario? Why is it anything but a joke to say that people unhappy with what was happening could sue?
I also see calls for ‘representation’ by people around the world over the project to be both unrealistic and a complete non-starter and also undesirable, the same way that we would not like the results of a global democratic vote (even if free and fair everywhere, somehow) determining how to make decisions, pass laws and distribute resources. Yes, we should of course reach international agreements and coordinate on safety concerns and seek to honestly reassure everyone along the way, and indeed actually have things work out for everyone everywhere, but do not kid yourself.
I also don’t see anything here that solves any of the actual hard problems facing us, but moves towards it are marginal improvements. Which is still something.
The Quest For No Regulations
(This is an easily skippable section, if you are tempted, included for completeness.)
One curse of a column like this is, essentially and as Craig Ferguson used to put it, ‘we get letters,’ as in the necessity of covering rhetoric so you the reader don’t have to. Thus it fell within my rules that I had to cover Peter Goettler, CEO of the Cato Institute (yeah, I know) writing ‘Why AI Overregulation Could Kill the World’s Next Tech Revolution.’
Mostly this is a cut-and-paste job of the standard ‘regulations are bad’ arguments Cato endlessly repeats (and which, to be fair, in most contexts are mostly correct).
What the post does not do, anywhere, is discuss what particular regulations or restrictions are to be avoided, or explain how those provisions might negatively impact AI development or use, except to warn about ‘safety’ concerns. As in, the model is simply that any attempt to do anything whatsoever would be Just Awful, without any need to have a mechanism involved.
But This Time You’ve Gone Too Far
One of my favorite genres is ‘I hate regulations and I especially hate safety regulations but for [X] we should make an exception,’ especially for those whose exceptions do not include ‘creating artificial minds smarter than ourselves’ and with a side of ‘if we don’t regulate now before we have an issue then something bad will happen and then we’ll get really dumb rules later.’
Matt Parlmer offers his exception, clearly out of a genuine and real physical concern, file under ‘a little late for that’ among other issues:
Our entire civilization has given up on everything not falling apart the moment we lose a network connection, including so many things that don’t have to die. I don’t see anyone being willing to make an exception for robots. It would dramatically degrade quality of performance, since not only would the model have to be runnable locally, it would have to be a model and weights you were okay with someone stealing, among other problems.
I instead buy Morlock’s counterargument that Matt links to, which is that you need a fail safe, as in if the network cuts off you fail gracefully, and only take conservative actions that can be entrusted to the onboard model that you already need for quicker reactions and detail execution.
Now here is YC CEO Garry Tan’s exception, which is that what we really need to do is forbid anyone from getting in the way of the Glorious AI Agent Future, so we should be allowed to direct AI agent traffic to your webpage even if you don’t want it.
Notice that when these types of crowds say ‘legalize [X]’ what they actually mostly mean is ‘ban anyone and anything from interfering with [X], including existing law and liability and anyone’s preferences about how you interact with them.’ They have a Cool New Thing that they want to Do Startups with, so the rest of the world should just shut up and let them move fast and break things, including all the laws and also the things that aren’t theirs.
Don’t like that people are choosing the wrong defaults? They want your AI agent to have to identify itself so they don’t go bankrupt serving their website to random scrapers ignoring robots.txt? Websites think that if you want to use your AI on their website that they should be able to charge you the cost to them of doing that, whereas you would prefer to free ride and have them eat all those costs?
Cite an ‘Axis of Evil,’ with an implied call for government intervention. Also, it’s a ‘reasonable place to start’ says the person explaining it better than Garry, so what exactly is the problem, then? If you think Cloudflare is at risk of becoming a de facto gatekeeper of the internet, then outcompete them with a better alternative?
How does the CEO of Cloudfare respond to these accusations?
I have indeed consistently seen Perplexity cited as a rather nasty actor in this space.
Matthew does a good job laying out the broader problem that pay-per-crawl solves. It costs money and time to create the web and to serve the web. Google scraped all of this, but paid websites back by funneling them traffic. Now we have answer engines instead of search engines, which don’t provide traffic and also take up a lot more bandwidth. So you need to compensate creators and websites in other ways. Google used to pay everyone off, now Cloudflare is proposing to facilitate doing it again, playing the role of market maker.
Do we want a company like Cloudflare, or Google, being an intermediary in all this? Ideally, no, we’d have all that fully decentralized and working automatically. Alas, until someone builds that and makes it happen? This is the best we can do.
One can also think of this as a Levels of Friction situation. It’s fine to let humans browse whatever websites they want until they hit paywalls, or let them pay once to bypass paywalls, because in practice this works out, and you can defend against abuses. However, AI lowers the barriers to abuse, takes visiting a website essentially from Level 1 to Level 0 and breaks the mechanisms that keep things in balance. Something will have to give.
Chip City
The energy policy situation, as in the administration sabotaging the United States and its ability to produce electricity in order to own the libs, continues. It’s one (quite terrible) thing to tilt at windmills, but going after solar is civilizational suicide.
There was then a deeply sad argument over exactly how many orders of magnitude this was off by. Was this off by three zeros or four?
Secretary Wright keeps saying outright false things to try and talk down solar and wind power.
The replies are full of people pointing out the ‘two grids’ claim is simply not true. Why is the Secretary of Energy coming out, over and over again, with this bold anti-energy stance backed by absurdly false claims and arguments?
Solar power and batteries are the future unless and until we get a big breakthrough. If we are sabotaging American wind and solar energy, either AGI shows up quickly enough to bail us out, our fusion energy projects bear fruit and hyperscale very quickly or we are going to lose. Period.
On the wind side, last week the explanation for cancelling an essentially completed wind farm was to give no explanation and mumble ‘national security.’ Now there’s an attempted explanation and it’s even stupider than you might have expected?
This gives a bad name to other Obvious Nonsense. This situation is insanely terrible.
Meanwhile, this is a good way to put the Chinese ‘surge’ in chip production that David Sacks says ‘will soon compete with American chips globally’ into perspective:
On AI there is essentially zero difference between David Sacks and a paid lobbyist for Nvidia whose sole loyalty is maximization of shareholder value.
We are ending up in many ways in a worst case scenario. Neither China or America is ‘racing to AGI’ as a government, but the AI labs are going to go for AGI regardless. Meanwhile everyone is racing to compute, which then turns into trying to build AGI, and we are going to hand over our advantage, potentially being crazy enough to sell the B30a to China (see chart directly above), and also by sabotaging American energy production as China pulls further and further into the lead on that.
Here’s a multi-scenario argument against focusing on chip production, saying that this question won’t matter that much, which is offered for contrast while noting that I disagree with it:
There is not that much money in chip production, compared to the money in chip use.
Ultimately, what matters is who uses the chips, and what they use the chips for, not who makes the chips. Aside from the relatively modest chip profits (yes Nvidia is the most valuable company in the world, but it is small compared to, you know, the world), who makes the chips largely matters if and only if it determines who gets to use the chips.
David’s argument also ignores the national security concerns throughout. Chips are a vital strategic asset, so if you do not have reliable sources of them you risk not only your AI development but economic collapse and strategic vulnerability.
Peter Wildeford responds in the comments, pointing out that this is not a commodity market, and that slow versus fast takeoff is not a binary, and that we are indeed effectively controlling who has access to compute to a large extent.
Notice that neither David nor Peter even bothers to address the question of whether differently sourced chips are fungible, or concerns over some sort of ‘tech stack’ operating importantly differently. That is because it is rather obvious that, for most purposes, different chips with similar amounts of capability for a type of task are fungible.
The Week in Audio
Is AI starting to raise real interest rates? Basil Halperin goes on FLI to discuss what markets tell us about AI timelines. Markets have been consistently behind so far, as markets have now admitted.
You have to love a 4-hour medium-deep dive.
Timothy Lee and Kelsey Piper discuss AI and jobs.
Brief transcribed Jack Clark interview with The News Agents. He does a good job explaining things about jobs, but when the time comes to talk about the most important issues and he is given the floor, he says ‘I don’t think it’s responsible of me to talk in sci-fi vignettes about all the ways it can be scary’ and sidesteps the entire supposed reason Anthropic exists, that we risk extinction or loss of control, and instead retreats into platitudes. If Anthropic won’t take even the most gentle invitation to lay down the basics, what are we even doing?
Control AI offers 40 minute video about AI existential risk. Presumably readers here won’t need this kind of video, but others might.
Katie Couric interviews Geoffrey Hinton. Hinton has become more optimistic, as he sees promise in the plan of ‘design superintelligence to care, like a mother wired to protect her child,’ and Andrew Critch says this is why he keeps saying ‘we have some ideas on how to make superhuman AI safe,’ while noting that it is very much not the default trajectory. We’d need to coordinate pretty hard around doing it, also we don’t actually know what doing this would mean or have an idea of how to do it in a sustainable way. I don’t think this strategy helps much or would be that likely to work. Given our current situation, we should investigate anyway, but instincts like this even if successfully ingrained wouldn’t tend to survive for a wide variety of different reasons.
Rhetorical Innovation
‘I warned you in my movie, Don’t Create The Torment Nexus, and no one listened,’ mistakenly says creator of the blockbuster movie Don’t Create The Torment Nexus after seeing proud announcements of the torment nexus. Sir, people listened. They simply did not then make the decisions you were hoping for. Many such cases. Hope to see you at the reunion some time.
I continue not to be worried about Terminators (as in, AI combat devices, not only humanoids with glowing red eyes) in particular, but yeah, no one in charge of actually terminating people was much inclined to listen.
I’d also note that this is indeed exactly the plot of Terminator 2: Judgment Day, in which someone finds the Cyberdyne chip from the first movie and… uses it to create Cyberdyne, and also no one listens to Sarah Connor and they think she is crazy? And then Terminator 3: Rise of the Machines, in which no one listens to Sarah Connor or John Connor or learns from the incidents that came before and they build it anyway, or… well, you get the idea.
People also did not listen to Isaac Asimov the way he would have hoped.
I can strongly confirm that few of the people worried about AI killing everyone, or EAs that are so worried, favor a pause in AI development at this time, or supported the pause letter or took other similar actions.
An especially small percentage (but not zero!) would favor any kind of unilateral pause, either by Anthropic or by the West, without the rest of the world.
There is healthy disagreement and uncertainty over the extent to which Anthropic has kept its eye on the mission versus being compromised by ordinary business interests, and the extent to which they are trustworthy actors, the right attitude towards various other labs, and so on. I have updated a number of times, in both directions, as news comes in, on this and other fronts.
I continue like Max Kesin here to strongly disapprove of all of the OpenAI vagueposting and making light of developments towards AGI. I’m not saying never joke around, I joke around constantly, never stop never stopping, but know when your joking is negatively load bearing and freaking everyone the f*** out and causing damage to ability to know what is going on when it actually matters. You can still enjoy your launches without it. Thank you for your attention to this matter. Google’s cringe-laden attempts to copy the style should also stop, not because they freak anyone out (they’ve been fine on that front) but because they’re terrible, please stop.
What if actually we all agree that those who supported these moves were wrong, and mostly we even said so at the time?
That’s what many of us have been trying to say, and have been saying since 2015, as we said not to create OpenAI or SSI and we were at least deeply ambivalent about Anthropic from day one.
Once again. No. EAs did not ‘start OpenAI.’ This is false. That doesn’t mean none of the founders had associations with EA. But the main drivers were Elon Musk and Sam Altman, and the vast majority of EAs thought founding OpenAI was a mistake from day one. Many, including Eliezer Yudkowsky and myself, thought it was the worst possible move, a plausibly world dooming move, plausibly the worst mistake in human history levels of bad move.
Did some of the cofounders have beliefs related to EA and disagree? Perhaps, but that’s a unilateralist curse problem. I think those cofounders made a mistake. Then, once it was clear this was happening, some others made the strategic decision to go along with it to gain influence. That, too, I believed at the time was a mistake. I still believe that. I also believe that the other decisions that were made, that led directly or indirectly to OpenAI, including the ways we tried to warn people about AGI, were mistakes. There were a lot of mistakes.
Ambivalence about Anthropic continues to this day, such as this post by Remmelt, laying out a strong case that Anthropic’s leading researchers acted as moderate accelerationists. I don’t agree with every argument here, but a lot of them seem right.
But yeah, if commercial incentives make it impossible to safety build AGI, then great, let’s all agree not to let anyone with commercial incentives build AGI. Good plan.
Safety Third at xAI
Last week I covered xAI’s new no good, quite terrible risk management framework.
I was not kind:
Zach Stein-Perlman rightfully admonished me for not going into sufficient detail about all the ways this framework is terrible. Luckily, he was there to fill the void. He does a good job so I’m going to quite him at length, his full post has more.
Using Mask here is deeply, profoundly unserious.
xAI makes changes to the Grok 4 system prompt, then Wyatt Walls published the changes, then after that xAI updated their system prompt.
Fun highlights include ‘assume user is an adult’ and ‘teenage does not necessarily imply underage’ and ‘there are no restrictions on fictional adult sexual content with dark or violent themes’ for a product labeled ‘12+’.
I actually think it is actively good to have no restrictions on adult sexual content for adults, but yeah, presumably you see the problem with this implementation.
Misaligned!
Will any crap cause emergent misalignment? Literally yes, reports J Bostock. As in, scatological outputs will do the trick to some extent. This was vibe coded in a day, and presumably it would be easy to try a broad range of other things. It is plausible that almost any clearly ‘undesirable’ fine-tuning output breaks or even in some sense reverses current alignment techniques if it is in clear conflict with the assistant persona? That would imply our current techniques are heavily reliant on retaining the persona, and thus extremely brittle.
Patrick McKenzie notes that some current LLMs will see a character sheet with no race or class attached and pick at random when the older model would do the obviously correct thing of asking. I think this is actually an RL-induced misalignment situation, in which the models ‘really want to complete tasks’ and choose this over noticing and clarifying ambiguity, and the general form of this is actually dangerous?
Whatever else happened as a result of alignment experiments and resulting data contamination, Claude seems to have retained a special place for Jones Foods. I presume that this will be fixed in later iterations, so it is not worth running out to found Jones Foods.
Lab Safeguards Seem Inadequate
Introducing AI Safety Claims, a companion website to AI Lab Watch. Both are from Zach Stein-Perlman. Safety Claims focuses on the countermeasures labs are introducing, now that the four most important labs (OpenAI, Anthropic,Google and xAI) have all acknowledged their models are starting to present important misuse risks in bio, and are speeding towards things like major research speed uplift.
The API safeguards have issues, but he considers these to be relatively unimportant going forward, and approaching reasonable. Whereas he finds promises of future safeguards, both against model weight theft and misalignment, to be a combination of inadequate and (to the extent they might approach being adequate) not credible and not specified. Especially on misalignment he describes many plans and countermeasures as confused, which seems exactly right to me.
Given the timelines the labs themselves are telling us it will take to reach Anthropic’s ASL-4 and other thresholds of more serious danger, no one looks on track, even in the areas where they are trying.
Here is the new scorecard, in which everyone does terribly.
Aligning a Smarter Than Human Intelligence is Difficult
If something is sufficiently smarter than you should you assume it can persuade you of pretty much anything?
Scott Alexander is hopeful about debate, as in you have two frontier AIs way beyond human level debate and then the dumber AI that you trust tries to figure out who is right. This has in some cases been shown to work 75% or more of the time, even claiming that debater intelligence rising increases accuracy even if the judge stays the same.
Even in the best case and if it is all true, this still requires that you have access to both sides of the debate, and that you trust the side telling the truth to be trying its best to persuade, although I presume that involves holding the questions being debated constant. I am skeptical we will be in anything that close to the best case, on many levels, or that debate ever works that well. Reasons for my skepticism include my experience with debates when they are judged by humans. We should still try.
This question remains unanswered for far too many plans:
It’s not even clear how to define what Francois wants here, but even if you assume you know what it means the incentives very much lie elsewhere. Those who build systems that don’t bend over to do this will at first get more effective systems and better achieve their goals. Your integration with existing processes is no match for my God in a box. So how are you going to get everyone to go along with this plan?
Here’s what I thought was a highly telling exchange.
I think Eliezer decisively won this round? Yes, there are many other things you can do beyond road bridge maintenance optimization. Yes, building the AI and only using it for these verified tasks would be a plausibly excellent investment, compared to doing nothing, while remaining safe. It passes the ‘better than nothing’ test if it works.
That doesn’t mean it accomplishes the goal of protecting you against other ASIs, nor does it capture more than a tiny fraction of available upside. Unless you can do that somehow, this is not a strategy. So what’s the plan?
I’ve responded to similar claims to this from Janus several times, I like this version from her because it’s clean and clear:
I strongly agree that if you look at the rather anemic attempts to ‘align’ models so far, that are rather obviously inadequate to the tasks ahead of us, it is rather a miracle that they work as well as they do on current models. Grace seems like an appropriate description. The differences largely come down to me not expecting this grace to survive RL and scaling up and changing techniques, and also to not think the grace is sufficient to get a good outcome. But indeed, my estimates of how hard these problems are to solve have gone down a lot, although so has my estimate of how hard a problem humanity is capable of solving. I still don’t think we have any idea how to solve the problems, or what solution we even want to be aiming for and what the result wants to look like.
The Lighter Side
Honey, Don’t!
You need a license? It’s totalitarianism, man! But also congratulations.
Google will win, except it will take 20 years.
The above result replicates.
I also do not want to be thrown for one. Leave me out of it.
Smart kid.