I am currently a nuclear engineer with a focus on nuclear plant safety and probabilistic risk assessment. I am also an aspiring EA, interested in X-risk mitigation and the intersection of science and policy.
If the NRO had Sentient in 2012, then it wasn't even a deep learning system. They probably have something now that's built from transformers (I know other government agencies are working on things like this for their own domain-specific purposes). But it's got to be pretty far behind the commercial state of the art, because government agencies don't have the in-house expertise or the budget flexibility to move quickly on large-scale basic research.
Those are... mostly not AI problems? People like to use kitchen-based tasks because current robots are not great at dealing with messy environments, and because a kitchen is an environment heavily optimized for the specific physical and visuospatial capabilities of humans. That makes doing tasks in a random kitchen seem easy to humans while being difficult for machines. But it isn't reflective of real-world capabilities.
When you want to automate a physical task, you change the interface and the tools to make it more machine-friendly. Building a Roomba is ten times easier than building a robot that can navigate a house while operating an arbitrary stick vacuum. If you want dishes cleaned with minimal human input, you build a dishwasher that doesn't require placing each dish carefully in a rack (e.g. https://youtube.com/watch?v=GiGAwfAZPo0).
Some people have it in their heads that AI is not transformative or is no threat to humans unless it can also do all the exact physical tasks that humans can do. But a key feature of intelligence is that you can figure out ways to avoid doing the parts that are hardest for you and still accomplish your high-level goals.
"Unaligned AGI doesn't take over the world by killing us - it takes over the world by seducing us."
Why not both?
Thanks, some of those possibilities do seem quite risky and I hadn't thought about them before.
It looks like in that thread you never replied to the people saying they couldn't follow your explanation. Specifically, what bad things could an AI regulator do that would increase the probability of doom?
How does this work?
Extreme regulation seems plausible if policy makers start to take the problem seriously. But no regulations will apply everywhere in the world.
That's fair, I could have phrased it more positively. I meant it more along the lines of "tread carefully and look out for the skulls" and not "this is a bad idea and you should give up".
I suspect (though it's not something I have experience with) that a successful new policy think tank would be started by people with inside knowledge and connections to be able to suss out where the levers of government are. When the public starts hearing a lot about some dumb thing the government is doing badly (at the federal level), there are basically three possibilities: 1) it's well on its way to being fixed, 2) it's well on its way to becoming partisan and therefore subject to gridlock, or 3) it makes a good story but there isn't much substance to it, e.g. another less tractable factor is the real bottleneck. So you'd want to be in the position of having a thorough gears-level understanding of a particular policy area that lets you be among the first to identify mistakes/weaknesses and how they could be fixed. Needless to say, this is tough to do in a whole bunch of policy areas at once.
How would a language model determine whether it has internet access? Naively, it seems like any attempt to test for internet access is doomed, because if the model generates a query, it will also generate a plausible response to that query if one is not returned by an API. This could be fixed with some kind of hard-coded internet search protocol (as they presumably implemented for Bing), but without it the LLM is in the dark, and a larger or more competent model should be no more likely to understand that it has no internet access.
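To make the distinction concrete, here is a minimal Python sketch of what I mean by a hard-coded protocol. Everything in it is hypothetical: `generate`, `web_search`, and the `SEARCH:`/`RESULT:` markers are placeholders I made up, not any particular vendor's API. The point is only to contrast free-running generation with a harness that actually executes the query.

```python
# Hypothetical sketch: generate() and web_search() are stand-ins for
# "sample text from the LLM" and "call a real search API"; neither refers
# to an actual library.

def generate(prompt: str, stop: str | None = None) -> str:
    """Stand-in for sampling from a language model, optionally halting at a stop string."""
    raise NotImplementedError  # placeholder

def web_search(query: str) -> str:
    """Stand-in for a real external search call."""
    raise NotImplementedError  # placeholder

def free_running_answer(prompt: str) -> str:
    # No harness: the model writes a query and then whatever plausibly follows,
    # including an invented "result". Nothing in its input tells it whether a
    # real search ever happened.
    return generate(prompt)

def harnessed_answer(prompt: str) -> str:
    # Hard-coded protocol: generation stops at the end of the query line,
    # the query is actually executed, and the real result is spliced back
    # into the context before the model continues.
    query = generate(prompt + "\nSEARCH: ", stop="\n").strip()
    observation = web_search(query)
    return generate(prompt + f"\nSEARCH: {query}\nRESULT: {observation}\nANSWER: ")
```

The asymmetry is that in `harnessed_answer` the text after `RESULT:` comes from outside the model, while in `free_running_answer` everything after the query is still the model's own continuation; a larger model just gets better at making that continuation look like a real result, not better at noticing it isn't one.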