This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.
I don't think the energy balance turns out in the idea's favor. Here are the heuristics I considered:
All of that being said, Claude and ChatGPT both respond well to sanity checking. You can say something directly, like: "Sanity check: is this consistent with thermodynamics?"
I also think ChatGPT misleadingly treated the magnetic and electric fields as separate things because it was using an ideal MHD model, where that framing is common due to the simplifications the model makes. In my experience, Claude at least catches a lot of confusion and oversights when you ask specifically about the differences between the physics and the model.
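To spell out why the model blurs this (my own gloss here, using the standard ideal MHD relations rather than anything from the original exchange): in ideal MHD the electric field is not an independent quantity at all. It is fixed by the ideal Ohm's law,

$$\mathbf{E} + \mathbf{v} \times \mathbf{B} = 0,$$

and substituting that into Faraday's law gives the induction equation

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right),$$

where $\mathbf{v}$ is the bulk plasma velocity. That is the frozen-in condition: field lines are carried along with the plasma and cannot reconnect unless a non-ideal term (resistivity) is added back into Ohm's law. Asking about that gap is exactly the kind of physics-versus-model question that surfaces the confusion.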
Regarding The Two Cultures essay:
I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow's. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.
Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the important differences between the successful method and the failed one. They can show how partial solutions relate to the whole problem. They can show how legendary genius once struggled with the same concepts that you now struggle with.
Usually - usually! As in a majority of the time! - this does not agree with the popular narrative about the problem.
I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need.
Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.
Why don’t you expect AGIs to be able to do that too?
I do, I just expect it to take a few iterations. I don't expect any kind of stable niche for humans after AGI appears.
I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even some new-fangled job title like 'adaptation engineer' that just preps new types of environments for teleoperation before moving on to the next environment, like some kind of meta railroad gang. In that case the value of human labor might stay sustainably high in terms of total value, but the bulk of that value would sort of slide into the few AI-relevant niches.
I think this cashes out as Principle A winning out and Principle B winning out looking the same for most people.
Obviously, at least one of those predictions is wrong. That’s what I said in the post.
Does one of them need to be wrong? What stops a situation where only one niche, or a few niches, are high value and the rest don't provide enough to eat? That is pretty much exactly how natural selection operates, for example.
I agree fake pictures are harder to threaten with. But consider that the deepfake method makes everyone a potential target, rather than only targeting the population who would fall for the relationship side of the scam.
There are other reasons I think it would be grimly effective, but I am not about to spell it out for team evil.
He also claims that with the rise of deepfakes you can always run the Shaggy defense if the scammer actually does pull the trigger.
With the rise of deepfakes, the scammers can skip steps 1-3, and also more easily target girls.
Chip fabs and electricity generation are capital!
Yes, but so are ice cream trucks and the whirligig rides at the fair. Having “access to capital” is meaningless if you are buying an ice cream truck, but means much if you have a rare earth refinery.
My claim is that the big distinction now is between labor and capital because everyone has about an equally hard time getting labor; when AI replacement happens and that goes away, the next big distinction will be between the different types of what we now generically refer to as capital. The term is uselessly broad in my opinion: we need to go down at least one level toward concreteness to talk about the future better.
I like this effort, and I have a few suggestions: