I understand that definition, which is why I’m confused about why you brought up the behavior of bacteria as evidence that bacteria have experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.
I brought up the simulation of a bacterium because an atom-for-atom simulation of a bacterium is completely identical to the real one - whatever has experience is represented in the atoms of the bacterium, so a perfect simulation of one must also internally experience things.
If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacterium has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?
From a different angle, suppose that we have a computer program that can perfectly simulate a bacterium. Does that simulated bacterium have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer program also be conscious? (If you want to say that a computer can’t possibly perfectly simulate a bacterium, then great, we have a testable crux, albeit one that can’t be tested right now.)
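To pin down what I mean by “perfectly simulate”, here’s a toy sketch in Python. The update rule is made up - it just stands in for the laws of physics acting on an atom-level description - but it shows the point: if the real system and the simulation apply the same rule to the same initial state, their trajectories are identical step for step, so no behavioral test can tell them apart.

```python
# Toy illustration, not real physics: a made-up update rule stands in
# for the laws of physics acting on an atom-level description.

def update(state):
    # Deterministic stand-in for physical dynamics.
    return tuple((x * 31 + 7) % 1000 for x in state)

initial = (42, 137, 256)  # made-up "atom-level" state of the bacterium

real, simulated = initial, initial
for _ in range(100):
    real = update(real)
    simulated = update(simulated)
    assert real == simulated  # the trajectories never diverge
```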
If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause, but otherwise it seems like qualia have to arise spontaneously from the evolution of something that doesn’t have experience - with possibly some “half-conscious” steps along the way.
From that point of view, I don’t see any problem with supposing that a future AI could have experience, even if current ones don’t. I think it’s reasonable even to suppose that current ones do, though their lack of persistent memory means any experience they have would be very alien to our own, probably more like one of those “half-conscious” steps.
Nit: "if he does that then Caplan won't get paid back, even if Caplin wins the bet" misspells "Caplan" in the second instance.
“Cable companies are forcing you to pay for channels you don’t want.”

“Cable companies are using unbundling to mislead customers and charge extra for basic channels everyone should have.”
I think this would be more acceptable if either everything was bundled or nothing was. But generally speaking, companies bundle channels that few people want, to give the appearance of a really good deal, and unbundle the really popular channels (like sports channels) so they can charge extra for them. So you sign up for a TV package that has "hundreds of channels", but you get lots of channels that you don't care about and none of the channels you really want. You're screwed both ways.
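To make "screwed both ways" concrete, here's a toy calculation with made-up prices and valuations:

```python
# Toy numbers, purely illustrative - what a customer pays under
# bundle-plus-add-on pricing versus what the channels are worth to them.

value_of_filler = 2    # total worth (to you) of the 200 filler channels
value_of_sports = 30   # worth of the one channel you actually want

bundle_price = 50      # the "hundreds of channels!" base package
sports_addon = 20      # the popular channel, sold separately

total_paid = bundle_price + sports_addon          # 70
total_value = value_of_filler + value_of_sports   # 32
print(f"paid ${total_paid} for ${total_value} of value")
```

The per-channel price of the bundle looks like a steal, but almost all the value you actually care about sits in the unbundled add-on.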
I think you're totally spot on about ChatGPT and near-term LLMs. The technology is still super far away from anything that could actually replace a programmer, because of all the complexities involved.
Where I think you go wrong is with long-term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element - the AI just needs to be good at guessing what the product team and customers actually want, and then asking questions and running tests to further divine how the product should work. If specifying a piece of software in English is a nightmare, then your boss's job is already a nightmare, since that's what they do. The key is that they can give a specification, answer questions about the specification, and review implementations of that specification along the way - and those are all things that an AI could do.
I'm already an intelligence that takes in English specifications and produces code, and there's no fundamental reason that my intelligence can't be replaced by an artificial one.
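Here's a hypothetical sketch of that black box. Every name in it is made up - it's just to show that the entire interface is text in, text out:

```python
# Hypothetical interface, not a real API - the names are invented
# purely to make the "black box" framing concrete.

def software_engineer(
    slack_instructions: str,
    existing_code: str,
    documentation: str,
) -> dict[str, str]:
    """Text in, text out. A human fills this role today, but nothing
    about the signature requires the implementation to be human."""
    return {
        "merge_request": "...",
        "documentation": "...",
        "clarifying_questions": "...",
    }
```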
Interestingly, it apparently used to be Zebra but is now Zulu. I'm not sure why they switched over, but Zulu seems to have been the predominant choice since the early 1950s.