Chad Nauseam

What's the problem with oracle AIs? It seems like if you had a safe oracle AI that gave human-aligned answers to questions, you could then ask "how do I make an aligned AGI?" and just do whatever it says. So the problem of "how to make an aligned agentic AGI" is no harder than "how to make an aligned oracle AI", which I understand is still extremely hard, but surely it's easier than making an aligned agentic AGI from scratch?

While I’m here, I guess I may as well post that I’ve been using your rational breaks idea for about two weeks now and it’s worked very well for me. It’s not very polished, but I made this little website to help track my time in work mode or break mode: . Don’t refresh the page because it doesn’t persist your state, haha. Also, it starts you out in break mode with a few seconds of example time in each category, but I’ll change that soon.

I'm reading Being You, and the author makes a similar point to the one you make about H2O and XYZ. It's part of his argument against p-zombies:

Here’s why the zombie idea is supposed to provide an argument against physicalist explanations of consciousness. If you can imagine a zombie, this means you can conceive of a world that is indistinguishable from our world, but in which no consciousness is happening. And if you can conceive of such a world, then consciousness cannot be a physical phenomenon.

And here’s why it doesn’t work. The zombie argument, like many thought experiments that take aim at physicalism, is a conceivability argument, and conceivability arguments are intrinsically weak. Like many such arguments, it has a plausibility that is inversely related to the amount of knowledge one has.

Can you imagine an A380 flying backward? Of course you can. Just imagine a large plane in the air, moving backward. Is such a scenario really conceivable? Well, the more you know about aerodynamics and aeronautical engineering, the less conceivable it becomes. In this case, even a minimal knowledge of these topics makes it clear that planes cannot fly backward. It just cannot be done.

It’s the same with zombies. In one sense it’s trivial to imagine a philosophical zombie. I just picture a version of myself wandering around without having any conscious experiences. But can I really conceive this? What I’m being asked to do, really, is to consider the capabilities and limitations of a vast network of many billions of neurons and gazillions of synapses (the connections between neurons), not to mention glial cells and neurotransmitter gradients and other neurobiological goodies, all wrapped into a body interacting with a world which includes other brains in other bodies. Can I do this? Can anyone do this? I doubt it. Just as with the A380, the more one knows about the brain and its relation to conscious experiences and behavior, the less conceivable a zombie becomes.