I think you're missing the possibility of leveraging AGI authority here. I.e., if your military/natsec-aligned AI screams at your superiors "ATTACK NOW, YOU FOOLS", maybe your superiors will be more inclined to listen to plans to melt all GPUs.
Read your comment, donated $10.
For me, "enjoying life" means "enjoying the good things that happen in life", while everything in between is neutral at best. Many people seem to put positive value on that "in-between" space.
If I had to point at something: some people enjoy the sheer fact of being embodied.
After yet another news item about decentralized training of LLMs, I suggest declaring the assumption "AGI won't be able to find hardware to function autonomously" outdated.
My point is that when people say "I enjoy life", they mean actually enjoying life-as-a-whole, not "I'm glad my life is net-positive" or whatever.
Okay, I kind of understand where I'm wrong intuitively, but I still don't understand where I'm wrong formally. Which inference in the chain
not Consistent(ZFC) -> some subsets of ZFC don't have a model -> some subsets of ZFC + not Consistent(ZFC) don't have a model -> not Consistent(ZFC + not Consistent(ZFC))
is actually invalid?
The completeness theorem states that every consistent countable first-order theory has a model. The compactness theorem states that a first-order theory has a model iff every finite subset of it has a model. Both theorems are provable in ZFC.
Therefore:
Consistent(ZFC) <-> all finite subsets of ZFC have a model, hence
not Consistent(ZFC) <-> some finite subset of ZFC has no model ->
some finite subset of ZFC + not Consistent(ZFC) has no model <->
not Consistent(ZFC + not Consistent(ZFC)),
all of it proven in ZFC + not Consistent(ZFC).
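Compressed, writing T for ZFC + not Consistent(ZFC) (the abbreviation T is mine, not standard), the chain asserts:

```latex
% T := ZFC + \neg\mathrm{Con}(\mathrm{ZFC})
\[
  T \;\vdash\; \neg\mathrm{Con}(T)
\]
% i.e. T proves its own inconsistency, and that proof
% is carried out inside T itself, not inside plain ZFC.
```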
You are making an error here: ZFC + not Consistent(ZFC) != ZFC.

Assuming ZFC + not Consistent(ZFC), we can prove that ZFC proves Consistent(ZFC), because inconsistent systems can prove everything, and ZFC + not Consistent(ZFC) + Consistent(ZFC) is, in fact, inconsistent. But this says nothing about the consistency of ZFC itself, because you can freely put any sufficiently powerful system in place of ZFC. If you assume an inconsistent system, then system + not Consistent(system) is still inconsistent; if you assume a consistent system, then system + not Consistent(system) is consistent but proves its own inconsistency by the reasoning above. So such a proof can't tell you whether the assumed system is consistent or not.
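The only logical rule doing real work in the "can prove everything" step is ex falso quodlibet; a minimal Lean 4 sketch (here ConZFC is just a stand-in proposition, not an actual arithmetization of Consistent(ZFC)):

```lean
-- Ex falso quodlibet: from a contradiction, any proposition follows.
-- This is the "inconsistent systems can prove everything" step.
example (P : Prop) (h : False) : P :=
  h.elim

-- In particular, an internally inconsistent ZFC would "prove" its own
-- consistency; ConZFC is a hypothetical stand-in proposition.
example (ConZFC : Prop) (h : False) : ConZFC :=
  h.elim
```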
There are no properties of a brain that make that brain "you", except the program it runs.
By "authority" I meant not brainwashing but, like, the fact that AGIs are smarter and know better: "If you believe you can survive without taking over the world, please consult the graph." If China hawks believe themselves to be correct, they should expect AGIs to actually prove that.
I agree this scenario is terrible.