robo

robo30

I want to love this metaphor but don't get it at all.  Religious freedom isn't a narrow valley; it's an enormous Schelling hyperplane.  85% of people are religious, but no majority is Christian or Hindu or Kuvah'magh or Kraẞël or Ŧ̈ř̈ȧ̈ӎ͛ṽ̥ŧ̊ħ or Sisters of the Screaming Nightshroud of Ɀ̈ӊ͢Ṩ͎̈Ⱦ̸Ḥ̛͑.  These religions don't agree on many things, but they all pull for freedom of religion over the crazy *#%! the other religions want.

robo50

Suppose there were some gears in physics we weren't smart enough to understand at all.  What would that look like to us?

It would look like phenomena that appear intrinsically random, wouldn't it?  Like imagine there were a simple rule about the spin of electrons that we just. don't. get.  Instead of noticing the simple pattern ("Electrons are up if the number of Planck timesteps since the beginning of the universe is a multiple of 3"), we'd only be able to figure out statistical rules of thumb for our measurements ("we measure electrons as up 1/3 of the time").
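A toy sketch of the situation (entirely my own construction; the rule and the sampling are invented for illustration): a crisp hidden rule that, sampled at unknown timesteps, is indistinguishable from a bare statistical regularity.

```python
import numpy as np

# Hypothetical hidden rule: "up iff the timestep is a multiple of 3".
# An observer who can't see the rule only recovers the 1/3 statistic.
def spin(t: int) -> int:
    return 1 if t % 3 == 0 else 0

rng = np.random.default_rng(0)
timesteps = rng.integers(0, 10**12, size=100_000)  # effectively unknown times
measurements = np.array([spin(int(t)) for t in timesteps])

print(f"fraction measured up: {measurements.mean():.3f}")  # ~0.333
```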

My intuitions conflict here.  On the one hand, I totally expect there to be phenomena in physics we just don't get.  On the other hand, the research programs you might undertake under those conditions (collect phenomena which appear intrinsically random and search for patterns) feel like crackpottery.

Maybe I should put more weight on superdeterminism.

robo93

Humans are computationally bounded, Bayes is not.  In an ideal Bayesian perspective:

  • Your prior must include all possible theories a priori.  Before you opened your eyes as a baby, you put some probability on being in a universe with Quantum Field Theory with U(1)×SU(2)×SU(3) gauge symmetry and updated from there.
  • You update with unbounded computation.  There's no such thing as proofs, since all proofs are tautological.

Humans are computationally bounded and can't think this way.
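As a minimal sketch of the ideal-Bayesian picture (a toy where coin biases stand in for "all possible theories"; nothing here is meant as physics): the agent writes down its whole hypothesis space up front and only ever reweights it, never inventing a new hypothesis mid-stream.

```python
import numpy as np

# The entire hypothesis space, fixed a priori: 99 candidate coin biases.
thetas = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(thetas) / len(thetas)

def update(posterior, heads: bool):
    """One exact Bayesian update: reweight, renormalize, nothing else."""
    likelihood = thetas if heads else (1.0 - thetas)
    posterior = posterior * likelihood
    return posterior / posterior.sum()

posterior = prior
for flip in [True, True, False, True]:   # observed data: H, H, T, H
    posterior = update(posterior, flip)

print(f"posterior mean bias: {np.dot(thetas, posterior):.3f}")  # ~0.667
```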

(riffing)

"Ideas" find paradigms for modeling the universe that may be profitable to track under limited computation.  Maybe you could understand fluid behavior better if you kept track of temperature, or understand biology better if you keep track of vital force.  With a bayesian-lite perspective, they kinda give you a prior and places to look where your beliefs are "mailable".

"Proofs" (and evidence) are the justifications for answers.  With a bayesian-lite perspective, they kinda give you conditional probabilities.

"Answers" are useful because they can become precomputed, reified, cached beliefs with high credence inertial you can treat as approximately atomic.  In a tabletop physics experiment, you can ignore how your apparatus will gravitationally move the earth (and the details of the composition of the earth).  Similarly, you can ignore how the tabletop physics experiment will move you belief about the conservation of energy (and the details of why your credences about the conservation of energy are what they are).

robo-40

Statements made to the media pass through an extremely lossy compression channel, then are coarse-grained, and then turned into speech acts.

That lossy channel has maybe one bit of capacity on the EA thing.  You can turn on a bit that says "your opinions about AI risk should cluster with your opinions about Effective Altruists", or not.  You don't get more nuance than that.[1]
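To put rough numbers on the "one bit" framing (a standard binary-symmetric-channel toy; the flip probabilities are invented): even that single EA-association bit arrives with less than a bit of capacity once the channel garbles it.

```python
import math

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Shannon capacity of a binary symmetric channel: C = 1 - H2(p).
for p in (0.05, 0.2, 0.4):   # made-up probabilities the bit gets flipped
    print(f"flip probability {p}: capacity {1 - binary_entropy(p):.2f} bits")
```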

If you have to choose between outputting the more informative speech act[2] and saying something literally true, it's more cooperative to get the output speech act correct.

(This is different from the Supreme Court case, where I would agree with you.)

  1. ^

    I'm not sure you could make the other side of the channel say "Dan Hendrycks is EA adjacent but that's not particularly necessary for his argument" even if you spent your whole bandwidth budget trying to explain that one message.

  2. ^

robo12

If someone wants to distance themselves from a group, I don't think you should make a fuss about it.  Guilt by association is the rule in PR and that's terrible.  If someone doesn't want to be publicly coupled, don't couple them.

Answer by robo30

I think the classic answer to the "Ozma Problem" (how to communicate to far-away aliens what earthlings mean by right and left) is the Wu experiment.  Electromagnetism and the strong nuclear force aren't handed, but the weak nuclear force is handed.  Left-handed electrons participate in weak nuclear force interactions but right-handed electrons are invisible to weak interactions[1].

(amateur, others can correct me)

  1. ^

    Like electrons, right-handed neutrinos are also invisible to weak interactions.  Unlike electrons, neutrinos are also invisible to the other forces[2].  So the Standard Model basically predicts there should be invisible particles whizzing around everywhere that we have no way to detect or confirm exist at all.

  2. ^

    Besides gravity

robo30

Can you symmetrically put the atoms into that entangled state?  You both agree on the charge of electrons (you aren't antimatter annihilating), so you can get a pair of atoms into |↑,↑⟩, but can you get the entangled pair to point in opposite directions along the plane of the mirror?

Edit: Wait, I did that wrong, didn't I?  You don't make a spin-up atom by putting it next to a particle accelerator sending electrons up.  You make a spin-up atom by putting it next to electrons you accelerate in circles, moving the electrons in the direction your fingers point when a (real) right thumb is pointing up.  So one of you will make a spin-up atom and the other will make a spin-down atom.
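A quick sanity check of that mirror argument (my own toy, leaning only on the fact that angular momentum L = r × p is a pseudovector): reflect a circulating electron across the mirror plane and the in-plane component of L flips sign, so "up" becomes "down".

```python
import numpy as np

def reflect_xz(v):
    """Mirror across the xz-plane: y -> -y."""
    return v * np.array([1.0, -1.0, 1.0])

r = np.array([1.0, 0.0, 0.0])   # electron position on the loop
p = np.array([0.0, 1.0, 0.0])   # its momentum, circulating counterclockwise

L = np.cross(r, p)                                  # [0, 0, 1]: spin up
L_mirror = np.cross(reflect_xz(r), reflect_xz(p))   # [0, 0, -1]: spin down

print(L, L_mirror)
```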

robo54

No, that's a very different problem.  The matrix overlords are Laplace's demon, with god-like omniscience about the present and past.  The matrix overlords know the position and momentum of every molecule in my cup of tea.  They can look up the microstate of any time in the past, for free.

The future AI is not Laplace's demon.  The AI is informationally bounded.  It knows the temperature of my tea, but not the position and momentum of every molecule.  Any uncertainties it has about the state of my tea will increase exponentially when trying to predict into the future or retrodict into the past.  Figuring out which water molecules in my tea came from the kettle and which came from the milk is very hard, harder than figuring out which key encrypted a ciphertext.
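A toy illustration of that exponential blow-up (the logistic map is my stand-in for the tea; nothing here is specific to the scenario): two states that start 10⁻¹² apart disagree completely within a few dozen steps, and the same sensitivity ruins retrodiction.

```python
def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)   # fully chaotic regime of the logistic map

x, y = 0.3, 0.3 + 1e-12          # "true" state vs. imperfect measurement
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step}: error {abs(x - y):.3e}")
# The 1e-12 uncertainty roughly doubles each step and hits order 1 by ~40.
```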

robo30

Oh, wait, is this "How does a simulation keep secrets from the (computationally bounded) matrix overlords?"

robo10

I don't think I understand your hypothetical.  Is your hypothetical about a future AI which has:

  • Very accurate measurements of the state of the universe in the future
  • A large amount of compute, but not exponentially large
  • Very good algorithms for retrodicting* the past

I think it's exponentially hard to retrodict the past.  It's hard in a similar way as encryption is hard.  If an AI isn't powerful enough to break encryption, it also isn't powerful enough to retrodict the past accurately enough to break secrets.

If you really want to keep something secret from a future AI, I'd look at ways of ensuring the information needed to theoretically reconstruct your secret is carried away from the earth at the speed of light in infrared radiation.  Write the secret in a sealed room, atomize the room to plasma, then cool the plasma by exposing it to the night sky.

*Predicting is using your knowledge of the present to predict the state of the future.  Retrodicting is using your knowledge of the present to retrodict the state of the past.
