(my English is bad; please use your usual digital assistant to improve the language if you care about such things)

Homework assignment 1: Explicate Descartes’ worries about being deceived by a malignant being in these terms.

Descartes' thought experiment was the opposite of a worry. It was a reassurance, a certitude so strong that he would build his whole philosophy upon it: the certitude that any conscious thought is proof that you must exist, because otherwise you couldn't be having this thought, right? In modern times, he would have said something like:

« First, notice that I'm using social media on purpose, because I want to talk directly to my fan base about this elegant truth I demonstrated. We exist! Indeed, suppose Elon is using some Neuralink sh*t to mess with your perception while you're actually a brain floating freely in some vat. You can't disprove that! But even in this extreme scenario (how would that help Elon get to Mars?), you could still disprove that you don't exist! From there I can prove God, or maybe disprove it, but always from first principles. …but why can't I remember which one it was? Oh wait, see? It's silly: I can't even remember whether my philosophy proves or disproves God, and I have no idea why I can't remember that (as if I were a literary creation, ha ha!), but still, as long as I can think, I'm sure I exist! See? Now try explaining that to whoever you want to seduce, and give me a like if it worked! »

Using your term (which is actually the correct technical term from a neuroscience/psychology perspective): I perceive thoughts, then I can confabulate that it's me who thought them, without noticing that « me » is a catch-22 that smuggles in questionable assumptions, such as that « I » is a solid entity rather than a fluid story.

If that sounds like imagining that a superintelligence must act as a single actor with a single set of values, you've got my point.

Interesting thoughts and model, thanks.

I think all of these questions are significantly underpriced.

Small nitpick here: price and probability are two different things. One could well agree that you're right about the probability and still not buy, either because the timeline is too long or because a small chance of winning in one scenario matters more than a large chance of winning in the opposite scenario.
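A toy numeric sketch of that nitpick (all probabilities and payoffs below are invented for illustration, not taken from your post): two bettors can share the same probability estimate and still rationally take opposite sides of the trade, because the payoffs differ across scenarios.

```python
# Toy sketch (numbers invented for illustration): agreeing on a probability
# does not settle whether a bet is worth taking at a given price.

def expected_value(p_win: float, payoff_win: float, payoff_lose: float) -> float:
    """Expected value of a bet that wins with probability p_win."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

p = 0.7  # suppose both sides fully agree: scenario A has probability 0.7

# Betting on the likely scenario A with a modest payoff...
bet_on_A = expected_value(p, payoff_win=1.0, payoff_lose=0.0)       # 0.70
# ...versus betting on the unlikely scenario B with a huge payoff.
bet_on_B = expected_value(1 - p, payoff_win=10.0, payoff_lose=0.0)  # 3.00

print(f"bet on A: {bet_on_A:.2f}, bet on B: {bet_on_B:.2f}")
# Same agreed-upon probability, opposite rational choices once payoffs differ.
```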

AIs are not the enemies of humanity, they're the offspring of humanity.

Maybe that should have been your main point? Of course present AIs need us. Of course future AIs may not. Of course we can't update on evidence everybody agrees upon.

« Good parents don't try to align their children » seems a much better intuition pump, if your aim is to help a few people out of the LW-style intellectual ratchet.

That said, you may overestimate both how many people need that and how many of those who need it can take this signal from a newcomer. 😉

Which is as good as saying that if you want to make anything happen, you should pray to God.

Actually the point is: if one can place rocks at will, then their computing power is provably as large as that of any physically realistic computer. But yes, if one can't place rocks at will, then it might be better to politely ask the emulator.
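A minimal sketch of why placing rocks at will buys full computational power (the field width and starting pattern below are arbitrary choices of mine): treat rock/no-rock positions as the cells of Rule 110, a one-dimensional cellular automaton known to be Turing-complete, and lay out each successive row of rocks by hand.

```python
# Minimal sketch: rows of rocks (1) and gaps (0) updated by Rule 110,
# a cellular automaton proven Turing-complete. If you can place rocks
# at will, you can lay out each successive row yourself, so the rock
# field computes anything a physically realistic computer can.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row: list[int]) -> list[int]:
    """One update: each cell looks at its left/self/right neighbors."""
    n = len(row)
    return [RULE_110[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

# Arbitrary demo: a single rock on a 32-cell ring, run for 16 steps.
row = [0] * 31 + [1]
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```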

wait that's just Dust Theory

Actually that's asking even less, because in Dust Theory we don't even need to place the rocks. 😉

suppose you have an oracle that given the relevant information can instantly return the optimal strategy to achieve your goal, how well does that oracle perform?

I guess complexity-theory experts (which I'm not) would say it either depends on boring details or belongs to one of three possibilities:

  • if you only care about « probably approximately correct » solutions, then it's probably in BPP
  • if you care about « unrealistically powerful but still mathematically checkable » solutions, then it's as large as PSPACE (see interactive proofs)
  • if you only care about convincing yourself and don't need a formal proof, then it reaches into the Turing degrees, because one could show the oracle is better than you at compressing most strings without ever proving it's performing hypercomputation (a toy version of that test is sketched below)
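Here is a toy version of that third test, assuming nothing beyond the standard library (zlib stands in for « your » best compressor, and `oracle_compressed_length` is a hypothetical stub you would replace with the real oracle): consistently shorter outputs can statistically convince you the oracle is stronger, without that ever amounting to a proof about how it computes.

```python
# Toy sketch of the third bullet: compare your compressor against an
# oracle's claimed output sizes. Consistently shorter outputs can
# convince you it is better, without proving anything about how it works.
import os
import zlib

def my_compressed_length(data: bytes) -> int:
    """Your best available compressor (zlib as a stand-in)."""
    return len(zlib.compress(data, level=9))

def oracle_compressed_length(data: bytes) -> int:
    """Hypothetical stub, only here to make the sketch runnable.
    Replace with the real oracle's (verified) output length."""
    return max(1, my_compressed_length(data) - 8)  # pretend it wins by 8 bytes

wins, trials = 0, 100
for _ in range(trials):
    s = os.urandom(256)  # random strings are mostly incompressible
    if oracle_compressed_length(s) < my_compressed_length(s):
        wins += 1

print(f"oracle beat me on {wins}/{trials} strings")  # convincing, not a proof
```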

Main point: yes, neurons are not the best building block for communication speed, but you shouldn't assume that increasing that speed would necessarily increase fitness. Muscles are much slower, the retina even more so, and even the fastest punch (the mantis shrimp's) is several times slower than signal propagation in myelinated axons. That said, the overall conclusion that using neurons is an evolutionary trap is probably sound, as we know most of our genetic code goes into tweaking something in the brain, without much change in how most neurons work and learn.
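A quick sanity check on that speed comparison, using ballpark figures from the literature rather than anything in the parent post (roughly 23 m/s for the mantis shrimp strike, roughly 100 m/s for fast myelinated conduction):

```python
# Rough published ballpark figures (my assumptions, not from the parent post):
mantis_shrimp_strike_mps = 23.0   # fastest recorded appendage strike, ~23 m/s
myelinated_axon_mps = 100.0       # fast myelinated axon conduction, ~100+ m/s

ratio = myelinated_axon_mps / mantis_shrimp_strike_mps
print(f"axon signal travels ~{ratio:.1f}x faster than the fastest punch")
```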

Tangent point: you're assuming that an IQ of 250 is a sound concept. If you were to take an IQ test well designed for mice, would you expect to score 250? If you were to measure the IQ of a slime mold, what kind of result would you expect, and would that IQ level help predict the behavior below?

https://www.discovermagazine.com/planet-earth/brainless-slime-mold-builds-a-replica-tokyo-subway

It can't really be both.

Yes, it can. Like any bistable image, it can be one percept for someone and a very different percept for someone else. That means they don't have the same kernels.

Yes, I'm on board with all that. In my grokking, that made a nice fit with ideas about altruistic alleles:

  • half of the construction plans for the duck-rabbit child come from each parent, which means each parent's brain may be tuned to recognize subtle idiosyncrasies (i.e., out-of-distribution categorizations) that match their own construction plan, while being blind to the same phenomenon in their partner, or at least to the non-overlapping out-of-distribution features they don't share.
  • when my beloved step-parents, who never argue with anyone about anything, argue about what color this or that car is, that's why it feels so personal that the loved one doesn't see the same thing: because it's basically a genetic marker of how likely their genes are to make their host collaborate.

OK, maybe that's a tad too speculative. Back down to earth: the cat/dog is indeed a good demonstration that subtle changes can have large impacts on human perception, which is arguably among the most striking aspects of adversarial pictures. Thanks for the discussion and insight!

While we're at it, what's your take on the « rethinking generalization » papers (Zhang et al.'s « Understanding deep learning requires rethinking generalization » and its follow-ups)?
