Liliet B


Mirrors are useful even though you don't expect to see another person in them.

Sometimes you need a person to be a mirror to your thoughts.

Bad / insufficiently curiosed-through advice is often infuriating because the person giving it seems to assume you're an idiot / came to them the moment you noticed the problem. Which is very rarely true! Generally, between spotting the problem and talking to another person about it, there's a pretty fucking long solution-seeking stage. Where "pretty fucking long" can be anything between ten minutes ("I lost my pencil and can't find it )=") (where common-sense suggestions actually MIGHT be helpful - you might not have run through the whole checklist yet) and THE PERSON'S ENTIRE LIFETIME (anything relating to a disability, for example).

An advice-giver who doesn't understand why you still have the problem is going to have a lot more advice to give, and they're also often going to sound SO patronizing, idiocy-assuming, and invalidating.

As opposed to the person who first gets to "ok yeah, that does sound like a problem" before moving on to "hmm, but what to do though" along with you.

(You might well be ahead of them anyway, but at least they've listened first!)

As an ADHD person for whom "reduce impulsiveness" is about as practical a goal as "learn telekinesis", reducing delay is actually super easy. Did you know people feel good about completing tasks and achieving goals? All you have to do to have a REALLY short delay between starting the task and an expected reward is explicitly, in your own mind, define a sufficiently small sub-task as A Goal. Then the next one, and the next - you don't even need breaks in between if it goes well. Even if what you're doing is as inherently meaningless as, I dunno, filling in an Excel table from a printed one, you can still mentally reward yourself for each page or whatever.

The first salesman guy could set himself a task of "make three cold calls" regardless of success, and then feel good about having done them. The third guy could make a checklist at the start where tasks are listed in order and enjoy an uninterrupted checkmark row when he's not behind on anything. The student could feel really proud for making the front page, then the next part, etc.

Prior probabilities with no experience in a domain at all are an incoherent notion, since having none would imply you don't know what the words you're using even refer to. Priors include all prior knowledge, including knowledge about the general class of problems like the one you're trying to eyeball a prior for.

If you're asked to perform experiments to find out what tapirs eat - and you don't know what tapirs even are, except that they apparently eat something, judging by the formulation of the problem - you're already going to assign a prior of ~0 to 'they eat candy wrappers and rocks and are poisoned by anything and everything else, including non-candy-wrapper plastics and objects made of stone', because you have prior information on what 'eating' refers to and how it tends to work. You're probably going to assign a high prior probability to the guess that tapirs are animals, and on that basis assign a high prior probability to them being herbivores, omnivores, or carnivores - or insectivores, unless you count those as carnivores - since that's what you know most animals are.

Priors are all prior information. It would be thoroughly irrational of you to give the tapirs candy wrappers and then when they didn't eat them, assume it was the wrong brand and start trying different ones.

For additional clarification on what priors mean, imagine that if you didn't manage to give the tapirs something they actually are willing to eat within 24 hours, your family is going to be executed.

In that situation, what's the rational thing to do? Are you going to start with metal sheets, car tires and ceramic pots, or are you going to start trying different kinds of animal food?
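
The reasoning above can be sketched as a tiny prior-ordered search. The categories and numbers below are made up for illustration; only the idea - rank hypotheses by prior probability and test the likeliest first - comes from the comment.

```python
# Illustrative pre-experiment priors over what tapirs eat, built purely from
# background knowledge about what "eating" means. Numbers are invented.
priors = {
    "herbivore": 0.45,
    "omnivore": 0.30,
    "carnivore or insectivore": 0.24,
    "candy wrappers and rocks": 0.01,
}

# A rational 24-hour plan tests hypotheses in order of prior probability:
plan = sorted(priors, key=priors.get, reverse=True)
print(plan[0])  # the "herbivore" hypothesis gets tested first
```

Under the 24-hour deadline, this ordering is exactly why you start with animal food rather than metal sheets.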

Ordinary language includes mathematics.

"One, two, three, four" is ordinary language. "The thing turned right" is ordinary language (it's also multiplication by -i).

Feynman was right; he just neglected to specify that the ordinary language needed to explain physics would necessarily include its math subset.

"Many worlds can be seen as a kind of non-local theory, as the nature of the theory assumes a specific time line of "simultaneity" along which the universe can "split" at an instant."

As I understand it, no, it doesn't. The universe split is also local: if a difference at point A leaves the particles at point B unchanged, then at point B we still have a single universe (while at point A we have multiple). The configurations merge together. It's more like vibration than like splitting into paths that head off in different directions. Macroscopic physics is inherently predictable, meaning that all the multiple worlds ultimately end up doing roughly the same thing!

Except for that one hypothetical universe where I saw a glass of boiling water spontaneously freeze into an ice block.

I'm going to guess that the fact I'm not in that universe - and as far as we know, no one has ever been - has something to do with the Born probabilities.

As far as ethical implications go, the vibration visualization helps me sort it out. The other existing me's are no more ethically distinct from each other than 'me a second ago' is ethically distinct from 'me a second later'. They are literally the same person, me. Any other me would do the same thing this me is doing, because there's no reason for it to be otherwise (if quantum phenomena had random effects on the macroscopic scale, the world would be a lot more random and a lot less predictable on the everyday level), so we're still overlapping. All the uncountable other me's are sitting in the same chair I am (also smeared/vibrating), typing the same words I am, and making typos and quickly backspacing to erase them on the same smeared/vibrating keyboard.

All of the smearing has absolutely no effect a lightyear away from me, because the year it would take for any effect from my vibration over here to get to there hasn't passed yet. It has its own vibration, and I'm not affected by that one either.

"Many worlds" but same universe.

When I worldbuild with magic, this is somehow automatically intuitive - so I always end up assuming (if not necessarily specifying explicitly) a 'magic field' or something that does the thermodynamic work and that the bits of entropy are shuffled over to. Kind of like how looking something up on the internet is 'magic' from an outside observer's POV if people only have access nodes inside their heads and cannot actually show them to observers, or like how extracting power from the electricity grid into devices is 'magic' under the same conditions.

Only people didn't explicitly invent and build the involved internet and the electricity grid first. So more like how speech is basically telepathy, as Eliezer specified elsewhere~

I would propose an approximation of the system where each node has a terminal value of its own (which could be 0 for completely neutral nodes - except actually it can't: the reinforcement mechanisms of our brain inevitably give something like 0.0001 because I heard someone say it was cool once, or -0.002 because it reminds me of a sad event in my childhood).

As a simple example, consider eating food when hungry. You get a terminal value on eating food - the immediate satisfaction the brain releases in the form of chemicals as a response to recognition of the event, thanks to evolution - and an instrumental value on eating food, which is that you get to not starve for a while longer.

Now let's say that while you are a sentient optimization process that can reason over long projections of time, you are also a really simple one, and your network doesn't actually have any terminal values other than eating food - it's genuinely the only thing you care about. So when you calculate the instrumental value of eating food, you get only the sum of getting to eat more food in the future.

Let's say your confidence in getting to eat food the next time after this one decreases by a fixed rule, for example p(i+1) = p(i) * 0.5. If your confidence that you are eating food right now is 1, then your confidence that you'll get to eat again is 0.5, your confidence that you'll get to eat the time after that is 0.25, and so on.

So the total instrumental value of eating food right now is the limit of Sum(p(i) * T(food)) as i runs from 1 to infinity (i starts from 1, since i = 0 - eating right now - is already counted as the terminal value). With p(i) = 0.5^i this is a geometric series that converges to exactly T(food).
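
The series can be checked numerically. This sketch assumes the comment's setup: confidence in the i-th future meal decays as 0.5 per step, and every meal carries the same terminal value T(food); the names are mine.

```python
# Confidence in the i-th future meal: p(i) = decay**i.
# Instrumental value = sum of p(i) * T(food) over future meals i = 1, 2, ...
def instrumental_value(t_food, decay=0.5, horizon=100):
    """Truncated geometric series approximating the infinite sum."""
    return sum((decay ** i) * t_food for i in range(1, horizon + 1))

T_food = 1.0
# 0.5 + 0.25 + 0.125 + ... converges to exactly T_food:
print(instrumental_value(T_food))            # ~1.0
total = T_food + instrumental_value(T_food)  # terminal + instrumental value
```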

So the total value of eating food is T(food) + Sum(p(i)*T(food)). It's always positive, because T(food) is positive and every p(i) is positive, and that's that. You'll never choose not to eat food you see in front of you, because there are no possible reasons for that in your value network.

Then let's add the concept of 'gross food', and for simplicity's sake ignore evolution and say it exists as a totally arbitrary concept that is not actually connected to your expectation of survival after eating it. It's just kind of free-floating - you like broccoli but don't like carrots, because your programmer was an asshole and entered those values into the system. Also for simplicity's sake, you're a pretty stupid reasoning process that doesn't anticipate seeing gross food in the future. In your calculation of instrumental value there's only T(food), which is positive; T(this_food), which can be positive or negative depending on the specific food, appears ONLY while you're actually looking at it. If it's negative, you're surprised every time (but you don't update your values, because you're a really stupid sentient entity and don't have that function).

So now the value of eating the food you see right now is T(this_food) + Sum(p(i)*T(food)). If T(this_food) is negative enough, you might choose not to eat. Of course this assumes we're comparing to zero, i.e. you assume that if you don't eat right now you'll die immediately, and also that dying is perfectly neutral and you don't have opinions on it (you only have opinions on eating food). If you don't eat the food you're looking at right now, you'll NEVER EAT AGAIN, but it might be gross enough that it's worth it! More logically, you're comparing T(this_food) + Sum(p(i)*T(food)) to Sum(p(i)*T(food)) * p(not starving immediately). The outcome depends on how gross the food is and how high you evaluate p(not starving immediately) to be.

(If the food's even a little positive, or just neutral, eating wins every time: p(not starving immediately) is < 1, so the refusal side is always the smaller one.)
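
The comparison in the last two paragraphs can be written out as a one-line decision rule. The numbers below are made up; `p_survive` stands for the comment's p(not starving immediately).

```python
# Eat if:  T(this_food) + Sum(p(i)*T(food))  >=  p_survive * Sum(p(i)*T(food))
def should_eat(t_this_food, t_food=1.0, p_survive=0.9, decay=0.5, horizon=100):
    future = sum((decay ** i) * t_food for i in range(1, horizon + 1))  # ~= t_food
    return t_this_food + future >= p_survive * future

print(should_eat(t_this_food=0.05))   # mildly tasty: eating wins
print(should_eat(t_this_food=-0.05))  # mildly gross: eating still wins
print(should_eat(t_this_food=-5.0))   # gross enough that "never eating again" wins
```

Note the threshold: refusing only wins when T(this_food) drops below -(1 - p_survive) * future value, which is how grossness and starvation risk end up interacting nonlinearly.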

Note that the grossness of the food and the probability of starving already interact nonlinearly in their influence on the outcome. And that's just for the idiot AI that knows nothing except tasty food and gross food! And if we allow it to compute T(average_food) based on how much of which food we've given it, it might choose to starve rather than eat the gross things it expects to eat in the future! Look, I've simulated willful suicide in all three simplifications so far! No wonder evolution didn't produce all that many organisms that could compute instrumental values.

Anyway, it gets more horrifically complex when you consider bigger goals. So our brain doesn't compute the whole Sum_j( Sum_i( p(i) * T(outcome_j) )) every time. It gets computed once and then stored as a quasi-terminal value instead: QT(outcome) = T(outcome) + Sum_j( Sum_i( p(i) * T(outcome_j) )). It might get recomputed sometimes, but most of the time it doesn't. And recomputing it is what updating our beliefs must involve - for ALL outcomes linked to the update.
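
The compute-once-then-cache idea is ordinary memoization with explicit invalidation. A minimal sketch; all names here (`qt_cache`, `quasi_terminal`, `update_belief`) are illustrative, not from the comment.

```python
# Cache of quasi-terminal values: the expensive double sum is computed once
# per outcome and reused until a belief update invalidates it.
qt_cache = {}

def quasi_terminal(outcome, compute):
    """Return cached QT(outcome); compute and store it on first use."""
    if outcome not in qt_cache:
        qt_cache[outcome] = compute(outcome)
    return qt_cache[outcome]

def update_belief(*linked_outcomes):
    """Updating a belief forces recomputation of every linked outcome."""
    for outcome in linked_outcomes:
        qt_cache.pop(outcome, None)

calls = []
def expensive(outcome):
    calls.append(outcome)  # stands in for the full double sum
    return 42.0

quasi_terminal("eat_food", expensive)
quasi_terminal("eat_food", expensive)  # cached: expensive() is not called again
update_belief("eat_food")
quasi_terminal("eat_food", expensive)  # recomputed after the update
print(len(calls))                      # 2
```

The "...that tends to take a while" below is exactly the invalidation step: one update can evict many cached entries at once.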

...Yeah, that tends to take a while.

The ultimate prior is maximum entropy, aka "idk", aka "50/50: either it happens or it doesn't". We never actually have it, because we start gathering evidence about how the world is before our brains have even formed enough to make any links in it.
