JohnBuridan

Comments

JohnBuridan's Shortform

Thinking out loud here about inference.

Darwin's original theory relied on three facts: overproduction of offspring, variability of offspring, and inheritance of traits. These facts were used to formulate a mechanism: the offspring best adapted to the environment for reproduction would, on average, displace the populations of those less well adapted. Overproduction ensured that there was selection pressure (or at least group stasis on average, and not dysgenics), variability allowed for positive mutations, and heritability allowed for persistence. Call it natural selection for short.

What I'm interested in is the mistake Darwin made in his next step. He assumes that because the process of natural selection tends on average towards fitness, the evolution of species can only be an imperceptibly gradual process.

This is incorrect: evolution can happen alarmingly fast.

What I'm interested in is why Darwin thought this, and whether the error is general enough that we can learn something about inferential reasoning that would apply to other cases. At the time, geologists disagreed about the rate of major geological transitions in earth's history. Darwin threw himself entirely behind Charles Lyell's slow-change gradualism. To give Darwin credit, he thought this had to be the way it was because his law of averages requires a law of large numbers, and you can't get large enough numbers of populations without an immense number of years.

I think the big mistake Darwin made was placing too high a prior on gradual change, even though he knew there was insufficient evidence for gradual change in the geological record. His explanation for this lack of evidence was that the evidence had mostly been destroyed over time, and that the geological record we have is a tiny fragment of an immense story from which we can only pick up pieces. "Absence of evidence is not evidence of absence."

But he should have mapped out the other hypothetical world too, even if just a bit. Darwin's theory of evolution is not in fact dependent on gradual change; it can accommodate periods of stasis followed by periods of chaos and development.

To me the lesson is to be as clear as possible about which aspects of your model are essential and which are reasonable extensions.

Is Success the Enemy of Freedom? (Full)

And freedom is a terrible master. I was far more free from college to college + 3 years, but freedom is something you spend. It's a currency which you exchange for some type of life. Now I have very little slack, but I have an endless supply of good places to devote my energy. And that's freedom to do good, the highest form of freedom.

Is Success the Enemy of Freedom? (Full)

I play StarCraft 1 month a year, and it's true, I stick with Protoss. Although now that you mention it, next time I play I'll play Terran to see what happens...

But I also learn bits of languages frequently and maintain 2 foreign languages, and although there is always some switching cost with languages, it's not competitive and so the costs to switching are low.

Is Success the Enemy of Freedom? (Full)

I want to keep being successful despite the costs to my freedom, but that's because I view my success as a service (hence I get paid for it), not as a source of my own happiness. Esse quam videri ("to be, rather than to seem").

Can we hold intellectuals to similar public standards as athletes?

Here is a quick list of things that spring to mind when I evaluate intellectuals. A score doesn't necessarily need to cash out in a ranking; there are different types of intellectuals that serve different purposes in the tapestry of the life of the mind.

How specialized is this person's knowledge?
What are the areas outside of specialization that this person has above average knowledge about?
How good is this person at writing/arguing/debating in favor of their own case?
How good is this person at characterizing the case of other people?
What are this person's biggest weaknesses both personally and intellectually?
 

JohnBuridan's Shortform

https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/

Just a reminder to self that I wrote this, but I need to write a counterargument to it based on a new insight about what a good "popular book" can do.

Clarifying Power-Seeking and Instrumental Convergence

Ah! Thanks so much. I was definitely conflating farsightedness as discount factor and farsightedness as vision of possible states in a landscape.

And that is why some resource-increasing state may be too far out of the way, meaning NOT instrumentally convergent: the more distant that state is, the closer its discounted value is to zero, until it is effectively zero. Hence the bracket.
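
To make that concrete for myself, here is a minimal sketch with my own toy numbers (not anything from the post) of how geometric discounting shrinks the contribution of a distant resource state:

```python
# Minimal sketch, toy numbers: with geometric discounting, a reward R collected
# t steps from now contributes gamma**t * R to the agent's value, which shrinks
# toward zero as the state gets farther away.

def discounted_contribution(gamma: float, reward: float, steps: int) -> float:
    """Value today of a reward collected `steps` transitions in the future."""
    return (gamma ** steps) * reward

reward_at_resource_state = 10.0
for gamma in (0.5, 0.9, 0.99):
    for steps in (1, 5, 20, 100):
        value = discounted_contribution(gamma, reward_at_resource_state, steps)
        print(f"gamma={gamma:.2f}  steps={steps:>3}  contribution={value:.6f}")
```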

Clarifying Power-Seeking and Instrumental Convergence

You say:

"most agents stay alive in Pac-Man and postpone ending a Tic-Tac-Toe game", but only in the limit of farsightedness (γ→1)

I think there are two separable concepts at work in these examples: the success of an agent, and the agent's choices as determined by its reward function and farsightedness.

If we compare two agents, one at the limit of farsightedness (γ→1) and the other with half that (γ = 1/2), then I expect the first agent to be more successful across a uniform distribution of reward functions and to skip over doing things like Trade School, while the second agent, with its more limited farsightedness, would be more successful if it were seeking power. As Vanessa Kosoy said above,

... gaining is more robust to inaccuracies of the model or changes in the circumstances than pursuing more "direct" paths to objectives.

What I meant originally is that if an agent doesn't know if γ→1, then is it not true that the agent "seeks out the states in the future with the most resources or power"? Now, certainly the agent can get stuck at a local maximum because of shortsightedness, and an agent can forgo certain options as a result of its farsightedness.

So I am interpreting the theorem like so:

An agent seeks out future states that have more power, up to the limit of its farsightedness, but not states that, while they have more power, lie beyond its farsightedness "rating."

Note: Assuming a uniform distribution over reward functions.
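
To check my own interpretation, here's a toy comparison with made-up numbers (not the post's formal POWER measure): a nearby modest payoff versus a distant, resource-rich state, evaluated at different discount factors.

```python
# Toy comparison, hypothetical numbers: a shortsighted agent (small gamma)
# prefers a quick modest payoff; a farsighted agent (gamma near 1) prefers
# the distant, resource-rich state.

def path_value(gamma: float, steps_to_reward: int, reward: float) -> float:
    """Discounted value of a path that pays `reward` after `steps_to_reward` steps."""
    return (gamma ** steps_to_reward) * reward

quick_path = {"steps_to_reward": 2, "reward": 5.0}    # nearby, modest payoff
power_path = {"steps_to_reward": 30, "reward": 50.0}  # distant, resource-rich state

for gamma in (0.5, 0.9, 0.999):
    quick = path_value(gamma, **quick_path)
    power = path_value(gamma, **power_path)
    choice = "power path" if power > quick else "quick path"
    print(f"gamma={gamma:<6} quick={quick:8.4f} power={power:8.4f} -> {choice}")
```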

Clarifying Power-Seeking and Instrumental Convergence

If an agent is randomly placed in a given distribution of randomly connected points, I see why there are diminishing returns on seeking more power, but that return is never 0, is it?

This gives me pause.

The Gears of Impact

You said, "Once promoted to your attention, you notice that the new plan isn't so much worse after all. The impact vanishes." Just to clarify, you mean that the negative impact of the original plan falling through vanishes, right?

When I think about the difference between value impact and objective impact, I keep getting confused.

Is money a type of AU? Money functions both as a resource for trading up (the personal realization of goals) AND as a value in itself (for example, when it is held as an asset).

If this is the case, then any form of value based upon optionality violates the "No 'coulds' rule," doesn't it?

For example, imagine I have a choice between hosting a rationalist meetup and going on a long bike ride. There's a 50/50 chance of me doing either of those. Then something happens which removes one of those options (say a pandemic sweeps the country or something like that). If I'm interpreting this right, then the loss of the option has some personal impact, but zero objective impact.

Is that right?
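
Here is roughly how I'm picturing the personal-impact side, with made-up utilities and attainable utility simplified to "the best value I can still get from my remaining options" (my simplification, not the post's full definition):

```python
# Made-up utilities for the two options. This says nothing about objective
# impact, which is about arbitrary agents' goals, not mine.

my_values = {"host_meetup": 8.0, "bike_ride": 7.0}

au_before = max(my_values.values())                       # both options open
au_after = max(value for option, value in my_values.items()
               if option != "host_meetup")                 # meetup removed (pandemic)

personal_impact = au_before - au_after
print(f"AU before: {au_before}, AU after: {au_after}, personal impact: {personal_impact}")
```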

Let's say an agent works in a low-paying job that has a lot of positive impact for her clients - both by helping them attain their values and by helping them increase resources for the world. Does the job have high objective impact and low personal impact? Is the agent in a bad equilibrium when achievable objective impact mugs her of personal value realization?

Let's take your example of the sad person, with options given as (P, EU):

Mope and watch Netflix (0.90, 1)
Text ex (0.06, -500)
Work (0.04, 10)

If one of these options suddenly disappeared, is that a big deal? Behind my question is the worry that we are missing something about impact being exploited by one of the two terms which compose it, and about whether agents in this framework get stuck in strange equilibria because of the way probabilities change over time.
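
To pin down what I mean by "a big deal," here's how I'd compute the shift in expected utility when an option vanishes, assuming the remaining probabilities just renormalize (that renormalization is my assumption, not yours):

```python
# Sketch with the numbers above: how much does the sad person's expected
# utility move if one option vanishes and the remaining probabilities are
# renormalized?

options = {
    "mope_and_netflix": (0.90, 1.0),
    "text_ex":          (0.06, -500.0),
    "work":             (0.04, 10.0),
}

def expected_utility(opts):
    """Probability-weighted utility, with probabilities renormalized to sum to 1."""
    total_p = sum(p for p, _ in opts.values())
    return sum((p / total_p) * eu for p, eu in opts.values())

baseline = expected_utility(options)
for removed in options:
    remaining = {name: val for name, val in options.items() if name != removed}
    shift = expected_utility(remaining) - baseline
    print(f"remove {removed:<18} EU shift: {shift:+.2f}")
```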

Help would be appreciated.
