Bunthut
Comments

Enlightenment AMA
Bunthut · 6d

You mean discursive thinking like that voice in your head? Yeah, we do that in shikantaza. After you turn it off what's left is the sensation of the breath, the sound of wind chimes outside and, if your eyes are open, the image of the wall in front of you. It takes me about 30 minutes of deliberate intention in a peaceful room to get into this state.

I think getting rid of the voice in my head temporarily is very easy. Trivially, by replacing it with a loud repeating dum dum dum sound in my head, though I'm not sure that counts. But I've also just done 30s of no auditory imagination while looking around in my non-blank room, and it took maybe 5s to get there. Is this one of those Buddhist terms of art where it actually means way more than a layperson would reasonably think it does?

Enlightenment AMA
Bunthut · 6d

According to predictive coding, believing you'll take an action just is how you take it, and believing you'll achieve a goal just is how you intend it. This would mean if you desire more than you can achieve, you experience prediction error, but if you desire less than you can achieve, you just underachieve with no psychological warning.
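To make the asymmetry concrete, here's a toy sketch (my own illustration, not any particular predictive-coding model; the names and numbers are made up):

```python
def act_on_prediction(desired, capacity):
    # Toy model: you achieve what you predict/desire, capped by what you can actually do.
    achieved = min(desired, capacity)
    prediction_error = desired - achieved  # nonzero only when desire exceeds capacity
    return achieved, prediction_error

print(act_on_prediction(desired=10, capacity=7))  # (7, 3): over-desire produces an error signal
print(act_on_prediction(desired=5, capacity=7))   # (5, 0): under-desire just underachieves, silently
```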

Futarchy's fundamental flaw
Bunthut · 3mo

Suppose b is the true bias of the coin (which the supercomputer will compute). Then your expected return in this game is

𝔼[max(b, 0.50)] = 0.50 + 𝔼[max(b-0.50, 0)]

No. That formula would imply that if the coin is 30% for sure and you buy it at 0.3, you make 0.2 in expectation. You don't: you make 0 regardless of what price you buy at.

Note that this kind of problem has also shown up in decision theory more generally; this is a good place to start. In particular, it seems like your problem can be fixed with epsilon exploration (if it doesn't do so automatically, as per Soares); both the EDT and CDT variants should work.
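To spell out what I mean by epsilon exploration, here is a minimal sketch of the general idea (my own names and numbers, not anything from your post or from Soares):

```python
import random

def choose_action(conditional_estimates, epsilon=0.05):
    """Pick the action with the best conditional estimate (e.g. a conditional
    market price), but with probability epsilon pick uniformly at random, so
    that every action keeps a nonzero chance of actually being taken and the
    conditional estimates stay meaningful."""
    actions = list(conditional_estimates)
    if random.random() < epsilon:
        return random.choice(actions)                    # exploration step
    return max(actions, key=conditional_estimates.get)   # exploitation step

# e.g. choose_action({"A": 0.30, "B": 0.50}) usually returns "B",
# but returns "A" often enough that the conditional-on-A estimate stays testable.
```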

Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low
Bunthut · 3mo

A simple version of this is done for panoramic photos. If he looked at the city from a consistent direction throughout the flight, that's all that's needed. If the direction varied, it can't have varied a lot: he had to at least see the sides of the building he was drawing, if maybe from a different angle, and not all the buildings would have been parallel. That kind of rotation seems doable with current image transformers (and it's only necessary if the drawing has accurate angles even over long distances).
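For what I mean by the simple version, this is roughly how off-the-shelf panorama stitching looks with OpenCV (my own sketch; the filenames are made up, and this only covers the consistent-direction case, not the rotation):

```python
import cv2

# Overlapping views of the skyline, taken from roughly the same direction.
images = [cv2.imread(path) for path in ["view1.jpg", "view2.jpg", "view3.jpg"]]

stitcher = cv2.Stitcher_create()           # default stitching pipeline
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```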

In any case, I don't think it matters to my argument if current ML can do it. All the parts that might be difficult for the computer are doable even for normal humans, and therefore not magical. The only thing that's added to the normal human skill here is perfect memory, which we know is easy for computers and have known for a long time.

Are superhuman savants real?
Bunthut · 3mo

To clarify the question: I agree that there is variation in talent and that some very talented people can do things most could never. My question is: if you look at the distribution of talent among normal people, and then check how many standard deviations out our savant candidate is, what's the chance that at least one person with that talent would exist? Basically, is this just the normal right tail expected from additive genetic reshuffling, or an "X-man"?
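As a rough illustration of the calculation I have in mind (my own sketch, with made-up numbers for the population and for the candidate's distance from the mean):

```python
from scipy.stats import norm

N = 8_000_000_000  # assumed population to draw from
k = 6              # assumed distance of the savant candidate from the mean, in standard deviations

p_single = norm.sf(k)                     # chance one random person is beyond k sigma
p_at_least_one = 1 - (1 - p_single) ** N  # chance at least one such person exists in the population
print(p_single, p_at_least_one)
```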

Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low
Bunthut · 3mo

Example 3: Stephen Wiltshire. He made a nineteen-foot-long drawing of New York City after flying on a helicopter for 20 minutes, and he got the number of windows and floors of all the buildings correct.

I think ~everyone understands that computers can do this. The "magical" part is doing it with a human brain, not doing it at all. Similarly, blindfolded chess is not more difficult than normal chess for computers; that may take a little knowledge to see. And "doing it faster" is again clear. So the threshold for magic you describe is not the one that even the most naive apply to AI.

Why Have Sentence Lengths Decreased?
Bunthut · 5mo

Sentence lengths have declined.

Data: I looked for similar data on sentence lengths in German, and the first result I found covering a similar timeframe was Wikipedia referencing Kurt Möslein, "Einige Entwicklungstendenzen in der Syntax der wissenschaftlich-technischen Literatur seit dem Ende des 18. Jahrhunderts" (1974), which does not find the same trend:

Year    wps
1770    24.50
1800    25.54
1850    32.00
1900    23.58
1920    22.72
1940    19.60
1960    19.90
This data on scientific writing starts lower than any of your English examples from that time, and increases initially, but arrives in the same place (insofar as wps are comparable across languages, which I think is fine for English and German).

LessWrong has been acquired by EA
Bunthut · 6mo

6 picolightcones as well, don't think that changed.

LessWrong has been acquired by EA
Bunthut · 6mo

Before logging in I had 200 LW-Bux and 3 virtues. Now I have 50 LW-Bux and 8 virtues, and I didn't do anything. What's that? Is there any explanation of how this stuff works?

Genetic fitness is a measure of selection strength, not the selection target
Bunthut · 6mo

I think your disagreement can be made clear with more formalism. First, the point for your opponents:

When the animals are in a cold place, they are selected for a long fur coat, and also for IGF (inclusive genetic fitness), and for other things as well. To some extent, these are just different ways of describing the same process. Now, if they move to a warmer place, they are selected for shorter fur instead, and they are still selected for IGF. And there's also a more concrete correspondence to this: they have also been selected for "IF cold, long fur; ELSE short fur" the entire time. Notice especially that there are animals actually implementing this dependent property - it can be evolved just fine, in the same way as the simple properties. And in fact, you could "unroll" the concept of IGF into a humongous environment-dependent strategy, which would then always be selected for, because all the environment-dependence is already baked in.

Now, on the other hand, if you train an AI first on one thing and then on another, wouldn't we expect it to get worse at the first again? Indeed, we would also expect a species living in the cold for a very long time to lose the adaptations relevant to the heat. The reason for this, in both cases, is, broadly speaking, limits and penalties on complexity. I'm not sure very many people would have bought the argument in the previous paragraph - we all know unused genetic code decays over time. But the behavioral/cognitive version, where IGF is maximized intentionally, makes it easy to ignore the problem, because we're not used to remembering the physical correlates of thinking. Of course, a dragonfly couldn't explicitly maximize IGF: its brain is too small to even understand what that is, and developing such a brain has demands for space and energy incompatible with the general dragonfly life strategy. The costs of cognition are also part of the demands of fitness, and the dragonfly is more fit the way it is. Similarly, I think a human explicitly maximizing IGF would have done worse for most of our evolution[1], because the odds of getting something wrong are just too high at our current expenditure on cognition; better to hardcode some right answers.

I don't share your optimistic conclusion, however. Because the part about selecting for multiple things simultaneously - that's true. You are always selecting for everything that's locally extensionally equivalent to the intended selection criteria. There is no move you could have made in evolution's place to actually select for IGF instead of [various particular things]; this already is what happens when you select for IGF, because it's the complexity, rather than a different intent, that led to the different result[2]. Similarly, reinforcement learning for human values will result in whatever is the simplest[3] way to match human values on the training data.
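As a toy illustration of that last point (my own sketch, not anything from the post): two fur "policies" that agree on every environment the population actually encounters get exactly the same selection pressure, so selection cannot tell them apart, however different they are off-distribution.

```python
def simple_policy(env):
    return "long fur"                                    # hardcoded trait

def conditional_policy(env):
    return "long fur" if env == "cold" else "short fur"  # environment-dependent trait

def fitness(trait, env):
    # Toy fitness: long fur pays off in the cold, short fur in the heat.
    return 1.0 if (trait == "long fur") == (env == "cold") else 0.0

training_envs = ["cold", "cold", "cold"]  # the population never leaves the cold

for policy in (simple_policy, conditional_policy):
    score = sum(fitness(policy(env), env) for env in training_envs)
    print(policy.__name__, score)  # identical scores: selection can't distinguish them

# In a "warm" environment the two policies diverge, even though selection
# treated them identically on everything it ever saw.
```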

 

  1. and even today, still might if sperm donations et al. weren't possible

  2. I don't think you've tried to come up with what that different move might look like for evolution, but it's strongly implied that such moves exist, both for it and for the AI situation.

  3. in the sense of that architecture

Posts

Karma · Title · Tags · Age · Comments
15 · Are superhuman savants real? · Q · 3mo · 4
17 · Identifiability Problem for Superrational Decision Theories · Ω · 4y · 16
14 · Phylactery Decision Theory · Ω · 4y · 6
24 · Learning Russian Roulette · Ω · 4y · 38
11 · Fisherian Runaway as a decision-theoretic problem · Ω · 4y · 0
24 · A non-logarithmic argument for Kelly · Ω · 5y · 10
14 · Learning Normativity: Language · Ω · 5y · 4
18 · Limiting Causality by Complexity Class · Ω · 5y · 10
10 · What is the interpretation of the do() operator? · QΩ · 5y · 6
6 · Towards a Formalisation of Logical Counterfactuals · Ω · 5y · 2