LESSWRONG

Alex_Altair
Sequences: Entropy from first principles

Comments
Thermodynamic entropy = Kolmogorov complexity
Alex_Altair · 11h

Note that being finite in spatial extent is different from being finite in number of possible states. A Turing machine needs the latter but not the former. Finite spatial extent isn't a blocker if your states can have arbitrarily fine precision. As an intuition pump, you could imagine a Turing machine where the nth cell of the tape has a physical width of 1/2^n. Then the whole tape has length 1 (meter?) but can fit arbitrarily long binary strings on it.
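The geometric series behind that intuition pump is easy to check numerically. A minimal sketch (the function name `tape_length` is mine, just for illustration):

```python
# Intuition pump from above: the nth tape cell has physical width 1/2^n,
# so the total tape length stays bounded by 1 even as the number of
# cells grows without bound (a geometric series).
def tape_length(num_cells: int) -> float:
    return sum(1 / 2**n for n in range(1, num_cells + 1))

print(tape_length(10))  # 0.9990234375 (= 1 - 2**-10, exact in binary)
print(tape_length(50))  # just under 1.0
```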

Separately, when I've looked into it, it seemed far from consensus whether the universe is spatially infinite: the measured global curvature is still consistent with flat (= infinite) space. Although, I think the "accessible" universe has finite diameter.

Lastly, I claim that whether or not something is best modeled as a Turing machine is a local fact, not a global one. A Turing machine is something that will compute the function if you give it enough tape. If the universe is finite (in the state space way) then it's a Turing machine that just didn't get fed enough tape. In contrast, something is better modeled as a DFA if you can locally observe that it will only ever try to access finitely many states.

Alex_Altair's Shortform
Alex_Altair · 1mo

Is speed reading real? Or is it all just a trade-off with comprehension?

I am a really slow reader. If I'm not trying, it can be 150 wpm, which is slower than talking speed. I think this is because I reread sentences a lot and think about stuff. When I am trying, it gets above 200 wpm but is still slower than average.

So, I'm not really asking "how can I read a page in 30 seconds?". I'm more looking for something like, are there systematic things I could be doing wrong that would make me way faster?

One thing that confuses me is that I seem to be able to listen to audio really fast, usually 3x and sometimes 4x (depending on the speaker). It feels to me like I am still maintaining full comprehension during this, but I can imagine that being wrong. I also notice that, despite audio listening being much faster, I'm still not really drawn to it. I default to finding and reading paper books.

Alex_Altair's Shortform
Alex_Altair · 2mo

I just went through all the authors listed under "Some Writings We Love" on the LessOnline site and categorized what platform they used to publish. Very roughly:

Personal website: 
IIIII-IIIII-IIIII-IIIII-IIIII-IIIII-IIIII-IIII (39)
Substack: 
IIIII-IIIII-IIIII-IIIII-IIIII-IIIII- (30)
Wordpress: 
IIIII-IIIII-IIIII-IIIII-III (23)
LessWrong: 
IIIII-IIII (9)
Ghost: 
IIIII- (5)
A magazine: 
IIII (4)
Blogspot: 
III (3)
A fiction forum: 
III (3)
Tumblr: 
II (2)

"Personal website" was a catch-all for any site that seemed custom-made rather than a platform. But it probably contained a bunch of sites that were e.g. Wordpress on the backend but with no obvious indicators of it.

I was moderately surprised at how dominant Substack was. I was also surprised at how much market share Wordpress still had; it feels "old" to me. But then again, Blogspot feels ancient. I had never heard of "Ghost" before, and those sites felt pretty "premium".

I was also surprised at how many of the blogs were effectively inactive. Several of them hadn't posted since like, 2016.

eggsyntax's Shortform
Alex_Altair · 2mo

Oh, sure, I'm happy to delete it since you requested. Although, I don't really understand how my comment is any more politically object-level than your post? I read your post as saying "Hey guys I found a 7-leaf clover in Ireland, isn't that crazy? I've never been somewhere where clovers had that many leaves before." and I'm just trying to say "FYI I think you just got lucky, I think Ireland has normal clovers."

eggsyntax's Shortform
Alex_Altair · 2mo (edited)

[Deleted on request]

Alex_Altair's Shortform
Alex_Altair · 2mo

Rediscovering some math.

[I actually wrote this in my personal notes years ago. Seemed like a good fit for quick takes.]

I just rediscovered something in math, and the way it came out to me felt really funny.

I was thinking about startup incubators, and thinking about how it can be worth it to make a bet on a company that you think has only a one in ten chance of success, especially if you can incubate, y'know, ten such companies.

And of course, you're not guaranteed success if you incubate ten companies, in the same way that you can flip a coin twice and have it come up tails both times. The expected value is one, but the probability of at least one success is not one.

So what is it? More specifically, if you consider ten such 1-in-10 events, do you think it's more or less likely than not that at least one of them succeeds? It's not intuitively obvious which way that should go.

Well, if they're independent events, then the probability of all of them failing is $0.9^{10}$, or

$$\left(1 - \frac{1}{10}\right)^{10} \approx 0.35.$$

And therefore the probability of at least one succeeding is $1 - 0.35 = 0.65$. More likely than not! That's great. But not hugely more likely than not.

(As a side note, how many events do you need before you're more likely than not to have one success? It turns out the answer is 7. At seven 1-in-10 events, the probability that at least one succeeds is 0.52, and at 6 events, it's 0.47.)
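The arithmetic above can be sketched in a few lines (the function name `p_at_least_one` is mine, purely for illustration):

```python
# P(at least one success in n independent trials, each with
# success probability p) = 1 - (1 - p)**n.
def p_at_least_one(n: int, p: float = 0.1) -> float:
    return 1 - (1 - p) ** n

print(round(p_at_least_one(10), 2))  # 0.65
print(round(p_at_least_one(7), 2))   # 0.52  (first n that crosses 1/2)
print(round(p_at_least_one(6), 2))   # 0.47
```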

So then I thought, it's kind of weird that that's not intuitive. Let's see if I can make it intuitive by stretching the quantities way up and down — that's a strategy that often works. Let's say I have a 1-in-a-million event instead, and I do it a million times. Then what is the probability that I'll have had at least one success? Is it basically 0 or basically 1?

...surprisingly, my intuition still wasn't sure! I would think, it can't be too close to 0, because we've rolled these dice so many times that surely they came up as a success once! But that intuition doesn't work, because we've exactly calibrated the dice so that the number of rolls is the same as the unlikelihood of success. So it feels like the probability also can't be too close to 1.

So then I just actually typed this into a calculator. It's the same equation as before, but with a million instead of ten. I added more and more zeros, and then what I saw was that the number just converges to somewhere in the middle.

$$1 - \left(1 - \frac{1}{1000000}\right)^{1000000} = 0.632121\ldots$$

If it were the 1300s then this would have felt like some kind of discovery. But by this point, I had realized what I was doing, and felt pretty silly. Let's drop the leading "$1 -$", and look at this limit:

$$\lim_{n\to\infty}\left(1 - \frac{1}{n}\right)^n$$

If this rings any bells, then it may be because you've seen this limit before:

$$e = \lim_{n\to\infty}\left(1 + \frac{1}{n}\right)^n$$

or perhaps as

$$e^x = \lim_{n\to\infty}\left(1 + \frac{x}{n}\right)^n$$

The probability I was looking for was $1 - \frac{1}{e}$, or about 0.632.
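The convergence is quick to see numerically. A small sketch of the limit above:

```python
import math

# 1 - (1 - 1/n)**n converges to 1 - 1/e as n grows.
for n in (10, 1000, 1_000_000):
    print(n, 1 - (1 - 1 / n) ** n)

print("limit:", 1 - 1 / math.e)  # ≈ 0.632
```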

I think it's really cool that my intuition somehow knew to be confused here! And to me this path of discovery was way more intuitive than just seeing the standard definition, or wondering about functions that are their own derivatives. I also think it's cool that this path made $e$ pop out on its own, since I almost always think of $e$ in the context of an exponential function, rather than as a constant. It also makes me wonder if $1/e$ is more fundamental than $e$. (Similar to how $2\pi$ is more fundamental than $\pi$.)

The Internal Model Principle: A Straightforward Explanation
Alex_Altair · 3mo

> we only label states as 'different' if they actually result in different controller behaviour at some point down the line.

This reminds me a lot of the coarse-graining of "causal" states in comp mech.

Announcing ILIAD2: ODYSSEY
Alex_Altair · 3mo

I got a ton of value from ILIAD last year, and strongly recommend it to anyone interested!

Consider showering
Alex_Altair · 3mo

IYKYK

Towards a formalization of the agent structure problem
Alex_Altair · 5mo

For anyone reading this comment thread in the future, Dalcy wrote an amazing explainer for this paper here.

Posts
Report & retrospective on the Dovetail fellowship (26 karma, 4mo, 3 comments)
Come join Dovetail's agent foundations fellowship talks & discussion (24 karma, 5mo, 0 comments)
Towards building blocks of ontologies (29 karma, 5mo, 0 comments)
Work with me on agent foundations: independent fellowship (59 karma, 10mo, 5 comments)
Quick look: applications of chaos theory (79 karma, 11mo, 51 comments)
[Talk transcript] What "structure" is and why it matters (23 karma, 1y, 0 comments)
A simple model of math skill (101 karma, 1y, 16 comments)
Empirical vs. Mathematical Joints of Nature (35 karma, 1y, 1 comment)
New intro textbook on AIXI (46 karma, 1y, 8 comments)
Towards a formalization of the agent structure problem (55 karma, 1y, 6 comments)
Wikitag Contributions

Cellular automata (3y, +306)
Dynamical systems (3y)
Quantilization (3y)
Solomonoff induction (3y)
Kolmogorov Complexity (3y, -37)
Kolmogorov Complexity (3y, +42/-25)
Agent (13y, +5/-42)
Computing Overhang (13y, +111/-1)