All of Gram Stone's Comments + Replies

This Year I Tried To Teach Myself Math. How Did It Go?

A genuine congratulations for learning the rare skill of spotting and writing valid proofs.

I see Graham’s Number as ridiculous; apparently one of the answers to his original problem could be as low as a single-digit number, so why have power towers on power towers?

Graham's number is an upper bound on the exact solution to a Ramsey-type problem. Ramsey numbers and related generalizations are notorious for being very easy to define and yet very expensive to compute with brute-force search, and many of the most significant results in Ramsey theory are proof... (read more)

Shulman and Yudkowsky on AI progress

Now that we clarified up-thread that Eliezer's position is not that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better

Where was this clarified...? My Eliezer-model says "There were in fact innovations that arose in the primate and bird lines which allowed the primate and bird lines to scale better, but the primate line still didn't scale that well, so we should expect to discover algorithmic i... (read more)

Soares, Tallinn, and Yudkowsky discuss AGI cognition

If information is 'transmitted' by modified environments and conspecifics biasing individual search, marginal fitness returns on individual learning ability increase, while from the outside it looks just like 'cultural evolution.'

Yudkowsky and Christiano discuss "Takeoff Speeds"

If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get about twelve and a half days. This seems to at least not directly contradict my model of Eliezer saying "Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly a... (read more)
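
Spelling out the arithmetic behind that figure (a restatement of the numbers already quoted above, nothing new):

$$\frac{2\times 10^{6}\ \text{yr}}{3.77\times 10^{9}\ \text{yr}}\times 65\ \text{yr}\;\approx\; 5.3\times 10^{-4}\times 65\ \text{yr}\;\approx\; 0.034\ \text{yr}\;\approx\; 12.6\ \text{days}$$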

6Matthew Barnett2moFWIW when I use the word discontinuous in these contexts, I'm almost always referring to the definition Katja Grace uses [https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/]. This is quite different from the mathematical definition of continuous.

That was a pretty good Eliezer model; for a second I was trying to remember if and where I'd said that.

7Robbo2moI agree with this, and I think that you are hitting on a key reason that these debates don't hinge on what the true story of the human intelligence explosion ends up being. Whichever of these is closer to the truth, (a) that the evolution of individually smarter humans using general reasoning ability was the key factor, or (b) that the evolution of better social learners and the accumulation of cultural knowledge was the key factor, either way there's no reason to think that AGI has to follow the same kind of path that humans did. I found an earlier post on the Henrich model of the evolution of intelligence, Musings on Cumulative Cultural Evolution and AI [https://www.lesswrong.com/posts/K686EFdXysfRBdob2/musings-on-cumulative-cultural-evolution-and-ai]. I agree with Rohin Shah's takeaway [https://www.lesswrong.com/posts/K686EFdXysfRBdob2/musings-on-cumulative-cultural-evolution-and-ai?commentId=q93pfgjFx3PQM5jjs#AI_Forecasting_and_Development] on that post:
Quick general thoughts on suffering and consciousness

I have an alternative hypothesis about how consciousness evolved. I'm not especially confident in it.

In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is “How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?” I think the core cognitive traits in question originally evolved to model the internal state of conspecifi... (read more)

Progress, Stagnation, & Collapse

Your argument has a Holocene, sedentary, urban flavor, but I think it applies just as well to Pleistocene, nomadic cultures; I think of it as an argument about population size and 'cognitive capital' as such, not only about infrastructure or even technology. Although my confidence is tempered by mutually compatible explanations and taphonomic bias, my current models of behavioral modernity and Neanderthal extinction essentially rely on a demographic argument like the one made here. I don't think this comment would be as compelling without a reminder that a... (read more)

2abramdemski6moFascinating!
1Algernoq6moYeah. I hope Youtube knows what it's doing.
Four factors that moderate the intensity of emotions

For those wondering about the literature, although Kahneman and Tversky coined no term for it, Kahneman & Tversky (1981) describes counterfactual-closeness and some of its affective consequences. This paper appears to be the origin of the missed flight example. Roese (1997) is a good early review on counterfactual thinking with a section on contrast effects, of which closeness effects are arguably an instance.

5Ruby3yThanks for surfacing these! I've now edited the post to mention these sources and your comment.
Incorrect hypotheses point to correct observations

Succubi/incubi and the alien abduction phenomenon point to hypnagogia, and evo-psych explanations of anthropomorphic cognition are often accompanied by arguments that anthropomorphism produces good-enough decisions while being technically completely false; there's an old comment by JenniferRM about how surprisingly useful, albeit wrong, it would be to model pathogens as evil spirits.

Topological Fixed Point Exercises

An attempt at problem #1; seems like there must be a shorter proof.

The proof idea is "If I flip a light switch an even number of times, then it must be in the same state that I found it in when I'm finished switching."

Theorem. Let $P$ be a path graph on $n$ vertices with a vertex coloring $c : V(P) \to \{\text{blue}, \text{green}\}$ such that if $v$ is the leftmost vertex then $c(v) = \text{blue}$, and if $v$ is the rightmost vertex then $c(v) = \text{green}$. Let $k$ be the number of bichromatic edges of $P$. Then $k$ is odd.

Proof. By the definition of a path graph, there exists a sequence $v_1, v_2, \ldots, v_n$ indexing the vertices such that the edges are exactly $\{v_i, v_{i+1}\}$ for $1 \le i < n$. An edge $\{v_i, v_{i+1}\}$ is bichromatic iff $c(v_i) \neq c(v_{i+1})$. A... (read more)
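
A quick brute-force check of the parity claim (a minimal Python sketch of my own, not part of the original proof; all names are mine):

```python
from itertools import product

def bichromatic_edge_count(coloring):
    """Count edges {v_i, v_{i+1}} whose endpoints received different colors."""
    return sum(a != b for a, b in zip(coloring, coloring[1:]))

# 1-D Sperner's lemma: for every 2-coloring of a path whose two endpoints
# differ in color, the number of bichromatic edges is odd.
for n in range(2, 12):
    for coloring in product("BG", repeat=n):
        if coloring[0] != coloring[-1]:
            assert bichromatic_edge_count(coloring) % 2 == 1

print("Parity claim holds for all paths on up to 11 vertices.")
```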

7Gurkenglas3yTurning each node but the last blue from left to right conserves the parity of the bichromatic edge count at each step until it is still odd at the end.
What To Do If Nuclear War Seems Imminent

I see that New Zealand is also a major wood exporter. In case of an energy crisis, wood gas could serve as a renewable alternative to other energy sources. Wood gas can be used to power unmodified cars and generators. Empirically this worked during the Second World War and works today in North Korea. Also, FEMA once released some plans for building an emergency wood gasifier.

Making a Difference Tempore: Insights from 'Reinforcement Learning: An Introduction'

The Lahav and Mioduser link in section 14 is broken for me. Maybe it's just paywalled?

1TurnTrout4ySorry about that - should be fixed now?
Anthropics made easy?

Just taking the question at face value, I would like to choose to lift weights for policy selection reasons. If I eat chocolate, the non-Boltzmann brain versions will eat it too, and I personally care a lot more about non-Boltzmann brain versions of me. Not sure how to square that mathematically with infinite versions of me existing and all, but I was already confused about that.

The theme here seems similar to Stuart's past writing claiming that a lot of anthropic problems implicitly turn on preference. Seems like the answer to your decision problem easily depends on how much you care about Boltzmann brain versions of yourself.

Does Thinking Hard Hurt Your Brain?

The closest thing to this I've seen in the literature is processing fluency, but to my knowledge that research doesn't really address the willpower depletion-like features that you've highlighted here.

Learn Bayes Nets!

It's also a useful analogy for aspects of group epistemics, like avoiding double counting as messages pass through the social network.

Fake Causality contains an intuitive explanation of double-counting of evidence.
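
As a toy numerical illustration of the double-counting failure mode (my own sketch with made-up numbers, not taken from Fake Causality):

```python
# Two friends independently repeat the same underlying source. Treating their
# reports as independent evidence applies the source's likelihood ratio twice.
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 1.0   # 1:1 prior odds on some claim
source_lr = 4.0    # the single underlying source favors the claim 4:1

once = posterior_odds(prior_odds, [source_lr])        # odds 4, ~80% posterior
twice = posterior_odds(prior_odds, [source_lr] * 2)   # odds 16, ~94%: overconfident

print(once, twice)
```

Belief propagation on a tree-structured Bayes net avoids this by having each node leave out, from the message it sends to a neighbor, the evidence it originally received from that neighbor.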

7abramdemski4yYeah, and it uses the same analogy for understanding belief propagation as Pearl himself uses, and a reference to Pearl, and a bit more discussion of Bayes nets as a good way to understand things. But, I think, a lot of people didn't derive the directive "Learn Bayes nets!" from that example of insight derived from Bayes nets (and would benefit from going and doing that). I do think there are some other intuitions lurking in Bayes net algorithms which could benefit from a similar write-up to Fake Causality, but which went "all the way" in terms of describing Bayes nets, rather than partially summarizing.
Set Up for Success: Insights from 'Naïve Set Theory'

Re: proof calibration; there are a couple textbooks on proofwriting. I personally used Velleman's How to Prove It, but another option is Hammack's Book of Proof, which I haven't read but appears to cover the same material at approximately equal length. For comparison, Halmos introduces first-order logic on pages 6 and 7 of Naive Set Theory, whereas Velleman spends about 60 pages on the same material.

It doesn't fit my model of how mathematics works technically or socially that you can really get very confident but wrong about your math k... (read more)

7Qiaochu_Yuan4ySlight nitpick: it means "prove that this set is a subset of every set with this property, and has this property." This sort of thing is terrible; I learned most of it from the internet (MathOverflow, Wikipedia, the nLab, blogs), for what it's worth.
2TurnTrout4yThanks, that’s very helpful! I appreciate the offer; let me see how I feel after the next book.
Set Up for Success: Insights from 'Naïve Set Theory'

Re: category-theory-first approaches; I find that most people think this is a bad idea because most people need to see concrete examples before category theory clicks for them; otherwise it's too general. But a few people feel differently and have published introductory textbooks on category theory that assume less background knowledge than the standard textbooks. If you're interested, you could try Awodey's Category Theory (free), or Lawvere's Conceptual Mathematics. After getting some more basics under your belt, you could give either... (read more)

The Monthly Newsletter as Thinking Tool

I'd be very interested in reading about EverQuest as an exemplar of Fun Theory, if you're willing to share.

5moridinamael4ySure. I've cut out my lengthy and indulgent love letter to EQ and put it here: https://www.evernote.com/shard/s79/sh/c867cd85-1d03-467a-a8b2-cab8d6fc61b8/4185c7120d9365f453d042f16949bc96 edit: If anybody tells me they enjoyed it, I'll probably just post it as a "blog" post here.
Against Instrumental Convergence

I think proponents of the instrumental convergence thesis would expect a consequentialist chess program to exhibit instrumental convergence in the domain of chess. So if there were some (chess-related) subplan that was useful in lots of other (chess-related) plans, we would see the program execute that subplan a lot. The important difference would be that the chess program uses an ontology of chess while unsafe programs use an ontology of nature.

2zulupineapple4yFirst, Nick Bostrom has an example where a Riemann hypothesis solving machine converts the earth into computronium. I imagine he'd predict the same for a chess program, regardless of what ontology it uses. Second, if instrumental convergence was that easy to solve (the convergence in the domain of chess is harmless), it wouldn't really be an interesting problem.
A LessWrong Crypto Autopsy

It seems like a good idea to collect self-reports about why LessWrongers didn't invest in Bitcoin. For my own part, off the top of my head I would cite:

  • less visible endorsement of the benefits than e.g. cryonics;
  • vaguely sensing that cryptocurrency is controversial and might make my taxes confusing to file, etc.;
  • and reflexively writing off articles that give me investment advice because most investment advice is bad and I generally assume I lack some minimal amount of wealth needed to exploit the opportunity.

So something like We Agree: Get Froze, might... (read more)

I didn't invest in Bitcoin because I don't invest in things that I don't understand well enough to be confident that the Efficient Market Hypothesis doesn't apply. I continue to believe this is a rational choice-- okay, sure, this one time I might have made a lot of money, but most of the time I would waste a bunch of money/time/other resources. And no one writes blog posts about how they could have lost a lot of money but didn't, so the availability heuristic is going to overweight successes.

5Error4yFor my part, it was one part trivial inconveniences, one part that it read like woo. I was aware it existed through other avenues (I wasn't a Less Wronger then), and aware of what it was trying to do, and I had the technical acumen to get in on it if I had so chosen. Given that, I'm a little bitter that I didn't do so. I could retire today if I had. I could get into it today, of course, but now that everybody knows it's a magic money making machine I suspect a bubble is well underway. I don't want to be in when it breaks. I'm a little worried about Bitcoin's externalities. The mining process consumes more and more energy, and professional miners are driving up hardware costs. Which might be fine if most transactions were, well, transactions, i.e. if we're getting human value out of the work. But I get the impression that the vast majority of the network's effort goes towards playing musical chairs with money, and that seems bad. Bitcoin doesn't feel woo-ish, anymore, but it's starting to feel paperclippy instead.
Examples of Mitigating Assumption Risk

Other things equal, choose the reversible alternative.

The Archipelago Model of Community Standards

In particular, thank you for pointing out that in social experiments, phenomenal difficulty is not much Bayesian evidence for ultimate failure.