Recent Discussion

It's fall, and that means it's time for another Less Wrong Darwin Game. This year, you'll be designing up to ten species that will compete for food, including (sometimes) by eating each other.

Click here to participate. You have one week, starting September 23, to design your species. Submit them by September 30 or earlier.

Each player starts with a population of organisms. Each round, each of your organisms will be randomly paired with another organism. At that point, one of two things will happen:

  • If one organism can eat the other, it will do so.
  • If nobody gets eaten then both organisms get an opportunity to forage for plants.

After everyone has eaten, each organism will attempt to reproduce. The more an organism eats, the more descendants it...
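To make the round structure concrete, here is a minimal sketch in Python of how a single round might play out under the rules above. This is purely illustrative; `can_eat` and `forage_yield` are hypothetical stand-ins, not the game's actual code.

```python
import random

def run_round(population, can_eat, forage_yield):
    """One round, following the rules above (a sketch, not the game's real code).

    population   -- list of hashable organisms
    can_eat      -- hypothetical predicate: can_eat(a, b) is True if a can eat b
    forage_yield -- hypothetical function: food an organism gets from plants
    Returns a dict mapping each surviving organism to the food it gained.
    """
    random.shuffle(population)  # random pairing; an odd organism out is skipped here
    food = {}
    for a, b in zip(population[0::2], population[1::2]):
        if can_eat(a, b):
            food[a] = food.get(a, 0) + 1                # a eats b; b is gone
        elif can_eat(b, a):
            food[b] = food.get(b, 0) + 1                # b eats a; a is gone
        else:
            food[a] = food.get(a, 0) + forage_yield(a)  # nobody gets eaten:
            food[b] = food.get(b, 0) + forage_yield(b)  # both forage for plants
    return food  # reproduction would then scale with how much each organism ate
```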

cousin_it (22m): The "get in the way of predation" mechanic seems strange. In reality, if I'm a giraffe hunter and there are a lot of giraffes, adding a lot of songbirds (who don't interact with me or with giraffes) shouldn't make my giraffe hunting any less efficient. Maybe replace it with some other mechanic?

OTOH, I have seen my dog bark at a turtle for a long time, and my dog's weapons definitely do not beat the turtle's armor.

hath (3h): From the code, yes.
hath (5h): Invitation to any potential collaborators who want to design species that will be able to survive in equilibrium with each other; PM me for details.

Warning: this is a long post, split over two parts. Part II is here.

David Reinstein points me at a 2016 exchange between Bryan Caplan and Scott Alexander over a fine point of nomenclature: is the culture that is taking over the world “Western” or “universal”? Here’s Scott Alexander’s key point:

I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear their skin. But in this case, he’s become hopelessly confused without it.

I am pretty sure there was, at one point, such a thing as Western civilization. I think it included things like dancing around maypoles and copying Latin manuscripts. At some point Thor might have been involved. That civilization is dead. It summoned

...
David Hugh-Jones (10h): I think "shared" is too broad. You like Coke, I like Coke; we share that. But it's shared because we both have sugar-loving taste buds. To be cultural, you need something more; hence the biologists' emphasis on the transmission mechanism via learning. Does it matter? My argument is that a lot of what gets called "Western culture" is really just "stuff that is appealing to human taste buds", in a broad sense. So yes, it is spreading, but no cultural learning is required. Coca-Cola sells Coke, and people in India like it and buy it; but this doesn't have implications for things that are actually cultural, such as attitudes to gender, political values, etc.
David Hugh-Jones (10h): I think there are two phenomena: (1) General Westernization. That certainly still takes place, as you point out. The question is how deep that Westernization is; to put it crudely, is it at the "Magna Carta" level or the "Magna Mac" level? (2) The emergence of "hardened" subcultures which are resistant to Westernization and which have high birth rates. The evidence from Kaufmann is pretty persuasive about (2).
quanticle (2h): I am still thoroughly unpersuaded. Birth rates are one thing; retention rates are quite another. As we've seen from the evidence of Quiverfull and other Evangelical Christian communities in the US, most children do not remain in the community and continue its practices. The Middle East is experiencing high population growth but is also the most rapidly secularizing region in the world [https://www.dw.com/en/middle-east-are-people-losing-their-religion/a-56442163]. Kaufmann seems to be making the mistake of assuming that because many Middle Eastern countries mandate Islam as a state religion, the people residing in those countries are necessarily devout. Finally, with regard to the "depth" of Westernization, I would argue that changes to marriage practices and family structure are an even deeper form of Westernization than the adoption of particular political values.

I'm not sure Kaufmann does make that mistake. He focuses on extreme sects within each religion, not on Islam as a whole, and mostly on Western countries rather than the Middle East. You could say I'm making the mistake, because I discuss the probability of non-Westerners buying into Western values. Yeah, that could be. But I would also distinguish between secularization (and other kinds of modernization) and Westernization. (Japan did the one but not the other, for example.)

You're right that marriage and family structure are "deep". A friend of mine sugg...

As I read through Rationality: A-Z, I kept seeing similarities to David Deutsch's worldview. Deutsch pioneered quantum computation in the 1980s, motivated by the possibility of gaining a deeper grasp of quantum physics and as a potential way to test many-worlds.

This post is adapted from my review of The Beginning of Infinity. I read it a couple of years ago, and it is among the most influential books I have read, up there with The Precipice by Ord and The Scout Mindset by Galef. It has profoundly transformed my thinking.

Overview

We have a great deal of knowledge about the vast and unfamiliar reality that causes our observations and the elegant, universal laws governing that reality. This knowledge consists of explanations: assertions about what is out there beyond appearances...

As I read through Rationality: A-Z, I kept seeing similarities to David Deutsch’s worldview.

Deutsch is really opposed to induction, though.

Cross-posted here on the EA Forum.

Transformative AI and Compute - A holistic approach - Part 1 out of 4

This is part one of the series Transformative AI and Compute - A holistic approach. You can find the sequence here and the summary here.

This work was conducted as part of Stanford’s Existential Risks Initiative (SERI) at the Center for International Security and Cooperation, Stanford University. Mentored by Ashwin Acharya (Center for Security and Emerging Technology (CSET)) and Michael Andregg (Fathom Radiant).

This post attempts to:

  1. Introduce a simplified model of computing which serves as a foundational concept (Section 1).
  2. Discuss the role of compute for AI systems (Section 2).
    • In Section 2.3 you can find the updated compute plot you have been waiting for (see the rough sketch below for the kind of quantity involved).
  3. Explore the connection of compute trends and more
...
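Since the series revolves around compute trends, a back-of-the-envelope sketch of how training compute is commonly estimated may help. The formula below (chips × peak FLOP/s × utilization × training time) is a standard estimation approach rather than anything taken from this post, and every number is an illustrative assumption.

```python
# Back-of-the-envelope training-compute estimate: a common estimation
# approach, not taken from this post. All numbers are assumptions.
chips       = 1_000      # number of accelerators (assumed)
peak_flops  = 100e12     # peak FLOP/s per chip (assumed)
utilization = 0.3        # fraction of peak actually achieved (assumed)
days        = 30         # length of the training run (assumed)

training_compute = chips * peak_flops * utilization * days * 24 * 3600
print(f"~{training_compute:.2e} FLOP")  # ~7.78e+22 FLOP under these assumptions
```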

AlexNet was the first publication that leveraged graphics processing units (GPUs) for the training run

Do you mean the first of the data points on the chart? The GPU was used for DL long before AlexNet. References: [1], [2], [3], [4], [5].

Financial status: This is independent research supported by a grant. I welcome additional support.

Epistemic status: Reflections from personal experience.

Outline

What can we ultimately trust as a foundation for our reasoning?

As we go about our lives, we often trust reasoning that is based in empirical, mathematical, and ethical frameworks. Trust in these systems seems well justified. But what exactly is it justified by, and does this question have any practical relevance to our lives, or is it merely the domain of frivolous dorm-room discussion?

In this essay I am going to focus on the question of practical relevance. I will not ask you to take radical skepticism more seriously than you already do. I will actually ask you not to take it seriously, but to take seriously the question of why...

...universalizability of compressions, in light of their being bound to intentionality on the part of the one doing the compression. The closest we get to universal compressions is when the intent is more upstream of other intents like survival and reproduction.

justinpombrio (3h): As you said, very often a justification-based conversation is looking to answer a question, and stops when it's answered using knowledge and reasoning methods shared by the participants. For example, Alice wonders why a character in a movie did something, and then has a conversation with Bob about it. Bob shares some facts and character-motivations that Alice didn't know, they figure out the character's motivation together, and the conversation ends. This relied on a lot of shared knowledge (about the movie universe plus the real universe), but there's no reason for them to question their shared knowledge. You get to shared ground, and then you stop. If you insist on questioning everything, you are liable to get to nodes without justification:

  • "The lawn's wet." / "Why?" / "It rained last night." / "Why'd that make it wet?" / "Because rain is when water falls from the sky." / "But why'd that make it wet?" / "Because water is wet." / "Why?" / "Water's just wet, sweetie." A sequence of is-questions, bottoming out at a definition. (Well, close to a definition: the parent could talk about the chemical properties of liquid water, but that probably wouldn't be helpful for anyone involved. And they might not know why water is wet.)
  • "Aren't you going to eat your ice cream? It's starting to melt." / "It sure is!" / "But melted ice cream is awful." / "No, it's the best." / "Gah!" This conversation comes to an end when the participants realize that they have fundamentally different preferences. There isn't really a justification for "I dislike melted ice cream". (There's an is-ought distinction here, though it's about preferences rather than morality.)

Ultimately, all ought-question-chains end at a node without justification. Suffering is just bad, period. And I think if you dig too deep, you'll get to unjustified-ish nodes in is-question-chains too. For example, direct experience, or the belief that the past informs the future, or t...
alexflint (3h): Thank you for sharing this. In my own experience, there are moments where I see something that I haven't seen before, such as what is really going on in a certain relationship in my life, or how I have been unwittingly applying a single heuristic over and over, or how I have been holding tension in my body, and it feels like a big gong has just rung with truth. But I think what's really going on is that I was seeing things in one particular way for a long time, and then upon seeing things in just a slightly different way, I let go of some unconscious tightness around the previous way of seeing things, and that letting go frees up my mind to actually think, and that's such a big relief that I feel this gong ringing with truth. It seems that letting go of seeing things one particular way is what the energetic release is about, rather than the particular new way of seeing things.

I mention this just because it's the thing that seems closest in my own experience to the direct experience of self-evident truth. It seems that when I see that I have been holding to one particular way of seeing things, it is self-evident that it's better to make a conscious choice about how to see things rather than just being unconsciously stuck. But it does not seem to me that there is any self-evident truth in any particular replacement way of seeing things.
alexflint (3h): Right. But it's notable that almost no one in the world is stuck in an actual infinite why-regress, in that there don't seem to be many people sitting around asking themselves "why" until they die, or sitting with a partner asking "why" until one person dies. (I also don't think this is what is happening for monks or other contemplative folks.) I guess in practice people escape by shifting attention elsewhere. But sometimes that is a helpful thing to do, such as when stuck in a rut, and sometimes it is an unhelpful thing to do, such as when already overwhelmed with information. Furthermore, some people are very good at shifting their attention around in a way that leads to understanding. Chaitin strikes me as exactly such a person, and he discusses allocation of attention in that talk (thank you for the lovely link, by the way; a really delightful read!). So what actually is our attentional mechanism, and in what way can we trust it? Interested in any thoughts you may have. Hope you are well.

Economics is the social science that studies how humans and other agents interact in a universe with scarce resources. It deals with topics such as trade, specialization of labor, accumulation of capital, technology, and resource consumption. Agents in economics are generally assumed to have utility functions, which they try to maximize under various constraints.

Economics is usually separated into microeconomics and macroeconomics. Microeconomics concerns the behavior of agents as they interact in a market. More narrowly, it studies the price mechanism, a decentralized system of allocating goods and services based on an evolving system of prices and trade, which all actors in a market economy contribute towards. The price mechanism is closely related to the concept of the invisible hand, first introduced by Adam Smith. Game theory is the mathematical study of rational agency, which formalizes many standard results in microeconomics.

Macroeconomics concerns the aggregate behavior of entire economies. For example, it studies economic growth, inflation, international trade and unemployment. An ongoing debate concerns to what extent the impacts of artificial intelligence should be viewed through the lens of economics.
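To make the utility-maximization framing concrete, here is a minimal sketch of the standard textbook case: Cobb-Douglas utility maximized under a budget constraint. The function and numbers are illustrative, not drawn from this tag.

```python
def cobb_douglas_demand(a, income, px, py):
    """Maximize U(x, y) = x**a * y**(1 - a) subject to px*x + py*y = income.
    The closed-form solution: the agent spends a fraction `a` of income
    on good x and the remainder on good y."""
    x = a * income / px
    y = (1 - a) * income / py
    return x, y

# Example: half of a 100-unit budget on each good, at prices 2 and 5.
print(cobb_douglas_demand(0.5, 100, 2, 5))  # (25.0, 10.0)
```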


This is a crosspost from my site: It's Good Enough -- A Party Game.

"One hundred and forty-three thousand, two hundred and twenty steps.

It was doable." -- "Worm, Interlude 29" by Wildbow

I had this idea after a friend and I talked about games and competitiveness. The objective of the game is to be the most average person at being average.

Rules

The game requires at least three players.

Each player presents an integer from 0 to 100.[1]

All players must present their number at the same time.[2]

The game lasts for at least three rounds. The number of rounds to be played is set before the game starts.[3]

The player who presents a number that is the median of all numbers presented is awarded a point. If there are an even number of players,...
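A minimal sketch of how a round might be scored under these rules, assuming an odd number of players (the tiebreak rule for an even number is truncated above); the player names and numbers are made up.

```python
import statistics

def score_round(numbers):
    """Award a point to every player whose number is the median of all
    numbers presented. Assumes an odd number of players, since the rule
    for an even number is cut off above. `numbers` maps player -> integer."""
    med = statistics.median(numbers.values())
    return [player for player, n in numbers.items() if n == med]

print(score_round({"Alice": 40, "Bob": 55, "Carol": 70}))  # ['Bob']
```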

Language Models

Language Models are a class of AI trained on text, usually to predict the next word or a word which has been obscured. They have the ability to generate novel prose or code based on an initial prompt, which gives rise to a kind of natural language programming called prompt engineering. The most popular architecture for very large language models is called a transformer, which follows consistent scaling laws with respect to the size of the model being trained, meaning that a larger model trained with the same amount of compute will produce results which are better by a predictable amount (when measured by the 'perplexity', or how surprised the AI is by a test set of human-generated text).
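The parenthetical gloss of perplexity can be made precise: perplexity is the exponential of the average negative log-likelihood the model assigns to the actual next tokens in a test set. A small sketch, with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(average negative log-likelihood). `token_probs`
    holds the probabilities the model assigned to each actual next token
    in a test set (the values below are made up)."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

print(perplexity([0.2, 0.5, 0.1, 0.4]))  # ~3.98; lower means less surprised
```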

plex (5h): I think this should be under "Other" in the AI category. Is it possible for regular users to categorize tags?

This is a good tag! Users can't usually add things, but I can set you up to have the ability.

PASTA: Process for Automating Scientific and Technological Advancement.

Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"

This is the first of four posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number. It's the single number I'd probably most value having a good estimate for: the year by which transformative AI will be developed.[1]

By "transformative AI," I mean "AI powerful enough to bring us into a new, qualitatively different future." The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.2

This piece is going to focus on exploring a particular kind of AI I believe could be transformative: AI systems that can essentially automate all of...

When you say "the year of PASTA", you probably mean the year by which AI appears with 50 per cent probability. But why 50 per cent? 10 per cent seems more important. For example, when we say "human life expectancy is 75 years", it means that in half of the worlds I will die before 75. In the same way, by using the median year as a measure of AI timing, you have already accepted losing the half of the human future in which AI appears before that date.

More generally, speaking about the "year of AI" is meaningful only if the dispersion of the...
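The commenter's point, that a median hides the early tail of a timeline distribution, is easy to see numerically. A sketch with a made-up log-normal forecast (none of these parameters come from the actual report):

```python
import random

# A made-up log-normal forecast over years-until-transformative-AI,
# purely to illustrate the point about medians vs. early tails.
random.seed(0)
samples = sorted(2021 + random.lognormvariate(3.5, 0.6) for _ in range(10_000))

tenth  = samples[len(samples) // 10]   # 10th percentile
median = samples[len(samples) // 2]    # 50th percentile
print(f"10th percentile ~{tenth:.0f}, median ~{median:.0f}")
# With these made-up parameters the 10th percentile arrives many years
# before the median: the half of futures the median measure gives up on.
```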

Daniel Kokotajlo (18h): This is not very important, but: what was your thought process behind the acronym PASTA? It sounds kinda silly, and while I don't mind that myself, I feel like that makes it harder to pitch to other people new to the topic. You could have said something like "R&D Automation."
JBlack (11h): I think the rather small image gives the game [https://en.wikipedia.org/wiki/Flying_Spaghetti_Monster] away there.
HoldenKarnofsky (19h): 1. Of course, the answer could be "A kajillion years from now" or "Never."