It's fall and that means it's time for another Less Wrong Darwin Game. This year, you'll be designing up to ten species that will compete for food, which sometimes includes eating each other.
Click here to participate. You have one week, starting September 23, to design your species. Submit them by September 30th or earlier.
Each player starts with a population of organisms. Each round, each of your organisms will be randomly paired with another organism. At this point, one of two things will happen:
After everyone has eaten, each organism will attempt to reproduce. The more an organism eats, the more descendants it...
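The round structure described above can be sketched as follows. This is a minimal illustration, not the official rules: the function names and the exact food-to-offspring formula are my own assumptions.

```python
import random

def pair_up(population):
    """Each round, every organism is randomly paired with another.
    Returns the list of pairs (any odd organism out is skipped here)."""
    shuffled = population[:]
    random.shuffle(shuffled)
    return [(shuffled[i], shuffled[i + 1])
            for i in range(0, len(shuffled) - 1, 2)]

def num_offspring(food_eaten, food_per_offspring=1.0):
    """After eating, organisms reproduce; more food means more
    descendants (the proportional formula here is an assumption)."""
    return int(food_eaten / food_per_offspring)
```

A population of ten organisms yields five pairings per round, and an organism that ate three units of food would leave three descendants under this sketch.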
Warning: this is a long post, split over two parts. Part II is here.
David Reinstein points me at a 2016 exchange between Bryan Caplan and Scott Alexander over a fine point of nomenclature: is the culture that is taking over the world “Western” or “universal”? Here’s Scott Alexander’s key point:
I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear their skin. But in this case, he’s become hopelessly confused without it.
I am pretty sure there was, at one point, such a thing as Western civilization. I think it included things like dancing around maypoles and copying Latin manuscripts. At some point Thor might have been involved. That civilization is dead. It summoned
As I read through Rationality: A-Z, I kept seeing similarities to David Deutsch's worldview. Deutsch pioneered quantum computation in the 1970s, motivated by the possibility of gaining a deeper grasp of quantum physics and as a potential way to test many-worlds.
This post is adapted from my review of The Beginning of Infinity. I read it a couple of years ago, and it is among the most influential books I have read, up there with The Precipice by Ord and The Scout Mindset by Galef. It has profoundly transformed my thinking.
We have a great deal of knowledge about the vast and unfamiliar reality that causes our observations and the elegant, universal laws governing that reality. This knowledge consists of explanations: assertions about what is out there beyond appearances...
Cross-posted here on the EA Forum.
Transformative AI and Compute - A holistic approach - Part 1 out of 4
This work was conducted as part of Stanford’s Existential Risks Initiative (SERI) at the Center for International Security and Cooperation, Stanford University. Mentored by Ashwin Acharya (Center for Security and Emerging Technology (CSET)) and Michael Andregg (Fathom Radiant).
This post attempts to:
Financial status: This is independent research supported by a grant. I welcome additional support.
Epistemic status: Reflections from personal experience.
What can we ultimately trust as a foundation for our reasoning?
As we go about our lives, we often trust reasoning that is based in empirical, mathematical, and ethical frameworks. Trust in these systems seems well justified. But what exactly is it justified by, and does this question have any practical relevance to our lives, or is it merely the domain of frivolous dorm-room discussion?
In this essay I am going to focus on the question of practical relevance. I will not ask you to take radical skepticism any more seriously than you already do. In fact, I will ask you not to take it seriously, but to take seriously the question of why...
This is a crosspost from my site: It's Good Enough -- A Party Game.
"One hundred and forty-three thousand, two hundred and twenty steps.
It was doable." -- "Worm, Interlude 29" by Wildbow
I had this idea after a friend and I talked about games and competitiveness. The objective of the game is to be the best at being average.
The game requires at least three players.
Each player presents an integer from 0 to 100.
All players must present their number at the same time.
The game lasts for at least three rounds. The number of rounds to be played is set before the game starts.
The player who presents a number that is the median of all numbers presented is awarded a point. If there is an even number of players,...
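The scoring rule above can be sketched in a few lines. This is a hypothetical implementation, not the game's official code; in particular, how the even-player case is resolved is cut off in the text, so this sketch simply awards no point when the median falls between two submissions.

```python
import statistics

def score_round(submissions):
    """Award a point to each player whose number equals the median of
    all numbers presented this round. `submissions` maps player -> int.
    With an even number of players the median may fall between two
    submitted numbers, in which case no one scores under this sketch."""
    med = statistics.median(submissions.values())
    return [player for player, n in submissions.items() if n == med]
```

For example, with submissions of 10, 50, and 90, the player who presented 50 scores the point.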
Language models are a class of AI trained on text, usually to predict the next word or a word that has been masked out. They have the ability to generate novel prose or code based on an initial prompt, which gives rise to a kind of natural language programming called prompt engineering. The most popular architecture for very large language models is called a transformer, which follows consistent scaling laws with respect to model size: a larger model, given proportionally more training compute, will produce results that are better by a predictable amount (when measured by 'perplexity', or how surprised the model is by a held-out set of human-generated text).
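The perplexity metric mentioned above has a simple definition: it is the exponential of the average negative log-probability the model assigns to each token of a held-out text. A quick sketch (the function name is mine):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-probability assigned
    to each token of a test text. Lower means the model is less
    'surprised' by the text."""
    n = len(token_log_probs)
    avg_nll = -sum(token_log_probs) / n
    return math.exp(avg_nll)
```

A model that assigns probability 0.25 to every token of a text has a perplexity of 4, as if it were choosing uniformly among four equally likely tokens at each step.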
Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"
This is the first of four posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number. It's the single number I'd probably most value having a good estimate for: the year by which transformative AI will be developed.
By "transformative AI," I mean "AI powerful enough to bring us into a new, qualitatively different future." The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.
This piece is going to focus on exploring a particular kind of AI I believe could be transformative: AI systems that can essentially automate all of...