Book 5 of the Sequences Highlights

To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.

First Post: Taboo Your Words

Recent Discussion

For this month's open thread, we're experimenting with Inline Reacts as part of the bigger reacts experiment. In addition to being able to react to a whole comment, you can apply a react to a specific snippet from the comment. When you select text in a comment, you'll see a new react button off to the side. (Currently this is only designed to work well on desktop; if it goes well, we'll put more polish into getting it working on mobile.)

Right now this is enabled on a couple specific posts, and if it goes well we'll roll it out to more posts.


Meanwhile, the usual intro to Open Threads:

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the...

I went and created the AI Rights and Welfare tag.

"AI alignment" has the application, the agenda, less charitably the activism, right in the name. It is a lot like "Missiology" (the study of how to proselytize to "the savages") which had to evolve into "Anthropology" in order to get atheists and Jews to participate. In the same way, "AI Alignment" excludes e.g. people who are inclined to believe superintelligences will know better than us what is good, and who don't want to hamstring them. You can think we're well rid of these people. But you're still excluding people and thereby reducing the amount of thinking that will be applied to the problem.

"Artificial Intention research" instead emphasizes the space of possible intentions, the space of possible minds, and stresses how intentions that are not natural (constrained by...

dr_s (19m)
I'm not sure what someone who essentially thinks there is no problem can contribute to its solution. That said, I get the gist of the argument, and you do have a point IMO about stressing the two complementary aspects of a mind. Maybe "Artificial Volition"? "Intention" feels to me like it alliterates so much with "Intelligence" that it circles back from catchiness to being confusing.
Jay Bailey (3h)
""AI alignment" has the application, the agenda, less charitably the activism, right in the name." This seems like a feature, not a bug. "AI alignment" is not a neutral idea. We're not just researching how these models behave, or how minds might be built, out of pure scientific curiosity. It has a specific purpose in mind: to align AIs. Why would we not want this agenda to be part of the name?
Archimedes (3h)
"Artificial Intention" doesn't sound catchy at all to me, but that's just my opinion. Personally, I prefer to think of the "Alignment Problem" more generally rather than "AI Alignment". Regardless of who has the most power (humans, AI, cyborgs, aliens, etc.) and who has superior ethics, conflict arises when participants in a system are not all aligned.

I think that's better called simply a coordination or cooperation problem. "Alignment" has the unfortunate implication of coming off as one party wanting to forcefully change the others. With AI it's fine, because if you're creating a mind from scratch, it'd be the height of stupidity to create an enemy.

I made this clock, counting down the time left until we build AGI:

It uses the most famous Metaculus prediction on the topic, inspired by several recent dives in the expected date. Updates are automatic, so it reflects the constant fluctuations in collective opinion.

Currently, it’s sitting in 2028, i.e. the end of the next presidential term. The year of the LA Olympics. Not so far away.
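The core of such a clock is just date arithmetic. A rough sketch of the countdown logic, assuming a hardcoded stand-in date where the real site pulls the live Metaculus community forecast automatically (the prediction date below is illustrative only):

```python
from datetime import datetime, timezone

# Sketch of the countdown arithmetic behind a clock like this. The real
# site fetches the live Metaculus forecast; a hardcoded date stands in
# for that prediction here (illustrative only).

def time_remaining(predicted, now):
    """Return the gap between now and the predicted date as 'Xy Yd'."""
    delta_days = (predicted - now).days
    years, days = divmod(delta_days, 365)
    return f"{years}y {days}d"

# Stand-in prediction; the actual forecast fluctuates constantly.
prediction = datetime(2028, 7, 14, tzinfo=timezone.utc)
print(time_remaining(prediction, datetime(2023, 7, 14, tzinfo=timezone.utc)))
```

A production version would refresh the prediction on an interval and handle leap days more carefully than the flat 365-day divisor above.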

There were a few motivations behind this project:

  1. Civilizational preparedness. Many people are working on making sure this transition is a good one. Many more probably should be. I don’t want to be alarmist, but the less abstract we can make the question, the better. In this regard, it’s similar to the Doomsday Clock.
  2. Personal logistics. I frequently find myself making decisions about long-term projects
...

Hey! Sorry the site was down so long; I accidentally let the payment lapse. It's back up and should stay that way, if you'd still like to use it. I also added a page where you can toggle between the predictions.

But no worries if you're happy with your own. :)

EDIT: I just realized you only posted this 2 days ago! I didn't see your version before fixing the site; I had actually just logged in to update anyone who cared. :P

River Lewis (20m)
I added the ability to switch to the strong question! You can do it here [https://aicountdown.com/settings].
This is a linkpost for https://dynomight.net/aliens/

Some suggest there might be alien aircraft on Earth now. The argument goes something like this:

(1) A priori, there’s no reason there shouldn’t be alien aircraft. Earth is 4.54 billion years old, but the universe is 13.7 billion years old, and within a billion light years of Earth there are something like 5 × 10¹⁴ stars. Most of those stars have planets, and if an alien civilization arose anywhere and built a von Neumann probe, those probes would spread everywhere.

(2) We have tons of observations that would be more likely if there were alien aircraft around than if there weren’t. These include:

  • Vast numbers of anecdotal reports from pilots.
  • Videos that appear to show objects with flight characteristics far beyond known human capabilities.
  • Senators—with access to classified information—raising concerns about
...

Anyone who is confident no UFOs are truly anomalous, please feel free to extend me odds for a bet here: https://www.lesswrong.com/posts/t5W87hQF5gKyTofQB/ufo-betting-put-up-or-shut-up

I have already paid out to two bettors so far, and would like some more.

jam_brand (1h)
Also perhaps of interest might be this discussion [https://www.reddit.com/r/slatestarcodex/comments/mbww69/a_reasoned_case_for_bigfoot/] from the SSC subreddit a while back, where someone detailed their pro-Bigfoot case.
M. Y. Zuo (2h)
If by 'very unlikely' you mean the likelihood is <1%, you can get nearly free money by betting against it: https://www.lesswrong.com/posts/t5W87hQF5gKyTofQB/ufo-betting-put-up-or-shut-up I think the user is still willing to send out a few thousand dollars.
Gesild Muka (5h)
‘Dimension hopping’ or ‘dimension manipulation’ could be a solution to the Fermi paradox. The universe could be full of intelligent life that remains silent and (mostly) invisible behind advanced spatial technology. (The second term refers to more limited hypothetical dimension technology, such as creating pocket dimensions, rather than accessing other universes.)

“There is a single light of science. To brighten it anywhere is to brighten it everywhere.” – Isaac Asimov

You cannot stand what I’ve become
You much prefer the gentleman I was before
I was so easy to defeat, I was so easy to control
I didn’t even know there was a war

– Leonard Cohen, There Is a War

“Pick a side, we’re at war.”

– Stephen Colbert, The Colbert Report

Recently, both Tyler Cowen in response to the letter establishing consensus on the presence of AI extinction risk, and Marc Andreessen on the topic of the wide range of AI dangers and upsides, have come out with posts whose arguments seem bizarrely poor.

These are both excellent, highly intelligent thinkers. Both clearly want good things to happen to humanity and the world. I am...

Gerald Monroe (12h)
That's never happened historically, and aging treatments aren't immortality; they just mean a life expectancy of roughly 10,000 years. Do you know who is richer than any CEO you can name? Medicare. I bet they would like to stop paying all these medical bills, which would be the case if treated patients had the approximate morbidity rate of young adults.

You also need such treatments to be given at large scale to find and correct the edge cases. A rejuvenation treatment "beta tester" is exactly what it sounds like: you will have a higher risk of death but get earlier access. We're going to need a lot of beta testers.

The rational, data-driven belief is that aging is treatable, and that ASI systems with the cognitive capacity to take into account more variables than humans are mentally capable of could be built to systematically attack the problem. That doesn't mean it will help anyone alive today; there are no guarantees. And because automated systems found whatever treatments are possible, automated systems can deliver the same treatments at low cost. If you don't think this is a reasonable conclusion, perhaps you could go into your reasoning; arguments like the ones you made above are unconvincing.

While it is true that certain esoteric treatments for aging, like young blood transfusions, are inherently limited in who can benefit, they don't even work that well, and de-aged hematopoietic stem cells can be generated in automated laboratories and would be a real treatment everyone can benefit from. The wealthy are not powerful enough to "hoard" treatments, because Medicare et al. represent the government, which has a monopoly on violence and incentives not to allow such hoarding.
dr_s (9h)
That's naive. If a private actor has an obedient ASI, they also have a monopoly on violence now. If labour has become superfluous, states have lost all incentive to care about the opinions of people.
Gerald Monroe (6h)
I think worlds with the tools to treat most causes of human death rank strictly higher than worlds without those tools, in the same way that a world with running water ranks above worlds without it. Even today not everyone benefits from running water. If you could go back in time, would you campaign against developing pipes and pumps because you believed only the rich would ever have running water? (Which was true for a period of time.)

Running water doesn't create the conditions to permanently disempower almost everyone; AGI does. What I'm talking about isn't a situation in which only the rich benefit at first but the tech then gets cheaper and trickles down. It's a permanent trap that destroys democracy and capitalism as we know them.


Consider two claims:

  • Any system can be modeled as maximizing some utility function, therefore utility maximization is not a very useful model
  • Corrigibility is possible, but utility maximization is incompatible with corrigibility, therefore we need some non-utility-maximizer kind of agent to achieve corrigibility

These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
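The first claim's "any system can be modeled as a maximizer" move can be made concrete with a toy construction (my sketch, not from the post): assign utility 1 to whatever the system actually does and 0 to everything else, and the system trivially maximizes that function.

```python
# Toy construction: any deterministic policy can be "rationalized" as
# utility maximization by a trivially built utility function.

def rationalize(policy):
    """Build a utility function that the given policy maximizes."""
    # Utility 1 for whatever the policy does, 0 otherwise.
    return lambda state, action: 1.0 if action == policy(state) else 0.0

def argmax_policy(utility, actions):
    """Recover a policy by maximizing the constructed utility."""
    return lambda state: max(actions, key=lambda a: utility(state, a))

# Even a "corrigible-looking" rule fits the maximizer frame this way.
actions = ["continue_task", "shut_down"]
policy = lambda state: "shut_down" if state == "button_pressed" else "continue_task"

recovered = argmax_policy(rationalize(policy), actions)
assert all(recovered(s) == policy(s) for s in ["button_pressed", "normal"])
```

This is exactly why the constructed utility function carries no predictive content — which is the tension between the two claims above.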

I exp...

Sometimes I have an internal desire to do something different than what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute odds proportional to the strength of its resolve, and then roll the die.

In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and can sort of g...
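The D20 procedure can be sketched roughly like this — the options and face counts below are illustrative, not the author's actual weights:

```python
import random

# Rough sketch of the D20 mediation described above: give each side of
# the dispute a number of die faces proportional to its resolve, roll,
# and let the die decide. (Options and weights here are illustrative.)

def roll_for(options, rng):
    """options: list of (name, faces) pairs; faces must total 20."""
    assert sum(faces for _, faces in options) == 20
    roll = rng.randint(1, 20)
    for name, faces in options:
        if roll <= faces:
            return name
        roll -= faces

rng = random.Random()
# "Read" feels like the better choice (15 faces), but the urge to play
# still gets a real 25% shot (5 faces).
print(roll_for([("read", 15), ("play", 5)], rng))
```

The key property is the one the post describes: the weaker desire is never silenced outright, it just wins proportionally less often.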

This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more -- I only crosspost about half my content to other platforms.

If you’re going into surgery, you want the youngest operating surgeon available.

This is a slight exaggeration – you don’t want a doctor in their first year out of medical school.[1] After that, it’s less clear. One review found thirty-two studies indicating that the older a doctor was, the worse their medical outcomes; that review only found one study indicating that all outcomes got better with increasing age.[2] Other analyses suggest that middle-aged doctors might do better than younger doctors (though the effect is not statistically significant)[3], but older doctors are still clearly worse than middle-aged doctors.[4]

It’s not like doctors...

How in-depth have you looked at the studies about declining performance in doctors with age? An obvious alternative hypothesis is that doctors gain skill as they age, and therefore tend to take on higher-risk patients and procedures with worse outcomes. I am not saying that's what's going on here - I'd just like to know if this is something you've looked into.

romeostevensit (3h)
I have found a lot of online summaries of deliberate practice frustratingly vague, so I bought a well-reviewed, out-of-print manual on deliberate practice in music called The Practiceopedia. The chapter headings give some idea of the level of resolution being aimed for. I might do a book review at some point.

Chapter guide:

  • Beginners: curing your addiction to the start of your piece
  • Blinkers: shutting out the things you shouldn't be working on
  • Boot camp: where you need to send passages that won't behave
  • Breakthroughs diary: keeping track of your progress
  • Bridging: smoothing the bumps between sections
  • Bug spotting: because you can't fix what you don't know about
  • Campaigns: connecting your daily practice to the big picture
  • Cementing: locking in the version you want to keep
  • Chaining: getting to full speed one segment at a time
  • Clearing obstacles: finding what causes tricky bits to be tricky
  • Clock Watchers: curing the unhealthy obsession with time
  • Closure: knowing when you can safely stop practicing something
  • Color coding: a whole new dimension to marking your score
  • Coral reef mistakes: detecting invisible trouble spots
  • Cosmetics: minimizing the impact of weak capacities on concert day
  • Countdown charts: factoring your deadlines into your practice
  • Designer scales: choosing technical work to support your pieces
  • Details trawl: ensuring you know what's really in the score
  • Dress rehearsals: setting up your own concert simulator
  • Engaging autopilot: the dangers of practicing without thinking
  • Exaggerating: overstating key ideas to embed them
  • Excuses and ruses: why you'll never really fool your teacher if you haven't practiced
  • Experimenting: testing different interpretation options
  • Fire drills: training to cope gracefully with onstage mistakes
  • Fitness training: behind-the-scenes practice to help all your pieces
  • Fresh photocopies: creating your own custom scores tailored for practicing
  • Horizontal versus vertical: knowing when to change your practice
  • Di