Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Comments
Please, Don't Roll Your Own Metaethics
Raemon · 3h

To preempt a possible misunderstanding, I don't mean "don't try to think up new metaethical ideas", but instead "don't be so confident in your ideas that you'd be willing to deploy them in a highly consequential way, or build highly consequential systems that depend on them in a crucial way".

I think I had missed this, but it doesn't resolve the confusion in my #2 note. (Like, it still seems like something is weird about saying "solve metaphilosophy such that everyone can agree it is correct" is more worth considering than "solve metaethics such that everyone can agree it is correct". I can totally buy that they're qualitatively different, and maybe have some guesses for why you think that. But I don't think the post spells out why, and it doesn't seem that obvious to me.)

Please, Don't Roll Your Own Metaethics
Raemon · 7h

Hmm, I like #1. 

#2 feels like it's injecting some frame that's a bit weird to inject here (don't roll your own metaethics... but rolling your own metaphilosophy is okay?)

But also, I'm suddenly confused about who this post is trying to warn. Is it more like labs, or more like EA-ish people doing a wider variety of meta-work?

Please, Don't Roll Your Own Metaethics
Raemon · 8h

What are you supposed to do other than roll your own metaethics?

Legible vs. Illegible AI Safety Problems
Raemon · 1d

I agree there are a lot of bad signs, but I think it is kind of the case that their current releases just aren't that dangerous, and if I never thought they were going to become more dangerous, I don't know that I'd be that worked up about the current thing.

A Simple Sing-along Solstice
Raemon · 1d

Thanks for doing this!

I'm not sure if there'd be enough data for it to be interesting, but a thought I had looking over the "most popular songs" is that you might want to somehow count "across cities" as opposed to "multiple times within a city". (NYC, I think, has the most stable repertoire, and I'm guessing some songs are showing up at the top due to consistency, which maybe is correct, but I'd at least be interested in the other numbers.)

A Simple Sing-along Solstice
Raemon · 1d

I'll leave a Generic Smolstice I made here too, I guess. (This was specifically designed to be done outside, has some random opinionated aesthetic takes, and wasn't trying to feature the most popular highlights.)

https://docs.google.com/document/d/15D8SP-XslTv7OhwZPoxkbigscqMtC0B_jeo1-2lULFE/edit?tab=t.0

The Ritual Arc

Prelude: Bold Orion

I. The Past

  1. Meditation on the Past
  2. Circle Singing
  3. Bring the Light
  4. Bitter Wind Blown
  5. Move the World
  6. Chasing Patterns
  7. Stardust

II. The Present

  1. Meditation on the Present
  2. When I Die
  3. Time Wrote the Rocks
  4. Do You Realize
  5. Story: Into the Darkness
  6. Blowing in the Wind
  7. Hymn to the Breaking Strain
  8. You Are Not Alone

III. The Darkness

  1. Meditation on the Winter
  2. Silent Reflection
  3. The Gathering Hymn
  4. The Darkness and the Light

IV. The Future

  1. Brighter Than Today
  2. Endless Light
  3. What a Wonderful World
  4. Story: This is a Dawn
  5. Here Comes the Sun
  6. Meditation on the Future
  7. Five Thousand Years

 

Epitaph: The Road to Wisdom

Berkeley Solstice Weekend
Raemon · 1d

Can you DM me the email you used to pay?

Berkeley Solstice Weekend
Raemon · 2d

The deal is, there's another event here on Sunday (Vision Weekend, by Foresight Institute) that has agreed that people who purchase rooms at Lighthaven can hang out (although if you want to go to all the talks, you should get a ticket for that). So I expect there to be a fair number of Solsticers around, but not organized in an official capacity.

Problems I've Tried to Legibilize
Raemon · 2d*

Mostly this has only been a sidequest I periodically mull over in the background. (I expect to someday focus more explicitly on it, although it might be more in the form of making sure someone else is tackling the problem intelligently).

But I did previously pose this as a kind of open question re What are important UI-shaped problems that Lightcone could tackle? and JargonBot Beta Test (this notably didn't really work; I have hopes of trying again with a different tack). Thane Ruthenis replied with some ideas that were in this space (about making it easier to move between representations-of-a-problem).

https://www.lesswrong.com/posts/t46PYSvHHtJLxmrxn/what-are-important-ui-shaped-problems-that-lightcone-could

I think of many Wentworth posts as relevant background:

  • Why Not Just... Build Weak AI Tools For AI Alignment Research?
  • Why Not Just Outsource Alignment Research To An AI?
  • Interfaces as a Scarce Resource

My personal work so far has been building a mix of exobrain tools that are more, like, for rapid prototyping of complex prompts in general. (This has mostly been a side project I'm not primarily focused on atm)

Problems I've Tried to Legibilize
Raemon · 2d

FYI, normally when I'm thinking about this, it's through the lens "how do we help the researchers working on illegible problems", more so than "how do we communicate illegibleness?".

This post happened to ask the question "can AI advisers help with the latter" so I was replying about that, but, for completeness, normally when I think about this problem I resolve it as "what narrow capabilities can we build that are helpful 'to the workflow' of people solving illegible problems, that aren't particularly bad from a capabilities standpoint".

Sequences
Step by Step Metacognition
Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Posts
51 · One Shot Singalonging is an attitude, not a skill or a song-difficulty-level* (4d, 10 comments)
42 · Solstice Season 2025: Ritual Roundup & Megameetups (6d, 6 comments)
42 · Being "Usefully Concrete" (8d, 4 comments)
59 · "What's hard about this? What can I do about that?" (10d, 0 comments)
129 · Re-rolling environment (11d, 2 comments)
50 · Mottes and Baileys in AI discourse (16d, 9 comments)
20 · Early stage goal-directedness (23d, 8 comments)
77 · "Intelligence" -> "Relentless, Creative Resourcefulness" (1mo, 28 comments)
155 · Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most "classic humans" in a few decades. (1mo, 19 comments)
56 · </rant> </uncharitable> </psychologizing> (1mo, 13 comments)
22 · Raemon's Shortform (8y, 710 comments)
Wikitag Contributions

AI Consciousness (3 months ago)
AI Auditing (3 months ago, +25)
AI Auditing (3 months ago)
Guide to the LessWrong Editor (7 months ago)
Guide to the LessWrong Editor (7 months ago)
Guide to the LessWrong Editor (7 months ago)
Guide to the LessWrong Editor (7 months ago, +317)
Sandbagging (AI) (8 months ago)
Sandbagging (AI) (8 months ago, +88)
AI "Agent" Scaffolds (8 months ago)