Ruby

LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Comments (sorted by newest)
Open Thread Autumn 2025
Ruby · 11h (karma 3, agreement 0)

and would appreciate guidance about norms re: how many questions are appropriate per comment, whether there are better venues for specific sorts of questions, etc.

 

If you still have such questions, feel free to ask and I, one of the other site mods, or someone else will answer! Though just seeing what other people do is a pretty good guide.

Open Thread Autumn 2025
Ruby · 11h (karma 3, agreement 0)

Please don't throw your mind away is a relevant and good post. Not entirely about decision-making, but I also recently wrote A Slow Guide to Confronting Doom and the accompanying A collection of approaches to confronting doom, and my thoughts on them.

My guiding question is "how do I want to have lived in the final years where humans could shape the universe?" The answer is far from zero enjoying life, but probably more like a 1:3 ratio.

Getting concrete on the practical level: my financial planning horizon is ten years. No 401k, and I'm happy with loans with much longer payback times; e.g., my solar panel loan is 25 years, which is great, and my home equity line of credit is interest-only payments for ten years. Mostly I know I can use money now. Still, I care about maintaining some capital, because possibly human labor becomes worth nothing and capital is all that matters. I'm not happy about having a lot of wealth in my house, because I really don't know what happens to house prices in the next decade. In contrast, I expect stocks to rise in expectation so long as society is vaguely functional.
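
As a rough illustration of why long payback times pair well with a ten-year horizon (the 6% rate below is an assumed placeholder for the arithmetic, not the actual loan terms): on a 25-year amortizing loan, only about a quarter of the principal gets paid down in the first ten years.

```typescript
// Sketch: fraction of a 25-year amortizing loan's principal repaid within a
// 10-year planning horizon. The 6% annual rate is an assumed placeholder.
function principalRepaidFraction(annualRate: number, termYears: number, horizonYears: number): number {
  const r = annualRate / 12;                        // monthly interest rate
  const n = termYears * 12;                         // total number of payments
  const k = horizonYears * 12;                      // payments made within the horizon
  const payment = r / (1 - Math.pow(1 + r, -n));    // monthly payment per $1 borrowed
  const balance = Math.pow(1 + r, k) - payment * (Math.pow(1 + r, k) - 1) / r; // remaining principal
  return 1 - balance;                               // share of principal repaid by the horizon
}

console.log(principalRepaidFraction(0.06, 25, 10)); // ≈ 0.24, i.e. ~24% of the principal in 10 years
```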

The skills and physics of high-performance driving, Pt. 1
Ruby · 17h (karma 3, agreement 0)
  1. Some racetracks have more "run-off": area on the sides of the track where you can go off without hitting anything. For example, Thunderhill is essentially in a large field; when you go off, you're just in a field. Not zero risk (you could still roll), but I've gone off dozens of times and it's been okay.
    1. Other racetracks are much less forgiving, e.g. Sonoma Raceway has a lot of concrete.
  2. Risk of critical injury is pretty low with modern safety equipment. You can damage your car and it can cost you thousands of dollars, but serious injuries and deaths are rare.
  3. You can go slightly over the limit, have your tires lose traction for a bit, slide a little, notice this, and correct. This is an essential skill.
    1. When aiming to get faster, the advice is to increase speed very gradually. E.g. if the first time through the turn is at 80mph, the next time try 81mph and see what happens; don't jump up to 85mph. That way, when you do go over the limit, you only go over by a bit. (Rough numbers on why this matters below.)
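
To put rough numbers on why the small increments matter (an illustration, not from the post): the lateral grip needed to hold a corner of a given radius scales with the square of speed,

$$a_{\text{lateral}} = \frac{v^2}{r},$$

so going from 80mph to 81mph asks about $(81/80)^2 - 1 \approx 2.5\%$ more of the tires, while jumping straight to 85mph asks about $(85/80)^2 - 1 \approx 13\%$ more, a much bigger step past a limit you only know approximately.
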
The skills and physics of high-performance driving, Pt. 1
Ruby · 17h (karma 2, agreement 0)

Yes. For one thing, the driver is also mass that is being accelerated (in any of the four directions) whenever the car accelerates, and the body experiences those forces. E.g. when you brake hard, your head wants to keep going and is restrained by your neck. My harness doesn't lock me 100% in place, so when I go through turns I'm typically bracing with my left leg. And if you don't have power steering, that can be a lot of effort. (Some leagues have it, some don't. F1 yes, F2 no. NASCAR no.) Try go-karting for the experience of no power steering.
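
For a ballpark sense of the loads involved (illustrative numbers: assume roughly 1.5 g of braking and a head-plus-helmet mass of about 5 kg):

$$F = ma \approx 5\,\text{kg} \times 1.5 \times 9.8\,\text{m/s}^2 \approx 74\,\text{N},$$

i.e. the neck is holding back the equivalent of roughly 7.5 kg every time you brake hard, corner after corner, lap after lap.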

One particularly violent experience was riding along in a racer's high-powered car. It felt like being shaken in a can for 20 minutes; I was sore for days.

It all does depend on the speeds and level of motorsport. F1 demands extreme fitness; entry-level amateur club racing a lot less. But it is definitely physical.

LessWrong Feed [new, now in beta]
Ruby · 1d (karma 6, agreement 0)
  • how do I just view recent posts and comments, chronological? ah there's probably something in the sliders. ah, yes. but I just want to view recent temporarily. and really, I want to see all recent things

 

You're not the only person wanting that, so maybe we'll find a way.

  • is there a way to shut off randomness? hmm, following? or just concentrate all the randomness on a single thing?

Following is the intended way to avoid randomness. I kind of see it as the "the recommendation algorithm wasn't good enough" escape hatch for people.

I think about it, I have 60% on this being what you did, but it's not obviously the case.

That is approximately how it works. The source code is open source, if you're curious enough. However, I've been working on a different algorithm that, at least currently, is much more transparent and doesn't randomly sample; instead it does a universal scoring of items. See my recent Quick Take.
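
For the curious, here's a minimal sketch of that contrast. The item fields, weights, and function names are hypothetical, not the actual LessWrong code: the current feed repeatedly samples items at random in proportion to a score, while the newer approach scores every candidate the same way and just sorts, so the ranking is deterministic and easy to inspect.

```typescript
// Hypothetical sketch of the two feed strategies; not the real implementation.
interface FeedItem {
  id: string;
  karma: number;
  ageHours: number;
  byFollowedAuthor: boolean;
}

// Illustrative scoring rule: recency-decayed karma, boosted for followed authors.
function score(item: FeedItem): number {
  const recency = 1 / (1 + item.ageHours / 24);
  const followBoost = item.byFollowedAuthor ? 2 : 1;
  return item.karma * recency * followBoost;
}

// Sampling-style feed: repeatedly pick items at random, weighted by score.
function sampledFeed(items: FeedItem[], count: number): FeedItem[] {
  const pool = [...items];
  const picked: FeedItem[] = [];
  while (picked.length < count && pool.length > 0) {
    const weights = pool.map(score);
    const total = weights.reduce((sum, w) => sum + w, 0);
    let roll = Math.random() * total;
    let idx = 0;
    while (idx < pool.length - 1 && roll >= weights[idx]) {
      roll -= weights[idx];
      idx++;
    }
    picked.push(pool.splice(idx, 1)[0]);
  }
  return picked;
}

// "Universal scoring" feed: score everything once, sort, take the top N.
// Deterministic, so it's easy to see why any given item ranked where it did.
function scoredFeed(items: FeedItem[], count: number): FeedItem[] {
  return [...items].sort((a, b) => score(b) - score(a)).slice(0, count);
}
```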
 

  • I'd like to read this post. click on it. uh, huh. this is a different UI. It's got gaps on the sides and a back button. something sets me on edge about that. It doesn't feel like I'm on the page. low weight on this one.

That will be changing soon, making it the same UI as normal posts.

 

  • click around to find the feedback button. okay, found it. now I start writing feedback, get the settings out of the way so I can scroll down to the thing I want to give feedback about - oops, feedback box is gone.

Hmm, I'll have to investigate that.

  • something about the multi-post thread ui is throwing me off. I'll learn to read it, but I'm initially having trouble parsing it.

What do you mean by multi-post thread?

Human Values ≠ Goodness
Ruby · 3d (karma 2, agreement 0)

Stepping back for a moment, I just want to clarify the goal of this comment exchange. In drafting a reply, I realized I was mixing between:

1) determining whether the decision to curate was good or not
2) determining what is true (according to my own beliefs)
3) determining whether the post is "good" or not.

Of course 1) impacts 2) impacts 3). 

I think I came in with the LessWrong model you describe, and the piece didn't update me so much as seem like a straightforward explainer of a simple point ("what people say is Good isn't the same as your Values"). I think you have a point that the post does something like set up one side of the dichotomy as S1 boxes, though it's salient to me that it also has:

We don’t really know what human values are, or what shape they are, or even whether they’re A Thing at all. We don’t have trivial introspective access to our own values; sometimes we think we value a thing a lot, but realize in hindsight that we value it only a little.

That feels appropriately non-committal. 

I agree there's complexity around egregores/memeplexes and how it gets carved up.

It's definitely not the bar for curation that everything in the post seems correct to the curator. I do think it should leave people better off than if they'd not read it. After this discussion, I'm less sure about this post. "Values are just the S1 boxes" seems so ridiculous to me that I wouldn't expect anyone to think it, but I don't know. The egregore stuff feels much higher resolution than what this post is going for, though I think there's interesting stuff to figure out there. I kind of like this post for having sparked that conversation, though perhaps it is a rehash that is tiresome to others.

Human Values ≠ Goodness
Ruby · 3d (karma 4, agreement 0)

I think this is a reasonable question. (1) It prompted an interesting thought for me, in terms of "people often feel the need to be Good, which is often or usually a social drive more than a moral one"; (2) sometimes I like a new, clear explainer on old topics.

How do you read Less Wrong?
Ruby · 3d (karma 6, agreement 0)

Oh, I just recalled the existence of GreaterWrong, an alternative frontend for LessWrong, that includes a recent comments section that might give you what you want: https://www.greaterwrong.com/recentcomments

Human Values ≠ Goodness
Ruby · 4d (karma 6, agreement -2)

You write:

the ethical stance suggested in the post is approximately the same as what many newage gurus will tell you "Stop being in your head! Listen to your heart! Follow the sense of yumminess! Free yourself from the expectations of your parents, school, friends and society!".

The post writes:

I’d like to say here “screw memetic egregores, follow the actual values of actual humans”, but then many people will be complete fucking idiots about it.

....

And so Albert throws out all that Goodness crap, and just queries his own feelings of yumminess in-the-moment when making decisions.

This goes badly in a few different ways...

Yes, I can see a crude resemblance to that kind of advice, but there's a whole big section about not interpreting it in a dumb way. I'm also confused about what the complaint is: that there could be a hypothetical audience, different from the actual audience, who would take this the wrong way and do dumb things, and therefore it's a bad post even if it makes a correct point? Granted, it seems you think the point is correct.

I am more interested in the question of whether the post's model is correct; it seems like we maybe disagree there based on your comment. I'm not convinced. (Among other things, I might say that egregores can be composed of sub-egregores and that's fine; it doesn't mean there isn't one here.) A bit of it feels like details, and the core point is something like: your actual values (which are quite hard to determine!) are not the same thing as the societal sense of "Good". This doesn't preclude interaction between the two, or them shaping each other, and I don't think that undermines the picture here.

Human Values ≠ Goodness
Ruby · 4d (karma 4, agreement 2)

Ok, there's an argument I can see, of the form "unlike other domains, ethics/meta-ethics lacks any empirical feedback loop on beliefs [at least that we've found], and this means all such claims should be made more lightly than anything more empirical/factual". Given that, perhaps more hedging is warranted than "is correct".

Now, even before any of this discussion, I'd have been extremely hesitant to lock in my meta-ethical views to ASI. But day to day, I feel like I need some kind of ethical framework to operate on. That's where I'm not sure what to do other than figure out what makes sense to me, in the same way I do for other things.

I'd need to think longer/be convinced to switch to a more modest epistemology specifically for this domain, if that's kind of the suggestion of "not rolling your own". That feels like a big topic though.

But yeah, I can take away "be less confident" here.

Sequences

The Motorsport Sequence
LW Team Updates & Announcements
Novum Organum

Posts

11 karma · Ruby's Quick Takes · 7y · 130 comments
19 karma · The skills and physics of high-performance driving, Pt. 2 · 16h · 0 comments
55 karma · The skills and physics of high-performance driving, Pt. 1 · 1d · 6 comments
14 karma · Same cognitive paints, exceedingly different mental pictures · 2d · 0 comments
20 karma · Thoughts are surprisingly detailed and remarkably autonomous · 3d · 1 comment
61 karma · What's so hard about...? A question worth asking · 4d · 3 comments
58 karma · At odds with the unavoidable meta-message · 1mo · 22 comments
57 karma · The Sixteen Kinds of Intimacy · 5mo · 2 comments
52 karma · LessWrong Feed [new, now in beta] · 6mo · 83 comments
48 karma · A collection of approaches to confronting doom, and my thoughts on them · 7mo · 18 comments
84 karma · A Slow Guide to Confronting Doom · 7mo · 20 comments

Wikitag Contributions

Eliezer's Lost Alignment Articles / The Arbital Sequence · 9 months ago · (+10050)
Tag CTA Popup · 9 months ago · (+4/-231)
LW Team Announcements · 9 months ago
GreaterWrong Meta · 9 months ago
Intellectual Progress via LessWrong · 9 months ago · (-401)
Wiki/Tagging · 9 months ago
Moderation (topic) · 9 months ago
Site Meta · 9 months ago
What's a Wikitag? · 9 months ago
What's a Wikitag? · 9 months ago