I've just pushed an update to the Reacts Palette. I aimed to a) remove some reacts that either weren't getting used, or seemed to be used confusingly, b) add some reacts that seem missing, c) reorganize them so they were a bit easier to parse.
And the biggest change is d), which is to mark how likely a claim is via reacting. I'm imagining this primarily used via inline-reacting. If a lot of people end up using it, it might make sense to build a more specialized system for this, but it seemed cheap enough to add via Reacts for the immediate future.
It looks like this...
Fwiw I still don't think it makes sense to call that a nitpick. Seems like a good thing to point out. (I agree it's not, like, a knockdown argument against the whole thing. But I think of nitpicks as things that aren't relevant to the central point of the post)
A thing that feels somewhat relevant here is the Dark Forest Theory of AI Mass Movements. New people keep showing up, seeing a Mass Movement Shaped Hole, and being like "Are y'all blind? Why are you not shouting from the rooftops to shut down AI everywhere and get everyone scared?"
And the answer is "well, I do think maybe LWers are biased against mainstream politics in some counterproductive ways, but there are a lot of genuine reasons to be wary of mass movements. They are dumb, hard to aim at exactly the right things, and we probably need some very...
Curated.
I've heard people vaguely wishing for this sort of product for a few years, and I feel pretty excited looking at the potential here.
There's a lot of room for improvement, some of which is UI, and some of which depends on how individual predictions and prediction-market-communities turn out to evolve. But I think the current product is above the bar of being worth a look and a signal boost. I hadn't consciously thought through the lens of "[Prediction] platforms' UX is orientated towards forecasters, not information consumers", which seems like an obvious font of potential innovation.
The thing that has me pretty confused about your confidence here is not just that there's something weird going on here, but that you expect it to be confirmed within 5 years.
(Update: I just merged a PR that should fix the issue, i.e. make it clear whose comments got deleted. Should be live in about 7 minutes)
Inline reacts are now live on all new posts!
(Authors can switch old posts to use reacts if they'd like)
Okay this should be working now. @Vanessa Kosoy , @Zack_M_Davis , @DanielFilan , @Viliam_Bur, checking in on how inline reacts are working for you now?
Curated. I liked this as a post in a similar-ish genre to The Gift We Give To Tomorrow, where a somewhat poetic post takes a bunch of existing philosophical concepts and puts them into a parable/narrative that gives them more emotional oomph.
What browser/OS? (It's definitely supposed to be able to highlight a subset of a paragraph, which currently seems to work on Chrome 114 on MacOS)
Thanks!
Could you format these where each selection is just a single bullet, with only the text you highlighted? I'm finding it somewhat hard to figure out which exact quotes you mean. (It's also much easier if you have the entire quote. It looks like you're including "..." to indicate "stuff in between", but, literally just copy-pasting the whole quote is more helpful)
One background fact of what's supposed to happen: You can only inline react on unique strings within a comment. What's supposed to happen is that if you've highlighted a non-unique stri...
It occurs to me to be curious if @Zvi has thoughts on how to put stuff in terms Tyler Cowen would understand. (I'm not sure what Cowen wants. I'm personally kinda skeptical of people needing things in special formats rather than just generally going off on incredulity. But, it occurs to me Zvi's recent twitter poll of steps along-the-way to AI doom could be converted into, like, a guesstimate model)
Gotcha. Well that's no good. Can you give me some examples of selections that work and selections that don't?
To clarify, does this prevent you from in-line reacting or just remove your selection? (ie can you click the button and see the react palette, and what text appears there when you do?)
For the case of David Chalmers, I think that's explicitly what Robby was going for in this post: https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin
Okay, I may turn this into a top-level post, but more thoughts here for now?
I feel a lot of latent grief and frustration and also resignation about a lot of tradeoffs presented in the story. I have some sense of where I'll end up when I'm done processing all of it, but alas, I can't just skip to the "done" part.
...
I've hardened myself into the sort of person who is willing to turn away people who need help. In the past, I've helped people and been burned by it badly enough that it's clear I need to defend my own personal boundaries and ensure it does...
Yeah, something like this seems obviously good.
Meanwhile I just, uh, made my comment smaller so that it was a less pronounced problem for the immediate showcase comment. :P
I have just shipped our first draft of Inline Reacts for comments.
You can mouse over a piece of text on a comment, and a little Add React button will appear off to the right.
If you click on it, you'll see the React palette, and you can then apply the react to that particular string of text. Once you've reacted, the reacted-snippet-of-text will appear with a dotted underline while you're moused over the comment, and its corresponding react-icon at the bottom of the comment will also show a dotted outline:
When you hoverover a react, it shows the inline-reac...
To clarify somewhat, my confusion was of my own internal moral orienting. This parable hints at a bunch of tradeoffs that maybe correspond to something like "moral developmental stages" along a particular axis, and I'm palpably only partway through the process and still feel confused about it.
I plan to write up a response post that goes into more detail.
meta note on tagging:
This post seemed to be on a topic that... surely there should be a commonly used LW concept for, but I couldn't think of it. I tagged it "agent foundations" but feel like there should be something more specific.
The stage of moral grieving I'm personally at is more the systems stage, and I'm still feeling a bit lost and confused about it. I felt like I actually learned a thing from the reminder 'oh, we can still just surreptitiously forgive the sinner via individual discretion despite needing to build the system fairly rigidly.' Also, I did recognize the reference to Speaker for the Dead, and noticed the combination of a 'new satisfying moral click' alongside a memory of when a simpler application of the Orson Scott Card quote was very satisfying.
...And they turn away and go back to their work—all except for one, who brushes past the grasshopper and whispers “Meet me outside at dusk and I’ll bring you food. We can preserve the law and still forgive the deviation.”
Did I hallucinate an earlier edit of this where this line comes later in the story (after the "And therefore, to reduce risks of centralization, and to limit our own power, we can't give you any food" line)? I found it more meaningful there (maybe just because of where I happen to be in my own particular journey of moral sophistication).
...The ants accept. The grasshopper’s reserves of energy, cached across the surface of the planet, are harvested fractionally faster than they would have been without its cooperation; its mind is stripped bare and each tiny computational shortcut recorded in case it can add incremental efficiency to the next generation of probes. The ants swarm across the stars, launching themselves in million-lightyear slingshots towards the next oasis, maintaining the relentless momentum of the frontier of their hegemony. The grasshopper’s mind is stored in that colony now,
Hmm, I don't think the person talking is expressing the will of the genome, they're expressing the will of a brain, which is pretty different.
Okay that's a fair/consistent position, but it feels misleading to summarize that to an average person as "eugenics is nonconsensual (and bad?)"
I think being born also doesn't involve consent, and "be born, reliably with slightly more genetic diseases or IQ or beauty or whatnot" doesn't seem obviously more in the child's interests than "be born, with less so." (I think there's some potential/likelihood of societal Gattaca-style red queen races, but those aren't about the consent of the child, they're about societal equilibria)
One of the take-home lessons from ChaosGPT and AutoGPT is that there'll likely end up being agential AIs, even if the original AI wasn't particularly agentic.
I think I fixed the top-of-post again, but, I thought I fixed it yesterday and I'm confused what happened. Whatever's going on here is much weirder than usual.
Aaron, it's me from the future, wondering:
a) do non-anonymous reacts like "unclear" feel scary in the way anonymous ones are?
b) if reacts were anonymous, but we had inline reacts (i.e. it told you which specific words or sentence was 'unclear'), how would that feel?
Okay, I get where you're coming from now. Will have to mull over whether I agree, but I at least no longer feel confused about what the disagreement is about.
Coolio, if you knew what you meant, sure.
(Some commentary on my experience reacting to this – if this had been Slack/Discord, I'd have wanted to emoji-react with a thumbs-up, intended to imply "cool, I get what you meant, and have seen this." Thumbs-up in this case doesn't necessarily mean "I agree" or any other specific thing; it's a sort of flexible symbol.)
I think it was sort of deliberate that we don't have a thumbs-up react here on LW yet, and I'm not sure how I feel about that. On one hand, we already have upvotes, and multiple other reacts that mean specific shades of "I agree", "I support", "I endorse", so maybe a generic thumbs-up is more confusing than helpful. But I did wish I had it here.
To be clear I super appreciate you stepping in and trying to see where people were coming from (I think ideally I'd have been doing a better job with that in the first place, but it was kinda hard to do so from inside the conversation)
I found Richard's explanation about what-was-up-with-Vlad's comment to be helpful.
Note, your original phrasing said "it counts if you have to opt into the karma types", and react types are optional.
The post isn't meant to be an explanation for why beliefs exist, it's meant to highlight that by default, people have a bundle of things-that-feel-like beliefs that all seem to be a similar shape. But, if yo...