Sorry, I meant that it was a mistake on our part. Was not user error! Check out the latest Open Thread to see the experiment there.
I'm trying to come up with a new icon for "not a crux" and also introduce a corresponding "is a crux" icon.
A crux is something upon which your beliefs hinge and could go one way or another. So how about these?
Do any of these seem like a good icon? Out of them, which do you most prefer?
Or I could go outright for a hinge:
(this post had "inline reacts" enabled by mistake, but we're not rolling that out broadly yet, so I switched it to regular reacts)
Welcome! Shortform (see the user menu) is a good way to get started, otherwise the AI open thread.
Curated. I like this post taking LessWrong back to its roots of trying to get us humans to reason better and believe true things. I think we need that now as much as we did in 2009, and I fear that my own beliefs have become ossified through identity, social commitment, etc. LessWrong now talks a lot about AI, and AI is increasingly a political topic (this post is a little political in a way I don't want to put front and center, but I'll curate it anyway), which makes it worth recalling the ways our minds get stuck and exploring ways to ask ourselves questions such that the answer could come back different.
My feeling is that this is optimistic. There are people who will fire off a lot of words without having read carefully, so the prior on good faith isn't that strong, and unfortunately, I don't think the downvote response is always clear enough that an author feels okay leaving a comment unresponded to. Especially if a comment is lengthy, not as many people will read and downvote it.
Actually, if you first +1 to apply the react yourself, you can then hover over it and downvote it. But it will only show up on hover.
Very valid concern. We had the same thing with "side comments". So far it seems okay. We'd definitely pay a lot of attention to this when designing.
My partner and I put some effort into benefiting from polygenic screening, but alas weren't able to make it work.
Quick details: we had IVF embryos created and screened for a monogenic disease, but (1) this didn't leave us with enough embryos to choose among, and (2) our embryos were created and stored by a UCSF clinic, and any screening would have required transferring to another clinic, which would have been time-consuming and expensive. Unfortunately two rounds of IVF implantation were unsuccessful, so notwithstanding the monogenic disease risk (unclear how ...
Curated. This post is a feat of scholarship, well-written, and practical on a high impact topic. Thank you for not just doing the research, but writing it up for everyone else's benefit too. As someone who's personally tried for polygenic screening for IQ, etc., I wish I'd had access to this guide last year.
First things first, I'm pro experiments so would be down to experiment with stuff in this area.
Beyond that, seems to depend on a couple of things:
LessWrong currently has about 2,000 logged-in users per day, and 20-100 new users each day (a wide range that includes recent peaks). If the number of viewers wouldn't change that much, perhaps +10%, it wouldn't be a big deal. On the other hand, if Rational An...
Yeah, the current name isn't perfect given the system also has two-axis voting. I might rename it.
Perhaps helping with the mixed stuff, we might prototype "inline reacts" where you select text and your react only applies to that.
Some reactions seem tonally unpleasant:
I agree. See my response to Razied that I think they might have value, and it's interesting to see how they get used in practice. I think there's a world where people abuse them to be mean, and a world where they're used more judiciously. The ability to downvote reacts should also help here, I hope.
I think a top level grouping like this could make sense:
I was imagining something like that too.
There should be a Bikeshed emoji, for comments like this one
:laugh-react:
Reacts are a big enough change that we wouldn't decide to keep them without a lot of testing and getting a sense of their effects.
I agree that some of these are a bit harsh or cold and can be used in a mean way. At first I was thinking to not include them, but I decided that since this is an experiment, I'd include them and see how they get used in practice.
"Not planning to respond" was requested by Wei Dai among others because he disliked when people just left conversations.
"I already addressed this" is intended for authors who put a lot of effort into a post and then have people raise objections that were already addressed (which is pretty frustrating for the aut...
I've seen this and will write up some thoughts / start participating in conversation in the next day or two.
*Reacts that require high karma to be allowed to use, possibly moderator only
The top level categories are roughly ordered by how interested I am in them for LessWrong
I think of Reacts as being more like little mini pre-made comments that fill the niche of things that seem too minor to be worth the trouble of typing up as a regular comment. Either it’s something like “I really liked this”, where it would feel cluttered for a lot of people to write this most of the time[1], or writing it as a comment invites more discussion or obligates someone to say more on the topic, when all they wanted was to say “I found this confusing” and not get sucked into a bigger thing.
There’s also a thing in that having par...
Might make this a post later, but here are a few of my current thoughts (will post as separate comments due to length).
Curated. Goodhart's Law is an old core concept for LessWrong, and I love when someone(s) come along and add more resolution and rigor to our understanding, and all the more so when they start pointing to how this has practical implications. Would be very cool if this leads to articulation of disagreements between people that allow for progress in the discussion there, e.g. John vs Paul, Jan, etc.
And extra bonus points for the exercises at the end too. All in all, good stuff, and I'm looking forward to seeing more – especially the results as you vary more of the assumptions (e.g. independence) to line up with scenarios we anticipate, e.g. in alignment.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum id enim gravida, malesuada arcu non, feugiat massa. Donec tempus nisl quam, at sodales magna malesuada eget. Donec ipsum risus, feugiat vel purus quis, feugiat tempus mauris. Fusce sagittis elit tellus, ultrices maximus velit ultrices eu. Mauris fermentum ipsum vel sagittis dignissim. Sed vitae sem quis dui laoreet consectetur. Cras vel est quis velit imperdiet dignissim nec non metus. Morbi at ligula dolor.
Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hi...