For this month's open thread, we're experimenting with Inline Reacts as part of the bigger reacts experiment. In addition to reacting to a whole comment, you can apply a react to a specific snippet of the comment. When you select text in a comment, you'll see a new react button off to the side. (This is currently only designed to work well on desktop; if it goes well, we'll put more polish into getting it working on mobile.)

Right now this is enabled on a couple specific posts, and if it goes well we'll roll it out to more posts.


Meanwhile, the usual intro to Open Threads:

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Oh, so that's what "inline" reacts are! This brings me back: back in 2014, I wrote a prototype discussion forum (inspired by a weird sun Tweet) where you could upvote or downvote snippets within a post or comment, which would get correspondingly colored blue or red. The backend was Django, and the substring voting worked by traversing the DOM tree and working out the character indices where the current selection starts and ends. More innocent times! I miss jQuery.
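For flavor, here's a minimal sketch of that selection-to-character-offsets trick in modern browser terms (hypothetical code, not the original 2014 jQuery/Django implementation): walk the comment's text nodes in document order and add up lengths until you hit the nodes where the selection starts and ends.

```typescript
// Sketch only: convert the current selection inside a comment element into
// (start, end) character offsets over the element's concatenated text nodes.
// Assumes the selection's endpoints land in text nodes; real code needs more care.
function selectionToCharRange(root: HTMLElement): { start: number; end: number } | null {
  const sel = window.getSelection();
  if (!sel || sel.rangeCount === 0 || sel.isCollapsed) return null;
  const range = sel.getRangeAt(0);

  // Walk the text nodes in document order, accumulating character offsets.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let offset = 0;
  let start = -1;
  let end = -1;
  while (walker.nextNode()) {
    const node = walker.currentNode as Text;
    if (node === range.startContainer) start = offset + range.startOffset;
    if (node === range.endContainer) end = offset + range.endOffset;
    offset += node.length;
  }
  return start >= 0 && end >= 0 ? { start, end } : null;
}
```

With offsets in hand, coloring an upvoted or downvoted snippet blue or red is roughly a matter of re-wrapping that character range in a styled span when votes come in.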

[-]Raemon11mo20

Did you get to use it in practice?

The practice project never tried to get "real" users, but the code still works.

Is hosting images on Imgur (as in the parent) OK? (I recently got an Imgur account, because their latest terms-of-service update says they might delete old anonymously-uploaded images.) I've seen posts by more than one author (including the OP, and some of John Wentworth's posts) host images at https://res.cloudinary.com/lesswrong-2-0/, which seems to suggest a designated way to upload images for posts and comments on this website? When I (reluctantly) switch to the Less Wrong docs editor, I don't see it in the selection menu that shows up when you highlight tex—oh, I see now that there's another menu when you click the ¶, which includes image upload. Maybe I'll start to use that (in a dummy LessWrong Docs editor session) instead of Imgur? (I assume no one is going to build a separate image-upload form for Markdown editor users.)

You're welcome to host images wherever you like - we automatically mirror all embedded images on Cloudinary and replace the URLs in the associated image tags when serving the post/comment (though the original image URLs remain in the canonical post/comment for you if you go to edit it, or something).
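(For anyone curious, here's a rough sketch of what "mirror and replace at serve time" could look like; the helper names are hypothetical and this is not the actual LessWrong code.)

```typescript
// Sketch: at serve time, swap each embedded image URL for its mirrored copy,
// while the stored (canonical) document keeps the original URLs.
// `mirrorLookup` is a hypothetical map from original URL to its mirror.
function rewriteImageUrls(
  html: string,
  mirrorLookup: (originalUrl: string) => string | undefined
): string {
  return html.replace(
    /(<img\b[^>]*\bsrc=")([^"]+)(")/g,
    (match, prefix, url, suffix) => {
      const mirrored = mirrorLookup(url); // e.g. a res.cloudinary.com/lesswrong-2-0/... URL
      return mirrored ? `${prefix}${mirrored}${suffix}` : match;
    }
  );
}
```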

[-]Ruby11mo82

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum id enim gravida, malesuada arcu non, feugiat massa. Donec tempus nisl quam, at sodales magna malesuada eget. Donec ipsum risus, feugiat vel purus quis, feugiat tempus mauris. Fusce sagittis elit tellus, ultrices maximus velit ultrices eu. Mauris fermentum ipsum vel sagittis dignissim. Sed vitae sem quis dui laoreet consectetur. Cras vel est quis velit imperdiet dignissim nec non metus. Morbi at ligula dolor.

Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Aenean in sem at mauris euismod condimentum vel nec odio. Vivamus congue est non leo condimentum placerat. Cras interdum mauris quam, non elementum neque aliquet in. Pellentesque risus massa, aliquam id ante a, lobortis varius est. Aliquam erat volutpat. Proin diam augue, condimentum a metus ut, gravida ultrices ante. Cras eget bibendum purus, at finibus sem. Suspendisse suscipit sit amet leo a cursus. Sed et sem sit amet mi volutpat semper et a ante. Nullam sit amet tincidunt tellus, vitae cursus nunc. Etiam felis ligula, pretium at tellus eu, porta ultricies nisl. Pellentesque sed cursus eros.

Integer dictum tincidunt risus quis varius. Vestibulum erat dui, gravida et commodo et, dapibus faucibus tortor. Ut sit amet vulputate ipsum. Morbi at blandit nibh. Sed sagittis erat dui, eget placerat dui tincidunt sit amet. Sed ex diam, auctor ut aliquet sit amet, euismod sit amet nunc. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Integer a tincidunt sem, in bibendum nunc. Cras vitae iaculis velit.

Suspendisse rhoncus interdum felis, et interdum turpis volutpat eu. Sed vitae justo scelerisque, suscipit justo in, ultricies lorem. Ut ut eleifend mauris. Praesent bibendum erat vel ligula rhoncus, sed porttitor augue pharetra. Aenean in condimentum sapien. Etiam feugiat sodales accumsan. Nam a cursus augue. Integer ullamcorper finibus nisl quis venenatis. Integer accumsan non magna id placerat. Nulla mollis sodales augue et fermentum.

Morbi et molestie nisi, sed mollis nisi. Donec iaculis velit at volutpat viverra. Morbi vehicula mollis lorem, id sagittis purus vulputate at. Vivamus feugiat tempus est, quis suscipit magna blandit ac. Nunc a convallis lectus, eget dapibus lectus. Vestibulum quis urna justo. Nulla tristique ut augue vitae vehicula. Cras condimentum et mauris vel dignissim. Phasellus ante dolor, sollicitudin vitae risus ut, volutpat molestie dui. Integer ex sem, ultrices in auctor eget, elementum non erat.


I think if I hover my cursor over text that has been reacted to, I should see the react.

[-]Raemon10mo20

Yeah, the only reason we don't have that yet is it's a bit technically complicated.

[-]Raemon10mo20

Should be fixed now.

looks good, ish, though now it's barely noticeable:

[-]Raemon10mo20

oh no

[-]niplav10mo70

Courtesy of the programming language checklist.

So you're proposing a new economic policy. Here's why your policy will not work.

Your policy depends on science/theorizing that:
    ☐ has been replicated only once
    ☐ has failed to replicate
    ☐ for which there exist no replication attempts
    ☐ was last taken seriously sometime around 1900
    ☐ requires a dynamic stochastic general equilibrium model with 200 free parameters to track reality perfectly
Your policy would:
    ☐ disincentivize good things
    ☐ incentivize bad things
    ☐ both
    ☐ be wildly unpopular, even though you think it's the best thing since sliced bread (it's not)
    ☐ You seem to think that taking options away from people helps them
Your policy just reinvents
    ☐ land-value taxes, but worse
    ☐ universal basic income, but worse
    ☐ price discrimination, but worse
    ☐ demand subsidy, but worse
    ☐ demand subsidy, better, but that's still no excuse
    ☐ Your policy sneakily redistributes money from poor to rich people
    ☐ Your policy only works if every country on earth accepts it at the same time
    ☐ You actually have no idea what failure/success of your policy would look like
You claim it fixes
    ☐ climate change
    ☐ godlessness
    ☐ police violence
    ☐ wet socks
    ☐ teenage depression
    ☐ rising rents
    ☐ war
    ☐ falling/rising sperm counts/testosterone levels
You seem to assume that
    ☐ privatization always works
    ☐ privatization never works
    ☐ your country will never become a dictatorship
    ☐ your country will always stay a dictatorship
    the cost of coordination is
        ☐ negligible
        ☐ zero
        ☐ negative
        ☐ Your policy is a Pareto-worsening
In conclusion,
    ☐ You have copied and mashed together some good ideas with some mediocre ideas
    ☐ You have not even tried to understand basic economics/political science/sociology concepts
    ☐ Living under your policy is an adequate punishment for inventing it

I only very recently noticed that you can put \newcommand definitions in equations in LW posts and they'll apply to all the equations in that post. This is an enormous help for writing long technical posts, so I think it'd be nice if it was (a) more discoverable and (b) easier to use. For (b), the annoying thing right now is that I have to put newcommands into one of the equations, so either I need to make a dedicated one, or I need to know which equation I used. Also, the field for entering equations isn't great for entering things with many lines.
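For example, something like the following (a sketch; the macro names are made up) can go into the first display equation of a post, and later equations can then use \E and \KL:

```latex
% Put these in one equation near the top of the post; MathJax keeps the
% definitions around for subsequent equations in the same post (as described above).
\newcommand{\E}{\mathbb{E}}
\newcommand{\KL}[2]{D_{\mathrm{KL}}(#1 \| #2)}
% A later equation in the same post can then just write:
%   \E[X] \ge 0, \qquad \KL{P}{Q} = \E_{P}\left[\log \tfrac{P(X)}{Q(X)}\right]
```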

Feature suggestion to improve this: in the options section below the post editor, have a multiline text field where you can put LaTeX, and then inject that LaTeX code into MathJax as a preamble (or just add an otherwise empty equation to the page, I don't know to what extent MathJax supports preambles).
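As a rough sketch of the preamble idea (assuming MathJax v3's configuration API, and not claiming this is how LessWrong's editor is actually wired up), the contents of such a field could be translated into TeX macros in the MathJax config, which then apply to every equation on the page:

```typescript
// Sketch: MathJax v3 lets you declare TeX macros in its configuration.
// This object must be assigned before the MathJax script loads.
(window as any).MathJax = {
  tex: {
    macros: {
      E: "\\mathbb{E}",                        // \E -> \mathbb{E}
      KL: ["D_{\\mathrm{KL}}(#1 \\| #2)", 2],  // \KL{P}{Q}, two arguments
    },
  },
};
```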

[-]JayP10mo50

Hi, the name is Jay, and I have been working with data throughout my career, usually dipping into ML, data engineering, or data visualization.

I am joining today because the community's guidelines and intro made me want to become a part of it; it sounds like a place where I can feel safe contributing my thoughts and learnings, and also learning from others.

 

Nice to e-meet you all

I'm surprised there's no tag for either "AI consciousness" or "AI rights", given that there have been several posts discussing both. However, there's a lot of overlap between the two, so perhaps both would be redundant, and the question of which is broader/more fitting becomes relevant. Thoughts?

(Sorry if this is not the right place to put this.)

Inline reactions aren't working for me in Firefox 88 on Xubuntu 16.04: when I select text, I can see the smiley icon next to the comment, but it disappears when I mouse over it.

This probably doesn't count as a bug report; while some companies need to support old software for a long time, in this case, I'm just imagining jimrandomh incredulously telling me, "Wait, you're running Firefox 88, on Xenial?! Why? What's wrong with you? You're a terrible person," and me not having any satisfactory reply.

I have the same bug in Firefox 113.0.2 on Windows 11. But, it seems to depend on what I select: for some selections it works, for some selections it doesn't.

[-]Raemon11mo30

To clarify, does this prevent you from inline reacting or just remove your selection? (I.e., can you click the button and see the react palette, and what text appears there when you do?)

When I try to move my mouse over the smiley, both the selection and the smiley disappear before I can click it.

[-]Raemon11mo20

Gotcha. Well that's no good. Can you give me some examples of selections that work and selections that don't?

[-]Viliam11mo20

Selecting a part of a paragraph (or the entire paragraph, by dragging mouse from the beginning to the end) -- does not work.

Selecting the entire paragraph by double clicking -- works.

Selecting across the paragraph boundary -- works.

(So my hypothesis is that there is something like an invisible character after each paragraph, and a selection works only if it contains at least one such character.)

[-]Raemon11mo20

What browser/OS? (It's definitely supposed to be able to highlight a subset of a paragraph, which currently seems to work on Chrome 114 on MacOS.)

[-]Viliam11mo20

Firefox 113.0.2 (64-bit) on Windows 10 Home

Like Vanessa described, the smiley is there, after selecting I can freely move the mouse cursor across the entire screen, but when I get the mouse cursor on the smiley, it disappears, and the text is unselected.

[-]gjm11mo20

Here are some for me (Firefox 113.0.2, Ubuntu 22.04).

  • In some cases the selection (along with the smiley) completely vanishes on mouseover:
    • Paragraph 1: "Lorem ipsum dolor sit amet"
    • Paragraph 1: the whole thing
    • Paragraph 5: the whole thing
  • In some cases, the selection shrinks (and the smiley moves to where it would have been had I made the smaller selection) on mouseover:
    • Paragraph 2: the whole thing
      • Selection shrinks to end after "ali" in the word "aliquam"
    • Paragraph 3: the whole thing
      • Selection shrinks to end after "faucib" in the word "faucibus"
    • Paragraph 3: "Integer dictum tincidunt risus quis varius. Vestibulum erat dui, gravida et commodo"
      • Selection shrinks to "Integer dictum tincidunt risus quis varius. Vestibulum erat "
    • Paragraph 3: "Integer dictum tincidunt risus quis varius. Vestibulum erat dui, gravida et commodo et, dapibus faucibus tortor. Ut sit amet vulputate ipsum. Morbi at blandit nibh. Sed sagittis erat dui, eget placerat dui tincidunt sit amet. Sed ex diam, auctor ut aliquet sit amet, euismod sit amet nunc"
      • Selection shrinks to end after "faucib" in the word "faucibus"
  • In some cases, the selection remains stable on mouseover:
    • Any of the subselections that get shrunk-to as described above
    • All of paragraphs 1-2
    • All of paragraphs 1-3
    • All of paragraphs 3-4
    • Paragraphs 1-2: "Fusce sagittis elit tellus, ultrices maximus velit ultrices eu. Mauris fermentum ipsum vel sagittis dignissim. Sed vitae sem quis dui laoreet consectetur. Cras vel est quis velit imperdiet dignissim nec non metus. Morbi at ligula dolor. [paragraph break] Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Aenean in sem at mauris euismod condimentum vel nec odio. Vivamus congue est non leo condimentum placerat. Cras interdum mauris quam, non elementum neque aliquet in. Pellentesque risus massa, aliquam id ante a, lobortis varius est. Aliquam erat volutpat."

[EDITED per Raemon's request to include all the text rather than using ellipses for brevity, and to put each example in a separate bullet point.]

[-]Raemon11mo20

Thanks! 

Could you format these so that each selection is a single bullet, with only the text you highlighted? I'm finding it somewhat hard to figure out which exact quotes you mean. (It's also much easier if you include the entire quote. It looks like you're using "..." to indicate "stuff in between", but literally just copy-pasting the whole quote is more helpful.)

One background fact about what's supposed to happen: you can only inline-react on unique strings within a comment. If you've highlighted a non-unique string, the tooltip for the button is supposed to say "you haven't selected a unique string. Please select more text." It detects non-unique strings via a regex that doesn't work entirely reliably (and I'm guessing works differently on some browsers?)
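(As a sketch of what such a check might look like; this is a hypothetical helper, not the actual implementation. One classic way a regex-based version misbehaves is unescaped metacharacters in the selected text.)

```typescript
// Sketch: decide whether the selected text occurs exactly once in the comment.
function isUniqueSelection(commentText: string, selection: string): boolean {
  // Escape regex metacharacters so the selection is matched literally;
  // skipping this step is one way a regex-based check can go wrong.
  const escaped = selection.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const matches = commentText.match(new RegExp(escaped, "g")) ?? [];
  return matches.length === 1;
}
```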

[-]gjm11mo20

Reformatted in what I hope is a sufficiently helpful way.

When I select a definitely non-unique string (e.g., an instance of "amet") and move my mouse over to the smiley face, before the selection is cancelled and the smiley face disappears there is a flash of a larger tooltippy thing which I am guessing contains the warning about non-uniqueness that you describe.

But when I select, e.g., the whole of the first paragraph, or just "Lorem ipsum dolor sit amet", that doesn't happen (or if it does it's too quick for me to see).

So, if I'm understanding what you mean by "unique string" correctly (i.e., a string that occurs only once in the comment), I see different behaviours depending on whether my selection is unique or not, and I get the shrinking/vanishing selection in both cases.

(Which suggests that whatever's going on, it probably isn't of the form "selections thought to be unique are OK, selections thought to be non-unique misbehave", since the unique/non-unique division is visible within the class of selections that vanish.)

[-]Raemon11mo20

Okay this should be working now. @Vanessa Kosoy , @Zack_M_Davis , @DanielFilan , @Viliam_Bur, checking in on how inline reacts are working for you now?

Working now. ("What browser and OS version was that again?" "Look, I procrastinated on upgrades, okay? I'm sorry!" "For our records.")

[-]gjm11mo20

A quick try suggests that it's working now. I haven't tested thoroughly.

Recommendation for everyone who reported bugs in this comment thread: Bug reports are quicker to do, and much easier to understand for devs, if you accompany them with screenshots or gifs of the bug behavior. There are a bunch of screen capture tools which can very easily record a gif and upload it to the internet.

For Windows I've recommended ShareX here before; whereas for MacOS I've heard that e.g. CleanShot might be able to do this (though I haven't tried that myself). I'm not aware of any Linux versions, but they'll surely exist, too. Plus nowadays I suspect that even the inbuilt screen capture tools on each OS can record gifs, and even if they can't automatically upload them, you can instead just drag the final gif into the LW comment.

I can reproduce loss-of-selection on mouseover some of the time on up-to-date Chrome, so I think it's probably not browser-specific.

Wait, you're running Firefox 88, on Xenial?! Why? What's wrong with you? You're a terrible person.

[-]lsusr10mo30

Why do mathematicians always build roads thru empty regions of land?

One of the field axioms is that the operations commute.

―by dkl9

Is there a way to find all the posts that you have strong upvoted? I bet there is a hidden query that I can use on the All Posts page but I can't find it easily.

If it actually doesn't exist then this comment acts as my vote on getting it implemented.

Putting this in the OT because I'm risking asking something silly and basic here—after reading “Are Bayesian methods guaranteed to overfit?”, it feels like there should exist a Bayesian-update analogue to Kelly betting: underfitting enough to preserve some property that's important because the environment is unpredictable, catastrophic losses have distortionary effects, etc., where fitting to the observations ‘as well as possible’ is analogous to playing purely for expected value and thus turns out to be ‘wrong’ in the kind of iterated games we wind up in in reality. Dweomite's comment is part of what inspired this, because of the way it enumerated reasons having to do with limited training data.

Is this maybe an existing well-known concept that I missed the boat on, or something that's already known to be unworkable or undefinable for some reason? Or what?
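(For reference, since the analogy leans on it: in the simplest binary-bet setting the Kelly fraction maximizes expected log wealth rather than expected wealth, which is the kind of "giving up pure expected value for robustness in repeated play" property the question is gesturing at.)

```latex
% Kelly fraction for a binary bet: stake a fraction f* of the bankroll,
% where p is the win probability, q = 1 - p, and b is the net odds received.
f^{*} = p - \frac{q}{b} = \frac{bp - q}{b}
```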

[-]Raemon10mo30

I've just pushed an update to the Reacts Palette. I aimed to a) remove some reacts that either weren't getting used, or seemed to be used confusingly, b) add some reacts that seem missing, c) reorganize them so they were a bit easier to parse.

And the biggest change is d), which is to mark how likely a claim is via reacting. I'm imagining this primarily being used via inline reacting. If a lot of people end up using it, it might make sense to build a more specialized system for this, but it seemed cheap enough to add via Reacts for the immediate future.

It looks like this now, when you first open the palette. It deliberately doesn't emphasize the ability to scroll-for-more-reacts, at first, because I think people are probably already fairly overwhelmed with the default palette.

Four small remarks on reactions, not too related to your update:

  1. I think the scroll bar with its relatively dark colour still makes scroll-for-more-reacts pretty obvious. Not sure how to improve on this, maybe just making the colour a lighter gray?
  2. The scroll bar has different thickness on Firefox vs Chromium (The fact that I am using Firefox may be why I found the scroll bar too obvious)

  3. Initially, I thought the top bar of reacts was either the most commonly used reacts or the reacts I had most recently used myself. Either way, I expected to find them in the list of reacts, so I got a little confused when I read through the whole list and couldn't find them.
  4. On really long comments, I can see by the underlines that a part has a react, but I cannot tell what the react is, because reacts are at the bottom of the comment and the comment is way too long to see both the underlined text and the bottom at once. It would be great to see the reacts when I hover over the underlined text.

I like the dynamic of how the reacts make it easier to give positive feedback. 

When reading the recent UAP betting thread, I feel like I would want to have a react to congratulate people for taking their beliefs seriously enough to bet. 

"Virtuous"?

The closest I could spot was the thumbs up, but it's not really there.

Hi! I'm doing the seemingly common thing of making an account under a pseudonym so I can have discussions and get smarter without the terrifying and probably dangerous prospect of having all of my partially-formed opinions freely associated with my IRL identity. I thank you all in advance for helping me be less stupid!

[-]Yitz10mo20

Does anyone here know of (or would be willing to offer) funding for creating experimental visualization tools?

I’ve been working on a program which I think has a lot of potential, but it’s the sort of thing where I expect it to be most powerful in the context of “accidental” discoveries made while playing with it (see e.g. early use of the microscope, etc.).

[-]Raemon10mo20

Inline reacts are now live on all new posts! 

(Authors can switch old posts to use reacts if they'd like)

Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:

...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

-- Tyler Cowen, Risks and Impact of Artificial Intelligence
 

 

is there a canonical source for "the argument for AGI ruin" somewhere, preferably laid out as an explicit argument with premises and a conclusion?

-- David Chalmers, Twitter

Is work already being done to reformulate AI-risk arguments for these communities?

Has Tyler Cowen heard of the Bio Anchors model by Ajeya Cotra, the takeoffspeeds.com model by Tom Davidson, Roodman's model of the singularity, or, for that matter, the earlier automation models by Robin Hanson? All of them seem to be the sort of thing he wants; I'm surprised he hasn't heard of them. Or maybe he has and thinks they don't count for some reason? I would be curious to know why.

[-]Raemon10mo20

I think those don’t say ‘and then the AI kills you’

They say "And then the entire world gets transformed as superintelligent AIs + robots automate the economy." Does Tyler Cowen buy all of that? Is that not the part he disagrees with?

And then yeah for the AI kills you part there are models as well, albeit not economic growth models because economic growth is a different subject. But there are simple game theory models, for example -- expected utility maximizer with mature technology + misaligned utility function = and then it kills you. And then there are things like Carlsmith's six-step argument and Chalmers' and so forth. What sort of thing does Tyler want, that's different in kind from what we already have?

There's been reasonable amounts of modeling work done in the context of managing money. E.g. https://forum.effectivealtruism.org/posts/Ne8ZS6iJJp7EpzztP/the-optimal-timing-of-spending-on-agi-safety-work-why-we

This is probably the sort of thing Tyler would want but wouldn't know how to find.

[-]Raemon11mo20

For the case of David Chalmers, I think that's explicitly what Robby was going for in this post: https://www.lesswrong.com/posts/QzkTfj4HGpLEdNjXX/an-artificially-structured-argument-for-expecting-agi-ruin 

Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.

Any idea if something similar is being done to cater to economists (or other social scientists)?

[-]Raemon11mo30

It occurs to me to be curious if @Zvi has thoughts on how to put stuff in terms Tyler Cowen would understand. (I'm not sure what Cowen wants. I'm personally kinda skeptical of people needing things in special formats rather than just generally going off on incredulity. But, it occurs to me Zvi's recent twitter poll of steps along-the-way to AI doom could be converted into, like, a guesstimate model)

[-]FGDots10mo10

Well, hi everyone. I must confess I feel a little bit overwhelmed by how deep this website can go, but I'll try not to be biased about how smart my reply looks :)

I'm Giacomo, a 26-year-old graphic designer based in Italy who just fell in love with the possibilities of this website. 
All these portals leading to even more portals of knowledge I can't wait to dive into.

I think it's gonna be a fun ride.

Is there an organisation that can hire independent alignment researchers who already have funding, in order to help with visas for a place that has other researchers, perhaps somewhere in the UK? Is there a need for such an organisation? 

EA Germany has an "Employer of Record" program. Your funding gets put into their account, and they pay your salary from it, formally becoming your employer.

This is probably what you want to google or mention to an organization in the UK. :)

Details (EAD): https://docs.google.com/document/d/1EePELRNTrZGHgeJa3oeRdF_rDsN7LesYppQah_zE7g4
