All of Raemon's Comments + Replies

This concept doesn't explain why certain beliefs persist even when they don't lead to accurate anticipations. Factors such as cultural tradition, emotional comfort, cognitive biases, and lack of exposure to alternative viewpoints can all contribute to the persistence of beliefs, even when they don't "pay rent" in terms of generating accurate predictions.

The post isn't meant to be an explanation for why beliefs exist; it's meant to highlight that by default, people have a bundle of things-that-feel-like-beliefs that all seem to be a similar shape. But, if yo...

2the gears to ascension2d
looks good, ish, though now it's barely noticeable:

I've just pushed an update to the Reacts Palette. I aimed to a) remove some reacts that either weren't getting used, or seemed to be used confusingly, b) add some reacts that seem missing, c) reorganize them so they were a bit easier to parse.

And, the biggest change is d), marking how likely a claim is via reacting. I'm imagining this primarily used via inline-reacting. If a lot of people end up using it, it might make sense to build a more specialized system for this, but it seemed too cheap to add via Reacts for the immediate future.

It looks like this...

Fwiw I still don't think it makes sense to call that a nitpick. Seems like a good thing to point out. (I agree it's not, like, a knockdown argument against the whole thing. But I think of nitpicks as things that aren't relevant to the central point of the post)

Yeah, the only reason we don't have that yet is it's a bit technically complicated.

A thing that feels somewhat relevant here is the Dark Forest Theory of AI Mass Movements. New people keep showing up, seeing a Mass Movement Shaped Hole, and being like "Are y'all blind? Why are you not shouting from the rooftops to shut down AI everywhere and get everyone scared?"

And the answer is "well, I do think maybe LWers are biased against mainstream politics in some counterproductive ways, but there are a lot of genuine reasons to be wary of mass movements. They are dumb, hard to aim at exactly the right things, and we probably need some very...

Curated.

I've heard people vaguely wishing for this sort of product for a few years, and I feel pretty excited looking at the potential here. 

There's a lot of room for improvement, some of which is UI here, and some of which depends on how individual predictions and prediction-market-communities turn out to evolve. But I think the current product is above the bar of being worth taking a look at and signal-boosting. I hadn't consciously thought through the lens of "[Prediction] platforms’ UX is orientated towards forecasters, not information consumers", which seems like an obvious font of potential innovation.

The thing that has me pretty confused about your confidence here is not just that there's something weird going on here, but, that you expect it to be confirmed within 5 years.

4Gerald Monroe2d
Assume the counterfactual. Actual wreckage has been recovered, and assume that analysis has revealed a smoking gun. Examples:
* Working "antigravity" (assume it works by some unknown interaction with the mass of the planet and thus respects conservation laws)
* Mass spectrometry of the materials reveals atomic weights outside the known stable elements range
* Currently impossible material properties
* Electron micrographs show obvious patterning that looks like the object was assembled of cell-sized nanorobots
* VIN in an obvious alien language (this is weaker without other ontology-breaking evidence)
One single update - the analysis of ONE crashed vehicle, by credible individuals with third-party confirmation - is enough for ontology breakage. Only way to win a bet like this is insider knowledge. Maybe the OP has actually observed something in the class of the above. With all that said, if such evidence exists, why wasn't it leaked or found by another government or private group and revealed? Probability seems low.
2Gunnar_Zarncke3d
That should let you update at least slightly in favor of the thing he claims being right. That's how betting and prediction markets work, right?
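Gunnar's point can be made concrete with a toy Bayes calculation (the function and all numbers below are purely illustrative, not anything from the thread):

```javascript
// Toy Bayesian update: how much should someone's willingness to bet
// move your credence in their claim?
// prior: your prior probability that the claim is true.
// likelihoodRatio: P(they'd offer the bet | claim true) / P(offer | claim false).
function posteriorProbability(prior, likelihoodRatio) {
  const priorOdds = prior / (1 - prior);
  const posteriorOdds = priorOdds * likelihoodRatio;
  return posteriorOdds / (1 + posteriorOdds);
}
```

Even a likelihood ratio of 3 (someone is three times as likely to offer the bet if the claim is true) only moves a 1% prior to about 2.9% - a slight update, as Gunnar says.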

I think those don’t say ‘and then the AI kills you’

2Daniel Kokotajlo5d
They say "And then the entire world gets transformed as superintelligent AIs + robots automate the economy." Does Tyler Cowen buy all of that? Is that not the part he disagrees with? And then yeah for the AI kills you part there are models as well, albeit not economic growth models because economic growth is a different subject. But there are simple game theory models, for example -- expected utility maximizer with mature technology + misaligned utility function = and then it kills you. And then there are things like Carlsmith's six-step argument and Chalmers' and so forth. What sort of thing does Tyler want, that's different in kind from what we already have?

(Update: I just merged a PR that should fix the issue, i.e. make it clear whose comments got deleted. Should be live in about 7 minutes)

Inline reacts are now live on all new posts! 

(Authors can switch old posts to use reacts if they'd like)

Okay this should be working now. @Vanessa Kosoy , @Zack_M_Davis , @DanielFilan , @Viliam_Bur, checking in on how inline reacts are working for you now?

2Zack_M_Davis5d
Working now. ("What browser and OS version was that again?" "Look, I procrastinated on upgrades, okay? I'm sorry!" "For our records.")
2gjm7d
A quick try suggests that it's working now. I haven't tested thoroughly.

Curated. I liked this as a post in a similar-ish genre of The Gift We Give To Tomorrow, where a somewhat poetic post takes a bunch of existing philosophical concepts, and puts them into a parable/narrative that gives them more emotional oomph.

What browser/OS? (It's definitely supposed to be able to highlight a subset of a paragraph, which currently seems to work on Chrome 114 on macOS.)

2Viliam9d
Firefox 113.0.2 (64-bit) on Windows 10 Home.
Like Vanessa described, the smiley is there, after selecting I can freely move the mouse cursor across the entire screen, but when I get the mouse cursor on the smiley, it disappears, and the text is unselected.

Thanks! 

Could you format these where each selection is just a single bullet, with only the text you highlighted? I'm finding it somewhat hard to figure out which exact quotes you mean. (It's also much easier if you have the entire quote. It looks like you're including "..." to indicate "stuff in between", but literally just copy-pasting the whole quote is more helpful.)

One background fact: you can only inline react on unique strings within a comment. What's supposed to happen is that if you've highlighted a non-unique stri...
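The uniqueness rule described here could be sketched roughly as follows (hypothetical code guessing at the logic, not the actual LessWrong implementation):

```javascript
// Count non-overlapping occurrences of a selected string in a comment's text.
// If the string appears more than once, an inline react on it would be ambiguous.
function countOccurrences(commentText, selection) {
  if (!selection) return 0;
  let count = 0;
  let index = commentText.indexOf(selection);
  while (index !== -1) {
    count += 1;
    index = commentText.indexOf(selection, index + selection.length);
  }
  return count;
}

// Only a string that occurs exactly once in the comment can be
// unambiguously mapped back to a position for the inline react.
function canInlineReact(commentText, selection) {
  return countOccurrences(commentText, selection) === 1;
}
```

Under this sketch, selecting one "amet" in a comment containing several would be rejected, while selecting a longer span that happens to be unique would be allowed.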

2gjm9d
Reformatted in what I hope is a sufficiently helpful way.
When I select a definitely non-unique string (e.g., an instance of "amet") and move my mouse over to the smiley face, before the selection is cancelled and the smiley face disappears there is a flash of a larger tooltippy thing which I am guessing contains the warning about non-uniqueness that you describe. But when I select, e.g., the whole of the first paragraph, or just "Lorem ipsum dolor sit amet", that doesn't happen (or if it does it's too quick for me to see).
So, if I'm understanding what you mean by "unique string" correctly (i.e., a string that occurs only once in the comment), I see different behaviours depending on whether my selection is unique or not, and I get the shrinking/vanishing selection in both cases. (Which suggests that whatever's going on, it probably isn't of the form "selections thought to be unique are OK, selections thought to be non-unique misbehave", since the unique/non-unique division is visible within the class of selections that vanish.)

It occurs to me to be curious if @Zvi has thoughts on how to put stuff in terms Tyler Cowen would understand. (I'm not sure what Cowen wants. I'm personally kinda skeptical of people needing things in special formats rather than just generally going off on incredulity. But, it occurs to me Zvi's recent twitter poll of steps along-the-way to AI doom could be converted into, like, a guesstimate model)

Gotcha. Well that's no good. Can you give me some examples of selections that work and selections that don't?

2Viliam9d
Selecting a part of a paragraph (or the entire paragraph, by dragging mouse from the beginning to the end) -- does not work.
Selecting the entire paragraph by double clicking -- works.
Selecting across the paragraph boundary -- works.
(So my hypothesis is that there is something like an invisible character after each paragraph, and a selection works only if it contains at least one such character.)
2gjm9d
Here are some for me (Firefox 113.0.2, Ubuntu 22.04).
* In some cases the selection (along with the smiley) completely vanishes on mouseover:
  * Paragraph 1: "Lorem ipsum dolor sit amet"
  * Paragraph 1: the whole thing
  * Paragraph 5: the whole thing
* In some cases, the selection shrinks (and the smiley moves to where it would have been had I made the smaller selection) on mouseover:
  * Paragraph 2: the whole thing
    * Selection shrinks to end after "ali" in the word "aliquam"
  * Paragraph 3: the whole thing
    * Selection shrinks to end after "faucib" in the word "faucibus"
  * Paragraph 3: "Integer dictum tincidunt risus quis varius. Vestibulum erat dui, gravida et commodo"
    * Selection shrinks to "Integer dictum tincidunt risus quis varius. Vestibulum erat "
  * Paragraph 3: "Integer dictum tincidunt risus quis varius. Vestibulum erat dui, gravida et commodo et, dapibus faucibus tortor. Ut sit amet vulputate ipsum. Morbi at blandit nibh. Sed sagittis erat dui, eget placerat dui tincidunt sit amet. Sed ex diam, auctor ut aliquet sit amet, euismod sit amet nunc"
    * Selection shrinks to end after "faucib" in the word "faucibus"
* In some cases, the selection remains stable on mouseover:
  * Any of the subselections that get shrunk-to as described above
  * All of paragraphs 1-2
  * All of paragraphs 1-3
  * All of paragraphs 3-4
  * Paragraphs 1-2: "Fusce sagittis elit tellus, ultrices maximus velit ultrices eu. Mauris fermentum ipsum vel sagittis dignissim. Sed vitae sem quis dui laoreet consectetur. Cras vel est quis velit imperdiet dignissim nec non metus. Morbi at ligula dolor. [paragraph break] Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Aenean in sem at mauris euismod condimentum vel nec odio. Vivamus congue est non leo condimentum placerat. Cras interdum mauris quam, non elementum neque aliquet in. Pellent

To clarify, does this prevent you from in-line reacting or just remove your selection? (ie can you click the button and see the react palette, and what text appears there when you do?)

3Vanessa Kosoy10d
When I try to move my mouse over the smiley, both the selection and the smiley disappear before I can click it.
4Jayson_Virissimo10d
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents [https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start] and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk [https://arxiv.org/abs/2206.13353], both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny. Any idea if something similar is being done to cater to economists (or other social scientists)?

Did you get to use it in practice?

9Zack_M_Davis10d
Practice project never tried to get "real" users, but the code still works.

Okay, I may turn this into a top-level post, but more thoughts here for now?

I feel a lot of latent grief and frustration and also resignation about a lot of tradeoffs presented in the story. I have some sense of where I'll end up when I'm done processing all of it, but alas, I can't just skip to the "done" part. 

...

I've hardened myself into the sort of person who is willing to turn away people who need help. In the past, I've helped people and been burned by it badly enough that it's clear I need to defend my own personal boundaries and ensure it does...

2Dagon8h
I'm surprised this quote is not more common around here, in discussions of turning far-mode values into near-mode actions, with the accompanying denial that the long run is strictly the sum of short runs.  
1c.trout2d
Depends on what you mean by "utility." If "happiness", the evidence is very much unclear: though Life Satisfaction (LS) is correlated with income/GDP when we make cross-sectional measurements, LS is not correlated with income/GDP when we make time-series measurements. This is the Easterlin Paradox [https://en.wikipedia.org/wiki/Easterlin_paradox]. Good overview [https://www.youtube.com/watch?v=pOtsgIKjeiQ&ab_channel=KelseyJ.O%27Connor] of a recent paper on it, presented by its author. Full paper here [https://docs.iza.org/dp13923.pdf]. Good discussion of the paper on the EA forum here [https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an] (responses from the author as well as Michael Plant in the comments).
1seank9d
I'm reminded of The Last Paperclip [https://www.lesswrong.com/posts/igxS7re8nfihpbTo5/the-last-paperclip]
6MSRayne11d
It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there's always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants' planning is for nought and the grasshopper actually has the right idea. It doesn't seem right to ever have zero credence of this (as that means being totally certain that the project of saving up resources for cosmic winter will go perfectly smoothly, and we can't be certain of something that will literally take trillions of years), therefore it is actually optimal to always put some of your resources into living for right now, proportional to that uncertainty about the success of the project.
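MSRayne's argument can be illustrated with a toy decision model (the log-utility assumption and all numbers are mine, purely for illustration): consume a fraction c of your resources now, and save the rest for a long-term project that fails with probability p.

```javascript
// Toy model: log utility over each period; saved resources pay off with
// multiplier m, but only if the long-term project succeeds (probability 1 - p).
function expectedUtility(c, p, m) {
  // log(c): value enjoyed now, realized in every outcome.
  // log((1 - c) * m): value of savings, realized only on success.
  return Math.log(c) + (1 - p) * Math.log((1 - c) * m);
}

// Grid-search the consumption fraction that maximizes expected utility.
function optimalConsumption(p, m) {
  let best = 0.01;
  let bestU = -Infinity;
  for (let i = 1; i <= 99; i++) {
    const c = i / 100;
    const u = expectedUtility(c, p, m);
    if (u > bestU) { bestU = u; best = c; }
  }
  return best;
}
```

In this toy model the optimum is c = 1/(2 - p): at p = 0 you consume half now, and as the failure probability p rises toward 1, optimal present consumption rises toward everything, matching the comment's "proportional to that uncertainty" intuition.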

Yeah, something like this seems obviously good.

Meanwhile I just, uh, made my comment smaller so that it was less pronounced a problem for the immediate showcase comment. :P

Pinned by Raemon

I have just shipped our first draft of Inline Reacts for comments.

You can mouse over a piece of text on a comment, and a little Add React button will appear off to the right.

If you click on it, you'll see the React palette, and you can then apply the react to that particular string of text. Once you've reacted, the reacted-snippet-of-text will appear with a dotted-underline while you're moused over the comment, and its corresponding react-icon at the bottom of the comment will also show a dotted outline:

When you hover over a react, it shows the inline-reac...

2Max H11d
I inline-reacted to the first sentence of this comment. The comment takes up too much vertical space for the green highlighting to be visible when I hover over the react icon at the bottom though, so I have no way of seeing exactly what I reacted to while it is highlighted. Maybe hovering over underlined text should show the reaction?

To clarify somewhat, my confusion was of my own internal moral orienting. This parable hints at a bunch of tradeoffs that maybe correspond to something like "moral developmental stages" along a particular axis, and I'm palpably only partway through the process and still feel confused about it.

I plan to write up a response post that goes into more detail.


meta note on tagging:

This post seemed to be on a topic that... surely there should be a commonly used LW concept for it, but I couldn't think of one. I tagged it "agent foundations" but feel like there should be something more specific.

2DanielFilan9d
Maybe "subagents"?

The stage of moral grieving I’m personally at is more at the systems stage, and I’m still feeling a bit lost and confused about it. I felt like I actually learned a thing from the reminder ‘oh, we can still just surreptitiously forgive the sinner via individual discretion despite needing to build the system fairly rigidly.’ Also, I did recognize the reference to Speaker for the Dead, and felt the combination of a ‘new satisfying moral click’ alongside a memory of when a simpler application of the Orson Scott Card quote was very satisfying.

...And they turn away and go back to their work—all except for one, who brushes past the grasshopper and whispers “Meet me outside at dusk and I’ll bring you food. We can preserve the law and still forgive the deviation.”

Did I hallucinate an earlier edit of this where this line comes later in the story (after the “And therefore, to reduce risks of centralization, and to limit our own power, we can’t give you any food” line)? I found it more meaningful there (maybe just because of where I happen to be at in my own particular journey of moral sophistication).

4Richard_Ngo12d
Yeah, I moved it to earlier than it was, for two reasons. Firstly, if the grasshopper was just unlucky, then there's no "deviation" to forgive—it makes sense only if the grasshopper was culpable. Secondly, the earlier parts are about individuals, and the latter parts are about systems—it felt more compelling to go straight from "centralized government" to "locust war" than going via an individual act of kindness. Curious what you found more meaningful about the original placement?

The ants accept. The grasshopper’s reserves of energy, cached across the surface of the planet, are harvested fractionally faster than they would have been without its cooperation; its mind is stripped bare and each tiny computational shortcut recorded in case it can add incremental efficiency to the next generation of probes. The ants swarm across the stars, launching themselves in million-lightyear slingshots towards the next oasis, maintaining the relentless momentum of the frontier of their hegemony. The grasshopper’s mind is stored in that colony now,

...
4Richard_Ngo12d
Ty, nice to hear! Have edited slightly for clarity, as per Mako's comment.

Hmm, I don't think the person talking is expressing the will of the genome, they're expressing the will of a brain, which is pretty different.

2the gears to ascension15d
Brains are the will of the genome; to know your personality, you must first express the will of your genome, which creates an intelligent network of cells throughout your body to do morphogenesis (cf. Michael Levin); the genome defines the intelligent network, then the network figures out what its will is in terms of intended body form for the circumstance, which in turn produces a nervous system capable of further reshaping itself in response to sensory experiences. At each point, there's a finite amount of coherence loss to produce the next level of mesaoptimizer, and while alignment between these mesaoptimizer-printers isn't perfect, a large amount of what defines one's base preferences is genetic, which means that anything that could edit those base preferences directly strikes me as fundamentally a consent violation of the deepest core of biological autonomy of a being. I don't agree with GeneSmith that the tech to do runtime rewrites is far; it looks impossible now, but in a few years we will simply run a full cell simulator to back-calculate how to reactivate the genes after editing. And besides, that level of transhumanism isn't limited to the nonsense genetics is: we can fundamentally rewrite substrate into a higher quality biology. (People always say upload to computers, which I think is silly; computers and today's biology both wish they could be as high quality and energy efficient at massively parallel computation as competently engineered biology.)

Okay that's a fair/consistent position, but it feels misleading to summarize that to an average person as "eugenics is nonconsensual (and bad?)"

2the gears to ascension15d
Why? Modification of a genome without consent of the genome is nonconsensual and bad. Getting consent from a genome requires them growing up into a person with life experiences and expressing their will, potentially including via advanced self modification techniques. That's how I'd normally put it.

I think being born also doesn't involve consent, and "be born, reliably with slightly more genetic diseases or IQ or beauty or whatnot" doesn't seem obviously more in the child's interests than "be born, with less so." (I think there's some potential/likelihood of societal Gattaca-style red queen races, but those aren't about the consent of the child, they're about societal equilibria.)

2the gears to ascension15d
The question is whether the adult child would accept the customization. If so, then they can wait a few years for the customization to be available via advanced AI. We're only a year or two out from hard ASI. In the meantime, yes, having kids in the current world is in fact probably not moral in the first place.

One of the take-home lessons from ChaosGPT and AutoGPT is that there'll likely end up being agential AIs, even if the original AI wasn't particularly agentic.

4jamesharbrook15d
AutoGPT is an excellent demonstration of the point. Ask someone on this forum 5 years ago whether they think AGI might be a series of next-token predictors strung together, with modular cognition occurring in English, and they would have called you insane. Yet if that is how we get something close to AGI, it seems like a best-case scenario, since interpretability is solved by default and you can measure alignment progress very easily. Reality is weird in very unexpected ways.
1mruwnik15d
Right - now I see it. I was testing it on the reactions of @Sune [https://www.lesswrong.com/users/sune?mention=user]'s comment, so it was hidden far away to the right. All in all, nice feature though.

I think I fixed the top-of-post again, but, I thought I fixed it yesterday and I'm confused what happened. Whatever's going on here is much weirder than usual.

3Max H15d
The target of the second hyperlink appears to contain some HTML, which breaks the link and might be the source of some other problems:
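As a generic illustration of the failure mode Max describes (hypothetical code, not from the LessWrong codebase), a sanity check could flag an href that has markup fused into it:

```javascript
// Flag href values that contain HTML markup or entities, which usually
// indicates a mangled link rather than a valid URL (illustrative heuristic).
function looksLikeBrokenHref(href) {
  return /[<>"]|&[a-z]+;/i.test(href);
}
```

A check like this would pass a clean URL but catch an href that accidentally swallowed a tag or an `&amp;`-style entity.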

Aaron, it's me from the future, wondering:

a) do non-anonymous reacts like "unclear" feel scary in the way anonymous ones are?

b) if reacts were anonymous, but we had inline reacts (i.e. it told you which specific words or sentence was 'unclear'), how would that feel?

1aarongertler15d
Non-anonymous reacts feel less scary to me as a writer, and don't feel scary to me as a reactor, though I'd expect most people to be more nervous about publicly sharing a negative reaction than I am. Overall, inline anonymous reacts feel better to me than named non-inline reacts. I care much more about getting specific feedback on my writing than seeing which specific people liked or disliked it.

Okay, I get where you're coming from now. Will have to mull over whether I agree, but I at least no longer feel confused about what the disagreement is about.

Coolio, if you knew what you meant, sure.

(some commentary on my experience reacting to this – if this had been slack/discord, I'd have wanted to emoji-react with a thumbsup to this, intended to imply "cool, I get what you meant, and have seen this." Thumbs-up in this case doesn't necessarily mean "I agree" or any other specific thing, it's a sort of flexible symbol.)

I think it was sort of deliberate that we don't have a thumbs-up react here on LW yet, and I'm not sure how I feel about that. I think on one hand, we already have upvotes, and multiple other reacts that mean specific shades of "I agree", "I support", "I endorse", so maybe a generic thumbsup is more confusing than helpful. But, I did wish I had it here.

2NicholasKross16d
Yeah, I keep finding myself wishing that every other message/communication platform I use, would add Discord-style custom emotes for hyperspecific situations.

To be clear, I super appreciate you stepping in and trying to see where people were coming from. (I think ideally I'd have been doing a better job with that in the first place, but it was kinda hard to do so from inside the conversation.)

I found Richard's explanation about what-was-up-with-Vlad's comment to be helpful.

The link to the substack version says "private."

2Zvi15d
That error was fixed, but let's say 'please help fix the top of the post, for reasons that should be obvious, while we fix that other bug we discussed.'

Note, your original phrasing said "it counts if you have to opt into the karma types", and react types are optional. 

3NicholasKross17d
I acknowledge this is phrased kinda weirdly. I would say this fits the spirit of the question (albeit as a noncentral example), plus "opting-into reacts as a whole" is required on a by-post basis.

(updated the previous comment with some clearer context-setting)
