LESSWRONG
Ruby
LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments (sorted by newest)
Mikhail Samin's Shortform
Ruby · 1d

I recall a video circulating that showed Dario had changed his position on racing with China, which feels relevant here. People can of course change their minds, but I still dislike it.

Mikhail Samin's Shortform
Ruby · 2d

We have policies against looking at user data. Vote data and DM data are the most sacred, though we will look at votes if the patterns suggest fraudulent behavior (e.g. mass downvoting of a person). We tend to inform/consult others before doing so, but no, there's nothing technical blocking someone from accessing the data on their own.

Open Thread Autumn 2025
Ruby · 2d

I'd carefully examine the plan to do an MD, given the breadth of your interests/capabilities. It seems like you could do a lot of things, and the opportunity cost is pretty high. Certainly if your goal is caring for others, I'd question it: not just what comes after, but whether it really makes sense to do at all.

Cancer has a surprising amount of detail
Ruby · 2d*

Curated. I like this post because I see in it two messages: (a) there's a lot of complexity, and (b) this complexity is in principle tractable: patterns can be detected, predictions can be made, and by extension better decisions too. I've often thought it's easy to feel that modern medicine is advanced – so much more so than past medicine – but that's only because this is the most advanced medicine has ever been, not a measure of how advanced it will eventually be. I hope current treatments are one day considered primitive.

I'd wager that somewhat more advanced AI/statistical models will in fact be able to distill out an ontology and features that are human understandable.

From my limited personal experience (my wife was diagnosed with osteosarcoma in 2020), the descriptions in this post land. A huge part of her case was attempting to figure out a precise diagnosis for her rare cancer, and even that was an attempt to group it into one of two broad buckets, the implication being the aggressiveness of the cancer and hence the best treatment. Eventually her tumor was sequenced, but that was only so-so useful because the ability to interpret the genome was so limited, reading a few known genes like portents. (Ultimately we concluded the cancer was relatively mild, amputated for good measure to be sure of it, and skipped chemo. Five years on, there seems to be no sign of the cancer.)

At odds with the unavoidable meta-message
Ruby · 21d

I think they also mix a broader metaphysical claim with a claim about practical strategy that could be made without the metaphysical claim.

Antisocial media: AI’s killer app?
Ruby · 1mo

Still feels hard to believe. The most-viewed YouTube video has 15B views, and I don't think there are many with over a billion. But you think one specific character.ai persona has had nearly a billion conversations started?

https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos

Antisocial media: AI’s killer app?
Ruby · 1mo

I see the 864M interactions figure, which I don't think means conversations opened.

Four ways learning Econ makes people dumber re: future AI
Ruby · 1mo

Curated. There's an amusing element here: one of the major arguments for concern about powerful AI is how things will fail to generalize out of distribution. There's a similarish claim here – standard economics thinking not generalizing well to the unfamiliar, assumption-breaking domain of AI.

More broadly, I've long felt many people don't get something like "this is different but actually the same". As in, AI is different from previous technologies (surprising!) but also fits broader existing trendlines (e.g. the pretty rapid growth of humanity over its history; zoom out and this is business as usual). Or: what's different is that there will be something different and beyond LLMs in coming years, but that itself is on trend, since LLMs were different from what came before.

This post helps convey the above. To the extent there are laws of economics, they still hold, but AI – namely artificial people, from an economic perspective at least – requires non-standard analysis, and the outcome is weird and non-standard too compared to the expectations of many. All in all, kudos!

LessWrong Feed [new, now in beta]
Ruby · 1mo

Thanks for the extra context. I mean, if we can get our design right then maybe we can inspire the rest ;)

There's a new experimental React feature, <Activity>, that would let users navigate to a different page and then return to the feed without losing their place. I haven't tried to make it work yet, but it's high on the to-do list.

LessWrong Feed [new, now in beta]
Ruby · 1mo

Oh no, that sounds no good at all. You might be relieved to hear I have a to-do item to explore an alternative to the overlay design.

11 · Ruby's Quick Takes · 7y · 129 comments

Wikitag Contributions

Eliezer's Lost Alignment Articles / The Arbital Sequence · 8 months ago (+10050)
Tag CTA Popup · 9 months ago (+4/-231)
LW Team Announcements · 9 months ago
GreaterWrong Meta · 9 months ago
Intellectual Progress via LessWrong · 9 months ago (-401)
Wiki/Tagging · 9 months ago
Moderation (topic) · 9 months ago
Site Meta · 9 months ago
What's a Wikitag? · 9 months ago
Posts

58 · At odds with the unavoidable meta-message · 22d · 22 comments
56 · The Sixteen Kinds of Intimacy · 4mo · 2 comments
52 · LessWrong Feed [new, now in beta] · 5mo · 71 comments
48 · A collection of approaches to confronting doom, and my thoughts on them · 7mo · 18 comments
84 · A Slow Guide to Confronting Doom · 7mo · 20 comments
207 · Eliezer's Lost Alignment Articles / The Arbital Sequence · 8mo · 10 comments
281 · Arbital has been imported to LessWrong · 8mo · 30 comments
43 · Which LessWrong/Alignment topics would you like to be tutored in? [Poll] · 1y · 12 comments
49 · How do we know that "good research" is good? (aka "direct evaluation" vs "eigen-evaluation") · 1y · 21 comments
68 · Friendship is transactional, unconditional friendship is insurance · 1y · 24 comments