I'd carefully examine the plan to do an MD given the breadth of your interests/capabilities. It seems like you could do a lot of things and the opportunity cost is pretty high. Certainly if your goal is caring for others, I'd question it: not just what comes after, but whether it really makes sense to do at all.
Curated. I like this post because I see two messages in it: (a) there's a lot of complexity, and (b) this complexity is in principle tractable: patterns can be detected, predictions made, and by extension better decisions made. I've often thought it's easy to feel that modern medicine is advanced – so much more so than past medicine – but I think that's how it feels because this is the most advanced medicine has ever been, not because of how advanced it will be. I hope current treatments are eventually considered primitive.
I'd wager that somewhat more advanced AI/statistical models will in fact be able to distill out an ontology and features that are human-understandable.
From my limited personal experience (my wife was diagnosed with osteosarcoma in 2020), the descriptions in this post land. A huge part of her case was trying to figure out a precise diagnosis for her rare cancer, and even that amounted to sorting it into one of two broad buckets, the implication being how aggressive the cancer was and hence the best treatment. Eventually her tumor was sequenced, but that was only so-so useful because the ability to interpret the genome was so limited – reading a few known genes like portents. (Ultimately we concluded the cancer was relatively mild, amputated for good measure to be sure of it, and skipped chemo. Five years on, there seems to be no sign of the cancer.)
I think they also mix together a broader metaphysical claim and a claim about practical strategy that could be made without the metaphysical claim.
Still feels hard to believe. The most-viewed YouTube video has 15B views, and I don't think there are that many videos with over a billion. But you think one specific character.ai persona has had nearly a billion conversations started?
https://en.wikipedia.org/wiki/List_of_most-viewed_YouTube_videos
I see the 864M interactions figure, which I don't think means opened conversations.
Curated. There's an amusing element here: one of the major arguments for concern about powerful AI is how things will fail to generalize out of distribution. There's a similarish claim here – standard economics thinking not generalizing well to the unfamiliar and assumption-breaking domain of AI.
More broadly, I've long felt many people don't get something like "this is different but actually the same". As in: AI is different from previous technologies (surprising!), but it also fits broader existing trendlines (e.g. the pretty rapid growth of humanity over its history – zoom out and this is business as usual). Or: the difference is that there will be something different and beyond LLMs in the coming years, but that too is on trend, just as LLMs were different from what came before.
This post helps convey the above. To the extent there are laws of economics, they still hold, but AI – namely artificial people, from an economic perspective at least – requires non-standard analysis, and the outcome is weird and non-standard too, compared to what many expect. All in all, kudos!
Thanks for the extra context. I mean, if we can get our design right then maybe we can inspire the rest ;)
There's a new experimental React feature, <Activity>, that would let us support navigating to a different page and then returning to the feed without losing your place. I haven't tried to make it work yet, but it's high on the to-do list.
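For concreteness, here's a minimal sketch of the idea, assuming the experimental API as currently documented (exported as unstable_Activity in experimental builds; newer versions may export it as Activity directly, and the props could still change). The FeedContainer component and its props are made up for illustration, not how our code is actually structured:

```tsx
import { useState, unstable_Activity as Activity, type ReactNode } from "react";

// Hypothetical container: swaps between the feed and a full post page.
export function FeedContainer({ feed, page }: { feed: ReactNode; page: ReactNode }) {
  const [onFeed, setOnFeed] = useState(true);

  return (
    <>
      {/* A hidden <Activity> keeps the feed's React state and DOM alive
          instead of unmounting it, so coming back doesn't lose your place. */}
      <Activity mode={onFeed ? "visible" : "hidden"}>{feed}</Activity>
      {!onFeed && page}
      <button onClick={() => setOnFeed((v) => !v)}>
        {onFeed ? "Open post" : "Back to feed"}
      </button>
    </>
  );
}
```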
Oh no, that sounds no good at all. You might be relieved to hear it's on my to-do list to explore an alternative to the overlay design.
I actually have a prototype of "Claude/LLM integrated into LessWrong" that makes it easy to load LW content into the context. I could enable that for you, but it's still on Claude 3.5, iirc. I should maybe update it, check that it still works well, and let people try it out.
Making it easy to export content, though, is an alternative.
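(To illustrate the "load LW content into the context" shape from the prototype comment above, not how it's actually built: a sketch using the Anthropic TypeScript SDK, where fetchPostBody and askAboutPost are hypothetical helpers and the GraphQL query is simplified rather than the exact schema.)

```ts
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical helper: fetch a post's title and body. The query below is
// illustrative, not necessarily the exact GraphQL schema.
async function fetchPostBody(postId: string): Promise<string> {
  const res = await fetch("https://www.lesswrong.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query ($id: String) {
        post(input: { selector: { _id: $id } }) {
          result { title contents { markdown } }
        }
      }`,
      variables: { id: postId },
    }),
  });
  const { data } = await res.json();
  return `# ${data.post.result.title}\n\n${data.post.result.contents.markdown}`;
}

// Hypothetical helper: put the post into the model's context and ask about it.
async function askAboutPost(postId: string, question: string): Promise<string> {
  const postText = await fetchPostBody(postId);
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: `Here is a LessWrong post:\n\n${postText}\n\n${question}`,
      },
    ],
  });
  // The response content is a list of blocks; keep only the text ones.
  return response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}
```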
We have policies to not look at user data. Vote data and DM data are the most sacred, though we will look at votes if the patterns suggest fraudulent behavior (e.g. mass downvoting of a person). We tend to inform/consult others on this, but no, there's nothing technical blocking someone from accessing the data on their own.