About once every 15 minutes, someone tweets "you can just do things". It seems like a rather powerful and empowering meme and I was curious where it came from, so I did some research into its origins. Although I'm not very satisfied with what I was able to reconstruct, here are some of the things that I found:
In 1995, Steve Jobs gives the following quote in an interview:
...Life can be much broader, once you discover one simple fact, and that is that everything around you that you call life was made up by people that were no smarter than you...
Update! I missed an entire evolutionary branch of the meme: "You can just do stuff" (rather than "things").
In March 2021, @leaacta tweets:
life hack: you don't have to explain yourself or understand anything, you can just do stuff
And gets retweeted by a bunch of people in TPOT.
Then, in June 2022, comedian Rodney Norman posts a video called Go Be Weird with a motivational speech of some sort:
...Hey, you know you can just do stuff?
Like, you don't need anybody's permission or anything.
You just... you just kind of come up with weird stuff you want to go do,
If I see a YouTube video pop up in my feed right after it’s published, I can often come up with a comment that gets a lot of likes and ends up near the top of the comment section.[1] It’s actually not that hard to do: the hardest part is being quick enough[2] to get into the first 10-30 comments (which I assume is the average number of comments viewers glance over), but the comment itself might be pretty generic and not that relevant to the video’s content.
Do you know a way I could use tha...
IIUC, those are just bots who copy early and liked comments. So my comment would also be copied by other bots.
An AI-content X/Twitter account with nearly 100k followers blocked me, and I got a couple of disapproving replies for pointing out that the account was AI-generated. I quote-tweeted the account mostly to share a useful Chrome extension that I've been using to detect AI content, but I was surprised that pointing out the account was AI-generated drew a negative reaction in the form of a few replies. I am neither pro- nor anti-AI accounts, but being aware of the nature of the content seems useful.
Would be curious to hear others' thoughts on the ph...
Bot farms have been around for a while. Use of AI for this purpose (along with all other, more useful purposes) has been massively increasing over the last few years, and a LOT in the last 6 months.
Personally, I'd rather have someone point out the errors or misleading statements in the post, rather than worrying about whether it's AI or just a content farm of low-paid humans or someone with too much time and a bad agenda. But a lot of folks think "AI generated" is bad, and react as such (some by stopping following such accounts, some by blocking the complainers).
The non-dumb solution is to sunset the Jones Act, isn't it? The problem with workarounds is that they generally need to be approved by the same government that is maintaining the law in the first place.
One theme I've been thinking about recently is how bids for connection and understanding are often read as criticism. For example:
Person A shares a new idea, feeling excited and hoping to connect with Person B over something they've worked hard on and hold dear.
Person B asks a question about a perceived inconsistency in the idea, feeling excited and hoping for an answer which helps them better understand the idea (and Person A).
Person A feels hurt and unfairly rejected by Person B. Specifically, Person A feels like Person B isn't willing to give their sinc...
It was definitely relevant! Thank you for the link--I think introducing this idea might assist communication in some of my relationships.
I found Yarrow Bouchard's quick take on the EA Forum regarding LessWrong's performance in the COVID-19 pandemic quite good.
I don't trust her to do such an analysis in an unbiased way[1], but the quick take was pretty full of empirical investigation that made me change my mind about how well LessWrong in particular did.
There's much more historiography to be done here: who believed what and when, what the long-term effects of COVID-19 are, which interventions did what. But this seems like the state of the art on "how well did LessWrong actually p...
Analysis of "first to talk seriously about it" is probably not worth much, for COVID-19 OR for the Soviet Union. Actual behavior and impact are what matter, and I don't know that LW members were significantly different from their non-LW-using cohorts in their areas.
I very roughly polled METR staff (using Fatebook) on what the 50% time horizon will be by EOY 2026, conditional on METR reporting something analogous to today's time horizon metric.
I got the following results: 29% average probability that it will surpass 32 hours. 68% average probability that it will surpass 16 hours.
The first question got 10 respondents and the second question got 12. Around half of the respondents were technical researchers. I expect the sample to be close to representative, but maybe a bit more short-timelines than the rest of METR staff.
The average probability that the question doesn't resolve AMBIGUOUS is somewhere around 60%.
Just for context, the reason we might not report something like today's time horizon metric is that we don't have enough tasks beyond 8 hours. We're actively working on several ways to extend this, but there's always a chance none of them will work out and we won't have enough confidence to report a number by the end of 2026.
People not working with LLMs often say things like "nope, they just follow stochastic patterns in the data, matrices of floats don't have beliefs or goals". People on LessWrong could, I think, claim something like "they have beliefs, and to what extent they have goals is a very important empirical question".
Here's my attempt at writing a concise, decent-quality answer the second group could give to the first.
Consider a houseplant. Its leaves are directed towards the window. If you ...
Note that the usage of these terms and demand for rigor varies by orders of magnitude based on who you're talking with and what aspects of "belief" are salient to the question at hand. My friends and coworkers don't bat an eye at "Claude believes that Paris is the capital of France", or even "Claude thinks it's wasteful to spend money on 3p antivirus software".
Only when considering whether a given AI instance is a moral actor or moral patient does the ambiguity matter, and then we're really best off tabooing these words that imply high similarity to the way humans experience things.
Hi, does anyone from the US want to donation-swap with me to a German tax-deductible organization? I want to donate $2410 to the Berkeley Genomics Project via Manifund.
For anyone considering niplav's offer, the most obvious tax-deductible-in-Germany donation options for EAs / rationalists are probably Effektiv Spenden's "giving funds":
Learned about 'Harberger tax' recently.
The motivation is roughly: owners self-assess the value of an asset, pay a tax proportional to that declared value, and must sell to anyone willing to pay it, so declared values stay honest and assets flow to whoever values them most.
I think the point was: unless they explicitly want to harm or threaten you - which incidentally is a situation often not accounted for in the foundational assumptions of many economic models (utility functions are generally assumed to be independent and monotonic in resources, and so on).
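To make the mechanism concrete, here is a rough Python sketch of a self-assessed (Harberger) tax. This is my own illustration, not from the posts above; the HarbergerAsset class and the 7% rate are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class HarbergerAsset:
    owner: str
    declared_price: float      # owner's self-assessed value
    tax_rate: float = 0.07     # hypothetical 7% of declared value per period

    def tax_due(self) -> float:
        # Periodic tax owed on the self-assessed value.
        return self.tax_rate * self.declared_price

    def force_buy(self, buyer: str, offer: float) -> bool:
        # Anyone may take the asset by paying at least the declared price.
        if offer >= self.declared_price:
            self.owner = buyer
            self.declared_price = offer  # new owner's initial self-assessment
            return True
        return False

# Declaring high protects against forced sales but raises your tax bill;
# declaring low saves tax but invites a buyout.
plot = HarbergerAsset(owner="alice", declared_price=100_000)
print(plot.tax_due())                  # 7000.0 per period
print(plot.force_buy("bob", 120_000))  # True: bob takes the asset
print(plot.owner)                      # bob
```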
A talk on embedded AIXI from Alexander Meulemans and Rajai Nasser is in 2 hours: https://uaiasi.com/2025/12/14/alexander-meulemans-rajai-nasser-on-embedded-aixi/
OpenAI claims GPT-5.2 solved an open COLT problem with no assistance: https://openai.com/index/gpt-5-2-for-science-and-math/
This might be the first thing that meets my bar of autonomously having an original insight??
By "not too hard in retrospect" I mean that the models are applying well-known techniques in settings where these are not obvious, but also not too surprising, e.g. where experts in that subfield will say things like "of course you'd do that" when examining the solution. (Of course, one should be careful with such self-reports, but I tend to find this believable)
Am I understanding correctly that recent revelations from Ilya's deposition (e.g. looking at the parts here) suggest Ilya Sutskever and Mira Murati seem like very selfish and/or cowardly people? They seem approximately as scheming or manipulative as Sam Altman, if maybe more cowardly and less competent.
My understanding is that they were basically wholly responsible for causing the board to try to fire Sam Altman. But when it went south, they actively sabotaged the firing (e.g. Mira disavowing it and trying to retain her role, Ilya saying he regr...
"cowardly" because my strong guess is that their actions were driven by fear of social censure rather than calculated attempts to minimize losses. If they were trying to minimize losses to their non-selfish goals of ousting Sam A, who I think they believed to be a bad and dangerous actor, that would have been better served by coming clean about why they did what they did.
Do we have some page containing resources for rationalist parents, or generally for parents of smart children? Such as recommended books, toys, learning apps, etc.
I found the tag https://www.lesswrong.com/w/parenting but I was hoping for something like the best textbooks / recommendations / reference works lists, but for parents/children.
I'm not arguing either way; I just note this specific aspect, which seems relevant. The question is: is the baby's body more susceptible to alcohol than an adult's body? For example, does the baby's liver work better or worse than an adult's? Are there developmental processes that can be disturbed by the presence of alcohol? By default I'd assume the effect is proportional (except maybe the baby "lives faster" in some sense, so the effect may be proportional to metabolism or growth speed or something). But all of that is speculation.
I saw that both Anthropic and OpenAI publish transparency reports on government requests for user data, which include FISA requests/NSLs.
Rationalists often say "insane" to talk about normie behaviors they don't like, and "sane" to talk about behaviors they like better. This seems unnecessarily confusing and mean to me.
This clearly is very different from how most people use these words. Like, "guy who believes in God" is very different from "resident of a psych ward." It can even cause legitimate confusion when you want to switch back to the traditional definition of "insane". This doesn't seem very rational to me!
Also, the otherizing/dismissiv...
Yeah, I think some rationalists, e.g. Eliezer, use it a lot more than the general population, and differently from the popular figurative sense. As in "raising the sanity waterline."
You suspect someone in your community is a bad actor. Kinds of reasons not to move against them:
More reasons:
2.b. The problem is not lack of legible evidence per se, but the fact that the other members of the group are too stupid to understand anything; from their perspective even quite obvious evidence is illegible.
7. If you attack them and fail, it will strengthen their position; and either the chance of failure or the bonus they would get is high enough to make the expected value of your attack negative.
For example, they may have prepared a narrative like "there is a conspiracy against our group that will soon try to divide us by bringing up unfounded accusations against people like me", so if you fail to convince the others, you will provide evidence for the narrative.