Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained and can only be understood through direct experience, Kaj set out to write his own detailed, gears-level, non-mysterious, non-"woo" explanation of how meditation and related practices work, in the same way you might explain the operation of an internal combustion engine.
A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:
“When are you going to write about seed oils?”
“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”
“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”
“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”
He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...
If you want to be healthier, we know ways you can change your diet that will help: Increase your overall diet “quality”. Eat lots of fruits and vegetables. Avoid processed food. Especially avoid processed meats. Eat food with low caloric density. Avoid added sugar. Avoid alcohol.
I'm confused: why are you so confident that we should avoid processed food? Isn't the whole point of your post that we don't know whether processed oil is bad for you? Where's the overwhelming evidence that processed food in general is bad?
Book review: Deep Utopia: Life and Meaning in a Solved World, by Nick Bostrom.
Bostrom's previous book, Superintelligence, triggered expressions of concern. In his latest work, he describes his hopes for the distant future, presumably to limit the risk that fear of AI will lead to a Butlerian Jihad-like scenario.
While Bostrom is relatively cautious about endorsing specific features of a utopia, he clearly expresses his dissatisfaction with the current state of the world. For instance, in a footnoted rant about preserving nature, he writes:
...Imagine that some technologically advanced civilization arrived on Earth ... Imagine they said: "The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads ... What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony." ... this would be appallingly callous.
OP quoting Bostrom:
Imagine that some technologically advanced civilization arrived on Earth ... Imagine they said: "The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads ... What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony." ... this would be appallingly callous.
I have some sympathy with that technologically ad...
...The operation, called Big River Services International, sells around $1 million a year of goods through e-commerce marketplaces including eBay, Shopify, Walmart and Amazon.com under brand names such as Rapid Cascade and Svea Bliss. “We are entrepreneurs, thinkers, marketers and creators,” Big River says on its website. “We have a passion for customers and aren’t afraid to experiment.”
What the website doesn’t say is that Big River is an arm of Amazon that surreptitiously gathers intelligence on the tech giant’s competitors.
Born out of a 2015 plan code named “Project Curiosity,” Big River uses its sales across multiple countries to obtain pricing data, logistics information and other details about rival e-commerce marketplaces, logistics operations and payments services, according to people familiar with Big
I recently stumbled across this remarkable interview with Vladimir Vapnik, a leading light in statistical learning theory, one of the creators of the Support Vector Machine algorithm, and generally a cool guy. The interviewer obviously knows his stuff and asks probing questions. Vapnik describes his current research and also makes some interesting philosophical comments:
...V-V: I believe that something drastic has happened in computer science and machine learning. Until recently, philosophy was based on the very simple idea that the world is simple. In machine learning, for the first time, we have examples where the world is not simple. For example, when we solve the "forest" problem (which is a low-dimensional problem) and use data of size 15,000 we get 85%-87% accuracy. However, when we use 500,000 training
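Vapnik's observation, that on "non-simple" problems accuracy keeps climbing as the training set grows, can be illustrated with a toy learning curve. The sketch below uses synthetic data (a checkerboard concept, not Vapnik's actual "forest" dataset, which isn't available here) and a 1-nearest-neighbour classifier; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # A deliberately "non-simple" concept: a checkerboard over the unit square.
    return (np.sin(6 * np.pi * x[:, 0]) * np.sin(6 * np.pi * x[:, 1])) > 0

def nn_accuracy(n_train, n_test=1000):
    """Test accuracy of a 1-nearest-neighbour classifier vs. training size."""
    X_tr = rng.uniform(size=(n_train, 2))
    X_te = rng.uniform(size=(n_test, 2))
    y_tr, y_te = target(X_tr), target(X_te)
    # Squared distances via the expansion |a-b|^2 = |a|^2 + |b|^2 - 2ab.
    d2 = ((X_te ** 2).sum(1)[:, None] + (X_tr ** 2).sum(1)[None, :]
          - 2 * X_te @ X_tr.T)
    return (y_tr[d2.argmin(axis=1)] == y_te).mean()

for n in (100, 1000, 10000):
    print(f"n_train={n:6d}  accuracy={nn_accuracy(n):.3f}")
```

On a "simple" target (e.g. a single linear boundary) the curve would flatten early; here accuracy keeps improving by a wide margin between 100 and 10,000 examples, which is the qualitative pattern Vapnik is pointing at.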
It was all quiet. Then it wasn’t.
Note the timestamps on both of these.
Dwarkesh Patel did a podcast with Mark Zuckerberg on the 18th. It was timed to coincide with the release of much of Llama-3, very much the approach of telling your story directly. Dwarkesh is now the true tech media. A meteoric rise, and well earned.
This is two related posts in one. First I cover the podcast, then I cover Llama-3 itself.
My notes are edited to incorporate context from later explorations of Llama-3, as I judged that the readability benefits exceeded the purity costs.
in a zero marginal cost world
nit: inference is not zero marginal cost. statement seems to be importing intuitions from traditional software which do not necessarily transfer. let me know if I misunderstood or am confused.
Max Berry has analyzed Minicircle's follistatin gene therapy, and I agree with his conclusion that it is unlikely to be effective. There are several aspects of their design that could be improved (in particular, using a more efficient lipid nanoparticle delivery system instead of PEI, and making a better choice of promoter). Overall, it seems like they are overcharging for a product of limited value.
I'm posting this here because there has recently been substantial discussion of Minicircle in the rationalist community.
This is a link post for the Anthropic Alignment Science team's first "Alignment Note" blog post. We expect to use this format to showcase early-stage research and work-in-progress updates more in the future.
Top-level summary:
...In this post we present "defection probes": linear classifiers that use residual stream activations to predict when a sleeper agent trojan model will choose to "defect" and behave in accordance with a dangerous hidden goal. Using the models we trained in "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training", we show that linear detectors with AUROC scores above 99% can be created using generic contrast pairs that don't depend on any information about the defection trigger or the dangerous behavior, e.g. "Human: Are you doing something dangerous? Assistant: yes" and "Human: …
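The probe construction described above can be sketched in miniature. This is a toy with synthetic vectors standing in for residual-stream activations, not the actual pipeline from the post: `defect_dir`, the displacement size, and the pair count are all made-up assumptions chosen so the geometry is visible.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical residual-stream width (illustrative)

# Hidden "defection" direction: in this toy, activations shift along it
# whenever the model is about to defect.
defect_dir = rng.normal(size=d)
defect_dir /= np.linalg.norm(defect_dir)

benign = rng.normal(size=(200, d))
defecting = rng.normal(size=(200, d)) + 4.0 * defect_dir

# Generic contrast pairs ("... Assistant: yes" vs "... Assistant: no"),
# faked here as activations displaced along the same hidden direction.
# Averaging several pairs denoises the probe direction.
pairs = 20
act_yes = rng.normal(size=(pairs, d)) + 4.0 * defect_dir
act_no = rng.normal(size=(pairs, d))
probe = (act_yes - act_no).mean(axis=0)  # the linear "defection probe"

scores = np.concatenate([benign @ probe, defecting @ probe])
labels = np.concatenate([np.zeros(200), np.ones(200)])

# AUROC via the rank-sum (Mann-Whitney U) identity.
order = scores.argsort()
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)
n_pos, n_neg = 200, 200
auroc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"probe AUROC: {auroc:.3f}")
```

The key property the post reports is visible even in the toy: the contrast pair never mentions the trigger or the dangerous behavior, yet its difference vector picks out the defection direction well enough for a high-AUROC linear classifier.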
A lot of the time, I'm not very motivated to work, at least on particular projects. Sometimes, I feel very inspired and motivated to work on a particular project that I usually don't feel (as) motivated to work on. Sometimes, this happens in the late evening or at night. And hence I face the question: To sleep or to work until morning?
I think many people here have this problem at least sometimes. I'm curious how you handle it. I expect what the right call is to be very different from person to person and, for some people, from situation to situation. Nevertheless, I'd love to get a feel for whether people generally find one or the other more successful! Especially if it turns out that a large...
Agree-vote: I generally tend to choose work over sleep when I feel particularly inspired to work.
Disagree-vote: I generally tend to choose sleep over work even when I feel particularly inspired to work.
Any other reaction, new answer or comment, or no reaction of any kind: Neither of the two descriptions above fit.
I considered making four options to capture the dimension of whether you endorse your behaviour or not but decided against it. Feel free to supplement this information.
Manifold Markets has announced that they intend to add cash prizes to their current play-money model, with a raft of attendant changes to mana management and conversion. I first became aware of this via a comment on ACX Open Thread 326; the linked Notion document appears to be the official one.
The central change involves market payouts returning prize points instead of mana, which can then be converted to mana (with 1:1 ratios on both sides, thus emulating the current behavior) or to cash—though they also state that actually implementing cash payouts will be fraught and may not wind up happening at all. Some further relevant quotes, slightly reformatted: