Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained, and that you can only understand them by experiencing them, Kaj set out to write his own detailed, gears-level, non-mysterious, non-"woo" explanation of how meditation and related practices work, in the same way you might explain the operation of an internal combustion engine.
This paper presents an alternative to the standard activation function used in sparse autoencoders, one that produces a Pareto improvement over both the standard sparse autoencoder architectures and sparse autoencoders trained with the Sqrt(L1) penalty.
Learnable parameters of a sparse autoencoder: $W_{enc}$, $b_{enc}$, $W_{dec}$, $b_{dec}$.

Notation: Encoder/Decoder

Let

$$\mathrm{enc}(x) = \mathrm{ReLU}(W_{enc}\, x + b_{enc}), \qquad \mathrm{dec}(f) = W_{dec}\, f + b_{dec},$$

so that the full computation done by an SAE can be expressed as

$$\mathrm{SAE}(x) = \mathrm{dec}(\mathrm{enc}(x)).$$

An SAE is trained with gradient descent on

$$\mathcal{L}(x) = \lVert x - \mathrm{SAE}(x) \rVert_2^2 + \lambda \, S(\mathrm{enc}(x)),$$

where $\lambda$ is the sparsity penalty coefficient (often the "L1 coefficient") and $S$ is the sparsity penalty function, used to encourage sparsity.
$S$ is commonly the L1 norm, but there has been other work producing Pareto improvements on the L0 and CE metrics by taking Sqrt(L1) as the penalty function. We will use this as a further baseline to compare against when...
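To make the setup above concrete, here is a minimal PyTorch sketch of the SAE computation and training loss (my own illustration: the shapes, the ReLU encoder, and the element-wise reading of the Sqrt(L1) penalty are assumptions for this sketch, not details taken from the paper):

```python
# Minimal sketch of the SAE computation and loss described above.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Learnable parameters: W_enc, b_enc, W_dec, b_dec.
        self.W_enc = nn.Parameter(torch.randn(d_model, d_hidden) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_hidden))
        self.W_dec = nn.Parameter(torch.randn(d_hidden, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # enc(x) = ReLU(x W_enc + b_enc): sparse feature activations.
        return torch.relu(x @ self.W_enc + self.b_enc)

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        # dec(f) = f W_dec + b_dec: reconstruction of the input.
        return f @ self.W_dec + self.b_dec

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SAE(x) = dec(enc(x)).
        return self.decode(self.encode(x))


def sae_loss(sae: SparseAutoencoder, x: torch.Tensor, lam: float,
             penalty: str = "l1") -> torch.Tensor:
    """Reconstruction error plus lambda * S(enc(x))."""
    f = sae.encode(x)
    recon = sae.decode(f)
    mse = ((x - recon) ** 2).sum(dim=-1).mean()
    if penalty == "l1":            # S(f) = ||f||_1
        s = f.abs().sum(dim=-1).mean()
    elif penalty == "sqrt_l1":     # one possible reading of Sqrt(L1)
        s = f.abs().sqrt().sum(dim=-1).mean()
    else:
        raise ValueError(f"unknown penalty: {penalty}")
    return mse + lam * s
```

The Pareto comparisons mentioned above are over this tradeoff: tuning $\lambda$ (and the architecture) to push down the average number of active features (L0) while keeping reconstruction error, and hence the increase in the model's downstream CE loss, as low as possible.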
This is the eighth post in my series on Anthropics. The previous one is Lessons from Failed Attempts to Model Sleeping Beauty Problem. The next one is Beauty and the Bets.
Suppose we take the insights from the previous post, and directly try to construct a model for the Sleeping Beauty problem based on them.
We expect a halfer model, so

$$P(Heads) = P(Tails) = 1/2.$$

On the other hand, in order not to repeat Lewis' Model's mistakes:

$$P(Heads|Monday) = P(Tails|Monday) = 1/2.$$

But both of these statements can only be true if

$$P(Monday) = 1.$$

And, therefore, $P(Tuesday)$, apparently, has to be zero, which sounds obviously wrong. Surely the Beauty can be awakened on Tuesday!
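Spelling out why these two constraints force that conclusion (my own restatement of the standard Bayesian step, not a quote from the post): under Heads the only awakening happens on Monday, so $P(Monday|Heads) = 1$, and therefore

$$P(Heads|Monday) = \frac{P(Monday|Heads)\,P(Heads)}{P(Monday)} = \frac{1/2}{P(Monday)},$$

which can equal $1/2$ only if $P(Monday) = 1$, i.e. only if $P(Tuesday) = 0$.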
At this point, I think, you won't be surprised if I tell you that there are philosophers who are eager to bite this bullet and claim that the Beauty should, indeed, reason as...
Re: no coherent "stable" truth value: indeed. But still… if she wonders out loud "what day is it?", then at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that the way you "resolve" this tension is that the two of them are technically asking different questions, even though they are using the same words. But still… how surprised should she be if she were to learn that today is Monday? It seems that taking your stance to its conclusion, the answer would be "zero surprise: she knew for sure she wou...
This summarizes a (possibly trivial) observation that I found interesting.
Story
An all-powerful god decides to play a game. They stop time, grab a random human, and ask them "What will you see next?". The human answers, then time is switched back on and the god looks at how well they performed. Most of the time the humans get it right, but occasionally they are caught by surprise and get it wrong.
To be more generous, the god decides to give them access (for the game) to the entirety of all objective facts: the position and momentum of every elementary particle, every thought and memory anyone has ever had (before the time freeze), etc. However, performance in the game suddenly drops from 99% to 0%. How can this be? They...
An idea I've been playing with recently:
Suppose you have some "objective world" space $W$. Then in order to talk about subjective questions, you need a reference frame, which we could think of as the members of a fiber of some function $f : I \to W$, for some "interpretation space" $I$.
The interpretations themselves might abstract to some "latent space" $L$ according to a function $g : I \to L$. Functions of $L$ would then be "subjective" (depending on the interpretation they arise from), yet still potentially meaningfully constrained, based on $g$. In particular if some struct...
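To make the structure concrete, here is a toy Python sketch (everything in it, including the spaces, the unit-conversion example, and the names, is invented for illustration and does not come from the original note):

```python
# Toy illustration of the setup above.
# W: an "objective world" space -- here, lengths in meters.
# I: an "interpretation space" -- a length together with a choice of unit.
# f: I -> W forgets the unit; the fiber over w is every way of expressing
#    the same objective length w.
# g: I -> L abstracts an interpretation to the numeric value actually "seen".

W = [1.0, 2.5]                      # objective lengths, in meters
UNITS = {"m": 1.0, "ft": 3.28084}   # interpretations differ by unit choice

I = [(w, u) for w in W for u in UNITS]   # interpretation space
f = lambda i: i[0]                       # I -> W: forget the unit
g = lambda i: i[0] * UNITS[i[1]]         # I -> L: numeric value under that unit


def fiber(w):
    """All interpretations that describe the same objective length w."""
    return [i for i in I if f(i) == w]


# "Is the value bigger than 2?" is a function of L, hence "subjective": its
# answer depends on which interpretation you pick from the fiber, not only
# on w. But it is still constrained through g -- e.g. every unit choice
# preserves the ordering of lengths.
for i in fiber(1.0):
    print(i, g(i), g(i) > 2)   # (1.0, 'm') 1.0 False / (1.0, 'ft') 3.28084 True
```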
Some background about me. I currently live in Seaside, CA. I have a BS in psychology and an AAS in information technology network administration. I am currently a cashier at a gas station but want to find a better job for many reasons. I want a job that will fulfill my high need for analytical thought (high in need for cognition, if you know what that means) and problem solving, and that hopefully maximizes the amount of time I can be with my wife (who is in the military and "works" 7-3). I am pretty new to the job search thing because I spent 6 years in college with the same job, as basically a system admin. (A note of worry that applies to all jobs: I have already developed carpal tunnel and had surgery and...
There are also people whose job it is to spend a lot of time on the telephone, and who are thus easily reached by telephone even if they are younger.
A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:
“When are you going to write about seed oils?”
“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”
“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”
“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”
He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...
Thanks for this piece. I admit I have always had a bit of residual aversion to seed oils that I've struggled to shake.
Having said that, since you're pushing so strongly against blaming seed oils and in favour of "processing" as the mechanism for poor health, I think I need to push back a bit.
If you want to be healthier, we know ways you can change your diet that will help: Increase your overall diet “quality”. Eat lots of fruits and vegetables. Avoid processed food. Especially avoid processed meats.
"Avoid processed food" works very well as a heuristic - far better th...
Concerns over AI safety and calls for government control over the technology are highly correlated but they should not be.
There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests.
Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives...
You're saying governments can't address existential risk, because they only care about what happens within their borders and term limits. And therefore we should entrust existential risk to firms, which only care about their own profit in the next quarter?!
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).
I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile or lazy, and is extremely unlikely to convince a believer.
Take this recent Leif Wenar Time article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And...
Good article.
It's an asymmetry worth pointing out.
It seems related to some concept of a "low interest rate phenomenon in ideas". Sometimes in a low interest rate environment, people fund all sorts of stuff, because they want any return and credit is cheap. Later much of this looks bunk. Likewise, much EA behaviour around the plentiful money and status of the FTX era looks profligate by today's standards. In the same way, I wonder what ideas are held up by some vague consensus rather than being good ideas.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
I like his UI. In fact, I shared CQ2 with Andy in February, since his notes site was the only other place where I had seen the sliding pane design. He said CQ2 is neat!
Epistemic status: pretty confident. Based on several years of meditation experience combined with various pieces of Buddhist theory as popularized in various sources, including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees; also discussions with other people who have practiced meditation, and scatterings of cognitive psychology papers that relate to the topic. The part that I’m the least confident of is the long-term nature of enlightenment; I’m speculating on what comes next based on what I’ve experienced, but have not actually had a full enlightenment. I also suspect that different kinds of traditions and practices may produce different kinds of enlightenment states.
While I liked Valentine’s recent post on kensho and its follow-ups a lot,...
Based on the link, it seems you follow the Theravada tradition.
For what it's worth, I don't really follow any one tradition, though Culadasa does indeed have a Theravada background.