Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained, and that you only understand them once you experience them, Kaj set out to write his own detailed gears-level, non-mysterious, non-"woo" explanation of how meditation and related practices work, in the same way you might explain the operation of an internal combustion engine.

yanni2h61
0
I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:
* Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
* Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:
* Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
* Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
* Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy: Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:
* Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
* Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
* Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are some additional points to consider:
* This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
* It's a gradual process. Be patient with yourself as you learn to practice this approach.
Quintin Pope2hΩ240
0
Idea for using current AI to accelerate medical research: suppose you were to take a VLM and train it to verbally explain the differences between two image data distributions. E.g., you could take 100 dog images, split them into two classes, insert tiny rectangles into class 1, feed those 100 images into the VLM, and then train it to generate the text "class 1 has tiny rectangles in the images". Repeat this for a bunch of different augmented datasets where we know exactly how they differ, aiming for a VLM with a general ability to in-context learn and verbally describe the differences between two sets of images. As training progresses, keep making the differences more numerous and subtler, while training the VLM to describe all of them.

Then, apply the model to various medical images. E.g., brain scans of people who are about to develop dementia versus those who aren't, skin photos of malignant and non-malignant blemishes, electron microscope images of cancer cells that can / can't survive some drug regimen, etc. See if the VLM can describe any new, human-interpretable features.

The VLM would generate a lot of false positives, obviously. But once you know about a possible feature, you can manually investigate whether it holds for distinguishing other examples of the thing you're interested in. Once you find valid features, you can add those into the training data of the VLM, so it's no longer just trained on synthetic augmentations. You might have to start with real datasets that are particularly easy to tell apart, in order to jumpstart your VLM's ability to accurately describe the differences in real data.

The other issue with this proposal is that it currently happens entirely via in-context learning. This is inefficient and expensive (100 images is a lot for one model at once!). Ideally, the VLM would learn the difference between the classes by actually being trained on images from those classes, and would learn to connect the resulting knowledge to language descriptions of the associated differences through some sort of meta-learning setup. Not sure how best to do that, though.
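A rough sketch of the synthetic-data step, assuming the "tiny rectangles" augmentation described above; the function names and the specific perturbation parameters are illustrative, not an existing pipeline:

```python
import random
from PIL import Image, ImageDraw

def add_tiny_rectangle(img: Image.Image, size: int = 6) -> Image.Image:
    """Draw one small rectangle at a random location (the known class-1 difference)."""
    draw = ImageDraw.Draw(img)
    x = random.randint(0, img.width - size - 1)
    y = random.randint(0, img.height - size - 1)
    draw.rectangle([x, y, x + size, y + size], fill=(255, 0, 0))
    return img

def make_difference_example(image_paths):
    """Split a pool of images into two classes, perturb class 1 in a known way,
    and return the text the VLM should be trained to produce."""
    paths = list(image_paths)
    random.shuffle(paths)
    half = len(paths) // 2
    class_0 = [Image.open(p).convert("RGB") for p in paths[:half]]
    class_1 = [add_tiny_rectangle(Image.open(p).convert("RGB")) for p in paths[half:]]
    target_text = "class 1 has tiny rectangles in the images"
    return class_0, class_1, target_text
```

Repeating this over many different known augmentations would give (image-set pair, description) training examples for the in-context difference-description task.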
Elizabeth10h92
1
A very rough draft of a plan to test prophylactics for airborne illnesses.

Start with a potential superspreader event. My ideal is a large conference, many of whose attendees travelled to get there, held in enclosed spaces with poor ventilation and air purification, in winter. Ideally >= 4 days long, so that people infected on day one are infectious while the conference is still running.

Call for sign-ups for testing ahead of time (disclosing all possible substances and side effects). Split volunteers into control and test groups. I think you need ~500 sign-ups in the winter to make this work.

Splitting controls is probably the hardest part. You'd like the control and treatment groups to be identical, but there are a lot of things that affect susceptibility: age, local vs. air travel, small children vs. not, sleep habits... it's hard to draw the line (one option, sketched below the list, is stratified randomization).

Make it logistically trivial to use the treatment. If it's lozenges or liquids, put individually packed dosages in every bathroom, with a sign reminding people to use them (color-code to direct people to the right basket). If it's a nasal spray, you will need to give everyone their own bottle, but make it trivial to get more if someone loses theirs.

Follow up a week later, asking whether people have gotten sick and when. If the natural disease load is high enough, this should give better data than any paper I've found.

Top contenders for this plan:
* zinc lozenge
* salt water gargle
* enovid
* betadine gargle
* zinc gargle
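For the "splitting controls" step, one option is stratified randomization on covariates like those listed above. A minimal sketch; the covariate names and the alternating assignment within each stratum are illustrative assumptions, not part of the original plan:

```python
import random
from collections import defaultdict

def stratified_assignment(volunteers, seed=0):
    """Assign volunteers to treatment/control, balancing groups within strata
    defined by covariates thought to affect susceptibility."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for v in volunteers:
        key = (v["age_bracket"], v["air_travel"], v["small_children"])
        strata[key].append(v)
    assignment = {}
    for group in strata.values():
        rng.shuffle(group)
        for i, v in enumerate(group):
            assignment[v["id"]] = "treatment" if i % 2 == 0 else "control"
    return assignment
```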
MIRI Technical Governance Team is hiring, please apply and work with us! We are looking to hire for the following roles:

* Technical Governance Researcher (2-4 hires)
* Writer (1 hire)

The roles are located in Berkeley, and we are ideally looking to hire people who can start ASAP. The team is currently Lisa Thiergart (team lead) and myself.

We will research and design technical aspects of regulation and policy that could lead to safer AI, focusing on methods that won’t break as we move towards smarter-than-human AI. We want to design policy that allows us to safely and objectively assess the risks from powerful AI, build consensus around the risks we face, and put in place measures to prevent catastrophic outcomes.

The team will likely work on:

* Limitations of current proposals such as RSPs
* Inputs into regulations, requests for comment by policy bodies (ex. NIST/US AISI, EU, UN)
* Researching and designing alternative Safety Standards, or amendments to existing proposals
* Communicating with and consulting for policymakers and governance organizations

If you have any questions, feel free to contact me on LW or at peter@intelligence.org

Popular Comments

Recent Discussion

Cross posted from the EA Forum.

Epistemic Status: all numbers are made up and/or sketchily sourced. Post errs on the side of simplistic poetry – take seriously but not literally.


If you want to coordinate with one person on a thing about something nuanced, you can spend as much time as you want talking to them – answering questions in realtime, addressing confusions as you notice them. You can trust them to go off and attempt complex tasks without as much oversight, and you can decide to change your collective plans quickly and nimbly.

You probably speak at around 100 words per minute. That's 6,000 words per hour. If you talk for 3 hours a day, every workday for a year, you can communicate 4.3 million words worth...

Isn't LessWrong a disproof of this?  Aren't we thousands of people?  If you picked two active LWers at random, do you think the average overlap in their reading material would be 5 words?  More like 100,000, I'd think.

7niplav7h
Consider proposing the most naïve formula for logical correlation[1].

Let a program p be a tuple of code for a Turing machine, intermediate tape states after each command execution, and output, all in binary. That is p = (c, t, o), with c ∈ {0,1}^+, t ∈ ({0,1}^+)^+ and o ∈ {0,1}^+. Let l = |t| be the number of steps that p takes to halt.

Then a formula for the logical correlation 合[2] of two halting programs p_1 = (c_1, t_1, o_1), p_2 = (c_2, t_2, o_2), a tape-state discount factor γ[3], and a string-distance metric d: {0,1}^+ × {0,1}^+ → ℕ could be

$$合(p_1, p_2, \gamma) = d(o_1, o_2) - \frac{1}{2} + \sum_{k=0}^{\min(l_1, l_2)} \gamma^k \cdot d\big(t_1(l_1 - k), t_2(l_2 - k)\big)$$

The lower 合, the higher the logical correlation between p_1 and p_2. The minimal value is −0.5. If d(o_1, o_2) < d(o_1, o_3), then it's also the case that 合(p_1, p_2, γ) < 合(p_1, p_3, γ).

One might also want to be able to deal with the fact that programs have different trace lengths, and penalize that, e.g. by amending the formula:

$$合'(p_1, p_2, \gamma) = 合(p_1, p_2, \gamma) + 2^{|l_1 - l_2|}$$

I'm a bit unhappy that the code doesn't factor into the logical correlation, and ideally one would want to be able to compute the logical correlation without having to run the program.

How does this relate to data=code?

----------------------------------------

1. Actually not explained in detail anywhere, as far as I can tell. I'm going to leave out all motivation here. ↩︎
2. Suggested by GPT-4. Stands for joining, combining, uniting. Also "to suit; to fit", "to have sexual intercourse", "to fight, to have a confrontation with", or "to be equivalent to, to add up". ↩︎
3. Which is needed because tape states close to the output are more important than tape states early on. ↩︎
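As a sanity check on the formula defined above (before the footnotes), here is a small sketch in code. It assumes a program is represented by the list of its intermediate tape states plus its output (all binary strings), uses padded Hamming distance as one possible choice of d, and sums over only the min(l_1, l_2) most recent tape states to keep indices in range; all of these are illustrative assumptions rather than anything fixed by the comment:

```python
def string_distance(a: str, b: str) -> int:
    """One possible d: Hamming distance after right-padding to equal length."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, "0"), b.ljust(n, "0")
    return sum(x != y for x, y in zip(a, b))

def logical_correlation(trace1, out1, trace2, out2, gamma=0.9):
    """合(p1, p2, γ): lower values mean higher logical correlation; minimum is -0.5."""
    l1, l2 = len(trace1), len(trace2)
    total = string_distance(out1, out2) - 0.5
    # Compare tape states backwards from the end, discounting earlier states by γ^k.
    for k in range(min(l1, l2)):
        total += gamma ** k * string_distance(trace1[l1 - 1 - k], trace2[l2 - 1 - k])
    return total

def logical_correlation_penalized(trace1, out1, trace2, out2, gamma=0.9):
    """合′: adds the 2^|l1 − l2| penalty for differing trace lengths."""
    base = logical_correlation(trace1, out1, trace2, out2, gamma)
    return base + 2 ** abs(len(trace1) - len(trace2))
```

Under this sketch, two programs with identical outputs and identical recent tape states score near the −0.5 minimum, while diverging traces push the score up.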

Ideally one would want to be able to compute the logical correlation without having to run the program.

I think this isn't possible in the general case. Consider two programs, one of which is "compute the sha256 digest of all 30-byte sequences and halt if the result is 9a56f6b41455314ff1973c72046b0821a56ca879e9d95628d390f8b560a4d803" and the other of which is "compute the md5 digest of all 30-byte sequences and halt if the result is 055a787f2fb4d00c9faf4dd34a233320".

Any method that was able to compute the logical correlation between those would also be a program which, at a minimum, reverses all cryptographic hash functions.
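For concreteness, a sketch of the two programs in question, using the digests given above; the brute-force enumeration is astronomically slow in practice, which is exactly why statically relating the two programs' behaviour amounts to inverting the hashes:

```python
import hashlib
from itertools import product

SHA256_TARGET = "9a56f6b41455314ff1973c72046b0821a56ca879e9d95628d390f8b560a4d803"
MD5_TARGET = "055a787f2fb4d00c9faf4dd34a233320"

def program_1():
    # Enumerate 30-byte sequences; halt when the sha256 digest matches the target.
    for candidate in product(range(256), repeat=30):
        if hashlib.sha256(bytes(candidate)).hexdigest() == SHA256_TARGET:
            return bytes(candidate)

def program_2():
    # Same search, but against the md5 target.
    for candidate in product(range(256), repeat=30):
        if hashlib.md5(bytes(candidate)).hexdigest() == MD5_TARGET:
            return bytes(candidate)
```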

2Alexander Gietelink Oldenziel7h
Could you say more about the motivation here?
3niplav7h
Whenever people have written/talked about ECL, a common thing I've read/heard was that "of course, this depends on us finding some way of saying that one decision algorithm is similar/dissimilar to another one, since we're not going to encounter the case of perfect copies very often". This was at least the case when I last asked Oesterheld about this, but I haven't read Treutlein 2023 closely enough yet to figure out whether he has a satisfying solution. The fact we didn't have a characterization of logical correlation bugged me and was in the back of my mind, since it felt like a problem that one could make progress on. Today in the shower I was thinking about this, and the post above is what came of it. (I also have the suspicion that having a notion of "these two programs produce the same/similar outputs in a similar way" might be handy in general.)


Summary: the moderators appear to be soft-banning users with 'rate-limits' without feedback. A careful review of each banned user reveals it's common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users hold mainstream rather than EA views on AI.

Note how the punishment lengths are all the same, I think it was a mass ban-wave of 3 week bans:

Gears to ascension was here but is no longer, guess she convinced them it was a mistake.

Have I made any like really dumb or bad comments recently:

https://www.greaterwrong.com/users/gerald-monroe?show=comments

Well I skimmed through it.  I don't see anything.  Got a healthy margin now on upvotes, thanks April 1.

Over a month ago, I did comment this stinker.  Here is what seems to the...

As I said above, I would guess this is a bad moderation decision. Although I don't know the full story and don't see you on the moderation log.

4kave6h
I'm not seeing any active rate limits. Do you know when you observed it? It's certainly the case that an automatic rate limit could have kicked in and then, as voting changed, been removed.
2habryka5h
Yeah, I am also not seeing anything. Maybe it was something temporary, but I thought we had set it up to leave a trace if any automatic rate limits got applied in the past.  Curious what symptom Nora observed (GreaterWrong has been having some problems with rate-limit warnings that I've been confused by, so I can imagine that looking like a rate-limit from our side).

The Save State Paradox: A new question for the construct of reality in a simulated world

Consider this thought experiment: in a simulated world (if we do indeed currently live in one), how could we detect an event similar to a state “reset”? Such events could be triggered for existential safety reasons, or for reasons unbeknownst to us. If this were the case, how would we become aware of such occurrences if we were reverted to a time before the execution, affecting memories, physical states and environmental continuity? Imagine if seemingly inexplicable concep... (read more)

Tomorrow, Lily and I will be leading a Kids Contra Jam at NEFFA (2pm in the Sudbury room!). We'll be playing off of Lily's tune list, but someone was asking about chords. I decided to have a go at writing out the simplest acceptable chords for each of the tunes we're planning. Each letter represents two downbeats:

All the Rage

A: 𝄆 E E A B 𝄇 x4
B: 𝄆 A A B B 𝄇 x4

Lisnagun

𝄆 G G C D 𝄇 x8

Devil's Dream

𝄆 D D A E 𝄇 x8

Reign of Love

A: 𝄆  Em Em C D 𝄇 x4
B: 𝄆  Em Em C C
D  D  C C 𝄇

June Apple

A: 𝄆 A A G G
    A A G D 𝄇
B: 𝄆  A A G D 𝄇
...

I recently asked about the glorious AI future but I meant to ask something more actionable. Near-term (say, next 5 years) stuff that ambitious people can aim for.

Lots of recent tech is a mixed bag in terms of societal outcomes and whatnot. I have the usual complaints about viruses being too easy to create, social media, phone overuse,  gpt reddit astroturfing bots, facial recognition, mass surveillance, cheap quadcopters (cuz grenades), etc etc. [1]

I sure love my flush toilet though. And lightning rods, electricity, batteries, the computer mouse, wheels, airplanes, microscopes, vaccines, oral rehydration therapy (aka pedialyte), antibiotics, the Haber-Bosch process, cultured meat if it works, air & water filters, reusable rockets, youtube, and cheap iron all seem pretty great. Democracy via anonymous paper ballots is also a clear...

2bhauth14h
Downside is it doesn't work. There are several reasons it doesn't work, but one is: What happens to the vaporized rock? Were you thinking it just goes up a 20km deep hole without condensing on the walls?
1Exa Watson15h
Not excited about this - such a coach is either going to give very politically correct opinions, or target audiences with glaring insecurities, like young or low-confidence men... just like human coaches.
1lukehmiles15h
How do cancer vaccines work?
Dagon4h20

[epistemic status: just what I've read in popular-ish press, no actual knowledge nor expertise]

Two main mechanisms that I know of:

- Some cancers are caused (or enabled, or activated, or something) by viruses, and there's been immense progress in tailoring vaccines for specific viruses.

- Some cancers seem to be susceptible to targeted immune response (tailored antibodies).  Vaccines for these cancers enable one's body to reduce or eliminate spread of the cancer.
