This post is a not-so-secret analogy for the AI alignment problem. Via a fictional dialogue, Eliezer explores and counters common objections to the Rocket Alignment Problem as approached by the Mathematics of Intentional Rocketry Institute.

MIRI researchers will tell you they're worried that "right now, nobody can tell you how to point your rocket’s nose such that it goes to the moon, nor indeed any prespecified celestial destination."

Elizabeth (7h):
Check my math: how does Enovid compare to humming? Nitric oxide is an antimicrobial and immune booster. Normal nasal nitric oxide is 0.14ppm for women and 0.18ppm for men (sinus levels are 100x higher). journals.sagepub.com/doi/pdf/10.117…

Enovid is a nasal spray that produces NO. I had the damndest time quantifying Enovid, but this trial registration says 0.11ppm NO/hour. They deliver every 8h, and I think that dose is amortized, so the true dose is 0.88. But maybe it's more complicated. I've got an email out to the PI but am not hopeful about a response. clinicaltrials.gov/study/NCT05109…

So Enovid increases nasal NO levels somewhere between 75% and 600% compared to baseline: not shabby. Except humming increases nasal NO levels by 1500-2000%. atsjournals.org/doi/pdf/10.116…

Enovid stings and humming doesn't, so it seems like Enovid should have the larger dose. But the spray doesn't contain NO itself, just compounds that react to form NO. Maybe that's where the sting comes from? Cystic fibrosis and burn patients are sometimes given stratospheric levels of NO for hours or days; if the burn from Enovid came from the NO itself, then those patients would be in agony.

I'm not finding any data on humming and respiratory infections. Google Scholar gives me information on CF and COPD, and @Elicit brought me a bunch of studies about honey. Better keywords get Google Scholar to bring me a bunch of descriptions of yogic breathing with no empirical backing. There are some very circumstantial studies on illness in mouth breathers vs. nasal breathers, but that design has too many confounders for me to take seriously.

Where I'm most likely wrong:

* I misinterpreted the dosage in the RCT.
* The dosage in the RCT is lower than in Enovid.
* Enovid's dose per spray is 0.5ml, so pretty close to the new study. But it recommends two sprays per nostril, so the real dose is 2x that. Which is still not quite as powerful as a single hum.
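A quick sketch of that arithmetic, for anyone who wants to check it (the baseline and trial figures are just the ones quoted above, not independently verified):

```python
# Sanity check of the Enovid-vs-humming numbers quoted above.
baseline_ppm = {"women": 0.14, "men": 0.18}  # normal nasal NO

low_dose = 0.11       # if the trial's 0.11 ppm NO/hour is the whole dose
high_dose = 0.11 * 8  # if that hourly figure is amortized over 8h -> 0.88

for sex, base in baseline_ppm.items():
    print(f"{sex}: +{low_dose / base * 100:.0f}% to "
          f"+{high_dose / base * 100:.0f}% over baseline")
# women: +79% to +629%; men: +61% to +489%
# -> roughly the "75% to 600%" range above, vs. 1500-2000% for humming.
```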
A tension that keeps recurring when I think about philosophy is between the "view from nowhere" and the "view from somewhere", i.e. a third-person versus first-person perspective—especially when thinking about anthropics.

One version of the view from nowhere says that there's some "objective" way of assigning measure to universes (or people within those universes, or person-moments). You should expect to end up in different possible situations in proportion to how much measure your instances in those situations have. For example, UDASSA ascribes measure based on the simplicity of the computation that outputs your experience.

One version of the view from somewhere says that the way you assign measure across different instances should depend on your values. You should act as if you expect to end up in different possible future situations in proportion to how much power to implement your values the instances in each of those situations have. I'll call this the ADT approach, because that seems like the core insight of Anthropic Decision Theory. Wei Dai also discusses it here.

In some sense each of these views makes a prediction. UDASSA predicts that we live in a universe with laws of physics that are very simple to specify (even if they're computationally expensive to run), which seems to be true. Meanwhile the ADT approach "predicts" that we find ourselves at an unusually pivotal point in history, which also seems true.

Intuitively I want to say "yeah, but if I keep predicting that I will end up in more and more pivotal places, eventually that will be falsified". But on a personal level, this hasn't actually been falsified yet. And more generally, acting on those predictions can still be positive in expectation even if they almost surely end up being falsified. It's a St Petersburg paradox, basically.

Very speculatively, then, maybe a way to reconcile the view from somewhere and the view from nowhere is via something like geometric rationality, which avoids St Petersburg paradoxes. And more generally, it feels like there's some kind of multi-agent perspective which says I shouldn't model all these copies of myself as acting in unison, but rather as optimizing for some compromise between all their different goals (which can differ even if they're identical, because of indexicality). No strong conclusions here but I want to keep playing around with some of these ideas (which were inspired by a call with @zhukeepa).

This was all kinda rambly, but I think I can summarize it as: "Isn't it weird that ADT tells us that we should act as if we'll end up in unusually important places, and we also do seem to be in an incredibly unusually important place in the universe? I don't have a story for why these things are related, but it does seem like a suspicious coincidence."
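As a footnote on the St Petersburg point, here is a minimal numerical illustration (mine, not part of the original take) of the sense in which geometric rationality avoids the paradox: the arithmetic expectation of the lottery diverges, while the geometric expectation converges.

```python
import math

# St Petersburg lottery: with probability 2^-n you win 2^n (n = 1, 2, ...).
arith_ev, log_ev = 0.0, 0.0
for n in range(1, 60):
    p, payoff = 2.0 ** -n, 2.0 ** n
    arith_ev += p * payoff           # each term adds 1 -> diverges
    log_ev += p * math.log(payoff)   # converges to 2*ln(2)

print(f"arithmetic EV, first 59 terms: {arith_ev:.0f}")  # 59 and counting
print(f"geometric EV: {math.exp(log_ev):.4f}")           # -> 4.0000
```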
I think I'm gonna start posting top blogposts to the main feed (mainly from dead writers or people I predict won't care).
The main thing I got out of reading Bostrom's Deep Utopia is a better appreciation of this "meaning of life" thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.

The book's premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also be better at doing all your leisure for you. E.g., you'd never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you're into learning, just ask! And similarly for any psychological state you're thinking of working towards.

So, in that regime, it's effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything's heads directly—again, by the local benevolent-god authority.

The only challenging values to satisfy are those that deal with being practically useful. If you think it's important to be the first to discover a major theorem or be the individual who counterfactually helped someone, living in a posthuman utopia could make things harder in these respects, not easier. The robots can always leave you a preserve of unexplored math or unresolved evil... but this defeats the purpose of those values. It's not practical benevolence if you had to ask for the danger to be left in place; it's not a pioneering scientific discovery if the AI had to carefully avoid spoiling it for you.

Meaning is supposed to be one of these values: not a purely hedonic value, and not a value dealing only in your psychological states. A further value about the objective state of the world and your place in relation to it, wherein you do something practically significant by your lights. If that last bit can be construed as something having to do with your local patch of posthuman culture, then there can be plenty of meaning in the postinstrumental utopia! If that last bit is inextricably about your global, counterfactual practical importance by your lights, then you'll have to live with all your "localistic" values satisfied but meaning mostly absent.

It helps to see this meaning thing if you frame it alongside all the other objectivistic "stretch goal" values you might have. Above and beyond your hedonic values, you might also think it good for you and others to have objectively interesting lives, accomplished and fulfilled lives, and consumingly purposeful lives. Meaning is one of these values, where above and beyond the joyful, rich experiences of posthuman life, you also want to play a significant practical role in the world. We might or might not be able to have lots of objective meaning in the AI utopia, depending on how objectivistic meaningfulness by your lights ends up being.

> Considerations that in today's world are rightly dismissed as frivolous may well, once more pressing problems have been resolved, emerge as increasingly important [remaining] lodestars... We could and should then allow ourselves to become sensitized to fainter, subtler, less tangible and less determinate moral and quasi-moral demands, aesthetic impingings, and meaning-related desirables. Such recalibration will, I believe, enable us to discern a lush normative structure in the new realm that we will find ourselves in—revealing a universe iridescent with values that are insensible to us in our current numb and stupefied condition (pp. 318-9).
There was this voice inside my head that told me that since I have Something to Protect, relaxing is never OK above the strict minimum, the goal is paramount, and I should just work as hard as I can all the time. This led me to breaking down and being incapable of working on my AI governance job for a week, as I had just piled up too much stress. And then I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold! My total output increased, while my time spent working decreased.

I'm so angry and sad at the inadequacy of my role models, cultural norms, rationality advice, and model of the good EA who does not burn out, which still led me to smash into the wall despite their best intentions. I became so estranged from my own body and perceptions, ignoring my core motivations, finding it harder and harder to work. I dug myself such a deep hole. I'm terrified at the prospect of having to rebuild my motivation all over again.

Popular Comments

Recent Discussion

The history of science has tons of examples of the same thing being discovered multiple times independently; Wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples.

But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly afterwards anyway, then the discovery probably wasn't very counterfactually impactful.

Alas, nobody seems to have made a list of highly counterfactual scientific discoveries, to complement Wikipedia's list of multiple discoveries.

To...

The Iowa Election Markets were roughly contemporaneous with Hanson's work. They are often co-credited.

Wei Dai (23m):
Even if someone made a discovery decades earlier than it otherwise would have been, the long term consequences of that may be small or unpredictable. If your goal is to "achieve high counterfactual impact in your own research" (presumably predictably positive ones) you could potentially do that in certain fields (e.g., AI safety) even if you only counterfactually advance the science by a few months or years. I'm a bit confused why you're asking people to think in the direction outlined in the OP.
Answer by Carl Feynman (2h):
Wegener’s theory of continental drift was decades ahead of its time. He published in the 1920s, but plate tectonics didn’t take over until the 1960s.  His theory was wrong in important ways, but still.
kromem (2h):
Do you have a specific verse where you feel like Lucretius praised him on this subject? I only see that he praises him relative to other elementalists before tearing him and the rest apart for what he sees as erroneous thinking regarding their prior assertions around the nature of matter, saying: "Yet when it comes to fundamentals, there they meet their doom. These men were giants; when they stumble, they have far to fall:" (Book 1, lines 740-741)

I agree that he likely was a precursor to the later thinking in suggesting a compository model of life starting from pieces which combined into forms later on, but the lack of the source material makes it hard to truly assign credit. It's kind of like how the Greeks claimed atomism originated with the much earlier Mochus of Sidon, but we credit Democritus because we don't have proof of Mochus at all while we do have Democritus's writings. We don't even credit Leucippus, Democritus's teacher, so much as his student, for the same reasons, similar to how we refer to "Plato's theory of forms" and not "Socrates' theory of forms."

In any case, Lucretius oozes praise for Epicurus, comparing him to a god among men, and while he does say Empedocles was far above his contemporaries saying the same things he was, he doesn't seem overly deferential to his positions as much as criticizing the shortcomings in the nuances of their theories, with a special focus on theories of matter. I don't think there's much direct influence on Lucretius's thinking around proto-evolution, even if there's arguably plausible influence on Epicurus's, which in turn informed Lucretius.

U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer, Mara Campbell as Acting Chief Operating Officer and Chief of Staff, Rob Reich as Senior Advisor, and Mark Latonero as Head of International Engagement. They will join AISI Director Elizabeth Kelly and Chief Technology Officer Elham Tabassi, who were announced in February. The AISI was established within NIST at the direction of President Biden, including to support the responsibilities assigned to the Department of Commerce under the President’s landmark Executive Order.

Paul Christiano, Head of AI Safety, will design

...
Davidmanheim (10h):
The OP claimed it was a failure of BSL levels that induced biorisk as a cause area, and I said that was a confused claim. Feel free to find someone who disagrees with me here, but the proximate causes of EAs worrying about biorisk have nothing to do with BSL lab designations. It wasn't BSL levels that failed in allowing things like the Soviet bioweapons program, or that led to the underfunded and largely unenforceable BWC, or to the way that newer technologies are reducing the barriers to terrorists and others being able to pursue bioweapons.
Adam Scholl (3h):
I think we must still be missing each other somehow. To reiterate, I'm aware that there is also non-accidental biorisk, for which one can hardly blame the safety measures. But there is substantial accident risk too, since labs often fail to contain pathogens even when they're trying to.
Davidmanheim (12h):
I did not say that they didn't want to ban things; I explicitly said "whether to allow certain classes of research at all," and when I said "happy to rely on those levels," I meant that the idea that we should have "BSL-5" is the kind of silly thing that novice EAs propose that doesn't make sense, because there literally isn't something significantly more restrictive other than just banning it. I also think that "nearly all EA's focused on biorisk think gain of function research should be banned" is obviously underspecified, and wrong because of the details. Yes, we all think that there is a class of work that should be banned, but tons of work that would be called gain of function isn't in that class.

> I meant that the idea that we should have "BSL-5" is the kind of silly thing that novice EAs propose that doesn't make sense because there literally isn't something significantly more restrictive

I mean, I'm sure something more restrictive is possible. But my issue with BSL levels isn't that they include too few BSL-type restrictions, it's that "lists of restrictions" are a poor way of managing risk when the attack surface is enormous. I'm sure someday we'll figure out how to gain this information in a safer way—e.g., by running simulations of GoF experimen...

This post brings together various questions about the college application process, as well as practical considerations of where to apply and go. We are seeing some encouraging developments, but mostly the situation remains rather terrible for all concerned.

Application Strategy and Difficulty

Paul Graham: Colleges that weren’t hard to get into when I was in HS are hard to get into now. The population has increased by 43%, but competition for elite colleges seems to have increased more. I think the reason is that there are more smart kids. If so that’s fortunate for America.

Are college applications getting more competitive over time?

Yes and no.

  1. The population size is up, but the cohort size is roughly the same.
  2. The standard ‘effort level’ of putting in work and sacrificing one’s childhood and gaming
...
Wei Dai (45m):

Some of my considerations for college choice for my kid, that I suspect others may also want to think more about or discuss:

  1. status/signaling benefits for the parents (This is probably a major consideration for many parents to push their kids into elite schools. How much do you endorse it?)
  2. sex ratio at the school and its effect on the local "dating culture"
  3. political/ideological indoctrination by professors/peers
  4. workload (having more/less time/energy to pursue one's own interests)
Wei Dai (3h):
Is this actually true? China has (1) (affirmative action via "Express and objective (i.e., points and quotas)") for its minorities and different regions and FWICT the college admissions "eating your whole childhood" problem over there is way worse. Of course that could be despite (1) not because of it, but does make me question whether (3) ("Implied and subjective ('we look at the whole person').") is actually far worse than (1) for this.
rotatingpaguro (4h):
What are these events?
Jacob G-W (3h):
I'm assuming the recent protests about the Gaza war: https://www.nytimes.com/live/2024/04/24/us/columbia-protests-mike-johnson

Warning: This post might be depressing to read for everyone except trans women. Gender identity and suicide are discussed. This is all highly speculative. I know near-zero about biology, chemistry, or physiology. I do not recommend anyone take hormones to try to increase their intelligence; mood & identity are more important.

Why are trans women so intellectually successful? They seem to be overrepresented 5-100x in e.g. cybersecurity twitter, mathy AI alignment, non-scam crypto twitter, math PhD programs, etc.

To explain this, let's first ask: Why aren't males way smarter than females on average? Males have ~13% higher cortical neuron density and 11% heavier brains (implying more area?). One might then expect males to have a mean IQ far above females, but instead the means and medians are similar.

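For concreteness, a back-of-envelope on those quoted percentages (a sketch; the geometric area scaling and the naive density-times-volume neuron count are my assumptions, not the post's):

```python
density_ratio = 1.13  # quoted male/female cortical neuron density
mass_ratio = 1.11     # quoted male/female brain mass

# If the cortex scales geometrically with the brain, area ~ volume^(2/3):
area_ratio = mass_ratio ** (2 / 3)
print(f"implied cortical area: ~{(area_ratio - 1) * 100:.0f}% more")  # ~7%

# Naively, neurons ~ density * volume, suggesting ~25% more neurons:
print(f"naive neuron-count ratio: {density_ratio * mass_ratio:.2f}")  # 1.25
# ...which is what makes the similar observed IQ means a puzzle.
```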

My theory...

lc (1h):

> Why aren't males way smarter than females on average? Males have ~13% higher cortical neuron density and 11% heavier brains...

Men are smarter than women, by about 2-4 points on average. Men are also larger, and so need bigger brains to compensate for their size anyways.

lukehmiles (2h):
Someone on a subreddit said "free testosterone" is what matters and they usually just measure uh "regular testosterone" in blood or something. I have no idea if that's true. Know what those studies measured? Wildly guessing here, but my intuition is that estrogen would have a greater impact on neuroticism than testosterone. Although I can't even say which direction.
lukehmiles (2h):
Like what exactly? That seems unlikely to me. I suppose we will have results from the ongoing gender transitions soon.
lukehmiles (2h):
I only linked the U-shaped study to mention that someone had said something vaguely similar. Notice my words "people have posited a U-shaped curve...". The study indeed seems like garbage; perhaps I should've said that explicitly. Yes, so the experiment is that a million people are now starting to take hormones/blockers. I don't think proper results are in yet, but what I have myself observed seems like strong evidence that blocking T preserves or raises intelligence on the margin.
This is a linkpost for https://dynomight.net/seed-oil/

A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:

“When are you going to write about seed oils?”

“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”

“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”

“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”

He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...

nonveumann (1h):
This is shockingly similar to what I'm going through.  And the fries that fucked me up the other night are indeed fried in canola oil. I'm cautiously optimistic but I know how complicated these things can be -_-. Will report back!
Ann (3h):
Hmm, while I don't think olives in general are unhealthy in the slightest (you can overload on salt if you focus on them too much because they are brined, but that's reasonable to expect), there is definitely a meaningful distinction between the two types of processing we're referencing. Nixtamalization isn't isolating a part of something, it's rendering nutrients already in the corn more available. Fermenting olives isn't isolating anything (though extracting olive oil is), it's removing substances that make the olive inedible. Same for removing tannins from acorns. Cooking is in main part rendering substances more digestible. We often combine foods to make nutrients more accessible, like adding oil to greens with fat-soluble vitamins.

I do think there's a useful intuition that leaving out part of an edible food is less advantageous than just eating the whole thing, because we definitely do want to get sufficient nutrients, and if we're being sated without enough of the ones we can't generate we'll have problems.

This intuition doesn't happen to capture my specific known difficulty with an industrially processed additive, though, which is a mild allergy to a contaminant on a particular preservative that's commonly industrially produced via a specific strain of mold. (Being citric acid, there's no plausible mechanism by which I could be allergic to the substance itself, especially considering I have no issues whatsoever with citrus fruits.) In this case there's rarely a 'whole food' to replace - it's just a preservative.
Slapstick (2h):
I would consider adding salt to something to be making that thing less healthy. If adding salt is essential to making something edible, I think it would be healthier to opt for something that doesn't require added salt. That's speaking generally; someone might not be getting enough sodium, but typically there is adequate sodium in a diet of whole foods.

I would disagree that adding refined oil to greens would be healthy overall. Not sure how much oil we're talking about, but a tablespoon of oil has more calories than an entire pound of greens. Even if the oil increases the availability of vitamins, I am very sceptical that it would be healthier than greens or other whole plants with an equivalent caloric content to the added oil. I believe it's also the case that fats from whole foods can offer similar bioavailability effects.

At the same time, as far as I'm aware some kinds of vinegar might sometimes be a healthy addition to a meal, despite their processing being undoubtedly contrary to the general guidelines I'm defending, so even if I don't agree about the oil I think the point still stands. I do think you're offering some valid points that confound my idea of simple guidelines somewhat, but I still don't think they're very significant exceptions to my main point. Appreciate the dialogue :)
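(The tablespoon-vs-pound comparison above roughly checks out for the lightest greens; a quick sketch with approximate nutrient values, which are my assumptions rather than the comment's:)

```python
oil_kcal = 13.5 * 9  # ~13.5 g per tablespoon of oil at ~9 kcal/g -> ~122 kcal

greens_kcal_per_100g = {"spinach": 23, "romaine": 17, "kale": 35}
pound_g = 453.6

for name, kcal100 in greens_kcal_per_100g.items():
    total = kcal100 * pound_g / 100
    print(f"1 lb {name}: ~{total:.0f} kcal vs 1 tbsp oil: ~{oil_kcal:.0f} kcal")
# spinach ~104 and romaine ~77 kcal come in under the oil; kale ~159 is over.
```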
Ann (1h):

We're talking about a tablespoon of (olive, traditionally) oil and vinegar mixed for a serving of simple sharp vinaigrette salad dressing, yeah. From a flavor perspective, generally it's hard for the vinegar to stick to the leaves without the oil.

If you aren't comfortable with adding a refined oil, adding unrefined fats like nuts and seeds, eggs or meat, should have some similar benefits in making the vitamins more nutritionally available, and also have the benefit of the nutrients of the nuts, seeds, eggs or meat, yes. Often these are added to salad anywa...

I've seen a lot of news lately about the ways that particular LLMs score on particular tests.

Which if any of those tests can I go take online to see how my performance on them compares to the models?

Answer by Lech Mazur (Apr 25, 2024):

You can go through an archive of NYT Connections puzzles I used in my leaderboard. The scoring I use allows only one try and gives partial credit, so if you make a mistake after getting 1 line correct, that's 0.25 for the puzzle. Top humans get near 100%. Top LLMs score around 30%. Timing is not taken into account.
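A minimal sketch of that partial-credit rule as described (an illustration, not the leaderboard's actual code):

```python
def connections_score(guesses: list[bool]) -> float:
    """Score one NYT Connections puzzle: a single attempt, graded group by
    group, with partial credit for each of the four groups solved before
    the first mistake."""
    correct = 0
    for ok in guesses:
        if not ok:
            break        # one try: the first mistake ends the puzzle
        correct += 1
    return correct / 4

assert connections_score([True, False]) == 0.25  # 1 line correct, then a miss
assert connections_score([True] * 4) == 1.0      # perfect solve
```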

cSkeleton (5h):
Is there any information on how long the LLMs spent taking the tests? I'd like to know how that compares with human times. (I realize it can depend on hardware, etc., but I would just like some general idea.)

I refuse to join any club that would have me as a member.

— Groucho Marx

Alice and Carol are walking on the sidewalk in a large city, and end up together for a while.

"Hi, I'm Alice! What's your name?"

Carol thinks:

If Alice is trying to meet people this way, that means she doesn't have a much better option for meeting people, which reduces my estimate of the value of knowing Alice. That makes me skeptical of this whole interaction, which reduces the value of approaching me like this, and Alice should know this, which further reduces my estimate of Alice's other social options, which makes me even less interested in meeting Alice like this.

Carol might not think all of that consciously, but that's how human social reasoning tends to...

"When there's a will to fail, obstacles can be found."   —John McCarthy

I first watched Star Wars IV-VI when I was very young.  Seven, maybe, or nine?  So my memory was dim, but I recalled Luke Skywalker as being, you know, this cool Jedi guy.

Imagine my horror and disappointment, when I watched the saga again, years later, and discovered that Luke was a whiny teenager.

I mention this because yesterday, I looked up, on Youtube, the source of the Yoda quote:  "Do, or do not.  There is no try."

Oh.  My.  Cthulhu.

Along with the Youtube clip in question, I present to you a little-known outtake from the scene, in which the director and writer, George Lucas, argues with Mark Hamill, who played Luke Skywalker:

Luke:  All right, I'll give it a

...
done (2h):
Source? I spent a few seconds trying to find the video, but it's impossible!
Nisan (2h):

It is a fiction.

Epistemic status: this post is more suitable for LW as it was 10 years ago.

 

Thought experiment with curing a disease by forgetting

Imagine I have a bad but rare disease X. I may try to escape it in the following way:

1. I enter the blank state of mind and forget that I had X.

2. Now I in some sense merge with a very large number of my (semi)copies in parallel worlds who do the same. I will be in the same state of mind as my other copies; some of them have disease X, but most don't.

3. Now I can use the self-sampling assumption for observer-moments (Strong SSA) and think that I am randomly selected from all of these exactly identical observer-moments.

4. Based on this, the chances that my next observer-moment after...

ABlue (2h):

Is this an independent reinvention of the law of attraction? There doesn't seem to be anything special about "stop having a disease by forgetting about it" compared to the general "be in a universe by adopting a mental state compatible with that universe." That said, becoming completely convinced I'm a billionaire seems more psychologically involved than forgetting I have some disease, and the ratio of universes where I'm a billionaire versus I've deluded myself into thinking I'm a billionaire seems less favorable as well.

Anyway, this doesn't seem like a g...

Donald Hobson (5h):
The point is, if all the robots are a true blank state, then none of them is you. Because your entire personality has just been forgotten.
justinpombrio (6h):
No, that doesn't work. It invalidates the implicit assumption you're making that the probability that a person chooses to "forget" is independent of whether they have the disease. Ultimately, you're "mixing" the various people who "forgot", and a "mixing" procedure can't change the proportion of people who have the disease. When you take this into account, the conclusion becomes rather mundane. Some copies of you can gain the disease, while a proportional number of copies can lose it. (You might think you could get some respite by repeatedly trading off "who" has the disease, but the forgetting procedure ensures that no copy ever feels respite, as that would require remembering having the disease.)
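A quick Monte Carlo of this point (illustrative only; the base rate and forgetting probability are made-up numbers):

```python
import random

random.seed(0)
BASE_RATE = 0.01  # P(disease X), illustrative
P_FORGET = 0.5    # P(choosing the forgetting procedure), independent of X

forgot = sick_and_forgot = 0
for _ in range(1_000_000):
    sick = random.random() < BASE_RATE
    if random.random() < P_FORGET:  # the decision doesn't depend on `sick`
        forgot += 1
        sick_and_forgot += sick

# Mixing subjectively identical copies leaves the base rate unchanged:
print(f"P(sick | forgot) = {sick_and_forgot / forgot:.4f}")  # ~= 0.01
```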
Alen (6h):
The multiverse might be very big. Perhaps being mad enough from having the disease will bring you to a state of mind that a version with no disease also has. That's why wizards have to be mad to use magic.
