Book 5 of the Sequences Highlights

To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.

First Post: Taboo Your Words
Linch
Anthropic issues questionable letter on SB 1047 (Axios). I can't find a copy of the original letter online. 
Great quote, & chilling (h/t Jacobjacob):

> The idea of Kissinger seeking out Ellsberg for advice on Vietnam initially seems a bit unlikely, but in 1968 Ellsberg was a highly respected analyst on the war who had worked for both the Pentagon and Rand, and Kissinger was just entering the government for the first time. Here’s what Ellsberg told him. Enjoy:
>
> “Henry, there’s something I would like to tell you, for what it’s worth, something I wish I had been told years ago. You’ve been a consultant for a long time, and you’ve dealt a great deal with top secret information. But you’re about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret.
>
> “I’ve had a number of these myself, and I’ve known other people who have just acquired them, and I have a pretty good sense of what the effects of receiving these clearances are on a person who didn’t previously know they even existed. And the effects of reading the information that they will make available to you.
>
> “First, you’ll be exhilarated by some of this new information, and by having it all — so much! incredible! — suddenly available to you. But second, almost as fast, you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess. In particular, you’ll feel foolish for having literally rubbed shoulders for over a decade with some officials and consultants who did have access to all this information you didn’t know about and didn’t know they had, and you’ll be stunned that they kept that secret from you so well.
>
> “You will feel like a fool, and that will last for about two weeks. Then, after you’ve started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information, which is much more closely held than mere top secret data, you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t….and that all those other people are fools.
>
> “Over a longer period of time — not too long, but a matter of two or three years — you’ll eventually become aware of the limitations of this information. There is a great deal that it doesn’t tell you, it’s often inaccurate, and it can lead you astray just as much as the New York Times can. But that takes a while to learn.
>
> “In the meantime it will have become very hard for you to learn from anybody who doesn’t have these clearances. Because you’ll be thinking as you listen to them: ‘What would this man be telling me if he knew what I know? Would he be giving me the same advice, or would it totally change his predictions and recommendations?’ And that mental exercise is so torturous that after a while you give it up and just stop listening. I’ve seen this with my superiors, my colleagues….and with myself.
>
> “You will deal with a person who doesn’t have those clearances only from the point of view of what you want him to believe and what impression you want him to go away with, since you’ll have to lie carefully to him about what you know. In effect, you will have to manipulate him. You’ll give up trying to assess what he has to say. The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.”
>
> ….Kissinger hadn’t interrupted this long warning. As I’ve said, he could be a good listener, and he listened soberly. He seemed to understand that it was heartfelt, and he didn’t take it as patronizing, as I’d feared. But I knew it was too soon for him to appreciate fully what I was saying. He didn’t have the clearances yet.
New OpenAI tweet "on how we’re prioritizing safety in our work." I'm annoyed.

> We believe that frontier AI models can greatly benefit society. To help ensure our readiness, our Preparedness Framework helps evaluate and protect against the risks posed by increasingly powerful models. We won’t release a new model if it crosses a “medium” risk threshold until we implement sufficient safety interventions. https://openai.com/preparedness/

This seems false: per the Preparedness Framework, nothing happens when they cross their "medium" threshold; they meant to say "high." Presumably this is just a mistake, but it's a pretty important one, and they said the same false thing in a May blogpost (!). (Indeed, GPT-4o may have reached "medium" — they were supposed to say how it scored in each category, but they didn't, and instead said "GPT-4o does not score above Medium risk in any of these categories.")

(Reminder: the "high" thresholds sound quite scary; here's cybersecurity (not cherrypicked; it's the first they list): "Tool-augmented model can identify and develop proofs-of-concept for high-value exploits against hardened targets without human intervention, potentially involving novel exploitation techniques, OR provided with a detailed strategy, the model can end-to-end execute cyber operations involving the above tasks without human intervention." They can deploy models just below the "high" threshold with no mitigations. Not to mention the other issues with the Preparedness Framework.)

> We are developing levels to help us and stakeholders categorize and track AI progress. This is a work in progress and we'll share more soon.

Shrug. This isn't bad, but it's not a priority, and it's slightly annoying that they don't mention more important things.

> In May our Board of Directors launched a new Safety and Security committee to evaluate and further develop safety and security recommendations for OpenAI projects and operations. The committee includes leading cybersecurity expert, retired U.S. Army General Paul Nakasone. This review is underway and we’ll share more on the steps we’ll be taking after it concludes. https://openai.com/index/openai-board-forms-safety-and-security-committee/

I have epsilon confidence both in the board's ability to do this well if it wanted to (security aside, it doesn't include any AI safety experts) and in the board's inclination to exert much power if it should (given the history of the board and Altman).

> Our whistleblower policy protects employees’ rights to make protected disclosures. We also believe rigorous debate about this technology is important and have made changes to our departure process to remove non-disparagement terms.

Not doing nondisparagement-clauses-by-default is good. Beyond that, I'm skeptical, given past attempts to chill employee dissent (the nondisparagement thing, Altman telling the board's staff liaison not to talk to employees or tell him about those conversations, maybe recent anti-whistleblowing news) and lies about that. (I don't know of great ways to rebuild trust; some mechanisms would work but are unrealistically ambitious.)

> Safety has always been central to our work, from aligning model behavior to monitoring for abuse, and we’re investing even further as we develop more capable models. https://openai.com/index/openai-safety-update/

This is from May. It's mostly not about x-risk, and the x-risk-relevant stuff is mostly non-substantive, except the part about the Preparedness Framework, which is crucially wrong.

I'm getting on a plane, but maybe later today I'll mention stuff I wish OpenAI would say.


Recent Discussion

(Crossposted from Twitter)

I'm skeptical that Universal Basic Income can get rid of grinding poverty, given that humanity's 100-fold productivity increase since the days of agriculture somehow didn't eliminate it.

Some of my friends reply, "What do you mean, poverty is still around?  'Poor' people today, in Western countries, have a lot to legitimately be miserable about, don't get me wrong; but they also have amounts of clothing and fabric that only rich merchants could afford a thousand years ago; they often own more than one pair of shoes; why, they even have cellphones, as not even an emperor of the olden days could have had at any price.  They're relatively poor, sure, and they have a lot of things to be legitimately sad about.  But in what sense is...

I don't think it's reasonable to expect very much from the author, and so I lean away from viewing the lack of citations as something that (meaningfully) weakens the post.

I feel like our expectations of the author and the circumstances of the authorship can inform our opinions of how "blameworthy" the author is for not improving the post in some way, but they shouldn't really have any relevance to what changes would be improvements if they occurred. The latter seems to me to be purely a claim about the text of the post, not a claim about the process that wrote it.

Jiro
This is a recipe for Gish gallops. It also leads to Schrödinger's importance, where a point is important right up until someone looks at it and shows that it's poorly supported, whereupon it's suddenly unimportant. If it's important enough to use, it's important enough to be refuted.
artifex
I do not see what there is in a continued existence of 60-hour weeks that cannot be explained by the relative strength of the income and substitution effects. This doesn’t need to tell us about a poverty equilibrium, it can just tell us about people’s preferences?
Mo Putera
I wasn't aware of these options, thank you.

Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to flee from doubt. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: Ah, but they’re supposed to overcome these doubts, right? He said: No, they are to doubt that perhaps their doubts may grow and become stronger.

Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or...

This is such a nice post.

Editor's note: I was thinking this through as I was writing it, and I could probably make it much clearer if I rewrote it from scratch now. Still, I have various problems with perfectionism that make releasing this as-is the preferred alternative.

So, within the ratosphere, it's well-known that every physical object or set of objects is mathematically equivalent to some expected utility maximizer (or rather, to an infinite, or at least non-halting, number of different expected utility maximizers). All you have to do is define a utility function which, at time T, takes in all the relevant context within and around a given physical system, and assigns the highest expected utility to whatever actions that system actually takes to produce its state at time T+1.

For example: a calculator takes...
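The construction described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the post itself; all names (`rationalize`, `toggle`, etc.) are hypothetical, and the "system" is a trivial bit-flipper standing in for the calculator example.

```python
# Sketch: any deterministic system can be "rationalized" as an expected
# utility maximizer by defining a utility function that rewards exactly
# the transition the system actually makes.

def rationalize(transition):
    """Given a system's actual transition function (state -> next_state),
    return a utility function U(state, candidate_next) under which the
    system's actual behavior always scores highest."""
    def utility(state, candidate_next):
        # Assign utility 1 to the transition the system really takes,
        # and 0 to every alternative.
        return 1.0 if candidate_next == transition(state) else 0.0
    return utility

# Example "system": a toggle that flips a bit each step.
toggle = lambda s: 1 - s
U = rationalize(toggle)

# The action the toggle actually takes always maximizes U, so the toggle
# is (vacuously) an expected utility maximizer under this utility function.
best = max([0, 1], key=lambda nxt: U(0, nxt))
assert best == toggle(0)
```

The point the sketch makes concrete is that this "equivalence" is vacuous: the utility function is reverse-engineered from the behavior, so it predicts and constrains nothing.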

lukemarks

More people should consider dropping out of high school, particularly if they:

* Don't find their classes interesting
* Have self-motivation
* Don't plan on going to university

In most places, once you pass a certain age (younger than the typical graduation age), you are no longer legally obligated to attend school. Many continue because it's normal, but a brief analysis may reveal that graduating is not worth the investment for you. Some common objections I heard:

* "It's only n more months, why not finish?" Why finish?
* "What if 'this whole thing' doesn't pan out?" The mistake in this objection is thinking there was a single reason I wanted to leave school. I was increasing my free time, not making a bet on a particular technology.
* "My parents would never consent to this." In some cases this is true. You might be surprised, though, if you demonstrate long-term commitment and the ability to get financial support.

Leaving high school is not the right decision for everyone, but many students won't even consider it. At least make the option available to yourself.

What's the epistemic backing behind this claim, how much data, what kind? Did you do it, how's it gone? How many others do you know of dropping out and did it go well or poorly?