Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained and that you only understand if you experience them, Kaj set out to write his own detailed gears-level, non-mysterious, non-"woo" explanation of how meditation, etc., work in the same way you might explain the operation of an internal combustion engine.

The main thing I got out of reading Bostrom's Deep Utopia is a better appreciation of this "meaning of life" thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.

The book's premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also better at doing all your leisure for you. E.g., you'd never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you're into learning, just ask! And similarly for any psychological state you're thinking of working towards.

So, in that regime, it's effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything's heads directly—again, by the local benevolent-god authority. The only challenging values to satisfy are those that deal with being practically useful. If you think it's important to be the first to discover a major theorem or be the individual who counterfactually helped someone, living in a posthuman utopia could make things harder in these respects, not easier. The robots can always leave you a preserve of unexplored math or unresolved evil... but this defeats the purpose of those values. It's not practical benevolence if you had to ask for the danger to be left in place; it's not a pioneering scientific discovery if the AI had to carefully avoid spoiling it for you.

Meaning is supposed to be one of these values: not a purely hedonic value, and not a value dealing only in your psychological states. A further value about the objective state of the world and your place in relation to it, wherein you do something practically significant by your lights. If that last bit can be construed as something having to do with your local patch of posthuman culture, then there can be plenty of meaning in the postinstrumental utopia! If that last bit is inextricably about your global, counterfactual practical importance by your lights, then you'll have to live with all your "localistic" values satisfied but meaning mostly absent.

It helps to see this meaning thing if you frame it alongside all the other objectivistic "stretch goal" values you might have. Above and beyond your hedonic values, you might also think it good for you and others to have objectively interesting lives, accomplished and fulfilled lives, and consumingly purposeful lives. Meaning is one of these values, where above and beyond the joyful, rich experiences of posthuman life, you also want to play a significant practical role in the world. We might or might not be able to have lots of objective meaning in the AI utopia, depending on how objectivistic meaningfulness by your lights ends up being.

> Considerations that in today's world are rightly dismissed as frivolous may well, once more pressing problems have been resolved, emerge as increasingly important [remaining] lodestars... We could and should then allow ourselves to become sensitized to fainter, subtler, less tangible and less determinate moral and quasi-moral demands, aesthetic impingings, and meaning-related desirables. Such recalibration will, I believe, enable us to discern a lush normative structure in the new realm that we will find ourselves in—revealing a universe iridescent with values that are insensible to us in our current numb and stupefied condition (pp. 318-9).
I recently listened to The Righteous Mind. It was surprising to me that many people seem to intrinsically care about many things that look very much like good instrumental norms to me (in particular loyalty, respect for authority, and purity). The author does not make claims about what the reflective equilibrium will be, nor does he explain how liberals stopped considering loyalty, respect, and purity as intrinsically good (beyond "some famous thinkers are autistic and didn't realize the richness of the moral life of other people"), but his work made me doubt that most people will have a well-being-focused CEV.

The book was also an interesting jumping-off point for reflection on group selection. The author doesn't make the sorts of arguments that would show that group selection happens in practice (and many of his arguments seem to show a lack of understanding of what opponents of group selection think - bees and cells cooperating is not evidence for group selection at all), but after thinking about it more, I now have more sympathy for group selection having some role in shaping human societies, given that (1) many human groups died and very few spread, so one lucky or unlucky gene in one member may doom or save the group; (2) some human cultures may have been egalitarian enough when it came to reproductive opportunities that the individual selection pressure was not that big relative to the group selection pressure; and (3) cultural memes seem like the kind of entity that sometimes survives at the level of the group.

Overall, it was often frustrating to read the author describe a descriptive theory of morality, and describe what kind of morality makes a society more fit, in a tone that often felt close to normative, while failing to understand that many philosophers I respect are not trying to find a descriptive or fitness-maximizing theory of morality (e.g. there is no way that utilitarians think their theory is a good description of the kind of shallow moral intuitions the author studies, since they all know that they are biting bullets most people aren't biting, such as the bullet of defending homosexuality in the 19th century).
Elizabeth (1d)
Brandon Sanderson is a bestselling fantasy author. Despite mostly working with traditional publishers, there is a 50-60 person company formed around his writing.[1] This podcast talks about how the company was formed.

Things I liked about this podcast:

1. He and his wife both refer to it as "our" company and describe critical contributions she made.
2. The number of times he was dissatisfied with the way his publisher did something and so hired someone in his own company to do it (e.g. PR and organizing book tours), despite that being part of the publisher's job.
3. He believed in his back catalog enough to buy remainder copies of his books (at $1/piece) and sell them via his own website at sticker price (with autographs). This was a major source of income for a while.
4. Long-term grand strategic vision that appears to be well aimed and competently executed.

[1] The only non-Sanderson content I found was a picture book from his staff artist.
There was this voice inside my head that told me that since I have Something to Protect, relaxing is never OK beyond the strict minimum; the goal is paramount, and I should just work as hard as I can all the time. This led me to break down and become incapable of working on my AI governance job for a week, as I had just piled up too much stress. And then I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold: my total output increased, while my time spent working decreased. I'm so angry and sad at the inadequacy of my role models, cultural norms, rationality advice, and model of the good EA who does not burn out, all of which still led me to smash into the wall despite their best intentions. I became so estranged from my own body and perceptions, ignoring my core motivations, that working felt harder and harder. I dug myself into such a deep hole. I'm terrified at the prospect of having to rebuild my motivation again.


Recent Discussion

This is a linkpost for https://dynomight.net/seed-oil/

A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:

“When are you going to write about seed oils?”

“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”

“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”

“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”

He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...

denkenberger (4h)
People have been breathing a lot of smoke in the last million years or so, so one might think that we would have evolved to tolerate it, but it's still really bad for us. Though there are certainly lots of ways to go wrong deviating from what we are adapted to, our current unnatural environment is far better for our life expectancy than the natural one. As pointed out in other comments, some food processing can be better for us.
Slapstick (12h)
A cooked food could technically be called a processed food, but I don't think that causes much meaningful confusion. I would say the same about soaking something in water. Olives can be made edible by soaking them in water. If they're made edible by soaking in a salty brine (an isolated component that can be found in whole foods in more suitable quantities) then they're generally less healthy. Local populations might adapt by finding things that can be heavily processed into edible foods, which can allow them to survive, but these foods aren't necessarily ones which would be considered healthy in a wider context.
Ann (18m)

Aside from the rare naturally edible-when-ripe cultivar, olives are (mostly) made edible by fermenting and curing them. With salt, yes. And lye, often. Even olives fermented in water are then cured in brine. What saltless olives are you interacting with?

Edit: Also, cooking is very much processing food. It has all the mechanisms to change things and generate relevant pollutants. It changes substances drastically, and different substances differently drastically. Cooking with fire will create smoke, etc. Cooking with overheated teflon cookware will kill your...

Joseph Miller (16h)
I'm confused - why are you so confident that we should avoid processed food? Isn't the whole point of your post that we don't know whether processed oil is bad for you? Where's the overwhelming evidence that processed food in general is bad?

The history of science has tons of examples of the same thing being discovered multiple times independently; Wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples.

But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly after anyways, then the discovery probably wasn't very counterfactually impactful.

Alas, nobody seems to have made a list of highly counterfactual scientific discoveries, to complement Wikipedia's list of multiple discoveries.

To...

Very cool! I used to think Hume was the most ahead of his time, but this seems like the same feat if not better.

Answer by Mateusz Bagiński (1h)
Maybe Hanson et al.'s Grabby aliens model? @Anders_Sandberg said that some N years before that (I think more or less at the time of working on Dissolving the Fermi Paradox), he "had all of the components [of the model] on the table" and it just didn't occur to him that they can be composed in this way (personal communication, so I may be misremembering some details). Although it's less than 10 years, so... Speaking of Hanson, prediction markets seem like a more central example. I don't think the idea was [inconceivable in principle] 100 years ago.
Niclas Kupper (2h)
It would be interesting for people to post current research that they think has some small chance of outputting highly singular results!
Answer by Niclas Kupper (2h)
Grothendieck seems to have been an extremely singular researcher; several of his discoveries would likely have been significantly delayed without him. His work on sheaves is mind-bending the first time you see it and was seemingly ahead of its time.

(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app.

This is the first essay in a series that I’m calling “Otherness and control in the age of AGI.” See here for more about the series as a whole.)

When species meet

The most succinct argument for AI risk, in my opinion, is the “second species” argument. Basically, it goes like this.

Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.

Conclusion: That’s scary.

To be clear: this is very far from airtight logic.[1] But I like the intuition pump. Often, if I only have two sentences to explain AI risk, I say this sort of species stuff. “Chimpanzees should be careful about inventing humans.” Etc.[2]

People often talk about aliens here,...

I think this series might be easier for some to engage with if they imagine Carlsmith to be challenging priors around what AI minds will be like. I don't claim this is his intention.

For me, the series makes more sense read back to front - starting with some options of how to engage with the future, noting the tendency of LessWrongers to distrust god and nature, noting how that leads towards a slightly dictatorial tendency, suggesting alternative poises and finally noting that just as we can take a less controlling poise towards the future, so might AIs tow...

lukehmiles (5h)
I wonder how much testosterone during puberty lowers IQ. Most of my high school math/CS friends seemed low-T, and 3/4 of them have transitioned since high school. They still seem smart as shit. The higher-T among us seem significantly brain damaged since high school (myself included). I wonder what the mechanism would be here... Like 50% of my math/CS Twitter is trans women, another 40% is scrawny nerds, and only like 9% big bald men. I have a tremendously large skull (like XXL hats) - maybe that's why I can still do some basic math after the testosterone brain poison during puberty? My voice is kind of high pitched for my body — related?? My big strong brother got the most brain damaged and my thin brother kept most of what he had. Now I'm looking at tech billionaires: mostly low-T-looking men. Elon Musk & Jeff Bezos were big & bald but seem to have pretty big skulls to compensate. I guess this topic/theory is detested by cis women, trans women, low-T men, and high-T men all alike, because it has something bad to say about all of them. But here's a recipe for success according to the theory:

* be born with a giant head (please don't kill your mother; maybe suggest she get a C-section)
* delay your puberty until you've learned enough to get by, maybe age 22 or so
* start slamming testosterone and amphetamines to get your workaholism, betterThanEveryone complex, and drive for power
* go to Turkey for a hair transplant
* profit

Testosterone influences brain function, but not so much general IQ. It may influence which areas your attention, and thus most of your learning, goes to. For example, lower testosterone increases attention to happy faces, while higher testosterone increases attention to angry faces.

lukehmiles (6h)
Seems it is easier / more streamlined / more googlable now for a teenage male to get testosterone blockers than testosterone. The latter is very frowned upon — I guess because it is cheating in sports. Try googling e.g. "get testosterone prescription high school reddit -trans -ftm". The results are exclusively people shaming the cheaters. Whereas of course googling "get testosterone blockers high school reddit" gives tons of love & support & practical advice. Females, however, retain easy access to hormones via birth control.

Produced while being an affiliate at PIBBSS[1]. The work was done initially with funding from a Lightspeed Grant, and then continued while at PIBBSS. Work done in collaboration with @Paul Riechers, @Lucas Teixeira, @Alexander Gietelink Oldenziel, and Sarah Marzen. Paul was a MATS scholar during some portion of this work. Thanks to Paul, Lucas, Alexander, Sarah, and @Guillaume Corlouer for suggestions on this writeup.

Introduction

What computational structure are we building into LLMs when we train them on next-token prediction? In this post we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. We'll explain exactly what this means in the post. We are excited by these results because

  • We have a formalism that relates training data to internal
...
Niclas Kupper (1h)
Is there some theoretical result along the lines of "A sufficiently large transformer can learn any HMM"?

Depending on what one means by 'learn', this is provably impossible. The reason has nothing to do with the transformer architecture (which one shouldn't think of as a canonical architecture in the grand scheme of things anyway).

There is a 2-state generative HMM such that the optimal predictor of the output of said generative model provably requires an infinite number of states. This is for any model of computation, any architecture.

Of course, that's maybe not what you intend by 'learn'. If by 'learn' you mean 'express the underlying function of an HMM', then the answer is yes, by the Universal Approximation Theorem (a very fancy name for a trivial application of the Stone-Weierstrass theorem).

Hope this helped. 😄
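To make the belief-state picture in the post and this thread concrete: for a known HMM, the optimal predictor's state is the Bayesian posterior over hidden states, updated symbol by symbol. The sketch below is my own illustration (the 2-state transition/emission parameters are made up, not taken from the post); it enumerates the distinct posteriors reachable after longer and longer observation strings. For generic parameters the count keeps growing, which is the sense in which an exact optimal predictor can need unboundedly many states.

```python
import numpy as np

# Hypothetical 2-state HMM (parameters made up for illustration):
# T[i, j] = P(next hidden state = j | current state = i)
# E[i, s] = P(emit symbol s | current state = i)
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def update(belief, symbol):
    """One step of Bayesian filtering: condition on the observed symbol,
    then push the posterior forward through the hidden-state dynamics."""
    posterior = belief * E[:, symbol]
    posterior /= posterior.sum()
    return posterior @ T  # predictive distribution over the next hidden state

# Enumerate the distinct belief states reachable from the uniform prior.
# Each reachable belief is a state the optimal predictor must distinguish.
seen = {(0.5, 0.5)}
frontier = [np.array([0.5, 0.5])]
for depth in range(1, 9):
    next_frontier = []
    for b in frontier:
        for s in (0, 1):
            nb = update(b, s)
            key = tuple(np.round(nb, 10))
            if key not in seen:
                seen.add(key)
                next_frontier.append(nb)
    frontier = next_frontier
    print(f"strings of length <= {depth}: {len(seen)} distinct belief states")
```

For special parameter settings the reachable set collapses to finitely many points (and the optimal predictor is a finite-state machine), but generically it does not, matching the impossibility claim above.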

This summarizes a (possibly trivial) observation that I found interesting.

 

Story

An all-powerful god decides to play a game. They stop time, grab a random human, and ask them "What will you see next?". The human answers, then time is switched back on and the god looks at how well they performed. Most of the time the humans get it right, but occasionally they are caught by surprise and get it wrong.

To be more generous, the god decides to give them access (for the game) to the entirety of all objective facts. The position and momentum of every elementary particle, every thought and memory anyone has ever had (before the time freeze), etc. However, suddenly performance in the game drops from 99% to 0%. How can this be? They...

Ben (2h)
I am having trouble following you. If little-omega is a reference frame I would expect it to be a function that takes in the "objective world" (Omega) and spits out a subjective one. But you seem to have it the other way around? Or am I misunderstanding?

$\omega$ isn't a reference frame on its own; rather, if $\Omega$ is a world, then the $\omega$'s compatible with $\Omega$ are the reference frames for $\Omega$.

Essentially when dealing with generalized reference frames that contain answers to questions such as "who are you?", the possible reference frames are going to depend on the world (because you can only be a real person, and which real people there are depends on what the world is). As such, "reference frames" don't make sense in isolation, rather one needs a (world, reference frame) pair, which is what I call an "interpretation".

Dagon (18h)
This depends on the mechanism of attaining all these memories. In that world, it COULD be that you still know which memories are privileged, or at least which ones include meeting God and being in position to be asked the question. I mean, I'm with you fundamentally: it's not obvious that ANYTHING is truly objective - other people can report experiences, but that's mediated by your perceptions as well. In most cases, one can avoid the confusion by specifying that one is predicting WHAT experiences will happen to WHICH observer.

The Löwenheim–Skolem theorem implies, among other things, that any first-order theory whose symbols are countable, and which has an infinite model, has a countably infinite model. This means that, in attempting to refer to uncountably infinite structures (such as in set theory), one "may as well" be referring to an only countably infinite structure, as far as proofs are concerned.
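For reference, a standard statement of the direction being used here (this is textbook material, not a quote from the post):

```latex
% Downward Löwenheim–Skolem, countable-signature case.
\textbf{Theorem.} Let $T$ be a first-order theory in a countable language.
If $T$ has an infinite model, then $T$ has a countably infinite model.
% Equivalently: every infinite structure for a countable language has a
% countable elementary substructure, satisfying the same first-order
% sentences -- hence the "may as well be countable" reading above.
```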

The main limitation I see with this theorem is that it preserves arbitrarily deep quantifier nesting. In Peano arithmetic, it is possible to form statements that correspond (under the standard interpretation) to arbitrary statements in the arithmetic hierarchy (by which I mean, the union of $\Sigma^0_n$ and $\Pi^0_n$ for arbitrary $n$). Not all of these statements are computable. In general, the question of whether a given statement is...

jessicata (1h)

Thanks, didn't know about the low basis theorem.

jessicata (1h)
LS shows one type of infinitarian reference to be impossible, namely reference to uncountably infinite sets. I am interested in showing a different kind of infinitarian reference to be impossible. "Impossible" and "reference" are, of course, interpreted differently by different people.
AlexMennen (7h)
I think what you proved essentially boils down to the fact that a consistent guessing oracle can be used to compute a completion of any consistent recursively axiomatizable theory. (In fact, it turns out that a consistent guessing oracle can be used to compute a model (in the sense of functions and relations on a set) of any consistent recursively axiomatizable theory; this follows from what you showed and the fact that an oracle for a complete theory can be used to compute a model of that theory.) I disagree with "The translation from T to U is computable." The consistent guessing oracle only came in to find a completion of U, but it could also find a completion of T (in fact, a completion of U can be computably translated to a completion of T), so the consistent guessing oracle doesn't really have anything to do with the relationship between T and U.
jessicata (1h)
U axiomatizes a consistent guessing oracle producing a model of T. There is no consistent guessing oracle applied to U. In the previous post I showed that a consistent guessing oracle can produce a model of T. What I show in this post is that the theory of this oracle can be embedded in propositional logic so as to enable provability-preserving translations.

I took the Reading the Mind in the Eyes Test today. I got 27/36. Jessica Livingston got 36/36.

Reading expressions is almost mind reading. Practicing reading expressions should be easy with the right software. All you need is software that shows a random photo from a large database, asks the user to guess what it is, and then informs the user what the correct answer is. I felt myself getting noticeably better just from the 36 images on the test.

Short standardized tests exist to test this skill, but is there good software for training it? It needs to have lots of examples, so the user learns to recognize expressions instead of overfitting on specific pictures.
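A minimal sketch of the kind of trainer described above (my own illustration: the CSV filename, image paths, and label set are all hypothetical, and it assumes a large labeled photo collection you would have to supply):

```python
import csv
import random
from PIL import Image  # pip install pillow

# Hypothetical data file: each row is "path/to/photo.jpg,expression_label".
LABELS_FILE = "expressions.csv"

def load_examples(path):
    """Read (image_path, label) pairs from a CSV file."""
    with open(path, newline="") as f:
        return [(row[0], row[1].strip().lower())
                for row in csv.reader(f) if len(row) >= 2]

def quiz(examples, rounds=20):
    """Show random photos, ask for a guess, give immediate feedback."""
    options = sorted({label for _, label in examples})
    score = 0
    for _ in range(rounds):
        img_path, answer = random.choice(examples)
        Image.open(img_path).show()  # opens in the system's default viewer
        print("Options:", ", ".join(options))
        guess = input("Which expression is this? ").strip().lower()
        if guess == answer:
            score += 1
            print("Correct!")
        else:
            print(f"Wrong: it was '{answer}'.")
    print(f"Score: {score}/{rounds}")

if __name__ == "__main__":
    quiz(load_examples(LABELS_FILE))
```

Sampling from a large pool (rather than cycling the same 36 photos) is what makes this train expression recognition instead of memorization of specific pictures.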

Paul Ekman has a product, but I don't know how good it is.

ö (1h)

The test scores me as 'normal' with 29/36. I remember doing a similar (maybe the same) test and scoring decidedly below average about two years ago.

I understand the attraction of having this skill trainable in its own context like flashcards but consider it a false shortcut. I think it is more about directing attention.

Setting aside a few cycles of my attention to practice in everyday life worked for me, and I think it should be wildly superior to treating it as a problem of categorizing features.

1. You get so much more context to infer fr...

Answer by Matt Goldenberg (15h)
Paul Ekman's software is decent. When I used it (before it was a SaaS, just a CD) it basically flashed an expression for a moment, then went back to a neutral pic. After some training it did help me identify microexpressions in people.
Jacob G-W (15h)
*Typo: Jessica Livingston not Livingstone
lsusr (14h)
Fixed. Thanks.

Joe’s summary is here; these are my condensed takeaways, in my own words. All links in this section are to the essays.

Outline

...
