Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained and that you only understand if you experience them, Kaj set out to write his own detailed gears-level, non-mysterious, non-"woo" explanation of how meditation, etc., work in the same way you might explain the operation of an internal combustion engine.

I recently listened to The Righteous Mind. It was surprising to me that many people seem to intrinsically care about many things that look very much like good instrumental norms to me (in particular loyalty, respect for authority, and purity). The author does not make claims about what the reflective equilibrium will be, nor does he explain how liberals stopped considering loyalty, respect, and purity intrinsically good (beyond "some famous thinkers were autistic and didn't realize the richness of the moral life of other people"), but his work made me doubt that most people will have a well-being-focused CEV.

The book was also an interesting jumping-off point for reflection on group selection. The author doesn't make the sorts of arguments that would show group selection happens in practice (and many of his arguments suggest a lack of understanding of what opponents of group selection actually think - bees and cells cooperating is not evidence for group selection at all), but after thinking about it more, I now have more sympathy for group selection having played some role in shaping human societies, given that (1) many human groups died out and very few spread (so one lucky or unlucky gene in one member may doom or save the group), (2) some human cultures may have been egalitarian enough about reproductive opportunities that individual selection pressure was small relative to group selection pressure, and (3) cultural memes seem like the kind of entity that sometimes survives at the level of the group.

Overall, it was often frustrating to read the author describe a descriptive theory of morality, and try to describe what kind of morality makes a society more fit, in a tone that often felt close to normative, while failing to understand that many philosophers I respect are not trying to find a descriptive or fitness-maximizing theory of morality (e.g. there is no way that utilitarians think their theory is a good description of the kind of shallow moral intuitions the author studies, since they all know they are biting bullets most people aren't, such as the bullet of defending homosexuality in the 19th century).
Elizabeth
Brandon Sanderson is a bestselling fantasy author. Despite mostly working with traditional publishers, there is a 50-60 person company formed around his writing[1]. This podcast talks about how the company was formed. Things I liked about this podcast:

1. He and his wife both refer to it as "our" company, and he describes critical contributions she made.
2. The number of times he was dissatisfied with the way his publisher did something and so hired someone in his own company to do it (e.g. PR and organizing book tours), despite that being part of the publisher's job.
3. He believed in his back catalog enough to buy remainder copies of his books (at $1/piece) and sell them via his own website at sticker price (with autographs). This was a major source of income for a while.
4. A long-term grand strategic vision that appears to be well aimed and competently executed.

1. ^ The only non-Sanderson content I found was a picture book from his staff artist.
There was this voice inside my head that told me that since I have Something to Protect, relaxing is never OK above the strict minimum, the goal is paramount, and I should just work as hard as I can all the time. This led to me breaking down and being incapable of working on my AI governance job for a week, as I had just piled up too much stress. And then I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold! My total output increased, while my time spent working decreased.

I'm so angry and sad at the inadequacy of my role models, cultural norms, rationality advice, and model of the good EA who does not burn out, which still led me to smash into the wall despite their best intentions. I became so estranged from my own body and perceptions, ignoring my core motivations, finding it harder and harder to work. I dug myself such a deep hole. I'm terrified at the prospect of having to rebuild my motivation myself again.
MIRI's Technical Governance Team is hiring - please apply and work with us! We are looking to hire for the following roles:

* Technical Governance Researcher (2-4 hires)
* Writer (1 hire)

The roles are located in Berkeley, and we are ideally looking to hire people who can start ASAP. The team is currently Lisa Thiergart (team lead) and myself.

We will research and design technical aspects of regulation and policy that could lead to safer AI, focusing on methods that won't break as we move towards smarter-than-human AI. We want to design policy that allows us to safely and objectively assess the risks from powerful AI, build consensus around the risks we face, and put in place measures to prevent catastrophic outcomes. The team will likely work on:

* Limitations of current proposals such as RSPs
* Inputs into regulations and requests for comment by policy bodies (e.g. NIST/US AISI, EU, UN)
* Researching and designing alternative safety standards, or amendments to existing proposals
* Communicating with and consulting for policymakers and governance organizations

If you have any questions, feel free to contact me on LW or at peter@intelligence.org
Tamsin Leake
Regardless of how good their alignment plans are, the thing that makes OpenAI unambiguously evil is that they created a strongly marketed public product and, as a result, caused a lot of public excitement about AI, and thus lots of other AI capabilities organizations were created that are completely dismissive of safety. There's just no good reason to do that, except short-term greed at the cost of a higher probability that everyone (including people at OpenAI) dies. (No, "you need huge profits to solve alignment" isn't a good excuse - we had nowhere near exhausted the alignment research that can be done without huge profits.)

Popular Comments

Recent Discussion

It was all quiet. Then it wasn’t.

Note the timestamps on both of these.

Dwarkesh Patel did a podcast with Mark Zuckerberg on the 18th. It was timed to coincide with the release of much of Llama-3, very much the approach of telling your story directly. Dwarkesh is now the true tech media. A meteoric rise, and well earned.

This is two related posts in one. First I cover the podcast, then I cover Llama-3 itself.

My notes are edited to incorporate context from later explorations of Llama-3, as I judged that the readability benefits exceeded the purity costs.

Podcast Notes: Llama-3 Capabilities

  1. (1:00) They start with Llama 3 and the new L3-powered version of Meta AI. Zuckerberg says “With Llama 3, we think now that Meta AI is the most intelligent, freely-available
...
Chris_Leong
Do you have any thoughts on whether it would make sense to push for a rule that forces open-source or open-weight models to be released behind an API for a certain amount of time before they can be released to the public?
Zvi

It is better than nothing, I suppose, but if they are keeping the safeties and restrictions on, then it will not teach you whether it is fine to open it up.

In the late 19th century, two researchers meet to discuss their differing views on the existential risk posed by future Uncontrollable Super-Powerful Explosives.

  • Catastrophist: I predict that one day, not too far in the future, we will find a way to unlock a qualitatively new kind of explosive power. This explosive will represent a fundamental break with what has come before. It will be so much more powerful than any other explosive that whoever gets to this technology first might be in a position to gain a decisive strategic advantage (DSA) over any opposition. Also, the governance and military strategies that we were using to prevent wars or win them will be fundamentally unable to control this new technology, so we'll have to reinvent everything on the fly or die in
...

Also, the US did consider the possibility of waging a preemptive nuclear war on the USSR to prevent it from getting nukes. (von Neumann advocated for this I think?) If the US was more of a warmonger, they might have done it, and then there would have been a more unambiguous world takeover.

This summarizes a (possibly trivial) observation that I found interesting.

 

Story

An all-powerful god decides to play a game. They stop time, grab a random human, and ask them "What will you see next?". The human answers, then time is switched back on and the god looks at how well they performed. Most of the time the humans get it right, but occasionally they are caught by surprise and get it wrong.

To be more generous, the god decides to give them access (for the game) to the entirety of all objective facts: the position and momentum of every elementary particle, every thought and memory anyone has ever had (before the time freeze), etc. However, performance in the game suddenly drops from 99% to 0%. How can this be? They...

Dagon

If you have the memories of every single human up to that point, then you don't know which of them you are.

This depends on the mechanism of attaining all these memories.  In that world, it COULD be that you still know which memories are privileged, or at least which ones include meeting God and being in position to be asked the question. 

I mean, I'm with you fundamentally: it's not obvious that ANYTHING is truly objective - other people can report experiences, but that's mediated by your perceptions as well. In most cases, one can avoid the confusion by specifying WHAT experiences will happen to WHICH observer.

tailcalled
An idea I've been playing with recently: Suppose you have some "objective world" space Ω. Then in order to talk about subjective questions, you need a reference frame, which we could think of as the members of a fiber of some function ω: I → Ω, for some "interpretation space" I.

The interpretations themselves might abstract to some "latent space" Λ according to a function λ: I → Λ. Functions of Λ would then be "subjective" (depending on the interpretation they arise from), yet still potentially meaningfully constrained, based on (λ, ω). In particular, if some structure in Ω lifts homomorphically up through ω and down through λ, you get exactly the same structure in Λ. (And these obviously compose nicely, since they're just spans, so far.)

The key question is what kind of space/algebra to preserve. I can find lots of structures that work well for particular abstractions, but it seems like the theory would have to be developed separately for each type of structure, as I don't see any overarching one.
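To make the lifting condition concrete, here is a minimal toy instance (my own illustrative construction, not tailcalled's; all the names and the choice of spaces are assumptions). Take Ω = ℤ as objective time, I = pairs (objective time, observer clock offset), and Λ = ℤ as subjective time. The successor structure on Ω lifts through ω and pushes down through λ, so Λ inherits the same structure:

```python
# Toy sketch of the "structure lifts through omega and descends through
# lambda" condition. Omega = integers (objective time), I = (time, offset)
# pairs, Lambda = integers (subjective time). All hypothetical names.

def omega(i):          # omega : I -> Omega, forget the observer
    t, offset = i
    return t

def lam(i):            # lambda : I -> Lambda, subjective time
    t, offset = i
    return t - offset

def succ_omega(t):     # structure on Omega: the successor map t -> t+1
    return t + 1

def succ_I(i):         # its lift through omega: advance time, keep offset
    t, offset = i
    return (t + 1, offset)

def succ_lambda(s):    # the induced structure on Lambda
    return s + 1

# The lift commutes with omega, and descends through lambda, so the
# "subjective" space Lambda carries the same successor structure:
for i in [(0, 3), (7, -2), (5, 5)]:
    assert omega(succ_I(i)) == succ_omega(omega(i))
    assert lam(succ_I(i)) == succ_lambda(lam(i))
```

Here the "subjective" successor on Λ is well defined precisely because the lifted map preserves fibers of λ, which is the homomorphic-lifting condition in the comment.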

tl;dr: LessWrong released an album! Listen to it now on Spotify, YouTube, YouTube Music, or Apple Music.

On April 1st 2024, the LessWrong team released an album using the then-most-recent AI music generation. All the music is fully AI-generated, and the lyrics are adapted (mostly by humans) from LessWrong posts (or other writing LessWrongers might be familiar with).

We made probably 3,000-4,000 song generations to get the 15 we felt happy about, which I think works out to about 5-10 hours of work per song we used (including all the dead ends and things that never worked out).

The album is called I Have Been A Good Bing. I think it is a pretty fun album and maybe you'd enjoy it if you listened to it! Some of my favourites are...

I would pay to see this live at a bar or one of those county fairs (we had a GLaDOS cover band once, so it's not out of the question).

michael_mjd
If we don't get a song like that, take comfort that GLaDOS's songs from the Portal soundtrack are basically the same idea as the Sydney reference. Link: https://www.youtube.com/watch?v=dVVZaZ8yO6o

The Löwenheim–Skolem theorem implies, among other things, that any first-order theory whose symbols are countable, and which has an infinite model, has a countably infinite model. This means that, in attempting to refer to uncountably infinite structures (such as in set theory), one "may as well" be referring to an only countably infinite structure, as far as proofs are concerned.

The main limitation I see with this theorem is that it preserves arbitrarily deep quantifier nesting. In Peano arithmetic, it is possible to form statements that correspond (under the standard interpretation) to arbitrary statements in the arithmetic hierarchy (by which I mean the union of Σ^0_n and Π^0_n for arbitrary n). Not all of these statements are computable. In general, the question of whether a given statement is...
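As a concrete illustration of the quantifier nesting at issue (a standard example, not taken from the excerpt above), the twin prime conjecture is a Π^0_2 sentence: a universal quantifier over an existential one, with a decidable matrix:

```latex
% Twin prime conjecture as a \Pi^0_2 sentence of arithmetic:
% one universal quantifier, then one existential quantifier,
% then a predicate decidable by direct computation.
\forall n \,\exists p \;\bigl(\, p > n \;\wedge\; \mathrm{prime}(p) \;\wedge\; \mathrm{prime}(p+2) \,\bigr)
```

Each additional alternation of ∀ and ∃ climbs one level of the hierarchy, which is why truth in the standard model cannot be captured by any fixed amount of computation.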

Concerns over AI safety and calls for government control over the technology are highly correlated but they should not be.

There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests.

Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives...

Governments are not social welfare maximizers

 

Most people making up governments, and society in general, care at least somewhat about social welfare.  This is why we get to have nice things and not descend into chaos.

Elected governments have the most moral authority to take actions that affect everyone, ideally a diverse group of nations as mentioned in Daniel Kokotajlo's maximal proposal comment.

Rudi C
A core disagreement is over “more doomed.” Human extinction is preferable to a totalitarian stagnant state. I believe that people pushing for totalitarianism have never lived under it.
Daniel Kokotajlo
Who is pushing for totalitarianism? I dispute that AI safety people are pushing for totalitarianism.
MondSemmel
Flippant response: people pushing for human extinction have never been dead under it, either.

TL;DR

Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos—aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I’ll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, Tyler Cowen, George Hotz, and others. 

What are Tacit Knowledge Videos?

Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows:

Tacit knowledge is knowledge that can’t properly be transmitted via verbal or written instruction, like the ability to create

...
Amadeus Pagel
I was enthusiastic about the title of this post, hoping for something different from the usual LessWrong content, but disappointed by most of the examples. In my view, if you take this idea of learning tacit knowledge with video seriously, it shouldn't affect just how you learn, but what you learn, rather than trying to learn book subjects by watching videos.

If you have recommendations, post them! I doubt the author tried to filter the subjects much toward "book subjects"; it's just what people seem to have found good videos for so far.

Johannes C. Mayer
Yes, abstraction is the right thing to think about. That is the context in which I was considering this computation. In this post I describe a sort of planning abstraction that you can do if you have an extremely regular environment. It does not yet talk about how to store this environment, but you are right that this can of course also be done similarly efficiently.
Johannes C. Mayer
In this post, I describe a toy setup, where I have a graph of 10^100 vertices. I would like to compute, for any two vertices A and B, how to get from A to B, i.e. compute a path from A to B. The point is that if we have a very special graph structure, we can do this very efficiently: O(n), where n is the plan length.

In that post, you say that you have a graph of 10^100 vertices with a particular structure. In that scenario, where is that structured graph of 10^100 vertices coming from? Presumably there's some way you know the graph looks like this

rather than looking like this

 

If you know that your graph is a nice sparse graph that has lots of symmetries, you can take advantage of those properties to skip redundant parts of the computation (and when each of your 10^100 nodes has at most 100 inbound edges and 100 outbound edges, then you ... (read more)
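The efficiency claim being discussed can be illustrated with a toy sketch (my own construction, assuming a grid-like regular graph; the post's actual structure may differ). Vertices are tuples in {0,…,9}^d, giving 10^d vertices (d = 100 matches the 10^100 figure), with an edge whenever one coordinate changes by 1. Because the structure is known symbolically, a path is computed coordinate by coordinate in time proportional to its length, without ever materializing the graph:

```python
# Hypothetical example: path-finding in a regular 10^d-vertex grid graph
# in O(path length), exploiting the known symbolic structure.

def path(a, b):
    """Return a vertex path from a to b, both tuples in {0..9}^d."""
    assert len(a) == len(b)
    route = [a]
    cur = list(a)
    for i, target in enumerate(b):
        # walk coordinate i toward its target value, one edge at a time
        while cur[i] != target:
            cur[i] += 1 if cur[i] < target else -1
            route.append(tuple(cur))
    return route

# Example in d = 4 (the identical code works unchanged for d = 100):
p = path((0, 0, 0, 0), (3, 1, 0, 2))
assert p[0] == (0, 0, 0, 0) and p[-1] == (3, 1, 0, 2)
# every step traverses exactly one edge (one coordinate changes by 1):
for u, v in zip(p, p[1:]):
    assert sum(abs(x - y) for x, y in zip(u, v)) == 1
```

The contrast with the unstructured case is the point of the reply above: generic search would need to touch the graph itself, which is impossible at 10^100 vertices, whereas here the regularity substitutes for the adjacency data.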

I

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let’s name this hypothetical movement the Effective Samaritans.

Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping.

But many Effective Samaritans were starting to wonder: is this randomista approach really the most prudent? After all, Scandinavia didn't become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.

The Scandinavian societal model which lifted the working class, brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced back all the...

Amadeus Pagel
I think charter cities are a questionable idea, even though I'm pro free markets. It seems that the sort of constitutional change and stability required for a charter city is no easier to achieve than the kind of constitutional change and stability required for a free market in the entire country. I don't think trying either in developing countries as an outsider is a good use of anyone's resources.
Richard_Kennaway
Whenever I've seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best, they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.
romeostevensit
Our sensible Chesterton fences
His biased priors
Their inflexible ideological commitments

In addition to epistemic priors, there are also ontological priors and teleological priors to cross-compare, each with their own problems. On top of which, people are even worse at comparing non-epistemic priors than they are at comparing epistemic priors. As such, attempts to point out that these are an issue will be seen as a battle tactic (move the argument from a domain in which they have the upper hand, from their perspective, to unfamiliar territory in which you'll have an advantage) and resisted.

You may share the experience I've had that most attempts at discussion don't go anywhere. We mostly repeat our cached knowledge at each other. If two people who are earnestly trying to grok each other's positions drill down for long enough, they'll get to a bit of ontology comparison, where it turns out they have different intuitions because they are using different conceptual metaphors for different moving parts of their model. But this takes so long that by the time it happens, only a few bits of information get exchanged before one or both parties are too tired to continue.

The workaround seems to be that if two people have a working relationship then, over time, they can accrue enough bits to get to real cruxes, and this can occasionally suggest novel research directions. My main theory of change is therefore to find potentially productive pairings of people faster, and create the conditions under which they can speedrun getting to useful cruxes. Unfortunately, Eli Tyre tried this theory of change and reported that it mostly didn't work, after a bunch of good-faith efforts from a bunch of people. I'm not sure what's next.

I personally believe more progress could be made if people were trained in consciousness of abstraction (per Korzybski), but this is a sufficiently difficult ask as to fail people's priors on how much effort to spend on novel skills with unclear payoffs.

LessOnline

A Festival of Writers Who are Wrong on the Internet

May 31 - Jun 2, Berkeley, CA