Recent Discussion

Crossposted from my blog

When I mention my dad’s abuse, I mention salient things - physical pain, insults, and controlling behavior. These are “clearly bad” - if I tell you that he often told me I was lazy and would fail horribly at life once I left home, you know it's bad, because it’s concrete, easy to imagine and obviously unkind. But this wasn’t the worst of the abuse; the most terrible parts were extraordinarily hard to understand or describe.

In his world, I felt insane - I couldn’t tell what was real, who was at fault, or why my heart hurt so much. My sense of clarity around my own intentions crumbled; everything I thought or did might have seemed good on the surface, but that goodness became...

In addition to multiple human/agent points of view, it's worth going a little further down the "ignore intent" path, to consider the situation as purely environmental. It doesn't matter that these are humans, or what they want, or how they appear - is this good for you? If not, go elsewhere.

Note that this is intended as an extension of "viewing through multiple lenses", not a recommendation that this should be primary.

1pwno1h Why is frame control central to this post? While it explains frame control well, the focus seems to be on people consciously or unconsciously manipulating one another harmfully. How to avoid being manipulated, gaslit, deceived, etc. is an important topic to discuss, and a valuable skill to have. And this post offers good advice on it (whether or not it intended to). But it could've done so without bringing up the concept of frame control.
6Henry Prowbell1h There was a discussion on the Sam Harris podcast where he talks about the alarming frequency at which leaders of meditation communities end up abusing, controlling or sleeping with their students. I can't seem to find the episode name now. But I remember being impressed with the podcast guest, a meditation teacher, who said they had seen this happening all around them, and before they took over as the leader of their meditation centre had tried to put in place things to stop themselves falling into the same traps. They had taken their family and closest friends aside and asked them for help, saying things to this effect: "If you ever see me slipping into behaviour that looks dodgy I need you to point it out to me immediately and in no uncertain terms. Even though I've experienced awakening I'm still fallible and I don't know how I'm going to handle all this power and all these beautiful young students wanting to sleep with me." This kind of mindset is a norm I'd love to see encouraged and supported in the leaders of the rationalist community.
1Linch2h I wonder if you can infer de facto intent from the consequences, i.e. not the intents they think they had, but the intents they actually had. In particular, motivated cognition often makes people not "believe" that the beliefs they explicitly hold just-so-conveniently lead to their gaining greater power and status, etc. But usually the degree of this reality-warping isn't absolute. I'd be interested in dispassionately examining a potential frame controller's actions and noting whether their actions and justifications just so conveniently happen to always lead to them getting large personal gains at little personal cost, while imposing large costs (especially to autonomy) on others.

The last day has seen the imposition of new travel restrictions and spreading alarm about the Omicron variant. It sure looks like a repeat of what happened with Alpha and Delta, as well as the original strain back in early 2020, and that we are probably doing this again.

How will this play out this time? It’s very early. There’s tons of unknowns. Yet there is a lot we can say already, and many actions need to happen now to have any hope of doing much good. So it’s time to move fast, see what we can say right now and try to capture the speed premium.

I’ll start with a summary of what we know about the game board based on previous experience, then look at what we...

Good point - "boy who cried wolf" syndrome

4Neel Nanda2h What fraction of these fizzled out because they were displaced by a fitter variant vs. just not spreading further? That seems very important for figuring out how much to freak out.
1tkpwaeub3h This doesn't directly answer your question, but it appears that people who received mRNA vaccines produced fewer antibodies for one of the four endemic coronaviruses than those who were naturally infected. If that's true, it's very encouraging news as far as adapting vaccines is concerned: https://www.medrxiv.org/content/10.1101/2021.09.30.21264363v1
1tkpwaeub3h It seems that Omicron may have mutated over time within a single immunocompromised host, so it's not clear to me how natural selection applies.

what this post is

The following is an incomplete draft, which I'm publishing now because I am unlikely to ever finish writing it.

I no longer fully endorse all the claims in the post.  (In a few cases, I've added a note to say this explicitly.)  However, there are some arguments in the post that I still endorse, and which I have not seen made elsewhere.

This post is the result of me having lots of opinions about LM scaling, at various times in 2021, which were difficult to write down briefly or independently of one another.  This post, originally written in July 2021, is the closest I got to writing them all down in one place.

-nost, 11/26/21


0. caveat

This post will definitely disappoint you.

Or, anyway, it will definitely disappoint me....

Language models will inevitably end up contradicting themselves because they have finite memory. Asking them not to contradict themselves over a sufficiently large amount of text is asking for the impossible: they figure out what to output as the current token by looking at only the last n tokens, so if the contradicted fact lies further back than that, there is no way for the model to take it into account. And no increase in the size of the models, without a fundamental architecture change, will fix that problem.
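As a toy sketch of that constraint (the window size n = 8 and the token list are made-up for illustration - real models attend over thousands of tokens, and tokenization and truncation details vary by implementation):

```python
# Toy sketch of a fixed context window: the model can condition only
# on the last n tokens of the history when predicting the next one.

def visible_context(tokens, n=8):
    """Return the only part of the history the model can 'see'."""
    return tokens[-n:]

history = ["the", "cat", "is", "black", ".", "later", ",", "someone",
           "asks", "what", "color", "the", "cat", "is"]

window = visible_context(history)
# The fact "black" has already fallen outside the 8-token window, so
# no amount of extra parameters can recover it at prediction time.
print("black" in window)  # False
```

The point is that the failure is architectural, not a matter of scale: growing the model leaves `n` fixed unless the architecture itself changes.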

But architecture changes to deal with that problem might be coming...

Quadratic voting is a proposed voting system that ensures participants cast a number of votes proportional to how much they care about the issue, by making the marginal cost of each additional vote increase linearly - see this post by Vitalik for an excellent introduction.

One major issue with QV is collusion - since the marginal cost of buying one vote is different for different people, if you could spread a number of votes out across multiple people, you could buy more votes for the same amount of money. For instance, suppose you and a friend have $100 each, and you care only about Cause A and they care only about Cause B, and neither of you care about any of the other causes up for...
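The arithmetic behind the collusion problem can be sketched as follows (assuming the common simplification that v votes cost v² in total, so the marginal cost of the k-th vote, 2k-1, grows linearly; the dollar figures are illustrative):

```python
import math

# Quadratic voting: under the common simplification, v votes for one
# cause cost v**2 in total, so the k-th vote costs 2k-1 at the margin.

def cost(votes):
    return votes ** 2

def max_votes(budget):
    """Most votes a single voter can afford for a single cause."""
    return math.isqrt(budget)

solo = max_votes(200)           # one voter spending $200 alone: 14 votes
colluding = 2 * max_votes(100)  # two colluders, $100 each, same cause: 20 votes

print(solo, colluding)  # 14 20
```

Because each colluder pays only the cheap early votes instead of one person paying the expensive later ones, spreading the same $200 across two people buys 20 votes where a single voter gets 14.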

I don't see how that's a useful hypothetical scenario. If you have enough secret agents that you can coerce a large percentage of voters to do what you want, what voting system is able to stop you?

"I will help you with your thing if you do the same for me" is the core ethos of non-dictatorial civilization. IMO, QV encouraging cooperation (not "collusion") is a point in its favor, not against.

If a lion could speak, we could not understand her.

—Ludwig Wittgenstein

In order for information to be transmitted from one place to another, it needs to be conveyed by some physical medium: material links of cause and effect that vary in response to variation at the source, correlating the states of different parts of the universe—a "map" that reflects a "territory." When you see a rock, that's only possible because the pattern of light reflected from the rock into your eyes is different from what it would have been if the rock were a different color, or if it weren't there.

This is the rudimentary cognitive technology of perception. Notably, perception only requires technology on the receiving end. Your brain and your eyes were optimized by natural selection to...

The stronger the common interest in accurate signaling, the higher the opportunity cost of doing something else, which is sufficient for differential signaling cost - though traumatized humans often act against both their narrowly and broadly defined economic self-interest.

This is a thread for rationality-related or LW-related jokes and humor. Please post jokes (new or old) in the comments.

------------------------------------

Q: Why are Chromebooks good Bayesians?

A: Because they frequently update!

------------------------------------

A super-intelligent AI walks out of a box...

------------------------------------

Q: Why did the psychopathic utilitarian push a fat man in front of a trolley?

A: Just for fun.

It's awful that you were downvoted in this thread when you were mostly right and the others were mostly wrong. I'm updating my estimate of LW's average intelligence downward.


I am curious about what kind of music this community enjoys.

Music recommendation algorithms don't work well for me, not even the highly advanced almost-AGI-level algorithms of Spotify. They tend to be conservative and keep me in a bubble. I found my best new music by asking people for their favourite songs. This approach seems to offer a highly efficient search across the music graph.

If you can explain why you think that song is perfect, that would be a bonus!

If you like Steely Dan you may just enjoy Weather Report's output from the early eighties - Procession, Domino Theory and Sportin' Life. Omar Hakim on drums and Victor Bailey on bass. Confians isn't typical of their output but it is so heartfelt. I am told it is sung in a Brazilian creole, but I have no idea what it is about. But then again, I do.

Shpongle consistently pushed the envelope of electronic music, always creative, always original. For example...

5Answer by cousin_it14h I've been playing music for many years and have thought of many songs as "perfect" by various musical criteria - melody, beat and so on. But deep down I think musical criteria aren't the answer. It all comes down to which mood the song puts you in, so the perfect song is the one that hits the right mood at your current stage in life. So it's gonna be unavoidably different between people, and for the same person across time. For me as a teenager it was "Losing My Religion", somehow. Now, at almost 40, this recording of Aguas de Março [https://www.youtube.com/watch?v=E1tOV7y94DY] makes me smile.
1Yiar15h Glad you liked it! 🙂 Yeah, I love it when they add instrumental versions 😃 What a great song! 😃 Thanks for recommending! Well, if you like groove, here's a little playlist I put together of groovy goodies: https://music.apple.com/se/playlist/groovy/pl.u-pRomsWXGVP0?l=en
1Answer by philip_b16h Somehow I love a lot of songs from a few SomaFM [https://somafm.com/] radio stations:

* Groove Salad [https://somafm.com/groovesalad/] - "A nicely chilled plate of ambient/downtempo beats and grooves."
* Illinois Street Lounge [https://somafm.com/illstreet/] - "Classic bachelor pad, playful exotica and vintage music of tomorrow."
* Christmas Lounge [https://somafm.com/christmas/] - "Chilled holiday grooves and classic winter lounge tracks. (Kid and parent safe!)"

I frequently listen to these and download songs I hear there.

There's a thing I've sometimes noticed happening in social science debates. In the spirit of How to Write Quickly While Maintaining Epistemic Rigor, let's consider an example.

A note: I'm going to pick a controversial example: the causes of transsexuality. This is because for the past few years I have been studying transsexuality to try to understand its causes better, so it's an example I have thought deeply about and know a lot about. And because I've been hit by the problem described in this post.

But understandably, it's also something that a lot of people - particularly trans people and people involved in pro- or anti-trans politics - care a lot about. Given the centrality of the example, I think it's reasonable to debate object-level matters of the...

One early response I got, by someone who preferred to be anonymous, had some objections to this part:

But it relies strongly on the notion that taboo things become erotic, and while this is a popular idea, I've yet to see any convincing evidence for it, nor have the various people I've seen advocate for the theory been able to provide much.

The response was:

I agree the argument you're criticizing doesn't strike me as all that strong, but "taboos are sexy (to a non-negligible number of people, sometimes)" doesn't seem as unreasonable as you're painting it to

... (read more)

This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and Eliezer's "Takeoff Speeds" discussion.

 

Color key:

 Chat by Paul and Eliezer  Chat by Ajeya  Inline comments 

 

8. September 20 conversation

 

8.1. Chess and Evergrande

 

[Christiano][15:28] 

 I still feel like you are overestimating how big a jump AlphaGo is, or something. Do you have a mental prediction of how the graph of (chess engine quality) vs. (time) looks, and whether neural net value functions are a noticeable jump in that graph?

Like, people investing in "Better Software" doesn't predict that you won't be able to make progress at playing go. The reason you can make a lot of progress at go is that there was extremely little investment in playing better go.

So then

...

Sam Altman explicitly contradicted that in a later Q&A, when someone asked him about that quote.

9Rob Bensinger11h Transcript error fixed -- the lines that previously read

[Yudkowsky][17:40] I expect it to go away before the end of days but with there having been a big architectural innovation, not Stack More Layers
[Christiano][17:40] I expect it to go away before the end of days but with there having been a big architectural innovation, not Stack More Layers
[Yudkowsky][17:40] if you name 5 possible architectural innovations I can call them small or large

should be

[Yudkowsky][17:40] I expect it to go away before the end of days but with there having been a big architectural innovation, not Stack More Layers
[Christiano][17:40] yeah whereas I expect layer stacking + maybe changing loss (since logprob is too noisy) is sufficient
[Yudkowsky][17:40] if you name 5 possible architectural innovations I can call them small or large
5Conor Sullivan12h Right, and history sides with Paul. The earliest steam engines were missing key insights, and so operated slowly, used their energy very inefficiently, and were limited in what they could do. The first steam engines were used as pumps, and it took a while before they were powerful enough even to move their own weight (locomotion). Each progressive invention, from Savery to Newcomen to Watt, dramatically improved the efficiency of the engine, and over time engines could do more and more things, from pumping to locomotion to machining to flight. It wasn't just one sudden innovation and now we have an engine that can do all the things, including even lifting itself against the pull of Earth's gravity. It took time, and progress on smooth metrics, before we had the extremely powerful and useful engines that powered the industrial revolution. That's why the industrial revolution(s) took hundreds of years. It wasn't one sudden insight that made it all click.
2amc16h I think Herculano-Houzel would want to mention that humans have 3x (iirc) more neurons in their cerebral cortex than even the elephant species with the biggest brains. Those elephants have more total neurons because their cerebellar cortices have around 200 billion neurons. Humans have more cortical neurons than any animal, including blue whales, because neuron sizes scale differently for different orders, and primates specifically scale well. Crucially, people have thought human brains were special among primates, but she makes the point that it's the other great apes that are special in having smaller brains according to primate brain-scaling laws. This is because humans either had a unique incentive to keep up with the costs of scaling, or a unique ability to keep up with those costs (due to e.g. cooking). Having better algorithms that could take advantage of scale fits with her views, I think.