

Goodhart is the malign god who gives you whatever you ask for.

Sufficient optimization pressure destroys all that is not aligned to the metric it is optimizing. Value is fragile.

Roko's Basilisk, as told in the oldest extant collection of jokes, the Philogelos, c. 4th century AD.

A pedant having fallen into a pit called out continually to summon help. When no one answered, he said to himself, "I am a fool if I do not give all a beating when I get out in order that in the future they shall answer me and furnish me with a ladder."

H/T Dynomight.

This is a story from the long-long-ago, from the Golden Age of Usenet.

On the science fiction newsgroups, there was someone—this is so long ago that I forget his name—who had an encyclopedic knowledge of fannish history, and especially con history, backed up by an extensive documentary archive. Now and then he would have occasion to correct someone on a point of fact, for example, pointing out that no, events at SuchAndSuchCon couldn't have influenced the committee of SoAndSoCon, because SoAndSoCon actually happened several years before.

The greater the irrefutability of the correction, the greater people's fury at being corrected. He would be scornfully accused of being well-informed.

"Prompt engineer" is a job that AI will wipe out before anyone even has it as a job.

There's an article in the current New Scientist, taking the form of a review of a new TV series called "The Peripheral", which is based on William Gibson's novel of the same name. The URL may not be useful, as it's paywalled, and if you can read it you're probably a subscriber and already have.

The article touches on MIRI and Leverage. A relevant part:

But over the past couple of years, singulatarian [sic] organisations [i.e. the Singularity Institute/MIRI and Leverage/Leverage Research] have been losing their mindshare. A number of former staffers at Leverage have said it was "similar to an abusive relationship". A researcher at MIRI wrote of how the group made her so paranoid about AI that she had what she calls a "psychotic break" leading to her hospitalisation. In response, MIRI pointed to the use of psychedelics in "subgroups". The singulatarian community is looking less like smart visionaries and more like cult survivors.

Posting this to draw people's attention to how these parts of the community are being written about in the slightly wider world.

The safer it is made, the faster it will be developed, until the desired level of danger has been restored.

I have a dragon in my garage. I mentioned it to my friend Jim, and of course he was sceptical. "Let's see this dragon!" he said. So I had him come round, and knocked on the garage door. The door opened and the dragon stepped out right there in front of us.

"That can't really be a dragon!" he says. It's a well-trained dragon, so I had it walk about and spread its wings, showing off its iridescent scaly hide.

"Yes, it looks like a dragon," he goes on, "but it can't really be a dragon. Dragons belch fire!"

The dragon raised an eyebrow, and discreetly belched some fire into a corner of the yard.

"Yes, but it can't really be a dragon," he says, "dragons can—"

"𝔉𝔩𝔶?" it said. It took off and flew around, then came back to land in front of us.

"Ah, but aren't dragons supposed to collect gold, and be enormously old, and full of unknowable wisdom?"

With an outstretched wingtip the dragon indicated the (rather modest, I had to admit) pile of gold trinkets in the garage. "I can't really cut off one of its talons to count the growth rings," I said, "but if you want unknowable wisdom, I think it's about to give you some." The dragon walked up to Jim, stared at him eye to eye for a long moment, and at last said "𒒄𒒰𒓠𒓪𒕃𒔻𒔞".

"Er...so I've heard," said Jim, looking a bit wobbly. "But seriously," he said when he'd collected himself, "you can't expect me to believe you have a dragon!"

The physicality of decision.

A month ago I went out for a 100 mile bicycle ride. I'm no stranger to riding that distance, having participated in organised rides of anywhere from 50 to 150 miles for more than twelve years, but this was the first time I attempted that distance without the support of an organised event. Events provide both the psychological support of hundreds, sometimes thousands, of other cyclists riding the same route, and the practical support of rest stops with water and snacks.

I designed the route so that after 60 miles, I would be just 10 miles from home. This was so that if, at that point, the full 100 was looking unrealistic, I could cut it short. I had done 60 miles on my own before, so I knew what that was like, but never even 70. So I would be faced with a choice between a big further effort of 40 miles, and a lesser but still substantial effort of 10 miles. I didn't want to make it too easy to give up.

East from New Costessey to Acle, north to Stalham, curve to the west by North Walsham and Aylsham, then south-west to Alderford and the route split.

The ride did not go especially well. I wasn't feeling very energetic that day, and I wasn't very fast. By the time I reached Aylsham I had all but decided to take the 10-mile route when I got to Alderford. I could hardly imagine doing anything else. But I also knew that was just my feelings of fatigue in the moment talking, not the "I" that had voluntarily taken on this task.

At Alderford I stopped and leaned my bike against a road sign. It was mid-afternoon on a beautiful day for cycling: little wind, sunny, but not too hot. I considered which way to go. Then I drank some water and considered some more. Then I ate another cereal bar and considered some more. And without there being an identifiable moment of decision, I got on my bike and did the remaining 40 miles.

All universal quantifiers are bounded.

The open question is whether this includes the universe itself.

[ETA: alfredmacdonald's post referred to here has been deleted.]

Well, well, alfredmacdonald has banned me from his posts, which of course he has every right to do. Just for the record, I'll paste the brief thread that led to this here.

Richard_Kennaway (in reply to this comment)

I also notice that alfredmacdonald last posted or commented here 10 years ago, and the content of the current post is a sharp break from his earlier (and brief) participation. What brings you back, Alfred? (If the answer is in the video, I won't see it. The table of contents was enough and I'm not going to listen to a 108-minute monologue.)

alfredmacdonald

When I used the website, contributors like lukeprog fostered the self-study of rationality through rigorous source materials like textbooks.

This is no longer remotely close to the case, and the community has become farcical hangout club with redundant jargon/nerd-slang and intolerable idiosyncrasies. Example below:

If the answer is in the video, I won't see it.

Hopeful to disappoint you, but the answer is in the video.

Richard_Kennaway

My declining to listen to your marathon is an "intolerable idiosyncrasy"? You're not making much of a case for my attention. I can see the long list of issues you have with LessWrong, and I'm sure I can predict what I would hear. What has moved you to this sudden explosion? Has this been slowly building up during your ten years of silent watching? What is the context?

alfredmacdonald

you're not making a case for keeping your comments, so mutatis mutandis

When I watch a subtitled film, it is not long before I no longer notice that I am reading subtitles, and when I recall scenes from it afterwards, the actors’ voices in my head are speaking the words that I read.

Me too! It's a very specific form of synesthesia. For languages I know a little, but not well enough to do without subtitles, it can trick me into thinking I'm far better at understanding native speakers than I actually am.

I can't wait until LLMs are good, fast, and cheap enough, and AR or related video technology exists, such that I can get automatic subtitles for real-life conversations, in English as well as other languages.

Epistemic status: crafted primarily for rhetorical parallelism.

All theories are right, but some are useless.

[Wittgenstein] once greeted me with the question: 'Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis?' I replied: 'I suppose, because it looked as if the sun went round the earth.' 'Well,' he asked, 'what would it have looked like if it had looked as if the earth turned on its axis?' (Source)

Like this.

Interesting application of a blockchain. What catches my attention is this (my emphasis):

The deepest thinkers about Dark Forest seem to agree that while its use of cryptography is genuinely innovative, an even more compelling proof of concept in the game is its “autonomous” game world—an online environment that no one controls, and which cannot be taken down.

So much for "we can always turn the AI off." This thing is designed to be impossible to turn off.

"Parasite gives wolves what it takes to be pack leaders", Nature, 24 November 2022.

Toxoplasma gondii, the parasite well-known for making rodents lose their fear of cats, and possibly making humans more reckless, also affects wolves in an interesting way.

"infected wolves were 11 times more likely than uninfected ones to leave their birth family to start a new pack, and 46 times more likely to become pack leaders — often the only wolves in the pack that breed."

The gesturing towards infected wolves being more reproductively fit in general is probably wrong, however. Wolves can of course be more aggressive on their own if that's actually a good idea; there's no need for a parasite to force them into it. And the suggestion about American lions going extinct is absurd: 11,000 years is more than enough time for wolves to recalibrate such a highly heritable trait if it's so fitness-linked! So the question is merely: what is going on? Some sort of bias, or a very localized fitness benefit?

Is there a selection bias where ex ante going for pack leader is a terrible idea, but ex post, conditional on victory (rather than death or expulsion), it looks good? Well, the study claims to be longitudinal and not to find the sorts of correlations you'd expect from survivorship. What else?

Looking it over, the 1995-2020 sampling frame is itself suspect: why does it start in 1995? Well, that's when the wolves came back (very briefly mentioned in the article). The wolf population expanded rapidly, five-fold, and continues to oscillate a lot as packs rise and fold (ranging from 8 to 14) and because of overall mortality/randomness on a small base (a pack is only like 10-20 wolves of all ages, so you can see why there would be a lot of volatility and problems with hard constraints like lower bounds):

Wolf population declines, when they occur, result from "intraspecific strife," food stress, mange, canine distemper, legal hunting of wolves in areas outside the park (for sport or for livestock protection) and in one case in 2009, lethal removal by park officials of a human-habituated wolf.[21]

So, we have at least two good possible explanations there: (a) it was genuinely reproductively fit to take more risks than the basal wolf, but only because they were expanding into a completely wolf-empty park and surrounding environs, and the pack-leader GLM they use doesn't include any variables for time period, so on reanalysis we would find that the leader effect has been fading out since 1995; and (b) this effect still exists, and risk-seeking individuals do form new packs and are more fit... but only temporarily, because they occupied a low-quality pack niche and it goes extinct or does badly enough that they would have done better to stay in the original pack, and this wouldn't show up in a naive individual-level GLM like theirs; you would have to do more careful tracing of genealogies to notice that the new-pack lineages underperform.
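To make (a) concrete, here is a minimal sketch of the kind of reanalysis meant, in Python with statsmodels, on entirely made-up data. The variable names (leader, infected, period), the effect sizes, and the model itself are hypothetical illustrations, not the paper's actual GLM; the point is just that adding a time-period term, and its interaction with infection status, is what would reveal a leader effect fading since 1995.

```python
# Hypothetical sketch: does the infection effect on leadership fade over time?
# All data and coefficients below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
period = rng.integers(0, 5, n)       # five 5-year blocks covering 1995-2020
infected = rng.integers(0, 2, n)     # T. gondii seropositive (1) or not (0)
# Simulate an infection effect on becoming pack leader that decays with period:
log_odds = -2.0 + (1.5 - 0.35 * period) * infected
leader = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))
df = pd.DataFrame({"leader": leader, "infected": infected, "period": period})

naive = smf.logit("leader ~ infected", data=df).fit(disp=False)
with_time = smf.logit("leader ~ infected * period", data=df).fit(disp=False)
print(naive.params)        # pooled infection effect looks large
print(with_time.params)    # a negative infected:period term is the fading-out signature
```

Point (b) couldn't be checked this way at all: it needs pack-level genealogies rather than an individual-level regression, which is exactly why it wouldn't show up in a model like this.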

Tools, not rules.

Or to put it another way, rules are tools.

What is happiness?

This is an extract from an interview with the guitarist Nilo Nuñez, broadcast yesterday on the BBC World Service. Nuñez was born and brought up in Cuba, and formed a rock band, but he and his group came more and more into conflict with the authorities. He finally decided that he had to leave. When the group received an invitation to tour in the Canary Islands, and the Cuban authorities gave them permission to go, they decided to take the opportunity to leave Cuba and not return. They only had temporary visas, so they stayed on in the Canaries illegally. The interviewer asks him what it was like.

Interviewer: And what would you do during the days?

Nuñez: I would always look for work. I would find small jobs as a cleaner and things like that, just like an undocumented migrant. If I had money because I had done some small jobs, I would eat. If I didn't have money to eat, I would just go hungry, but I wouldn't beg for money.

Int.: And when you wouldn't eat, what would that hunger feel like?

Nuñez: It was very hard, but I was happy. It was hard to live among British or other European tourists, seeing them at the beach, while you were living as a poor undocumented migrant. But I was happy. I had the most important thing I didn't have in Cuba. I had freedom.

Outlook: How The Beatles inspired me to rock against Cuba’s regime. Quoted passage begins at 34:54.

Nilo Nuñez eventually continued his professional career as a guitarist and obtained Spanish citizenship.

Oh, lookee here. AI-generated spam.

From New Scientist, 14 Nov 2022, on a 50% fall in honeybee life expectancy since the 1970s:

“For the most part, honeybees are livestock, so beekeepers and breeders often selectively breed from colonies with desirable traits like disease resistance,” says Nearman.

“In this case, it may be possible that selecting for the outcome of disease resistance was an inadvertent selection for reduced lifespan among individual bees,” he says. “Shorter-lived bees would reduce the probability of spreading disease, so colonies with shorter lived bees would appear healthier.”

Another story of Conjuring an Evolution?