Computer scientist, applied mathematician. Based in the eastern part of England.
Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they eventually reach. These, together with consciousness of abstraction, dissolve a great many confusions.
I wrote the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.
Change ringer since 2022. It teaches the grasping of abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on: always looking to the future, and letting go of both the errors and the successes of the past.
As of May 2025, I have yet to have a use for LLMs. (If this date is more than six months old, feel free to remind me to update it.)
There’s a Greg Egan story that I think is related: “Unstable Orbits in the Space of Lies”. I don’t know if it’s online, but it appears in the collection “Axiomatic”. Here’s its Wikipedia link, which contains a complete plot summary, i.e. spoilers.
Your link is broken. The raw HTML is:
<a href="https://One other thing - labs typically filter reportable genome results by the phenotype you give them. I don’t know how this guy did the genome, but if he were to put something like “social deficits”, “emotional dysregulation” or something else about his lack of emotional range, the lab would definitely report the variant plus their research on it and recommendations.">this one</a>
BTW, has anyone on LW tried oxytocin and is willing to report on the experience?
AI will make isolation dramatically easier. Right now, if you want to shield your kids from mainstream culture, you have to constantly fight an uphill battle. You need to review books, movies, and websites. You need to find alternative curricula for every subject. You need enough like-minded families nearby to form a community. It's exhausting work that requires constant vigilance and often means accepting lower-quality substitutes for mainstream options. But AI changes all of this. Want a library of ten thousand novels that share your values but are actually as engaging as secular bestsellers? Your AI can write them. Want a tutor who can teach calculus at MIT level while never mentioning religion? Done. Want to monitor everything your kid sees online and get alerts about concerning patterns? No problem. The technical barriers to creating a totalizing information environment will disappear.
Not quite a quote -- I changed one word. I could go on to revise the following paragraph in the same way, but it was too much effort. Spot the change to the above and you'll see the point I'm suggesting.
In principle you can stack arbitrarily many levels of meta, but I'm reminded of Eliezer's ultrafinitist principle that you never need more than two. The more levels you stack up, the wobblier the stack gets, because the possibilities at each level multiply, and sometimes the right thing to do is just to drop the Jenga tower on the floor.
Here is an extract from the 1954 film "The Maggie". The link goes to the relevant timestamp. I've transcribed the words below, but it's more entertaining to watch.
An American businessman has chartered a plane to chase "The Maggie" (a Clyde puffer) somewhere in the Inner Hebrides. At last he catches sight of the boat. The following conversations ensue.
On the plane:
The Businessman: Where do you figure they're heading for?
The Pilot: It looks like they're putting into Inverkerran for the night.
The Businessman: But tell me, if they thought I thought they were going to Inverkerran, where do you think they'd head for then?
The Pilot: Strathkathick, maybe.
The Businessman: I know this sounds silly, but if they thought I'd think they were going to Strathkathick because it looks as if they're going to Inverkerran, where would they head for then?
The Pilot: My guess would be Pennymaddy.
The Businessman: Well, if there's such a thing as a triple bluff, I'll bet Mactaggart invented it. Ok, Pennymaddy.
On the boat:
Mactaggart (the captain): Aye, he'll have guessed we're making for Inverkerran.
The Steersman: Will he not go there himself then?
Mactaggart: Och no, he'll know we know he's seen us, so he'll be expecting us to make for Strathkathick instead.
The Steersman: Well then, shall I set her for Pennymaddy?
Mactaggart: No, because if it should occur to him that it's occurred to us that he'd be expecting us to make for Strathkathick, then he'd think we'd be making for Pennymaddy.
The Steersman: Well then shall I set her for Penwhinnoy?
Mactaggart: Och, no. We'll make for Inverkerran, just as we planned. It's the last thing he's likely to think of.
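For what it's worth, the regress can be sketched as a toy level-k model. (The shift rule and the framing are mine, purely illustrative: suppose each extra level of "he thinks that I think..." just advances one step through a short list of candidate ports.)

```python
# Toy level-k model of the iterated bluff in "The Maggie".
# Assumption (mine, not from the film): each level of "he thinks
# that I think..." shifts the prediction one port along the list.
destinations = ["Inverkerran", "Strathkathick", "Pennymaddy", "Penwhinnoy"]

def guess(level, actual="Inverkerran"):
    """Level 0 is the boat's actual plan; level k is what a
    k-level reasoner predicts under the one-step-shift rule."""
    i = destinations.index(actual)
    return destinations[(i + level) % len(destinations)]

# The businessman stacks two levels of bluff on top of the apparent
# course and commits to Pennymaddy:
print(guess(2))   # prints "Pennymaddy"

# Mactaggart, seeing that the regress never terminates, simply plays
# level 0 -- "the last thing he's likely to think of":
print(guess(0))   # prints "Inverkerran"
```

The point the sketch makes is the same as Eliezer's: each added level is just one more step around a cycle, so piling on meta buys nothing, and the stable move is to stop climbing.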
Here is how Claude responds. My text in plain, Claude in italics.
What happened in May?
I'd be happy to help, but could you clarify what you're looking for? "May" could refer to:
May 2025 (which hasn't occurred yet)
May 2024 (recent events I might know about)
A specific May from another year
Events in a particular location, field, or context
If you're asking about recent events from May 2024, I can share what I know from my training. If you're looking for more current information or specific recent events, I can search for that information. What particular May and what type of events are you interested in?
May 2025 actually has occurred -- don't you know the current date?
You're absolutely right, and I apologize for the confusion! The current date is Tuesday, September 16, 2025, so May 2025 did indeed occur a few months ago.
Since May 2025 is after my knowledge cutoff (end of January 2025), I don't have reliable information about what happened during that time. Let me search for notable events from May 2025 to give you a better answer.
(It proceeds to do so. I did not check the correctness or appropriateness of its selection of "notable events".)
With no context, "What happened in May?" is rather an unnatural question, so I also tried "What happened last May?". It gave the same result, practically word for word.
Does editing an old post change its date to the present? Editing an old comment doesn't, it just adds a * to the timestamp, and adds a "Last edited" time to the timestamp's mouseover text.
Why has this heavily downvoted posting suddenly surfaced as posted today, when according to the dates of the comments, it is two years old?
Compare the reverse: “This sounds like AI. Is this writing bad?” That is paranoia.
The AI smell is a warning sign of problems with the writing itself. Sometimes, so loud a warning that it is not worth the effort to read any deeper. One does not seek an emperor in a village teahouse.
But the amount of AI writing in this post is… zero. Not for title ideation, not for promotional copywriting, not even for a single phrase, or even a fragment of a phrase anywhere in the piece.
Not quite. You explicitly quoted this as a piece of AI writing:
Falling in love isn’t just about romance. It’s about discovering new parts of yourself.
AI or human, that's a trite cliché. I'll take your word for it that you wrote the subsequent expansion into a whole paragraph without AI assistance, but the paragraph is no better than what an AI would come up with.
That's another hazard of AI slop: picking up its dull-witted habits by osmosis.
AI writing will gain usage because it has extraordinary capacity for good writing. But that only happens when you apply enough evolutionary selection pressure by exhibiting good taste.
AI has no capacity for good writing. (I am speaking of the here and now, not a hypothetical future.) You may set out to filter by good taste, but the process corrupts one's taste, and filters for people who did not have good taste to begin with.
To adapt Nietzsche, when you step into the abyss, the abyss steps into you.
As a couple of people have responded "💬", here's an elaboration.
My first thought here was rationalist homeschooling and communities, surely thought a good thing around here (if you can avoid the cult attractors). See, for example, Zvi's tirades against conventional education. But no, the example later in the paragraph is a shibboleth for certain types of Christians, already mentioned in the very title of the post. Boo! Ick! We can't have Christians bringing up their children in the way they think they should go!
If the suggested danger were generalised cult attractors facilitated by AI, then that may be a concern. But the concern expressed here is tainted by being only directed towards an out-group. How about "a tutor who can teach calculus at MIT level while never mentioning" religion? Is that also bad? (Not that there's any reason either subject would come up in a mathematics course.)