Get your shit together and go play the winners’ bracket.
No; if I want to play, I do; if I don't, I don't.
That's success.
This whole framing in terms of games is misleading. It doesn't matter which bracket you're playing in; if you feel you have to play, you've already lost.
Alas, memetic pressures, credential issuance, and incentives are not particularly well aligned with truth or discovery, so this strategy fails predictably in a whole slew of places.
Can you provide specific examples of places where this fails predictably to illustrate? Better: can you make a few predictions of future failures?
If I understand correctly, your position is that we lose status points when we say weird (as in a few standard deviations outside the normal range) but likely true things, and it's useful to get the points back by being cool (=dressing well).
It seems true that there are only so many weird things you can say before people write you off as crazy.
Do you think a strategy where you try not to lose points in the first place would work? For example, by letting your interlocutor come to the conclusion on their own via the Socratic method?
Wow. We are literally witnessing the birth of a new replicator. This is scary.
High-level actions don’t screen off intent
, consequences do.
Chesterton's Missing Fence
Reading the title, I first thought of a situation related to the one you describe, where someone ponders the pros and cons of fencing an open path, and after giving it thoughtful consideration, decides not to, for good reason.
So it's not a question of removing the fence, but that it was never even built; it is "missing". Yet the next person who comes upon the path would be ill-advised to fence it without thoroughly weighing the pros and cons, given that someone else already decided not to fence that path.
You may think this all sounds abstract, but if you program often, this is actually a situation you come across: programmer P1 spends a lot of time considering the design of a data structure or a codebase, rejects all considered possibilities but the one they implement, and perhaps documents it if they have time. But they will usually not document why they rejected the N other possibilities they considered.
P2 then comes in thinking "Gee, it sure would be convenient if the code had feature F, I can't believe P1 didn't think of that! How silly of them!", not realizing that feature F was carefully considered and rejected, because if you implement it, bad thing B happens. There's your missing fence: it was never built in the first place, and for good reasons.
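One cheap mitigation is to leave the rejected design in the code itself, right next to the thing it constrains, so that P2 trips over it before reimplementing feature F. A minimal sketch of what that can look like; the cache module, all the names, and the rejected feature are hypothetical, invented purely to illustrate:

```python
# cache.py -- hypothetical sketch of documenting a "missing fence".
#
# DESIGN NOTE (the fence P1 never built):
# We considered a bulk evict_many(keys) method (feature F) and rejected
# it: it would hold the lock across the whole batch and stall every
# reader in the meantime (bad thing B). If you are tempted to add it,
# start here.

import threading


class Cache:
    """Small thread-safe cache; eviction is deliberately one key at a time."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._data: dict[str, object] = {}

    def put(self, key: str, value: object) -> None:
        with self._lock:
            self._data[key] = value

    def get(self, key: str, default: object = None) -> object:
        with self._lock:
            return self._data.get(key, default)

    def evict(self, key: str) -> None:
        # One key per lock acquisition, so readers never wait on a batch.
        with self._lock:
            self._data.pop(key, None)
```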
Restricting "comment space" to what a prompted LLM approves slightly worries me: I imagine a user tweaking their comment (that may have been flagged as a false positive) so that it fits the mold of the LLM, then commenters internalizing what the LLM likes and doesn't like, and the comment section ending up filtered through the lens of whatever LLM is doing moderation. The thought of such a comment section does not bring joy.
Is there a post that reviews prior art on the topic of LLM moderation and its impacts? I think that would be useful before making a decision.
Hypothetically, one could spend a few decades researching how to make people smarter (or some other long-term thing), unlock that tech, and that would be really good.
But what if you plan your path towards that long-term goal such that it is the unlocking of various lesser but useful techs that gets you there?
Well, now that's even better: you get the benefit of reaching the end goal plus all the smaller things you accomplished along the way. It also gives you a hedge: even if you never reach the end goal, you still accomplished a lot. And as a cherry on top, it's more sustainable, since you get motivation (and money?) from unlocking the intermediate techs.
So it looks like it's worth going out of your way to reap benefits regularly as you journey towards a long term goal.
it’s immediately clear when I've landed on the right solution (even before I execute it), because all of the constraints I’ve been holding in my head get satisfied at once. I think that’s the “clicking” feeling.
It's worth noting that insight does not guarantee you have the right solution. From the paper "The dark side of Eureka: Artificially induced Aha moments make facts feel true" by Laukkonen et al.:
John Nash, a mathematician and Nobel laureate, was asked why he believed that he was being recruited by aliens to save the world. He responded, “…the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously”
and
we hypothesized that facts would appear more true if they were artificially accompanied by an Aha! moment elicited using an anagram task. In a preregistered experiment, we found that participants (n = 300) provided higher truth ratings for statements accompanied by solved anagrams even if the facts were false, and the effect was particularly pronounced when participants reported an Aha! experience (d = .629). Recent work suggests that feelings of insight usually accompany correct ideas. However, here we show that feelings of insight can be overgeneralized and bias how true an idea or fact appears, simply if it occurs in the temporal ‘neighbourhood’ of an Aha! moment. We raise the possibility that feelings of insight, epiphanies, and Aha! moments have a dark side, and discuss some circumstances where they may even inspire false beliefs and delusions, with potential clinical importance.
Insight is also relevant to mental illness, psychedelic experiences, and meditation so you might find some papers about it in these fields too.
Most things in life, especially in our technological civilization, are already sort of optimized
I want to add nuance to that point: in my experience, as soon as I stray one iota from the one-size-fits-all (or -none) products provided by the mass market, things either suck, don't exist, or cost 10x the price.
Even the so-called optimized path sucks sometimes, for reasons described in Inadequate Equilibria. A tech example of that is Wirth's law:
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
There is a lot of software that is literally hundreds of times slower than it could be because, for example, it runs on top of bloated frameworks that run on top of toy languages designed in 10 days (cough JavaScript cough), that run on top of virtual machines, that run on top of OSes, and use protocols designed for a bygone era.
I think that as civilization leverages economies of scale more and more, the gap in quality/price ratio between custom goods and mass-produced goods widens. That leads to the disappearance of artisans, which means that as time goes on, civilization optimizes a narrower and narrower range of goods; and that sucks when you want a product with specific features that are actually useful to you.
Back to your point, I would say that civilization is often not optimized: we can literally do a hundred times better, but the issue is that often there is no clear path from "creating a better (or a custom) product" to "earning enough money to live".
Musings on human actions, chemical reactions and threshold potentials:
Chemical reactions don't occur unless a specific threshold of energy is reached; that threshold is called the activation energy. Would it be fruitful to model human actions in the same way, as in: they don't occur unless a specific activation energy is reached?
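For concreteness, chemistry quantifies this with the Arrhenius equation (this part is textbook chemistry; applying it to human action is just the analogy being floated here):

$$k = A\, e^{-E_a / (RT)}$$

where $k$ is the reaction rate constant, $A$ the pre-exponential factor, $E_a$ the activation energy, $R$ the gas constant, and $T$ the temperature. Because $E_a$ sits in the exponent, even a modest drop in the barrier speeds the reaction up enormously.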
Chemistry has the concept of a catalyst: a substance that lowers the activation energy required for a reaction. Is there an equivalent for human action? Off the top of my head, I can think of a few:
These are all catalysts: they make it easier to get started on an action.
If from chemistry we go up one level on the ladder of abstraction, to neurons, triggering actions involves threshold potentials: a neuron spikes only once its membrane potential crosses a threshold, for example to tell the body to move. If we can measure these threshold potentials, could we look at our brain and go "yep, these neurons have a higher threshold potential, that's an ugh field"? Could we then decide to lower that threshold by using a catalyst?
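For intuition, here is a minimal sketch of the textbook leaky integrate-and-fire neuron, the simplest model of spike-at-threshold behavior. The parameter values are illustrative rather than measured from any real neuron, and the "ugh field as high threshold" reading is just the speculation above:

```python
# Minimal leaky integrate-and-fire neuron: a toy model of "spike only
# when the membrane potential crosses a threshold". Parameter values
# (in rough mV-like units) are illustrative, not fitted to real data.

def simulate(input_current, threshold=-55.0, rest=-70.0,
             leak=0.1, gain=1.0, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = rest
    spikes = []
    for t, i in enumerate(input_current):
        # Leak pulls v back toward rest; input pushes it up.
        v += dt * (leak * (rest - v) + gain * i)
        if v >= threshold:   # threshold crossed -> spike
            spikes.append(t)
            v = rest         # reset after spiking
    return spikes

# A weak drive never crosses the threshold (a high "activation energy");
# a stronger drive does. A "catalyst" would amount to lowering `threshold`.
print(simulate([0.5] * 100))  # no spikes: settles below threshold
print(simulate([2.0] * 100))  # regular spikes every ~14 steps
```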