LESSWRONG

Hastings

Comments

Should we align AI with maternal instinct?
Hastings · 19h

I suspect you are right; however, to play devil's advocate: in my opinion, the closest example we have of anyone stably aligning a superintelligent creature is housecats aligning humans, and co-opting maternal instinct is a large part of how they did it.

Hastings's Shortform
Hastings · 19h

Obviously the incident when OpenAI's voice mode started answering users in their own voices needs to be included; I don't know how I forgot it. That was the point where I explicitly took up the heuristic that if ancient folk wisdom says the Fae do X, the odds of LLMs doing X are not negligible.

Wei Dai's Shortform
Hastings · 7d

Intuitively, I would expect any hard-coded psychological meta-rule that allows a wife to prevent her husband from day trading significant fractions of their wealth based on facts and logic to be a massive net positive to reproductive fitness over the past 3000 years. It clearly didn't work this time, but that doesn't mean it was a bad idea over a population.

An Introduction to Credal Sets and Infra-Bayes Learnability
Hastings · 8d

Ah: the original set isn't credal, but taking its convex hull doesn't change behavior. Got it.

An Introduction to Credal Sets and Infra-Bayes Learnability
Hastings · 9d

Checking my understanding: if I look at a Knightian urn with 100 balls and believe that the probability could be 0.55 or 0.56 but not 0.555 (due to the lack of half balls), then this does not form a credal set, due to lack of convexity?
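As a sanity check on the convexity point (a minimal numeric sketch; the variable names are my own, not from the post): a credal set must contain every convex combination of its members, so the two-point set of urn probabilities fails while its convex hull does not.

```python
# Candidate probabilities for the urn: a two-point set, not convex.
beliefs = {0.55, 0.56}

# An equal-weight convex combination of the two members (= 0.555).
lam = 0.5
mix = lam * 0.55 + (1 - lam) * 0.56

# The mixture is not in the set, so the set is not convex and not credal.
print(mix in beliefs)

# The convex hull is the interval [0.55, 0.56], which does contain it.
hull = (0.55, 0.56)
print(hull[0] <= mix <= hull[1])
```

This matches the follow-up in the thread: taking the convex hull adds the "missing" mixtures without changing which options are extreme.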

Rauno's Shortform
Hastings · 12d

I strongly recommend mathr (https://mathr.co.uk/web/) for a blog far outside the rat sphere with a "working on a draft for a long time" ethos. 

Red-Thing-Ism
Hastings · 14d

Counterpoint: science-as-she-is-played is extraordinarily robust to, and even thrives on, individuals going off on red-thing-ist tangents. The main requirements are that they do go to the Amazon and look for red things, and write down and publish the raw observations that underlie their red-thing-ist hypotheses.

GPT-5: The Reverse DeepSeek Moment
Hastings · 15d

From a different angle: they spent something like $8 billion on training compute while training GPT-5, so if GPT-5 was cheap to train, where did the billions go?

GPT-5: The Reverse DeepSeek Moment
Hastings · 15d

It's a matter of degree. There's already shrinkage: GPT-4 took nearly a year to release.

GPT-5: The Reverse DeepSeek Moment
Hastings · 15d

In a race for clout, they could at any time grab six months out of thin air on benchmark graphs by closing the gap between internal and external release. I have no idea whether they have already made this one-time play.

Posts

38 · The Cats are On To Something · 8h · 3
387 · Playing in the Creek · 4mo · 13
29 · Agents don't have to be aligned to help us achieve an indefinite pause. · 7mo · 0
48 · Evaluating the truth of statements in a world of ambiguous language. · 11mo · 19
149 · What good is G-factor if you're dumped in the woods? A field report from a camp counselor. · 2y · 22
3 · Hastings's Shortform · 3y · 23
35 · Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook) · 3y · 1
7 · What are our outs to play to? · 3y · 0