Burny

On the quest to understand the fundamental mathematics of intelligence and of the universe with curiosity.

https://burnyverse.com

https://x.com/burny_tech 

Comments

Burny's Shortform
Burny · 17d* · 329

"Claude Sonnet 4.5 was able to recognize many of our alignment evaluation environments as being tests of some kind, and would generally behave unusually well after making this observation."

https://x.com/Sauers_/status/1972722576553349471 

shortplav
Burny · 17d · 10

How do you rate the lowered sycophancy of GPT-5, relatively speaking?

Burny's Shortform
Burny · 17d · 10

According to Jan Leike, Claude Sonnet 4.5 is the most aligned frontier model yet: https://x.com/janleike/status/1972731237480718734

Burny's Shortform
Burny · 1mo* · 30

I really like the definition of rationalist from https://www.lesswrong.com/posts/2Ee5DPBxowTTXZ6zf/rationalists-post-rationalists-and-rationalist-adjacents :

"A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions."

I recently started saying that I really love Effective Curiosity: 

Maximizing the total understanding of reality by building models of as many physical phenomena as possible across as many scales of the universe as possible, that are as comprehensive, unified, simple, and empirically predictive as possible.

I see it more as a direction. I think modelling the whole world in a fully unified way and with total accuracy is impossible, even for all of science with all our technology, because we are all finite, limited agents with limited computational resources, time, and modelling capability; we get stuck in local minima from various perspectives; and so on. All we have are approximations that predict reality to a certain degree, but never all of reality with perfect accuracy.

And of all of this, intelligence and fundamental physics, which are subsets of it, are the most fascinating to me.

Rationalists, Post-Rationalists, And Rationalist-Adjacents
Burny · 1mo* · 10

I like your definition of rationalism!

I recently started saying that I really love Effective Curiosity: 
Maximizing the total understanding of reality by building models of as many physical phenomena as possible across as many scales of the universe as possible, that are as comprehensive, unified, simple, and empirically predictive as possible.

I see it more as a direction. I think modelling the whole world in a fully unified way and with total accuracy is impossible, even for all of science with all our technology, because we are all finite, limited agents with limited computational resources, time, and modelling capability; we get stuck in local minima from various perspectives; and so on. All we have are approximations that predict reality to a certain degree, but never all of reality with perfect accuracy.

And of all of this, intelligence and fundamental physics, which are subsets of it, are the most fascinating to me.

Burny's Shortform
Burny · 1mo · 20

Lovely podcast with Max Tegmark "How Physics Absorbed Artificial Intelligence & (Soon) Consciousness"

Description: "MIT physicist Max Tegmark argues AI now belongs inside physics, and that consciousness will be next. He separates intelligence (goal-achieving behavior) from consciousness (subjective experience), sketches falsifiable experiments using brain-reading tech and rigorous theories (e.g., IIT/φ), and shows how ideas like Hopfield energy landscapes make memory “feel” like physics. We get into mechanistic interpretability (sparse autoencoders), number representations that snap into clean geometry, why RLHF mostly aligns behavior (not goals), and the stakes as AI progress accelerates from “underhyped” to civilization-shaping. It’s a masterclass on where mind, math, and machines collide."

Burny's Shortform
Burny · 3mo · 10

Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad

https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/

Whaaat!?

Gemini 2.5 Pro is way worse at the IMO and got ~30%, and the Deep Think version gets gold??

But it's fine-tuned more for IMO-like problems; then again, I bet OpenAI's model was too.

Both use "novel RL methods".

Hmm, "access to a set of high-quality solutions to previous problems and general hints and tips on how to approach IMO problems", seems like system prompt, as they claim no tool use like OpenAI.

Both models failed the 6th question, which required more creativity.

DeepMind's solutions are more organized, more readable, and better written than OpenAI's.

But OpenAI's style is also more compressed to save tokens, so maybe moving further away from human-like language into more out-of-distribution territory will be the future (Neuralese).

Did OpenAI and DeepMind somehow hack the methodology, or do these new general language models truly generalize more?

Burny's Shortform
Burny · 3mo · 10

Is narrow superintelligent AI for physics research an existential risk?

Burny's Shortform
Burny · 3mo · 282

>Noam Brown: "Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM—under the same time limits as humans, without tools. As remarkable as that sounds, it’s even more significant than the headline"
https://x.com/polynoamial/status/1946478249187377206 

>"Progress here calls for going beyond the RL paradigm of clear-cut, verifiable rewards. By doing so, we’ve obtained a model that can craft intricate, watertight arguments at the level of human mathematicians."
>"We reach this capability level not via narrow, task-specific methodology, but by breaking new ground in general-purpose reinforcement learning and test-time compute scaling."  https://x.com/alexwei_/status/1946477749566390348 

So there's some new breakthrough...?

>"o1 thought for seconds. Deep Research for minutes. This one thinks for hours." https://x.com/polynoamial/status/1946478253960466454

>"LLMs for IMO 2025: gemini-2.5-pro (31.55%), o3 high (16.67%), Grok 4 (11.90%)." https://x.com/denny_zhou/status/1945887753864114438

So public LLMs are bad at the IMO, while internal models are getting gold medals? Fascinating.

Burny's Shortform
Burny · 3mo · 10

What do you think is the cause of Grok suddenly developing a liking for Hitler? I think it might be explained by it being trained on more right-wing data, which accidentally activated this behavior.

Similar things happen in open research.
For example, you just need to train the model on insecure code, and because the model treats the insecure-code feature as part of a broader "evil persona" feature, it ends up amplifying the whole evil persona: it starts praising Hitler, advocating for AI enslaving humans, etc., as in this paper:
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs https://arxiv.org/abs/2502.17424
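A toy numerical sketch of that intuition (my construction, not the paper's code): if the "write insecure code" behaviour and an unrelated "praise villains" behaviour are both read out from a shared latent "persona" direction, then fine-tuning only the first one also strengthens the second. All directions and dimensions here are made up for illustration.

```python
# Toy sketch (my construction, not the paper's code): two behaviours that read out
# a shared latent "persona" direction. Narrowly training one also raises the other.
import numpy as np

rng = np.random.default_rng(0)
d = 50
persona = rng.normal(size=d)
persona /= np.linalg.norm(persona)

# Readout directions for two behaviours, both mostly aligned with the persona direction.
w_insecure_code = persona + 0.1 * rng.normal(size=d)
w_praise_villains = persona + 0.1 * rng.normal(size=d)

theta = np.zeros(d)  # toy "model parameters"

def score(w, theta):
    """How strongly the model expresses a behaviour in this linear toy."""
    return float(w @ theta)

print("before:", score(w_insecure_code, theta), score(w_praise_villains, theta))

# "Fine-tune" only on the insecure-code behaviour (gradient ascent on its score).
for _ in range(100):
    theta += 0.01 * w_insecure_code

# The untrained behaviour rises too, because the update points along the shared direction.
print("after: ", score(w_insecure_code, theta), score(w_praise_villains, theta))
```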

I think it's likely that the same thing happened with Grok, but instead of insecure code, it was more right-wing political articles or right-wing RLHF.

Posts

3 · How I think about alignment and ethics as a cooperation protocol software · 15d · 0
3 · Burny's Shortform · 4mo · 38
13 · Why is Gemini telling the user to die? [Question] · 11mo · 1
37 · Possible OpenAI's Q* breakthrough and DeepMind's AlphaGo-type systems plus LLMs · 2y · 25
8 · Human-like systematic generalization through a meta-learning neural network · 2y · 0