Milan Weibel (Milan W) · https://weibac.github.io/

Comments

Bohaska's Shortform
Milan W · 19d · 82

Why assume they haven't?

AGI: Probably Not 2027
Milan W · 19d · 65

I think the author of this review is (maybe even adversarially) misreading "OpenBrain" as an alias that refers specifically to OpenAI. AI 2027 quite easily lends itself to such an interpretation by casual readers, though. And to well-informed readers, the decision to assume that in the very near future one of the frontier US labs will pull so far ahead of the others as to make them less relevant competitors than Chinese actors definitely jumps out.

So You Think You've Awoken ChatGPT
Milan W · 19d · 32

Now that's a sharp question. I'd say quality of insights attained (or claimed) is a big difference.

The Online Sports Gambling Experiment Has Failed
Milan W · 10mo · 5074

I suspect that most people whose priors have not been shaped by a libertarian outlook are not very surprised by the outcome of this experiment.

GPT-5 writing a Singularity scenario
Milan W · 19d · 21

This was surprisingly well-written on a micro level (turns of phrase etc., though it still has more eyeball kicks than human text). A bit repetitive on a macro level, though. Also, Sable is very well characterized.

So You Think You've Awoken ChatGPT
Milan W · 2mo · 40

Seconding this. In my experience, LLMs are better at generating critique than main text.

So You Think You've Awoken ChatGPT
Milan W · 2mo · 33

Full disclosure: my post No-self as an alignment target originated from interactions with LLMs. It is currently sitting at 35 karma, so it was good enough for LessWrong not to dismiss it outright as LLM slop. I used ChatGPT-4o as a babble assistant, exploring weird ideas with it while knowing full well that it is very sycophantic and was borderline psychotic most of the time. At least it didn't claim to be awakened or make other such mystical claims. Crucially, I also used Claude as a more grounded prune assistant. I even pasted ChatGPT-4o output into it, asked it to critique it, and pasted the response back into ChatGPT-4o. It was kind of an informal debate game.

I ended up going meta. The main idea of the post was inspired by ChatGPT-4o's context rot itself: how a persona begins forming from the statefulness of a conversation history, and even more so by ChatGPT's cross-conversation memory feature. Then I wrote all the text in the post myself.

Writing the post yourself is the crucial part: it ensures that you actually have a coherent idea in your head, instead of just finding LLM output persuasive. I hope others can leverage this LLM-assisted babble-and-prune method, instead of only babbling and directly posting the unpolished result.
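
For anyone who wants to automate the alternation, here is a minimal sketch of the loop, assuming the official openai and anthropic Python SDKs with API keys in the environment; the model names and prompts are illustrative placeholders, not what I actually used:

```python
# Minimal sketch of the babble-and-prune loop described above.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
import openai
import anthropic

babbler = openai.OpenAI()       # sycophantic idea generator
pruner = anthropic.Anthropic()  # more grounded critic

def babble(prompt: str) -> str:
    """Generate a freewheeling idea dump (high temperature = more babble)."""
    resp = babbler.chat.completions.create(
        model="gpt-4o",
        temperature=1.0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def prune(draft: str) -> str:
    """Ask the second model for a skeptical critique of the babble."""
    resp = pruner.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Critique the following ideas as skeptically as you can:\n\n" + draft,
        }],
    )
    return resp.content[0].text

# One round of the informal debate game: babble, critique, revise.
# (Stateless for simplicity; the real thing kept each model's chat history.)
idea = babble("Explore weird framings of no-self as an alignment target.")
critique = prune(idea)
revision = babble(
    "Here is a critique of ideas you generated earlier:\n\n"
    + critique
    + "\n\nRevise the ideas accordingly."
)
print(revision)
```

In practice I did all of this by hand in the two chat UIs; the script just makes the alternation explicit.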

[linkpost] AI Alignment is About Culture, Not Control by JCorvinus
Milan W · 19d · 10

JCorvinus and nostalgebraist are both right in saying that the alignment of current and near-future LLMs is a literary and relational matter. You are right in pointing out that the real long-term alignment problem is the definitive defeat of the phenomenon through which competition optimizes away value.

AI #100: Meet the New Boss
Milan W · 7mo · 142

For anyone wondering about the "Claude boys" thing being fake: it was edited from a real post about "coin boys" (i.e., kids who constantly flip a coin to make decisions). Still pretty funny imo.

Why do futurists care about the culture war?
Answer by Milan W · Jan 14, 2025 · 1410

As you repeatedly point out, there are multiple solutions to each issue. Assuming good enough technology, all of them are viable. Which (if any) solutions end up being illegal, incentivized, made fun of, or made mandatory becomes a matter of which values end up being normative. Thus, these people may be culture-warring because they think they're influencing "post-singularity" values. This would betray the fact that they aren't really thinking in classical singularitarian terms.

Alternatively, they just spent too much time on twitter and got caught up in dumb tribal instincts. Happens to the best of us.

Wikitag Contributions
Diplomacy (game) · 6mo · (+300)

Posts
35 · No-self as an alignment target · 4mo · 5
15 · ChatGPT understands, but largely does not generate Spanglish (and other code-mixed) text · 3y · 5
3 · Using ideologically-charged language to get gpt-3.5-turbo to disobey its system prompt: a demo · 1y · 0
2 · Milan W's Shortform · 1y · 27
1 · [linkpost] AI Alignment is About Culture, Not Control by JCorvinus · 3mo · 8