Alexander Gietelink Oldenziel

"(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties."
- Saharon Shelah

"A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring."
- Alexander Pope

 

As a true-born Dutchman I endorse Crocker's rules.

For most of my writing, see my shortforms (new shortform, old shortform)

Twitter: @FellowHominid

Personal website: https://sites.google.com/view/afdago/home

Sequences

Singular Learning Theory

Posts

Alexander Gietelink Oldenziel's Shortform (3y)

Comments
Alexander Gietelink Oldenziel's Shortform
Alexander Gietelink Oldenziel (11h)

Claude is smarter than you. Deal with it. 

There is an incredible amount of cope about the current abilities of AI. 

AI isn't infallible. Of course. And yet...

For 90% of queries, a well-prompted AI has better responses than 99% of people. For some queries, the number of people who could match the kind of deep, broad knowledge that AI has can be counted on two hands. Finally, and obviously, there is no man alive on the face of the earth who comes even close to the breadth and depth of crystallized intelligence that AIs now have.

People have developed a keen apprehension of, and aversion to, "AI slop". The truth of the matter is that LLMs are incredible writers, and if you had presented AI slop as human writing to somebody six years ago, they would have called it anything from good if somewhat corporate writing all the way to inspired, eloquent, witty.

Does AI sometimes make mistakes? Of course. So do humans. To be human is to err.

There is an incredible amount of cope about the current abilities of AI. Frankly, I find it embarrassing. Witness the absurd call to flag AI-assisted writing. The widespread disdain for "@grok is this true?". Witness how LLM psychosis has gone from perhaps a real phenomenon to a generic slur for slightly kooky people. The endless moving of goalposts. The hysteria about the slopapocalypse. The almost complete lack of interest in integrating AI into real-life conversations. The widespread shame that people evidently still feel when they use AI. Worst of all, it's not just academia in denial, or the unwashed masses. The most incredible thing to me is how much denial, cope and refusal there is in the AI safety space itself.

I cannot escape the conclusion that inside each and every one of us is an insecure ape that cannot bear to see itself usurped from the throne of creation.

Research Reflections
Alexander Gietelink Oldenziel (4d)

Happy to hear ILIAD felt productive!

When we started ILIAD, it was from a felt sense that there is in fact much more commonality and convergence in research directions than is commonly assumed.

Leaving Open Philanthropy, going to Anthropic
Alexander Gietelink Oldenziel (5d)

It feels like something has gone wrong well before this point when one cares more about money than about the survival of the human race.

If a man's judgement is really swayable by equity, one can't help but wonder whether he is the right man for the job in the first place.

Drake Thomas's Shortform
Alexander Gietelink Oldenziel (6d)

Did you see the movie before?

Adele Lopez's Shortform
Alexander Gietelink Oldenziel (19d)

I mean, paperclip maximization is of course much more memetic than 'tiny molecular squiggles'.

Matthias Dellago's Shortform
Alexander Gietelink Oldenziel (23d)

So indeed, if you define coherence as the negative of arbitrage value.

There is a pretty close relation between thermodynamic free energy, arbitrageable value, and the degree to which an entity can be money-pumped.

You might also be interested in:

https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents
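The money-pumping idea mentioned above can be sketched as a toy simulation (a minimal sketch; the function and names are illustrative, not from any referenced work): an agent with cyclic preferences A > B > C > A will accept every swap around the cycle, so a counterparty can charge a small fee per trade and extract value without bound. The extractable value is the "arbitrage value" that a coherent (acyclic) agent would cap at zero.

```python
def money_pump(preferences, start, fee, rounds):
    """Trade an agent around its preference cycle, charging `fee` per swap.

    `preferences` maps each item to the item the agent strictly prefers,
    so the agent accepts every proposed trade. Returns total value extracted.
    """
    item, extracted = start, 0.0
    for _ in range(rounds):
        item = preferences[item]  # agent happily trades "up" its cycle
        extracted += fee          # ...paying a small fee each time
    return extracted

# Cyclic (incoherent) preferences: A -> B -> C -> A
cycle = {"A": "B", "B": "C", "C": "A"}
print(money_pump(cycle, "A", fee=1.0, rounds=10))  # 10.0 extracted; grows without bound in `rounds`
```

With transitive preferences there is no such cycle, so after finitely many trades the agent reaches a most-preferred item and stops paying, which is one way to gloss "coherence = zero arbitrage value".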

anaguma's Shortform
Alexander Gietelink Oldenziel (24d)

Looking over the comments, some of the most upvoted ones express the sentiment that Yudkowsky is not the best communicator. This is what the people say.

johnswentworth's Shortform
Alexander Gietelink Oldenziel (1mo)

Out of interest: for these real-life latents, where many different pieces each contribute a small amount of information, do you reckon Eisenstat's Condensation / some unpublished work you mentioned at ODYSSEY would be the right framework?

Cole Wyeth's Shortform
Alexander Gietelink Oldenziel (1mo)

From the moment I understood the weakness of my flesh, it disgusted me

The real AI deploys itself
Alexander Gietelink Oldenziel (1mo)

100 percent this. There is this perpetual miscommunication about the word "AGI".

"When I say AGI, I really mean a general intelligence, not just a new app or tool."

Proceedings of ILIAD: Lessons and Progress (6mo)
Announcing ILIAD2: ODYSSEY (7mo)
Timaeus in 2024 (9mo)
Agent Foundations 2025 at CMU (10mo)
Timaeus is hiring! (1y)
Announcing ILIAD — Theoretical AI Alignment Conference (1y)
Are extreme probabilities for P(doom) epistemically justified? (2y)
Timaeus's First Four Months (2y)
What's next for the field of Agent Foundations? (2y)
Announcing Timaeus (2y)