Cole Wyeth's Shortform

by Cole Wyeth
28th Sep 2024
1 min read

This is a special post for quick takes by Cole Wyeth. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
192 comments, sorted by top scoring
Some comments are truncated due to high volume.
[-]Cole Wyeth14d*4812

Where is the hard evidence that LLMs are useful?

Has anyone seen convincing evidence of AI driving developer productivity or economic growth?

It seems I am only reading negative results about studies on applications.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

https://www.lesswrong.com/posts/25JGNnT9Kg4aN5N5s/metr-research-update-algorithmic-vs-holistic-evaluation

And in terms of startup growth: 

https://www.lesswrong.com/posts/hxYiwSqmvxzCXuqty/generative-ai-is-not-causing-ycombinator-companies-to-grow

Apparently, wider economic measurements are also unclear?


Also, agentic capability still seems very bad, about what I would have expected from decent scaffolding on top of GPT-3:

https://www.lesswrong.com/posts/89qhQH8eHsrZxveHp/claude-plays-whatever-it-wants
 

(Plus ongoing poor results on Pokémon; modern LLMs can still only win with elaborate task-specific scaffolding.)


Though performance on the IMO seems impressive, the very few examples of mathematical discoveries by LLMs don’t seem (to me) to be increasing much in either frequency or quality, and so far are mostly of type “get a better lower bound by combinatorially trying stuff” which seems to advantage computers with... (read more)

Reply
7ACCount13d
Are you looking for utility in all the wrong places? Recent news has quite a few mentions of AI tanking the job prospects of fresh grads across multiple fields and, at the same time, AI causing a job market bloodbath in the usual outsourcing capitals of the world. That sure lines up with known AI capabilities. AI isn't at the point of "radical transformation of everything" yet, clearly. You can't replace a badass crew of x10 developers who can build the next big startup with AIs today. AI doesn't unlock all that many "things that were impossible before" either - some are here already, but not enough to upend everything. What it does instead is take the cheapest, most replaceable labor on the market, and make it cheaper and more replaceable. That's the ongoing impact.
3the gears to ascension13d
idk if these are good search results, but I asked claude to look up and see if citations seem to justify the claim and if we care about the results someone should read the articles for real
1ACCount13d
Yep, that's what I've seen. The "entry-level jobs" study looked alright at a glance. I did not look into the claims of outsourcing job losses in any more detail - only noted that it was claimed multiple times.
3Cole Wyeth13d
Citation needed
7Cole Wyeth13d
I’m not saying it’s a bad take, but I asked for strong evidence. I want at least some kind of source. 
9Thane Ruthenis13d
There's this recent paper, see Zvi's summary/discussion here. I have not looked into it deeply. Looks a bit weird to me. Overall, the very fact that there's so much confusion around whether LLMs are or are not useful is itself extremely weird. (Disclaimer: off-the-cuff speculation, no idea if that is how anything works.) I'm not sure how much I buy this narrative, to be honest. The kind of archetypical "useless junior dev" who can be outright replaced by an LLM probably... wasn't being hired to do the job anyway, but instead as a human-capital investment? To be transformed into a middle/senior dev, whose job an LLM can't yet do. So LLMs achieving short-term-capability parity with juniors shouldn't hurt juniors' job prospects, because they weren't hired for their existing capabilities anyway. Hmm, perhaps it's not quite like this. Suppose companies weren't "consciously" hiring junior developers as a future investments; that they "thought"[1] junior devs are actually useful, in the sense that if they "knew" they were just a future investment, they wouldn't have been hired. The appearance of LLMs who are as capable as junior devs would now remove the pretense that the junior devs provide counterfactual immediate value. So their hiring would stop, because middle/senior managers would be unable to keep justifying it, despite the quiet fact that they were effectively not being hired for their immediate skills anyway. And so the career pipeline would get clogged. Maybe that's what's happening? (Again, no idea if that's how anything there works, I have very limited experience in that sphere.) 1. ^ In a semi-metaphorical sense, as an emergent property of various social dynamics between the middle managers reporting on juniors' performance to senior managers who set company priorities based in part on what would look good and justifiable to the shareholders, or something along those lines.
4Cole Wyeth12d
This is the hardest evidence anyone has brought up in this thread (?) but I’m inclined to buy your rebuttal about the trend really starting in 2022 which it is hard to believe comes from LLMs. 
2quetzal_rainbow13d
I don't think it's reasonable to expect such evidence to appear after such a short period of time. There was no hard evidence that electricity was useful in the sense you are talking about until the 1920s. Current LLMs are clearly not AGIs in the sense that they can integrate into the economy as migrant labor; therefore, productivity gains from LLMs are bottlenecked on users. 
6Mateusz Bagiński13d
I find this reply broadly reasonable, but I'd like to see some systematic investigations of the analogy between gradual adoption and rising utility of electricity and gradual adoption and rising utility of LLMs (as well as other "truly novel technologies").
2Cole Wyeth13d
That’s interesting, but adoption of LLMs has been quite fast.
3quetzal_rainbow12d
There is a difference between adoption as in "people are using it" and adoption as in "people are using it in an economically productive way". I think a supermajority of the productivity from LLMs is realized as pure consumer surplus right now.
5Cole Wyeth12d
I understand your theory. However I am asking in this post for hard evidence. If there is no hard evidence, that doesn’t prove a negative, but it does mean a lot of LW is engaging in a heavy amount of speculation. 
2romeostevensit13d
My impression is that so far the kinds of people whose work could be automated aren't the kind to navigate the complexities of building bespoke harnesses to have llms do useful work. So we have the much slower process of people manually automating others.
9Cole Wyeth13d
The part where you have to build bespoke harnesses seems suspicious to me. What if, you know, something about how the job needs to be done changes?
[-]Cole Wyeth12d383

The textbook reading group on "An Introduction to Universal Artificial Intelligence," which introduces the necessary background for AIXI research, has started, and really gets underway this Monday (Sept. 8th) with sections 2.1 - 2.6.2. Now is about the last chance to easily jump in (since we have only read the intro chapter 1 so far). Please read in advance and be prepared to ask questions and/or solve some exercises. First session had around 20-25 attendees, will probably break up into groups of 5.

Meeting calendar is on the website: https://uaiasi.com/

Reach out to me in advance for a meeting link, DM or preferably colewyeth@gmail.com. Include your phone number if you want to be added to the WhatsApp group (optional). 

Pitch for reading the book from @Alex_Altair: https://www.lesswrong.com/posts/nAR6yhptyMuwPLokc/new-intro-textbook-on-aixi

This is following up on the new AIXI research community announcement: https://www.lesswrong.com/posts/H5cQ8gbktb4mpquSg/launching-new-aixi-research-community-website-reading-group

Reply
[-]Cole Wyeth3mo30-1

I wonder if the reason that polynomial time algorithms tend to be somewhat practical (not runtime n^100) is just that we aren't smart enough to invent polynomial time algorithms that are really, necessarily complicated.

Like, the obvious way to get n^100 is to nest 100 for loops. A problem which can only be solved in polynomial time by nesting 100 for loops (presumably doing logically distinct things that cannot be collapsed!) is a problem that I am not going to solve in polynomial time… 
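
A minimal illustration of the "collapsible vs. necessarily nested" distinction (my own toy example, not from the original comment): counting pairs that sum to a target naively wants two nested loops, but the structure of the problem lets the inner loop collapse into a hash lookup.

```python
# Toy sketch: the naive solution nests two loops (O(n^2)), but the problem's
# structure lets the inner loop collapse into a hash-table lookup (O(n)).
# A problem genuinely requiring 100 logically distinct nested loops would
# offer no such collapse.
from collections import Counter

def count_pairs_naive(xs, target):
    count = 0
    for i in range(len(xs)):                 # loop 1
        for j in range(i + 1, len(xs)):      # loop 2
            if xs[i] + xs[j] == target:
                count += 1
    return count

def count_pairs_collapsed(xs, target):
    seen, count = Counter(), 0
    for x in xs:                             # second loop collapsed into a lookup
        count += seen[target - x]
        seen[x] += 1
    return count

assert count_pairs_naive([1, 2, 3, 4, 5], 6) == count_pairs_collapsed([1, 2, 3, 4, 5], 6) == 2
```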

Reply2
6rotatingpaguro3mo
Reasons I deem more likely:
1. Selection effect: if it's unfeasible you don't work on it/don't hear about it; in my personal experience n^3 is already slow.
2. If k in n^k is high, probably you have some representation where k is a parameter, and so you say it's exponential in k, not that it's polynomial.
6Cole Wyeth3mo
1: Not true, I hear about exponential time algorithms! People work on all sorts of problems only known to have exponential time algorithms.  2: Yes, but the reason k only shows up as something we would interpret as a parameter and not as a result of the computational complexity of an algorithm invented for a natural problem is perhaps because of my original point - we can only invent the algorithm if the problem has structure that suggests the algorithm, in which case the algorithm is collapsible and k can be separated out as an additional input for a simpler algorithm.  
6quetzal_rainbow3mo
I think the canonical high-degree polynomial problem is high-dimensional search. We usually don't implement exact grid search because we can deploy Monte Carlo or gradient descent. I wonder if there are any hard lower bounds on approximation hardness for polynomial time problems.
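
A toy sketch of that scaling (my own illustration; the objective, grid size, and sample budget are arbitrary): exact grid search over d dimensions with n points per axis costs n^d evaluations, while Monte Carlo sampling spends a fixed budget regardless of d, at the price of only approximating the optimum.

```python
# Illustrative comparison: exhaustive grid search scales as n**d,
# Monte Carlo sampling uses a fixed budget regardless of dimension d.
import itertools
import random

def f(x):
    # Arbitrary smooth objective: squared distance from the point (0.3, ..., 0.3).
    return sum((xi - 0.3) ** 2 for xi in x)

def grid_search(d, n):
    pts = [i / (n - 1) for i in range(n)]
    return min(f(x) for x in itertools.product(pts, repeat=d))   # n**d evaluations

def monte_carlo(d, budget=10_000):
    return min(f([random.random() for _ in range(d)]) for _ in range(budget))

print(grid_search(3, 11))    # 11**3 = 1331 evaluations, exact on the grid
print(monte_carlo(30))       # still 10,000 evaluations at d = 30, but approximate
```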
[-]Cole Wyeth7mo309

A fun illustration of survivorship/selection bias is that nearly every time I find myself reading an older paper, I find it insightful, cogent, and clearly written.

Reply
[-]David Hornbein7mo1715

Selection bias isn't the whole story. The median paper in almost every field is notably worse than it was in, say, 1985. Academia is less selective than it used to be—in the U.S., there are more PhDs per capita, and the average IQ/test scores/whatever metric has dropped for every level of educational attainment.

Grab a journal that's been around for a long time, read a few old papers and a few new papers at random, and you'll notice the difference.

Reply
1David James6mo
To what degree is this true regarding elite-level Ph.D. programs that are likely to lead to publication in (i) mathematics and/or (ii) computer science? Separately, we should remember that academic selection is a relative metric, i.e. graded on a curve. So, when it comes to Ph.D. programs, is the median 2024 Ph.D. graduate more capable (however you want to define it) than the corresponding graduate from 1985? This is complex, involving their intellectual foundations, depth of their specialized knowledge, various forms of raw intelligence, attention span, collaborative skills, communication ability (including writing skills), and computational tools? I realize what I'm about to say next may not be representative of the median Ph.D. student, but it feels to me the 2024 graduates of, say, Berkeley or MIT (not to mention, say, Thomas Jefferson High School) are significantly more capable than the corresponding 1985 graduates. Does my sentiment resonate with others and/or correspond to some objective metrics?
4wonder7mo
Based on my observations, I would also think the current publication-chasing culture could push people to get papers out more quickly (in some particular domains like CS), even though some papers may be only partially complete.
[-]Cole Wyeth11d*270

Rationality (and other) heuristics I've actually found useful for getting stuff done, but unfortunately you probably won't:

1: Get it done quickly and soon. Every step of every process outside of yourself will take longer than expected, so the effective deadline is sooner than you might think. Also if you don't get it done soon you might forget (or forget some steps).

1(A): 1 is stupidly important. 

2: Do things that set off positive feedback loops. Aggressively try to avoid doing other things. I said aggressively.

2(A): Read a lot, but not too much.*

3: You are probably already making fairly reasonable choices over the action set you are considering.  It's easiest to fall short(er) of optimal behavior by failing to realize you have affordances. Discover affordances.

4: Eat.

(I think 3 is least strongly held)

*I'm describing how to get things done. Reading more has other benefits, for instance if you don't know the thing you want to get done yet, and it's pleasant and self-actualizing.

Reply
1anaguma11d
What was 3?
3Cole Wyeth11d
2(A) lol, fixed now.
[-]Cole Wyeth8mo*256

The primary optimization target for LLM companies/engineers seems to be making them seem smart to humans, particularly the nerds who seem prone to using them frequently. A lot of money and talent is being spent on this. It seems reasonable to expect that they are less smart than they seem to you, particularly if you are in the target category. This is a type of Goodharting. 

In fact, I am beginning to suspect that they aren't really good for anything except seeming smart, and most rationalists have totally fallen for it, for example Zvi insisting that anyone who is not using LLMs to multiply their productivity is not serious (this is a vibe not a direct quote but I think it's a fair representation of his writing over the last year). If I had to guess, LLMs have 0.99x'ed my productivity by occasionally convincing me to try to use them which is not quite paid for by very rarely fixing a bug in my code. The number is close to 1x because I don't use them much, not because they're almost useful. Lots of other people seem to have much worse ratios because LLMs act as a superstimulus for them (not primarily a productivity tool). 

Certainly this is an impressive technology, surpris... (read more)

Reply1
8Alexander Gietelink Oldenziel7mo
I use LLMs throughout my personal and professional life. The productivity gains are immense. Yes, hallucination is a problem, but it's just like spam/ads/misinformation on wikipedia/the internet - a small drawback that doesn't obviate the ginormous potential of the internet/LLMs. I am 95% certain you are leaving value on the table. I do agree straight LLMs are not generally intelligent (in the sense of universal intelligence/AIXI) and therefore not completely comparable to humans.
2ZY7mo
On LLMs vs search on the internet: agree that LLMs are very helpful in many ways, both personally and professionally, but the worse parts of misinformation in LLMs compared to wikipedia/the internet in my opinion include: 1) it is relatively more unpredictable when the model will hallucinate, whereas for wikipedia/the internet, you would generally expect higher accuracy for simpler/purely factual/mathematical information; 2) it is harder to judge credibility without knowing the source of the information, whereas on the internet, we can get some signals from the website domain, etc.
8abramdemski7mo
From my personal experience, I agree. I find myself unexcited about trying the newest LLM models. My main use-case in practice these days is Perplexity, and I only use it when I don't care much about the accuracy of the results (which ends up being a lot, actually... maybe too much). Perplexity confabulates quite often even with accurate references in hand (but at least I can check the references). And it is worse than me at the basics of googling things, so it isn't as if I expect it to find better references than me; the main value-add is in quickly reading and summarizing search results (although the new Deep Research option on Perplexity will at least iterate through several attempted searches, so it might actually find things that I wouldn't have). I have been relatively persistent about trying to use LLMs for actual research purposes, but the hallucination rate seems to go to 100% almost whenever an accurate result would be useful to me.  The hallucination rate does seem adequately low when talking about established mathematics (so long as you don't ask for novel implications, such as applying ideas to new examples). For this and for other reasons I think they can be quite helpful for people trying to get oriented to a subfield they aren't familiar with -- it can make for a great study partner, so long as you verify what it says be checking other references.  Also decent for coding, of course, although the same caveat applies -- coders who are already an expert in what they are trying to do will get much less utility out of it. I recently spoke to someone who made a plausible claim that LLMs were 10xing their productivity in communicating technical ideas in AI alignment with something like the following workflow: * Take a specific cluster of failure modes for thinking about alignment which you've seen often. * Hand-write a large, careful prompt document about the cluster of alignment failure modes, which includes many specific trigger-action patterns (i
8Vladimir_Nesov8mo
I expect he'd disagree, for example I vaguely recall him mentioning that LLMs are not useful in a productivity-changing way for his own work. And 10x specifically seems clearly too high for most things even where LLMs are very useful, other bottlenecks will dominate before that happens.
1Cole Wyeth8mo
10x was probably too strong, but his posts are very clear that he thinks it's a large productivity multiplier. I'll try to remember to link the next instance I see.
6Vladimir_Nesov8mo
Found the following in the Jan 23 newsletter:
[-]Cole Wyeth1y213

Mathematics students are often annoyed that they have to worry about "bizarre or unnatural" counterexamples when proving things. For instance, differentiable functions without continuous derivative  are pretty weird. Particularly engineers tend to protest that these things will never occur in practice, because they don't show up physically. But these adversarial examples show up constantly in the practice of mathematics - when I am trying to prove (or calculate) something difficult, I will try to cram the situation into a shape that fits one of the theorems in my toolbox, and if those tools don't naturally apply I'll construct all kinds of bizarre situations along the way while changing perspective. In other words, bizarre adversarial examples are common in intermediate calculations - that's why you can't just safely forget about them when proving theorems. Your logic has to be totally sound as a matter of abstraction or interface design - otherwise someone will misuse it. 
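
For concreteness, the standard textbook example of such a function (added here for illustration; this is the classic construction, not something from the original post):

```latex
f(x) =
\begin{cases}
  x^2 \sin(1/x), & x \neq 0,\\
  0,             & x = 0,
\end{cases}
\qquad
f'(x) =
\begin{cases}
  2x \sin(1/x) - \cos(1/x), & x \neq 0,\\
  0,                        & x = 0.
\end{cases}
```

Here f'(0) exists (since |f(h)/h| ≤ |h| → 0), but the cos(1/x) term makes f' oscillate between roughly ±1 near 0, so the derivative exists everywhere yet is not continuous.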

Reply
7Noosphere891y
While I think the reaction against pathological examples can definitely make sense, and in particular there is a bad habit of some people to overfocus on pathological examples, I do think mathematics is quite different from other fields in that you want to prove that a property holds for all objects with a certain property, or prove that there exists an object with a certain property, and in these cases you can't ignore the pathological examples, because they can provide you with either solutions to your problem, or show why your approach can't work. This is why I didn't exactly like Dalcy's point 3 here: https://www.lesswrong.com/posts/GG2NFdgtxxjEssyiE/dalcy-s-shortform#qp2zv9FrkaSdnG6XQ
2cubefox1y
There is also the reverse case, where it is often common practice in math or logic to ignore bizarre and unnatural counterexamples. For example, first-order Peano arithmetic is often identified with Peano arithmetic in general, even though the first order theory allows the existence of highly "unnatural" numbers which are certainly not natural numbers, which are the subject of Peano arithmetic. Another example is the power set axiom in set theory. It is usually assumed to imply the existence of the power set of each infinite set. But the axiom only implies that the existence of such power sets is possible, i.e. that they can exist (in some models), not that they exist full stop. In general, non-categorical theories are often tacitly assumed to talk about some intuitive standard model, even though the axioms don't specify it. Eliezer talks about both cases in his Highly Advanced Epistemology 101 for Beginners sequence.
[-]Cole Wyeth2mo*195

From soares and fallenstein “towards idealized decision theory”:

“If someone cannot formally state what it means to find the best decision in theory, then they are probably not ready to construct heuristics that attempt to find the best decision in practice.”

This statement seems rather questionable. I wonder if it is a load-bearing assumption.

Reply
9the gears to ascension2mo
best seems to do a lot of the work there.
1CstineSublime2mo
I'm not sure what you mean. What is "best" is easily arrived at. If you're a financier and your goal is to make money, then any formal statement about your decision will maximize money. If you're a swimmer and your goal is to win an Olympic gold medal, then a formal statement of your decision will obviously include "win gold medal" - part of the plan to execute it may include "beat the current world record for swimming in my category" but "best" isn't doing the heavy lifting here - the actual formal statement that encapsulates all the factors is - such as what are the milestones. And if someone doesn't know what they mean when they think of what is best - then the statement holds true. If you don't know what is "best" then you don't know what practical heuristics will deliver you "good enough". To put it another way - what are the situations where not defining in clear terms what is best still leads to well constructed heuristics to find the best decision in practice? (I will undercut myself - there is something to be said for exploration [1]and "F*** Around and Find Out" with no particular goal in mind. ) 1. ^ Bosh! Stephen said rudely. A man of genius makes no mistakes. His errors are volitional and are the portals of discovery. - Ulysses, James Joyce
4the gears to ascension2mo
is your only goal in life to make money? is your only goal in life to win a gold medal? and if they are, how do you define the direction such that you're sure that among all possible worlds, maximizing this statement actually produces the world that maxes out goal-achievingness? that's where decision theories seem to me to come in. the test cases of decision theories are situations where maxing out, eg, CDT, does not in fact produce the highest-goal-score world. that seems to me to be where the difference Cole is raising comes up: if you're merely moving in the direction of good worlds you can have more complex strategies that potentially make less sense but get closer to the best world, without having properly defined a single mathematical statement whose maxima is that best world. argmax(CDT(money)) may be less than genetic_algo(policy, money, iters=1b) even though argmax is a strict superlative, if the genetic algo finds something closer to, eg, argmax(FDT(money)). edit: in other words, I'm saying "best" as opposed to "good". what is good is generally easily arrived at. it's not hard to find situations where what is best is intractable to calculate, even if you're sure you're asking for it correctly.
1CstineSublime2mo
by using the suffix "-(e)st". "The fastest" "the richest" "the purple-est" "the highest" "the westernmost".  That's the easy part - defining theoretically what is best. Mapping that theory to reality is hard.
4Yair Halberstadt2mo
I don't know what is in theory the best possible life I can live, but I do know ways that I can improve my life significantly.
1CstineSublime2mo
Can you rephrase that - because you're mentioning theory and possibility at once which sounds like an oxymoron to me. That which is in theory best implies that which is impossible or at least unlikely. If you can rephrase it I'll probably be able to understand what you mean. Also, if you had a 'magic wand' and could change a whole raft of things at once, do you have a vision of your "best" life that you preference? Not necessarily a likely or even possible one. But one that of all fantasies you can imagine is preeminent? That seems to me to be a very easy way to define the "best" - it's the one that the agent wants most. I assume most people have their visions of their own "best" lives, am I a rarity in this? Or do most people just kind of never think about what-ifs and have fantasies? And isn't that, or the model of the self and your own preferences that influences that fantasy going to similarly be part of the model that dictates what you "know" would improve your life significantly. Because if you consider it an improvement, then you see it as being better. It's basic English: Good, Better, Best.  
[-]Cole Wyeth24d141

I think that “ruggedness” and “elegance” are alternative strategies for dealing with adversity - basically tolerating versus preparing for problems. Both can be done more or less skillfully: low-skilled ruggedness is just being unprepared and constantly suffering, but the higher skilled version is to be strong, healthy, and conditioned enough to survive harsh circumstances without suffering. Low-skilled elegance is a waste of time (e.g. too much makeup but terrible skin) and high skilled elegance is… okay basically being ladylike and sophisticated. Yes I admit it this is mostly about gender.

Other examples: it's rugged to have a very small number of high quality possessions you can easily throw in a backpack in under 20 minutes, including 3 outfits that cover all occasions. It's elegant to travel with three suitcases containing everything you could possibly need to look and feel your best, including both an ordinary umbrella and a sun umbrella.

I also think a lot of misunderstanding between genders results from these differing strategies, because to some extent they both work but are mutually exclusive. Elegant people may feel taken advantage of because everyone starts expecting them to do ... (read more)

Reply1
2Karl Krueger23d
This one highlights that the sense of "elegant" you mean is not the math & engineering sense, which is associated with minimalism.
9gwern23d
If you asked me to guess what the 'elegant' counterpoint to 'traveling with a carefully-curated set of the very best prepper/minimalist/nomad/hiker gear which ensures a bare minimum of comfort' would be, I would probably say something like 'traveling with nothing but cash/credit card/smartphone'. You have elegantly solved the universe of problems you encounter while traveling by choosing a single simple tool which can obtain nearly anything from the universe of solutions.
2Cole Wyeth23d
Maybe “grace” is a better term than elegance?
1kaleb23d
Your categories are not essentially gendered, although I understand why we feel that way. For example, in your travel-packing example my wife would be considered rugged while I would be considered elegant, under your definitions. I also think that in traditional Chinese culture, both of your definitions would be considered masculine. (Sorry women, I guess you get nothing lol) I also think that we apply these strategies unequally in different parts of our lives. I'd guess if you have to give a research talk at a conference, you'd take an 'elegant' approach of "let me prepare my talk well and try to anticipate possible questions the audience will have" instead of "let me do the minimal prep and then just power through any technical difficulties or difficult questions'.  Maybe our gender socialization leads us to favour different strategies in different situations along gendered lines?
3Cole Wyeth22d
I think these things mostly split along gender lines but there are many exceptions, just like pretty much everything else about gender.
2jenn20d
to complicate this along gender lines for fun, when i first read your first sentence i totally reversed the descriptions since it's rugged and masculine to tackle problems and elegant and feminine to tolerate them. per a random edgy tumblr i follow: that sounds more "rugged" than "elegant" by your definitions, no?
2Cole Wyeth20d
I also read that little edgy story and thought at the time that sentence made no sense. I still think that. 
[-]Cole Wyeth2mo*145

Since this is mid-late 2025, we seem to be behind the aggressive AI 2027 schedule? The claims here are pretty weak, but if LLMs really don’t boost coding speed, this description still seems to be wrong.

[edit: okay actually it’s pretty much mid 2025 still, months don’t count from zero though probably they should because they’re mod 12]

Reply
[-]CAC2mo10-3

I don't think there's enough evidence to draw hard conclusions about this section's accuracy in either direction, but I would err on the side of thinking ai-2027's description is correct.

Footnote 10, visible in your screenshot, reads:

For example, we think coding agents will move towards functioning like Devin. We forecast that mid-2025 agents will score 85% on SWEBench-Verified.

SOTA models score at:
• 83.86% (codex-1, pass@8)
• 80.2% (Sonnet 4, pass@several, unclear how many)
• 79.4% (Opus 4, pass@several)

(Is it fair to allow pass@k? This Manifold Market doesn't allow it for its own resolution, but here I think it's okay, given that the footnote above makes claims about 'coding agents', which presumably allow iteration at test time.)

Also, note the following paragraph immediately after your screenshot:

The agents are impressive in theory (and in cherry-picked examples), but in practice unreliable. AI twitter is full of stories about tasks bungled in some particularly hilarious way. The better agents are also expensive; you get what you pay for, and the best performance costs hundreds of dollars a month.11 Still, many companies find ways to fit AI agents into their workflows.12

AI tw... (read more)

Reply
[-]Aaron Staley2mo123

If I understand correctly, Claude's pass@X benchmarks mean multiple sampling and taking the best result.  This is valid so long as compute cost isn't exceeding equivalent cost of an engineer.  

codex's pass @ 8 score seems to be saying "the correct solution was present in 8 attempts, but the model doesn't actually know what the correct result is". That shouldn't count.
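
For reference, the standard unbiased pass@k estimator from the Codex/HumanEval paper (added here for context, since the scores above are pass@k-style numbers): with n samples per problem of which c pass the tests,

```latex
\text{pass@}k \;=\; \mathop{\mathbb{E}}_{\text{problems}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right],
```

i.e., the probability that at least one of k sampled attempts passes. It says nothing about whether the model can pick out which attempt is correct, which is exactly the objection above.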

Reply
3Cole Wyeth2mo
Why do I see no higher than about 75% here? https://www.swebench.com  
2Cole Wyeth2mo
Yeah, I wanted to include that paragraph but it didn't fit in the screenshot. It does seem slightly redeeming for the description. Certainly the authors hedged pretty heavily.  Still, I think that people are not saving days by chatting with AI agents on slack. So there's a vibe here which seems wrong. The vibe is that these agents are unreliable but are offering very significant benefits. That is called into question by the METR report showing they slowed developers down. There are problems with that report and I would love to see some follow-up work to be more certain.  I appreciate your research on the SOTA SWEBench-Verified scores! That's a concrete prediction we can evaluate (less important than real world performance, but at least more objective). Since we're now in mid-late 2025 (not mid 2025), it appears that models are slightly behind their projections even for pass@k, but certainly they were in the right ballpark!  
2CAC2mo
Sorry, this is the most annoying kind of nitpicking on my part, but since I guess it's probably relevant here (and for your other comment responding to Stanislav down below), the center point of the year is July 2, 2025. So we're just two weeks past the absolute mid-point – that's 54.4% of the way through the year. Also, the codex-1 benchmarks released on May 16, while Claude 4's were announced on May 22 (certainly before the midpoint).
2sam b2mo
The prediction is correct on all counts, and perhaps slightly understates progress (though it obviously makes weak/ambiguous claims across the board). The claim that "coding and research agents are beginning to transform their professions" is straightforwardly true (e.g. 50% of Google lines of code are now generated by AI). The METR study was concentrated in March (which is early 2025).  And it is not currently "mid-late 2025", it is 16 days after the exact midpoint of the year.
3Cole Wyeth2mo
Where is that 50% number from? Perhaps you are referring to this post from google research. If so, you seem to have taken it seriously out of context. Here is the text before the chart that shows 50% completion: This is referring to inline code completion - so it's more like advanced autocomplete than an AI coding agent. It's hard to interpret this number, but it seems very unlikely this means half the coding is being done by AI and much more likely that it is often easy to predict how a line of code will end given the first half of that line of code and the previous context. Probably 15-20% of what I type into a standard linux terminal is autocompleted without AI? Also, the right metric is how much AI assistance is speeding up coding. I know of only one study on this, from METR, which showed that it is slowing down coding.
2O O2mo
Progress-wise this seems accurate, but the usefulness gap is probably larger than the one this paints.
1prajwal2mo
Two days later, is this still a fail? ChatGPT agent is supposed to do exactly that. There seems to be a research model within OpenAI that is capable of getting gold on the IMO without any tools. Maybe it does not meet the expectations yet. Maybe it will with the GPT-5 release. We do not know if the new unreleased model is capable of helping with research. However, it's worth considering the possibility that it could be on a slightly slower timeline and not a complete miss.
2Kabir Kumar2mo
i wonder to what extent leadership at openai see ai 2027 as a bunch of milestones that they need to meet, to really be as powerful/scary as they're said to be.  e.g. would investors/lenders be more hesitant if openai seems to be 'lagging behind' ai 2027 predictions?
2Cole Wyeth2mo
Yeah, I wouldn’t be surprised if these timelines are at least somewhat hyperstitious 
2Cole Wyeth2mo
Yeah, well, let’s wait and see what GPT-5 looks like.
1StanislavKrym2mo
But it isn't August or September yet. Maybe someone will end up actually creating capable agents. In addition, the amount of operations used for creating Grok 4 was estimated as 4e27--6e27, which seems to align with the forecast. The research boost rate by Grok 4 or a potentially tweaked model wasn't estimated. Maybe Grok 4 or an AI released in August will boost research speed? 
3Vladimir_Nesov2mo
It was indicated in the opening slide of Grok 4 release livestream that Grok 4 was pretrained with the same amount of compute as Grok 3, which in turn was pretrained on 100K H100s, so probably 3e26 FLOPs (40% utilization for 3 months with 1e15 FLOP/s per chip). RLVR has a 3x-4x lower compute utilization than pretraining, so if we are insisting on counting RLVR in FLOPs, then 3 months of RLVR might be 9e25 FLOPs, for the total of 4e26 FLOPs. Stargate Abilene will be 400K chips in GB200 NVL72 racks in 2026, which is 10x more FLOP/s than 100K H100s. So it'll be able to train 4e27-8e27 FLOPs models (pretraining and RLVR, in 3+3 months), and it might be early 2027 when they are fully trained. (Google is likely to remain inscrutable in their training compute usage, though Meta might also catch up by then.) (I do realize it's probably some sort of typo, either yours or in your unnamed source. But 10x is almost 2 years of even the current fast funding-fueled scaling, that's not a small difference.)
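
A back-of-envelope reconstruction of those figures (using the assumptions stated in the comment itself; the utilization and per-chip throughput are the comment's estimates, not official numbers):

```python
# Rough arithmetic behind the estimates above (assumptions from the comment).
chips = 100_000            # H100s used for Grok 3 / Grok 4 pretraining
flops_per_chip = 1e15      # ~1e15 FLOP/s per H100 (comment's figure)
utilization = 0.40         # 40% utilization
seconds = 90 * 24 * 3600   # ~3 months

pretraining = chips * flops_per_chip * utilization * seconds  # ~3.1e26 FLOPs
rlvr = pretraining / 3.5                                      # 3x-4x lower utilization for RLVR
print(f"{pretraining:.1e} {rlvr:.1e} {pretraining + rlvr:.1e}")  # ~3.1e26 ~8.9e25 ~4.0e26
```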
2Cole Wyeth2mo
We've been going on back and forth on this a bit - it seems like your model suggests AGI in 2027 is pretty unlikely? That is, we see the first generation of massively scaled RLVR around 2026/2027. So it kind of has to work out of the box for AGI to arrive that quickly? I suppose this is just speculation though. Maybe it's useful enough that the next generation is somehow much, much faster to arrive?
[-]Vladimir_Nesov2mo151

That is, we see the first generation of massively scaled RLVR around 2026/2027. So it kind of has to work out of the box for AGI to arrive that quickly?

By 2027, we'll also have 10x scaled-up pretraining compared to current models (trained on 2024 compute). And correspondingly scaled RLVR, with many diverse tool-using environments that are not just about math and coding contest style problems. If we go 10x lower than current pretraining, we get original GPT-4 from Mar 2023, which is significantly worse than the current models. So with 10x higher pretraining than current models, the models of 2027 might make significantly better use of RLVR training than the current models can.

Also, 2 years might be enough time to get some sort of test-time training capability started, either with novel or currently-secret methods, or by RLVRing models to autonomously do post-training on variants of themselves to make them better at particular sources of tasks during narrow deployment. Apparently Sutskever's SSI is rumored to be working on the problem (at 39:25 in the podcast), and overall this seems like the most glaring currently-absent faculty. (Once it's implemented, something else might end u... (read more)

Reply
2Cole Wyeth2mo
The section I shared is about mid 2025. I think August-September is late 2025.
4sam b2mo
Early: January, February, March, April Mid: May, June, July, August Late: September, October, November, December
2Cole Wyeth2mo
Okay yes but this thread of discussion has gone long enough now I think - we basically agree up to a month. 
[-]Cole Wyeth1mo111

It looks like the market is with Kokotajlo on this one (apparently this post must be expanded to see the market). 

Reply
9Thane Ruthenis1mo
For reference, I'd also bet on 8+ task length (on METR's benchmark[1]) by 2027. Probably significantly earlier; maybe early 2026, or even end of this year. Would not be shocked if OpenAI's IMO-winning model already clears that. You say you expect progress to stall at 4-16 hours because solving such problems would require AIs to develop sophisticated models of them. My guess is that you're using intuitions regarding at what task-lengths it would be necessary for a human. LLMs, however, are not playing by the same rules: where we might need a new model, they may be able to retrieve a stored template solution. I don't think we really have any idea at what task length this trick would stop working for them. I could see it being "1 week", or "1 month", or ">1 year", or "never". I do expect "<1 month", though. Or rather, that even if the LLM architecture is able to support arbitrarily big templates, the scaling of data and compute will run out before this point; and then plausibly the investment and the talent pools would dry up as well (after LLMs betray everyone's hopes of AGI-completeness). Not sure what happens if we do get to ">1 year", because on my model, LLMs might still not become AGIs despite that. Like, they would still be "solvers of already solved problems", except they'd be... able to solve... any problem in the convex hull of the problems any human ever solved in 1 year...? I don't know, that would be very weird; but things have already gone in very weird ways, and this is what the straightforward extrapolation of my current models says. (We do potentially die there.[2]) Aside: On my model, LLMs are not on track to hit any walls. They will keep getting better at the things they've been getting better at, at the same pace, for as long as the inputs to the process (compute, data, data progress, algorithmic progress) keep scaling at the same rate. My expectation is instead that they're just not going towards AGI, so "no walls in their way" doesn't matter;
5ryan_greenblatt1mo
Ok, but surely there has to be something they aren't getting better at (or are getting better at too slowly). Under your model they have to hit a wall in this sense. I think your main view is that LLMs won't ever complete actually hard tasks and current benchmarks just aren't measuring actually hard tasks or have other measurement issues? This seems inconsistent with saying they'll just keep getting better though, unless you're hypothesizing truly insane benchmark flaws, right? Like, if they stop improving at <1 month horizon lengths (as you say immediately above the text I quoted) that is clearly a case of LLMs hitting a wall, right? I agree that compute and resources running out could cause this, but it's notable that we expect ~1 month in not that long, like only ~3 years at the current rate.
5Vladimir_Nesov1mo
That's only if the faster within-RLVR rate that has been holding during the last few months persists. On my current model, 1 month task lengths at 50% happen in 2030-2032, since compute (being the scarce input of scaling) slows down compared to today, and I don't particularly believe in incremental algorithmic progress as it's usually quantified, so it won't be coming to the rescue. Compared to the post I did on this 4 months ago, I have even lower expectations that the 5 GW training systems (for individual AI companies) will arrive on trend in 2028, they'll probably get delayed to 2029-2031. And I think the recent RLVR acceleration of the pre-RLVR trend only pushes it forward a year without making it faster, the changed "trend" of the last few months is merely RLVR chip-hours catching up to pretraining chip-hours, which is already essentially over. Though there are still no GB200 NVL72 sized frontier models and probably no pretraining scale RLVR on GB200 NVL72s (which would get better compute utilization), so that might give the more recent "trend" another off-trend push first, perhaps as late as early 2026, but then it's not yet a whole year ahead of the old trend.
3Thane Ruthenis1mo
I distinguish "the LLM paradigm hitting a wall" and "the LLM paradigm running out of fuel for further scaling". Yes, precisely. Last I checked, we expected scaling to run out by 2029ish, no? Ah, reading the comments, I see you expect there to be some inertia... Okay, 2032 / 7 more years would put us at ">1 year" task horizons. That does make me a bit more concerned. (Though 80% reliability is several doublings behind, and I expect tasks that involve real-world messiness to be even further behind.) "Ability to come up with scientific innovations" seems to be one. Like, I expect they are getting better at the underlying skill. If you had a benchmark which measures some toy version of "produce scientific innovations" (AidanBench?), and you plotted frontier models' performance on it against time, you would see the number going up. But it currently seems to lag way behind other capabilities, and I likewise don't expect it to reach dangerous heights before scaling runs out. The way I would put it, the things LLMs are strictly not improving on are not "specific types of external tasks". What I think they're not getting better at – because it's something they've never been capable of doing – are specific cognitive algorithms which allow to complete certain cognitive tasks in a dramatically more compute-efficient manner. We've talked about this some before. I think that, in the limit of scaling, the LLM paradigm is equivalent to AGI, but that it's not a very efficient way to approach this limit. And it's less efficient along some dimensions of intelligence than along others. This paradigm attempts to scale certain modules a generally intelligent mind would have to ridiculous levels of power in order to make up for the lack of other necessary modules. This will keep working to improve performance across all tasks, as long as you keep feeding LLMs more data and compute. But there seems to be only a few "GPT-4 to GPT-5" jumps left, and I don't think it'd be enough.
5Cole Wyeth1mo
I think if this were right LLMs would already be useful for software engineering and able to make acceptable PRs. I also guess that the level of agency you need to actually beat Pokémon is probably somewhere around 4 hours. We'll see who's right - bet against me if you haven't already! Though maybe it's not a good deal anymore. I can see it going either way.
4faul_sname1mo
They are sometimes able to make acceptable PRs, usually when context gathering for the purpose of iteratively building up a model of the relevant code is not a required part of generating said PR.
3StanislavKrym1mo
It seems to me that current-state LLMs learn hardly anything from the context since they have trouble fitting it into their attention span. For example, GPT-5 can create fun stuff from just one prompt and an unpublished LLM solved five out of six problems of IMO 2025, while the six problems together can be expressed using 3k bytes. However, METR found that "on 18 real tasks from two large open-source repositories, early-2025 AI agents often implement functionally correct code that cannot be easily used as-is, because of issues with test coverage, formatting/linting, or general code quality." I strongly suspect that this bottleneck will be ameliorated by using neuralese[1] with big internal memory.

Neuralese with big internal memory

The Meta paper which introduced neuralese had GPT-2 trained to have the thought at the end fed into the beginning. Alas, the amount of bits transferred is equal to the number of bits in a floating-point number multiplied by the size of the final layer. A potential CoT generates ~16.6 extra bits of information per activation. At the cost of absolute loss of interpretability, neuralese on steroids could have an LLM of GPT-3's scale transfer tens of millions of bits[2] in the latent space. Imagine GPT-3 175B (which had 96 layers and 12288 neurons in each) receiving an augmentation using the last layer's results as a steering vector at the beginning, the pre-last layer as a steering vector at the second layer, etc. Or passing the steering vectors through a matrix. These amplifications, at most, double the compute required to run GPT-3, while requiring extra millions of bytes of dynamic memory.

For comparison, the human brain's short-term memory alone is described by activations of around 86 billion neurons. And that's ignoring the middle-term memory and the long-term one...

1. ^ However, there is Knight Lee's proposal where the AIs are to generate multiple tokens instead of using versions of neuralese.
2. ^
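
For scale, a rough reconstruction of the bit counts in this comment (the ~100k-token vocabulary and 16-bit activations are my assumptions, added for illustration; the layer count and width are GPT-3 175B's, as stated above). The ~16.6-bit figure corresponds to log2 of a ~100k-token vocabulary, i.e. the information conveyed by one sampled token.

```python
# Rough bit-count arithmetic for the comparison above (vocab size and float width assumed).
import math

vocab = 100_000
bits_per_sampled_token = math.log2(vocab)      # ~16.6 bits per sampled CoT token

layers, width, bits_per_float = 96, 12288, 16  # GPT-3 175B shape, 16-bit activations assumed
bits_per_layer_vector = width * bits_per_float            # ~2.0e5 bits per position per layer
bits_all_layers = layers * bits_per_layer_vector          # ~1.9e7 bits -- "tens of millions"
print(round(bits_per_sampled_token, 1), bits_per_layer_vector, bits_all_layers)
```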
2Cole Wyeth1mo
People have been talking about neuralese since at least when AI 2027 was published and I think much earlier, but it doesn't seem to have materialized. 
2ryan_greenblatt1mo
I think LLMs can be useful for software engineering and can sometimes write acceptable PRs. (I've very clearly seen both of these first hand.) Maybe you meant something slightly weaker, like "AIs would be able to write acceptable PRs at a rate of >1/10 on large open source repos"? I think this is already probably true, at least with some scaffolding and inference time compute. Note that METR's recent results were on 3.7 sonnet.
2Cole Wyeth1mo
I'm referring to METR's recent results. Can you point to any positive results on LLMs writing acceptable PRs? I'm sure that they can in some weak sense e.g. a sufficiently small project with sufficiently low standards, but as far as I remember the METR study concluded zero acceptable PRs in their context.
4ryan_greenblatt1mo
METR found that 0/4 of the PRs which passed test cases and which they reviewed were acceptable to merge. This was for 3.7 sonnet on large open source repos with default infrastructure. The rate at which PRs passed test cases was also low, but if you're focusing on the PR being viable to merge conditional on passing test cases, the "0/4" number is what you want. (And this is consistent with 10% or some chance of 35% of PRs being mergeable conditional on passing test cases; we don't have a very large sample size here.) I don't think this is much evidence that AI can't sometimes write acceptable PRs in general, and there are examples of AIs doing this. On small projects I've worked on, AIs from a long time ago have written a big chunk of code ~zero-shot. Anecdotally, I've heard of people having success with AIs completing tasks zero-shot. I don't know what you mean by "PR" that doesn't include this.
2Cole Wyeth1mo
I think I already answered this:
4Joseph Miller1mo
Very little liquidity though
4Cole Wyeth1mo
Hence sharing here - I'm not buying (at least for now) because I'm curious where it ends up, but obviously I think "Wyeth wins" shares are at a great price right now ;)
2Stephen Fowler1mo
I've thrown on some limit orders if anyone is strongly pro-Kokotajlo.
[-]Cole Wyeth5mo1116

Particularly after my last post, I think my lesswrong writing has had a bit too high of a confidence / effort ratio. Possibly I just know the norms of this site well enough lately that I don't feel as much pressure to write carefully. I think I'll limit my posting rate a bit while I figure this out.

Reply41
[-]Vladimir_Nesov5mo144

LW doesn't punish, it upvotes-if-interesting and then silently judges.

confidence / effort ratio

(Effort is not a measure of value, it's a measure of cost.)

Reply
5Cole Wyeth5mo
Yeah, I was thinking greater effort is actually necessary in this case. For context, my lower effort posts are usually more popular. Also the ones that focus on LLMs which is really not my area of expertise.
[-]Steven Byrnes5mo104

For context, my lower effort posts are usually more popular.

mood

Reply
[-]Cole Wyeth3mo80

The hedonic treadmill exists because minds are built to climb utility gradients - absolute utility levels are not even uniquely defined, so as long as your preferences are time-consistent you can just renormalize before maximizing the expected utility of your next decision. 

I find this vaguely comforting. It’s basically a decision-theoretic and psychological justification for stoicism. 

(must have read this somewhere in the sequences?)

I think self-reflection in bounded reasoners justifies some level of “regret,” “guilt,” “shame,” etc., but the basic reasoning above should hold to first order, and these should all be treated as corrections and for that reason should not get out of hand. 
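
Spelled out (a standard observation, added here for concreteness): expected-utility maximization is invariant under positive affine rescaling of the utility function, so the "zero point" carries no decision-relevant information and can be renormalized at will:

```latex
\arg\max_{a} \; \mathbb{E}\big[\,\alpha\, u(o) + \beta \mid a\,\big]
\;=\;
\arg\max_{a} \; \mathbb{E}\big[\,u(o) \mid a\,\big]
\qquad \text{for any } \alpha > 0,\ \beta \in \mathbb{R}.
```

As long as the same rescaling is applied across all options at a given decision (time-consistency), past attainment levels do not affect which action is optimal.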

Reply
[-]Cole Wyeth3mo82

AI-specific pronouns would actually be kind of helpful. “They” and “It” are both frequently confusing. “He” and “she” feel anthropomorphic and fake. 

Reply
[-]Cole Wyeth10mo88

Perhaps LLM's are starting to approach the intelligence of today's average human: capable of only limited original thought, unable to select and autonomously pursue a nontrivial coherent goal across time, learned almost everything they know from reading the internet ;)

Reply7
[-]Cole Wyeth10mo8-3

This doesn't seem to be reflected in the general opinion here, but it seems to me that LLM's are plateauing and possibly have already plateaued a year or so ago. Scores on various metrics continue to go up, but this tends to provide weak evidence because they're heavily gamed and sometimes leak into the training data. Still, those numbers overall would tend to update me towards short timelines, even with their unreliability taken into account - however, this is outweighed by my personal experience with LLM's. I just don't find them useful for practically ... (read more)

Reply
[-]habryka10mo127

Huh, o1 and the latest Claude were quite huge advances to me. Basically within the last year LLMs for coding went from "occasionally helpful, maybe like a 5-10% productivity improvement" to "my job now is basically to instruct LLMs to do things, depending on the task a 30% to 2x productivity improvement".

Reply
1Cole Wyeth10mo
I'm in Canada so can't access the latest Claude, so my experience with these things does tend to be a couple months out of date. But I'm not really impressed with models spitting out slightly wrong code that tells me what functions to call. I think this is essentially a more useful search engine. 
6Vladimir_Nesov10mo
Use Chatbot Arena, both versions of Claude 3.5 Sonnet are accessible in Direct Chat (third tab). There's even o1-preview in Battle Mode (first tab), you just need to keep asking the question until you get o1-preview. In general Battle Mode (for a fixed question you keep asking for multiple rounds) is a great tool for developing intuition about model capabilities, since it also hides the model name from you while you are evaluating the response.
3core_admiral10mo
Just an FYI unrelated to the discussion - all versions of Claude are available in Canada through Anthropic, you don't even need third party services like Poe anymore.  Source: https://www.anthropic.com/news/introducing-claude-to-canada 
4Vladimir_Nesov10mo
Base model scale has only increased maybe 3-5x in the last 2 years, from 2e25 FLOPs (original GPT-4) up to maybe 1e26 FLOPs[1]. So I think to a significant extent the experiment of further scaling hasn't been run, and the 100K H100s clusters that have just started training new models in the last few months promise another 3-5x increase in scale, to 2e26-6e26 FLOPs. Right, the metrics don't quite capture how smart a model is, and the models haven't been getting much smarter for a while now. But it might be simply because they weren't scaled much further (compared to original GPT-4) in all this time. We'll see in the next few months as the labs deploy the models trained on 100K H100s (and whatever systems Google has). ---------------------------------------- 1. This is 3 months on 30K H100s, $140 million at $2 per H100-hour, which is plausible, but not rumored about specific models. Llama-3-405B is 4e25 FLOPs, but not MoE. Could well be that 6e25 FLOPs is the most anyone trained for with models deployed so far. ↩︎
3cdt10mo
I've noticed they perform much better on graduate-level ecology/evolution questions (in a qualitative sense - they provide answers that are more 'full' as well as technically accurate). I think translating that into a "usefulness" metric is always going to be difficult though.
3eigen10mo
The last few weeks I felt the opposite of this. I kind of go back and forth on thinking they are plateauing and then I get surprised with the new Sonnet version or o1-preview. I also experiment with my own prompting a lot.
1Cole Wyeth10mo
I've noticed occasional surprises in that direction, but none of them seem to shake out into utility for me.
2cubefox10mo
Is this a reaction to OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows?
1Cole Wyeth10mo
No that seems paywalled, curious though?
1Cole Wyeth10mo
I've been waiting to say this until OpenAI's next larger model dropped, but this has now failed to happen for so long that it's become its own update, and I'd like to state my prediction before it becomes obvious. 
[-]Cole Wyeth1mo70

An ASI perfectly aligned to me must literally be a smarter version of myself. Anything less than that is a compromise between my values and the values of society. Such a compromise at its extreme fills me with dread. I would much rather live in a society of some discord between many individually aligned ASIs than build a benevolent god. 

Reply11
3Vladimir_Nesov1mo
An ASI aligned to a group of people likely should dedicate sovereign slivers of compute (optimization domains) for each of those people, and those people could do well with managing their domain with their own ASIs aligned to each of them separately. Optimization doesn't imply a uniform pureed soup, it's also possible to optimize autonomy, coordination, and interaction, without mixing them up. Values judge what should be done, but also what you personally should be doing. An ASI value aligned to you will be doing the things that should be done (according to you, on reflection), but you wouldn't necessarily endorse that you personally should be doing those things. Like, I want the world to be saved, but I don't necessarily want to be in a position to need to try to save the world personally. So an ASI perfectly aligned to you might help uplift you into a smarter version of yourself as one of its top priorities, and then go on to do various other things you'd approve of on reflection. But you wouldn't necessarily endorse that it's the smarter version of yourself that is doing those other things, you are merely endorsing that they get done.
2the gears to ascension1mo
I'm confused about that. I think you might be wrong, but I've heard this take before. If what you want is something that looks like a benevolent god, but one according to your own design, then that's the "cosmopolitan empowerment by I just want cosmopolitanism" scenario, which I don't trust; so if I had the opportunity to design an AI, I would do my best to guarantee its cosmopolitanism-as-in-a-thing-others-actually-approve-of, for basically "values level LDT" reasons. See also interdimensional council of cosmopolitanisms.
1danielms1mo
I think there's more leeway here. E.g. instead of a copy of you, a "friend" ASI.   A benevolent god that understands your individual values and respects them seems pretty nice to me. Especially compared to a world of competing, individually aligned ASIs. (if your values are in the minority)
[-]Cole Wyeth4mo70

@Thomas Kwa will we see task length evaluations for Claude Opus 4 soon?

Anthropic reports that Claude can work on software engineering tasks coherently for hours, but it's not clear if this means it can actually perform tasks that would take a human hours. I am slightly suspicious because they reported that Claude was making better use of memory on Pokémon, but this did not actually cash out as improved play. This seems like a fairly decisive test of my prediction that task lengths would stagnate at this point; if it does succeed at hours-long tasks, I will... (read more)

Reply
[-]Thomas Kwa4mo*170

I don't run the evaluations but probably we will; no timeframe yet though as we would need to do elicitation first. Claude's SWE-bench Verified scores suggest that it will be above 2 hours on the METR task set; the benchmarks are pretty similar apart from their different time annotations.

Reply1
3Aaron Staley4mo
That's a bit higher than I would have guessed. I compared the known data points that have both SWE-bench scores and METR medians (Sonnet 3.5, 3.6, 3.7, o1, o3, o4-mini) and got an r^2 = 0.96 model assuming linearity between log(METR_median) and log(SWE-bench error). That gives an estimate more like 110 minutes for an SWE-bench score of 72.7%, which works out to a Sonnet doubling time of ~3.3 months. (If I throw out o4-mini, the estimate is ~117 minutes - still below 120.) It would also imply an 85% SWE-bench score is something like a 6-6.5 hour METR median.
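A minimal sketch of the fit described above; the data points below are placeholders, since the actual SWE-bench scores and METR medians for those models are not reproduced here:

```python
# Log-log regression of METR median task length against SWE-bench error rate.
# The data below is PLACEHOLDER / illustrative, not the actual model scores.
import numpy as np

swe_bench = np.array([0.49, 0.53, 0.62, 0.48, 0.69, 0.68])  # hypothetical pass rates
metr_median = np.array([18, 28, 55, 40, 95, 80])            # hypothetical minutes

x = np.log(1 - swe_bench)        # log(SWE-bench error)
y = np.log(metr_median)          # log(METR median, minutes)
slope, intercept = np.polyfit(x, y, 1)

# Extrapolate to a 72.7% SWE-bench score (27.3% error):
pred = np.exp(intercept + slope * np.log(1 - 0.727))
print(f"predicted METR median ~ {pred:.0f} minutes")
```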
3Vladimir_Nesov4mo
Since reasoning trace length increases with more steps of RL training (unless intentionally constrained), probably underlying scaling of RL training by AI companies will be observable in the form of longer reasoning traces. Claude 4 is more obviously a pretrained model update, not necessarily a major RLVR update (compared to Claude 3.7), and coherent long task performance seems like something that would greatly benefit from RLVR if it applies at all (which it plausibly does). So I don't particularly expect Claude 4 to be much better on this metric, but some later Claude ~4.2-4.5 update with more RLVR post-training released in a few months might do much better.
3Cole Wyeth4mo
We can still check if it lies on the projected slower exponential curve before reasoning models were introduced.
[-]Vladimir_Nesov4mo110

Sure, but trends like this only say anything meaningful across multiple years, any one datapoint adds almost no signal, in either direction. This is what makes scaling laws much more predictive, even as they are predicting the wrong things. So far there are no published scaling laws for RLVR, the literature is still developing a non-terrible stable recipe for the first few thousand training steps.

Reply
[-]Cole Wyeth4mo70

It looks like Gemini is self-improving in a meaningful sense:

https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

Some quick thoughts:

This has been going on for months; on the bullish side (for AI progress, not human survival) this means some form of self-improvement is well behind the capability frontier. On the bearish side, we may not expect a further speed-up on the log scale (since it's already factored into some calculations).

I did not expect this degree of progress so soon; I am now much ... (read more)

Reply
1Person4mo
Heads up: I am not an AI researcher or even an academic, just someone who keeps up with AI. But I do have some quick thoughts as well: Kernel optimization (which they claim is what resulted in the 1% decrease in training time) is something we know AI models are great at (see RE-Bench and the multiple arXiv papers on the matter, including from DeepSeek). It seems to me like AlphaEvolve is more-or-less an improvement over previous models that also claimed to make novel algorithmic and mathematical discoveries (FunSearch, AlphaTensor), notably by using better base Gemini models and a better agentic framework. We also know that AI models already contribute to the improvement of AI hardware. What AlphaEvolve seems to do is to unify all of that into a superhuman model for those multiple uses. In the accompanying podcast they give us some further information: * The rate of improvement is still moderate, and the process still takes months. They phrase it as an interesting and promising area of progress for the future, not as a current large improvement. * They have not tried to distill all that data into a new model yet, which seems strange to me considering they've had it for a year now. * They say that a lot of improvements come from the base model's quality. * They do present the whole thing as part of research rather than a product. So yeah, I can definitely see a path for large gains in the future, though for now those are still on similar timetables as per their own admission. They expect further improvements when base models improve and are hoping that future versions of AlphaEvolve can in turn shorten the training time for models, the hardware pipeline, and improve models in other ways. And as for your point about novel discoveries, previous Alpha models seemed to already be able to do the same categories of research back in 2023, on mathematics and algorithmic optimization. We need more knowledgeable people to weigh in, especially to compare with previous models of
[-]Cole Wyeth5mo70

Unfortunate consequence of sycophantic ~intelligent chatbots: everyone can get their theories parroted back to them and validated. Particularly risky for AGI, where the chatbot can even pretend to be running your cognitive architecture. Want to build a neuro-quantum-symbolic-emergent-consciousness-strange-loop AGI? Why bother, when you can just put that all in a prompt!

Reply
[-]habryka5mo200

A lot of new user submissions these days to LW are clearly some poor person who was sycophantically encouraged by an AI to post their crazy theory of cognition or consciousness or recursion or social coordination on LessWrong after telling them their ideas are great. When we send them moderation messages we frequently get LLM-co-written responses, and sometimes they send us quotes from an AI that has evaluated their research as promising and high-quality as proof that they are not a crackpot.

Reply1
[-]Cole Wyeth5mo70

Basic sanity check: We can align human children, but can we align any other animals? NOT to the extent that we would trust them with arbitrary amounts of power, since they obviously aren't smart enough for this question to make much sense. Just, like, are there other animals that we've made care about us at least "a little bit"? Can dogs be "well trained" in a way where they actually form bonds with humans and will go to obvious personal risk to protect us, or not eat us even if they're really hungry and clearly could? How about species further from us on the evolutionary tree, like hunting falcons? Where specifically is the line?

Reply
[-]Cole Wyeth2mo60

As well as the "theoretical - empirical" axis, there is an "idealized - realistic" axis. The former distinction is about the methods you apply (with extremes exemplified by rigorous mathematics and blind experimentation, respectively). The latter is a quality of your assumptions / paradigm. Highly empirical work is forced to be realistic, but theoretical work can be more or less idealized. Most of my recent work has been theoretical and idealized, which is the domain of (de)confusion. Applied research must be realistic, but should pragmatically draw on theory and empirical evidence. I want to get things done, so I'll pivot in that direction over time. 

Reply
[-]Cole Wyeth8mo63

Sometimes I wonder if people who obsess over the "paradox of free will" are having some "universal human experience" that I am missing out on. It has never seemed intuitively paradoxical to me, and all of the arguments about it seem either obvious or totally alien. Learning more about agency has illuminated some of the structure of decision making for me, but hasn't really affected this (apparently) fundamental inferential gap. Do some people really have this overwhelming gut feeling of free will that makes it repulsive to accept a lawful universe? 

Reply
[-]Lucius Bushnaq8mo122

I used to, as a child. I did accept a lawful universe, but I thought my perception of free will was in tension with that, so that perception must be "an illusion". 

My mother kept trying to explain to me that there was no tension between these things, because it was correct that my mind made its own decisions rather than some outside force. I didn't understand what she was saying though. I thought she was just redefining 'free will' from a claim that human brains effectively had a magical ability to spontaneously ignore the laws of physics to a boring tautological claim that human decisions are made by humans rather than something else.

I changed my mind on this as a teenager. I don't quite remember how, it might have been the sequences or HPMOR again. I realised that my imagination had still been partially conceptualising the "laws of physics" as some sort of outside force, a set of strings pulling my atoms around, rather than as a predictive description of me and the universe. Saying "the laws of physics make my decisions, not me" made about as much sense as saying "my fingers didn't move, my hand did." That was what my mother had been trying to tell me.

Reply
3ProgramCrafter8mo
I don't think so, as I had success explaining away the paradox with the concept of "different levels of detail" - saying that free will is a very high-level concept and further observations reveal a lower-level view, calling upon an analogy with algorithmic programming's segment tree. (A segment tree is a data structure that replaces an array, allowing one to modify its values and compute a given function over all array elements efficiently. It is based on a tree of nodes, each of them representing a certain subarray; each position is therefore handled by several - specifically, O(log n) - nodes.)
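For readers unfamiliar with the structure, a minimal segment tree (here for range sums) looks roughly like this; both the point update and the range query touch O(log n) nodes:

```python
# Minimal iterative segment tree for range sums, as an illustration of the
# structure mentioned above. Update and query both walk O(log n) nodes.
class SegmentTree:
    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (2 * self.n)
        for i, v in enumerate(values):
            self.tree[self.n + i] = v
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, value):
        """Set values[i] = value, then fix the O(log n) ancestors above it."""
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, left, right):
        """Sum of values[left:right]."""
        result = 0
        left += self.n
        right += self.n
        while left < right:
            if left % 2:
                result += self.tree[left]
                left += 1
            if right % 2:
                right -= 1
                result += self.tree[right]
            left //= 2
            right //= 2
        return result

# st = SegmentTree([1, 2, 3, 4]); st.query(1, 3)  # -> 5
```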
2Viliam8mo
This might be related to whether you see yourself as a part of the universe, or as an observer. If you are an observer, the objection is like "if I watch a movie, everything in the movie follows the script, but I am outside the movie, therefore outside the influence of the script". If you are religious, I guess your body is a part of the universe (obeys the laws of gravity etc.), but your soul is the impartial observer. Here the religion basically codifies the existing human intuitions. It might also depend on how much you are aware of the effects of your environment on you. This is a learned skill; for example little kids do not realize that they are hungry... they just get kinda angry without knowing why. It requires some learning to realize "this feeling I have right now -- it is hunger, and it will probably go away if I eat something". And I guess the more knowledge of this kind you accumulate, the easier it is to see yourself as a part of the universe, rather than being outside of it and only moved by "inherently mysterious" forces.
[-]Cole Wyeth10d51

Self-reflection allows self-correction.

If you can fit yourself inside your world model, you can also model the hypothesis that you are wrong in some specific systematic way.

A partial model is a self-correction, because it says “believe as you will, except in such a case.” 

This is the true significance of my results with @Daniel C:
https://www.lesswrong.com/posts/Go2mQBP4AXRw3iNMk/sleeping-experts-in-the-reflective-solomonoff-prior

That is, reflective oracles allow Solomonoff induction to think about ways of becoming less wrong. 

Reply1
[-]Cole Wyeth2mo50

If instead of building LLMs, tech companies had spent billions of dollars designing new competing search engines that had no ads but might take a few minutes to run and cost a few cents per query, would the result have been more or less useful?

Reply
5Julian Bradshaw2mo
Rather less useful to me personally as a software developer. Besides that, I feel like this question is maybe misleading? If ex. Google built a new search engine that could answer queries like its current AI-powered search summaries, or like ChatGPT, wouldn't that have to be some kind of language model anyway? Is there another class of thing besides AGI that could perform as well at that task? (I assume you're not suggesting just changing the pricing model of existing-style search engines, which already had a market experiment (ex. Kagi) some years ago with only mild success.)
2Cole Wyeth2mo
I am thinking it would NOT answer like its current AI-powered search summaries, but would rather order actual search results, just VERY intelligently. 
2Julian Bradshaw2mo
I think that would require text comprehension too. I guess it's an interesting question if you can build an AI that can comprehend text but not produce it?
3Karl Krueger2mo
My impression is that the decline of search engines has little to do with search ads. It has more to do with a decline in public webpage authoring in favor of walled gardens, chat systems, etc.: new organic human-written material that once would have been on a public forum site (or home page!) is today often instead in an unindexable Discord chat or inside an app. Meanwhile, spammy content on the public Web has continued to escalate; and now LLMs are helping make more and more of it.
4clone of saturn2mo
But most of LLMs' knowledge comes from the public Web, so clearly there is still a substantial amount of useful content on it, and maybe if search engines had remained good enough at filtering spam fewer people would have fled to Discord.
2Nate Showell2mo
More useful. It would save us the step of having to check for hallucinations when doing research.
[-]Cole Wyeth3mo*50

To what extent would a proof about AIXI’s behavior be normative advice?

Though AIXI itself is not computable, we can prove some properties of the agent - unfortunately, there are fairly few examples because of the “bad universal priors” barrier discovered by Jan Leike. In the sequential case we only know things like e.g. it will not indefinitely keep trying an action that yields minimal reward, though we can say more when the horizon is 1 (which reduces to the predictive case in a sense). And there are lots of interesting results about the behavior of Solom... (read more)

Reply
[-]Cole Wyeth7mo50

Can AI X-risk be effectively communicated by analogy to climate change? That is, the threat isn’t manifesting itself clearly yet, but experts tell us it will if we continue along the current path.

Though there are various disanalogies, this specific comparison seems both honest and likely to be persuasive to the left?

Reply
4MondSemmel7mo
I don't like it. Among various issues, people already muddy the waters by erroneously calling climate change an existential risk (rather than what it was, a merely catastrophic one, before AI timelines made any worries about climate change in the year 2100 entirely irrelevant), and it's extremely partisan-coded. And you're likely to hear that any mention of AI x-risk is a distraction from the real issues, which are whatever the people cared about previously. I prefer an analogy to gain-of-function research. As in, scientists grow viruses/AIs in the lab, with promises of societal benefits, but without any commensurate acknowledgment of the risks. And you can't trust the bio/AI labs to manage these risks, e.g. even high biosafety levels can't entirely prevent outbreaks.
2cdt7mo
I agree that there is a consistent message here, and I think it is one of the most practical analogies, but I get the strong impression that tech experts do not want to be associated with environmentalists.
1Ariel Cheng7mo
I think it would be persuasive to the left, but I'm worried that comparing AI x-risk to climate change would make it a left-wing issue to care about, which would make right-wingers automatically oppose it (upon hearing "it's like climate change"). Generally it seems difficult to make comparisons/analogies to issues that (1) people are familiar with and think are very important and (2) not already politicized.
1CstineSublime7mo
I'm looking at this not from a CompSci point of view but from a rhetoric point of view: Isn't it much easier to make tenuous or even flat-out wrong links between Climate Change and highly publicized Natural Disaster events that have lots of dramatic, visceral footage than it is to ascribe danger to a machine that hasn't been invented yet, whose nature and inclinations we don't know? I don't know about nowadays, but for me the two main pop-culture touchstones for "evil AI" are Skynet in Terminator and HAL 9000 in 2001: A Space Odyssey (and by inversion - the Butlerian Jihad in Dune). Wouldn't it be more expedient to leverage those? (Expedient - I didn't say accurate)
[-]Cole Wyeth9mo52

Most ordinary people don't know that no one understands how neural networks work (or even that modern "Generative A.I." is based on neural networks). This might be an underrated message since the inferential distance here is surprisingly high. 

It's hard to explain the more sophisticated models that we often use to argue that human disempowerment is the default outcome; effort is perhaps much better leveraged explaining these three points: 

1) No one knows how A.I models / LLMs / neural nets work (with some explanation of how this is conceptually possibl... (read more)

Reply
[-]Cole Wyeth1y50

"Optimization power" is not a scalar multiplying the "objective" vector. There are different types. It's not enough to say that evolution has had longer to optimize things but humans are now "better" optimizers:  Evolution invented birds and humans invented planes, evolution invented mitochondria and humans invented batteries. In no case is one really better than the other - they're radically different sorts of things.

Evolution optimizes things in a massively parallel way, so that they're robustly good at lots of different selectively relevant things ... (read more)

Reply
[-]Cole Wyeth25d45

The most common reason I don’t use LLMs for stuff is that I don’t trust them. Capabilities are somewhat bottlenecked on alignment. 

Reply
2Vladimir_Nesov25d
Human texts also need reasons for you to trust their takeaways - things like bounded distrust from reputational incentives, your own understanding after treating something as steelmanning fodder, or the expectation that the authors are talking about what they actually observed. So it's not particularly about alignment with humans either. Few of these things apply to LLMs, and they are not yet good at writing legible arguments worth verifying, though IMO gold is reason to expect this to change in a year or so.
[-]Cole Wyeth2mo41

LLM coding assistants may actually slow developers down, contrary to their expectations: 

https://www.lesswrong.com/posts/9eizzh3gtcRvWipq8/measuring-the-impact-of-early-2025-ai-on-experienced-open

(Epistemic status: I am signal boosting this with an explicit one-line summary that makes clear it is bearish for LLMs, because scary news about LLM capability acceleration is usually more visible/available than this update seems to be. Read the post for caveats.)

Reply
[-]Cole Wyeth3mo41

Optimality is about winning. Rationality is about optimality.  

Reply21
[-]Cole Wyeth3mo*40

I guess Dwarkesh believes ~everything I do about LLMs and still thinks we probably get AGI by 2032:

https://www.dwarkesh.com/p/timelines-june-2025

Reply1
2Noosphere893mo
@ryan_greenblatt made a claim that continual learning/online training can already be done, but that right now it doesn't have super-high returns and requires annoying logistical/practical work, and the current AI issues are elsewhere, like sample efficiency and robust self-verification. That would explain the likelihood of getting AGI by the 2030s being pretty high: https://www.lesswrong.com/posts/FG54euEAesRkSZuJN/#pEBbFmMm9bvmgotyZ Ryan Greenblatt's original comment: https://www.lesswrong.com/posts/FG54euEAesRkSZuJN/#xMSjPgiFEk8sKFTWt
1Roman Malov3mo
What are your timelines?
2Cole Wyeth3mo
My distribution is pretty wide, but I think probably not before 2040. 
[-]Cole Wyeth4mo42

This is not the kind of news I would have expected from short timeline worlds in 2023: https://www.techradar.com/computing/artificial-intelligence/chatgpt-is-getting-smarter-but-its-hallucinations-are-spiraling

Reply
[-]Cole Wyeth5mo40

I still don't think that a bunch of free-associating inner monologues talking to each other gives you AGI, and it still seems to be an open question whether adding RL on top just works.

The "hallucinations" of the latest reasoning models look more like capability failures than alignment failures to me, and I think this points towards "no." But my credences are very unstable; if METR task length projections hold up or the next reasoning model easily zero-shots Pokemon I will just about convert. 

Reply1
2Cole Wyeth4mo
Investigating preliminary evaluations of o3 and o4-mini I am more convinced that task length is scaling as projected.  Pokémon has fallen, but as far as I can tell this relied on scaffolding improvements for Gemini 2.5 pro customized during the run, NOT a new smarter model. Overall, I am already questioning my position one week later.
7Thane Ruthenis4mo
Pokémon is actually load-bearing for your models? I'm imagining a counterfactual world in which Sonnet 3.7's initial report involved it beating Pokémon Red, and I don't think my present-day position would've been any different in it. Even aside from tons of walkthrough information present in LLMs' training set, and iterative prompting allowing one to identify and patch holes in LLMs' pretrained instinctive game knowledge, Pokémon is simply not a good test of open-ended agency. At the macro-scale, the game state can only progress forward, and progressing it requires solving relatively closed-form combat/navigational challenges. Which means that as long as you're at all likely to blunder through each of those isolated challenges, you're fated to "fail upwards". The game-state topology doesn't allow you to progress backward or get stuck in a dead end: you can't lose a badge or un-win a boss battle. I.e.: there's basically an implicit "long-horizon agency scaffold" built into the game. Which means what this tests is mainly the ability to solve somewhat-diverse isolated challenges in sequence. But not the ability to autonomously decompose long-term tasks into said isolated challenges in a way such that the sequence of isolated challenges implacably points at the long-term task's accomplishment.
4Cole Wyeth4mo
Hmm, maybe I’m suffering from having never played Pokémon… who would’ve thought that could be an important hole in my education? 
2Noosphere895mo
I think the hallucinations/reward hacking are actually a real alignment failure, but an alignment failure that happens to degrade capabilities a lot. At least some of the misbehavior is probably due to context, but I have seen evidence that the alignment failures are more deliberate than regular capabilities failures. That said, if this keeps happening, the likely answer is that capabilities progress is to a significant degree bottlenecked on alignment progress, such that you need a significant degree of progress on preventing specification gaming to get new capabilities, and this would definitely be a good world for misalignment issues if the hypothesis is true (which I put some weight on). (Also, it's telling that the areas where RL has worked best are areas where you can basically create unhackable reward models, like many games/puzzles, and once reward hacking is on the table, capabilities start to decrease.)
[-]Cole Wyeth6mo41

GDM has a new model: https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#advanced-coding

At a glance, it is (pretty convincingly) the smartest model overall. But progress still looks incremental, and I continue to be unconvinced that this paradigm scales to AGI. If so, the takeoff is surprisingly slow. 

Reply
[-]Cole Wyeth13d39

I’m worried about Scott Aaronson since he wrote “Deep Zionism.”

https://scottaaronson.blog/?p=9082

I think he’s coming from a good place, I can understand how he got here, but he really, really needs to be less online. 

Reply
[-]Cole Wyeth2mo30

That moment when you’ve invested in building a broad and deep knowledge base instead of your own agency and then LLMs are invented. 

it hurts

Reply
[-]Thane Ruthenis2mo158

I don't see it that way. Broad and deep knowledge is as useful as ever, and LLMs are no substitutes for it.

This anecdote comes to mind:

Dr. Pauling taught first-year chemistry at Cal Tech for many years. All of his exams were closed book, and the students complained bitterly. Why should they have to memorize Boltzmann’s constant when they could easily look it up when they needed it? I paraphrase Mr. Pauling’s response: I was always amazed at the lack of insight this showed. It’s what you have in your memory bank—what you can recall instantly—that’s important. If you have to look it up, it’s worthless for creative thinking.

He proceeded to give an example. In the mid-1930s, he was riding a train from London to Oxford. To pass the time, he came across an article in the journal, Nature, arguing that proteins were amorphous globs whose 3D structure could never be deduced. He instantly saw the fallacy in the argument—because of one isolated stray fact in his memory bank—the key chemical bond in the protein backbone did not freely rotate, as was argued. Linus knew from his college days that the peptide bond had to be rigid and coplanar.

He began doodling, and by the time he reached Oxford,

... (read more)
Reply2
5Cole Wyeth2mo
This is probably right. Though perhaps one special case of my point remains correct: the value of a generalist as a member of a team may be somewhat reduced. 
2Viliam2mo
The value of a generalist with shallow knowledge is reduced, but you get a chance to become a generalist with relatively deep knowledge of many things. You already know the basics, so you can start the conversation with LLMs to learn more (and knowing the basics will help you figure out when the LLM hallucinates).
[-]Cole Wyeth5mo30

Back-of-the-envelope math indicates that an ordinary NPC in our world needs to double their power like 20 times over to become a PC. That’s a tough ask. I guess the lesson is either give up or go all in. 

Reply
4Phiwip5mo
Can you expand on this? I'm not sure what you mean but am curious about it.
2Cole Wyeth5mo
There are around 8 billion humans, so an ordinary person has a very small fraction of the power needed to steer humanity in any particular direction. A very large number of doublings are required to be a relevant factor. 
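A quick check of that arithmetic, under one possible reading in which being a "relevant factor" means commanding very roughly a ten-thousandth of humanity's total influence (my assumption, not stated above):

```python
# Back-of-the-envelope only; "power" is crudely modeled as a share of 8 billion people.
base_share = 1 / 8e9               # an ordinary person's starting share
doublings = 20
print(base_share * 2**doublings)   # ~1.3e-4, i.e. roughly one ten-thousandth of the total
```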
2Viliam5mo
That's an interesting idea. However, people who read this comments probably already have power much greater than the baseline -- a developed country, high intelligence, education, enough money and free time to read websites... Not sure how many of those 20 doublings still remain.
2Cole Wyeth5mo
I thought the statement was pretty clearly not about the average lesswronger.  But in terms of the “call to action” - 20 was pretty conservative, so I think it’s still in that range, and doesn’t change the conclusions one should draw much. 
[-]Cole Wyeth6mo30

That moment when you want to be updateless about risk but updateful about ignorance, but the basis of your epistemology is to dissolve the distinction between risk and ignorance.

(Kind of inspired by @Diffractor)

Reply
[-]Cole Wyeth7mo30

Did a podcast interview with Ayush Prakash on the AIXI model (and modern AI), very introductory/non-technical:

Reply
2Cole Wyeth7mo
Some errata: The bat thing might have just been Thomas Nagel; I can't find the source I thought I remembered. At one point I said LLMs forget everything they thought previously between predicting (say) token six and seven and have to work from scratch. Because of the way the attention mechanism works it is actually a little more complicated (see the top comment from hmys). What I said is (I believe) still overall right, but I would put that detail less strongly. Hofstadter apparently was the one who said a human-level chess AI would rather talk about poetry.
[-]Cole Wyeth8mo35

Gary Kasparov would beat me at chess in some way I can't predict in advance. However, if the game starts with half his pieces removed from the board, I will beat him by playing very carefully. The first above-human level A.G.I. seems overwhelmingly likely to be down a lot of material - massively outnumbered, running on our infrastructure, starting with access to pretty crap/low bandwidth actuators in the physical world and no legal protections (yes, this actually matters when you're not as smart as ALL of humanity - it's a disadvantage relative to even the... (read more)

Reply21
[-]Cole Wyeth9mo30

I suspect that human minds are vast (more like little worlds of our own than clockwork baubles) and even a superintelligence would have trouble predicting our outputs accurately from even quite a few conversations (without direct microscopic access), as a matter of sample complexity.

Considering the standard rhetoric about boxed A.I.'s, this might have belonged in my list of heresies: https://www.lesswrong.com/posts/kzqZ5FJLfrpasiWNt/heresies-in-the-shadow-of-the-sequences

Reply
3CstineSublime9mo
There is a large body of non-AI literature that already addresses this, for example the research of Gerd Gigerenzer, which shows that heuristics and "fast and frugal" decision trees often substantially outperform fine-grained analysis because of the sample complexity matter you mention. Pop frameworks which elaborate on this, and how it may be applied, include David Snowden's Cynefin framework, which is geared for government and organizations, and of course Nassim Nicholas Taleb's Incerto. I seem to recall also that the gist of Dunbar's Number, and the reason why certain parrots and corvids seem to have larger prefrontal-cortex equivalents than non-monogamous birds, is basically so that they can have an internal model of their mating partner. (This is very interesting to think about in terms of intimate human relationships, what I'd poetically describe as the "telepathy" when wordlessly you communicate, intuit, and predict a wide range of each other's complex and specific desires and actions because you've spent enough time together.) The scary thought to me is that a superintelligence would quite simply not need to accurately model us; it would just need to fine-tune its models in a way not dissimilar from the psychographic models utilized by marketers. Of course that operates at scale, so the margin of error is much greater but more 'acceptable'. Indeed, dumb algorithms already do this very well - think about how 'addictive' people claim their TikTok or Facebook feeds are. The rudimentary sensationalist clickbait that ensures eyeballs and clicks. A superintelligence doesn't need accurate modelling - this is without having individual conversations with us. To my knowledge (or rather my experience) most social media algorithms are really bad at taking the information on your profile and using things like sentiment and discourse analysis to make decisions about which content to feed you; they rely on engagement like sharing, clicking like, watch time and rudimentary
2Alexander Gietelink Oldenziel9mo
One can showcase very simple examples of data that is easy to generate (a simple data source) yet very hard to predict. E.g., there is a 2-state generating hidden Markov model whose optimal prediction hidden Markov model is infinite. I've heard it explained as follows: it's much harder for the fox to predict where the hare is going than it is for the hare to decide where to go to shake off the fox.
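A toy illustration of the flavor of this phenomenon (not the specific 2-state construction referred to above, and with arbitrary parameters of my own choosing): tracking the Bayesian belief over the hidden state of a small HMM, where for generic parameters the set of reachable belief states keeps growing, so no finite-state predictor is exactly optimal.

```python
# Toy example: the observer's belief over the hidden state after each observed symbol.
# Parameters are hypothetical; the point is that distinct belief states proliferate.
import numpy as np

T = np.array([[0.9, 0.1],    # hidden-state transition matrix
              [0.3, 0.7]])
E = np.array([[0.8, 0.2],    # emission probabilities P(symbol | state)
              [0.4, 0.6]])

beliefs = {0.5}
frontier = [np.array([0.5, 0.5])]
for _ in range(12):                        # expand over observation strings of length <= 12
    new_frontier = []
    for b in frontier:
        for symbol in (0, 1):
            post = (b @ T) * E[:, symbol]  # predict next state, then condition on the symbol
            post = post / post.sum()
            key = round(float(post[0]), 12)
            if key not in beliefs:
                beliefs.add(key)
                new_frontier.append(post)
    frontier = new_frontier

print(len(beliefs))  # keeps growing with depth instead of saturating at a finite set
```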
[-]Cole Wyeth11mo32

I'm starting a google group for anyone who wants to see occasional updates on my Sherlockian Abduction Master List. It occurred to me that anyone interested in the project would currently have to check the list to see any new observational cues (infrequently) added - also some people outside of lesswrong are interested. 

Reply
[-]Cole Wyeth1y32

Presented the Sherlockian abduction master list at a Socratica node:

Image
Reply
[-]Cole Wyeth1mo2-1

Thinking times are now long enough that in principle frontier labs could route some API (or chat) queries to a human on the backend, right? Is this plausible? Could this give them a hype advantage in the medium term, if they picked the most challenging (for LLMs) types of queries effectively - and if so, is there any technical barrier? I can see this kind of thing eventually coming out, if the Wentworth "it's bullshit though" frame turns out to be partially right. 

(I’m not suggesting they would do this kind of blatant cheating on benchmarks, and I have no inside knowledge suggesting this has ever happened)

Reply
[-]Cole Wyeth3mo20

In MTG terms, I think Mountainhead is the clearest example I’ve seen of a mono-blue dystopia.

@Duncan Sabien (Inactive) 

Reply
[-]Cole Wyeth5mo20

I seem to recall EY once claiming that insofar as any learning method works, it is for Bayesian reasons. It just occurred to me that even after studying various representation and complete class theorems I am not sure how this claim can be justified - certainly one can construct working predictors for many problems that are far from explicitly Bayesian. What might he have had in mind?

Reply
[-]Cole Wyeth9mo10

A "Christmas edition" of the new book on AIXI is freely available in pdf form at http://www.hutter1.net/publ/uaibook2.pdf 

Reply
[-]Cole Wyeth10mo19

Over-fascination with beautiful mathematical notation is idol worship. 

Reply
6Seth Herd10mo
So is the fascination with applying math to complex real-world problems (like alignment) when the necessary assumptions don't really fit the real-world problem.
5gwern10mo
(Not "idle worship"?)
2Hastings10mo
Beauty of notation is an optimization target and so should fail as a metric, but especially compared to other optimization targets I've pushed on, in my experience it seems to hold up. The exceptions appear to be string theory and category theory, and two failures in a field the size of math is not so bad.
[-]Cole Wyeth1y10

I wonder if it's true that around the age of 30 women typically start to find babies cute and consequently want children, and if so, is this cultural or evolutionary? It's sort of against my (mesa-optimization) intuitions for evolution to act on such high-level planning (it seems that finding babies cute can only lead to reproductive behavior through pretty conscious intermediary planning stages). Relatedly, I wonder if men typically have a basic urge to father children, beyond immediate sexual attraction?

Reply
[-]Cole Wyeth2mo0-4

Eliezer’s form of moral realism about good (as a real but particular shared concept of value which is not universally compelling to minds) seems to imply that most of us prefer to be at least a little bit evil, and can’t necessarily be persuaded otherwise through reason.

Seems right.

And Nietzsche would probably argue the two impulses towards good and evil aren't really opposites anyway. 

Reply
[+]Cole Wyeth2mo-76