Sutskever's response to Dwarkesh in their interview was a convincing refutation of this argument for me:

Dwarkesh Patel
So you could argue that next-token prediction can only help us match human performance and maybe not surpass it? What would it take to surpass human performance?

Ilya Sutskever
I challenge the claim that next-token prediction cannot surpass human performance. On the surface, it looks like it cannot. It looks like if you just learn to imitate, to predict what people do, it means that you can only copy people. But here is a counter argument for why it might not be quite so. If your base neural net is smart enough, you just ask it — What would a person with great insight, wisdom, and capability do? Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?

Dwarkesh Patel
Yes, although where would it get that sort of insight about what that person would do? If not from…

Ilya Sutskever
From the data of regular people. Because if you think about it, what does it mean to predict the next token well enough? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like it is statistics but what is statistics? In order to understand those statistics to compress them, you need to understand what is it about the world that creates this set of statistics? And so then you say — Well, I have all those people. What is it about people that creates their behaviors? Well they have thoughts and their feelings, and they have ideas, and they do things in certain ways. All of those could be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely but to a pretty decent degree to say — Well, can you guess what you'd do if you took a person with this characteristic and that characteristic? Like such a person doesn't exist but because you're so good at predicting the next token, you should still be able to guess what that person would do. This hypothetical, imaginary person with far greater mental ability than the rest of us.


"Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot."

This is the basic essence of why Dwarkesh does such good interviews. He does the groundwork to be able to ask relevant and interesting questions, i.e. actually reading his subjects' books and works, and consistently seems to have put actual thought into analysing the worldview of his subjects.


The unambitiousness of modern geoengineering in general is dismaying.

From my Australian perspective: in the early 1900s there were people seriously discussing how to make use of the massive tracts of desert wasteland that make up most of the outback. None of this stuff could be considered today - one tree getting chopped down is liable to make the news.

Hard to escape the conclusion that we should all go lie in a ditch so as to guarantee that no impact to anything occurs ever.


This seems about right. Sam is a bit of a cowboy and probably doesn't bother involving the board more than he absolutely has to.


Stefánsson's "The Fat of the Land" is not really worth reading for any scientific insight today, but it's entertaining early 1900s anthropology. 

I don't have much of an opinion on any specific diet approach, but I can tell you my own experience with weight loss: I've always been between 15% and 22% body fat, but I have always tended to slowly gain weight when not actively dieting. My routine for about 10 years now has been to diet down to 15%, then at some point notice that I've been getting fatter and diet back down to 15% by counting calories and CICO logic. I find dieting annoying but consistent, predictable, and doable.

This routine isn't ideal, so I too am a 'victim' of the weight gain phenomenon. I can't say that I've established a truly sustainable diet for myself - but it works well enough.

I have no satisfying answers for "why are we getting fatter" or "what makes caloric deficits so hard to maintain". I appreciate the diet blogging community that tries to tackle these questions with citizen science.


I assume you're familiar with Vilhjálmur Stefánsson's work if you're interested in low-protein carnivore diets, but I was really surprised to see how similar 'ex150' sounds to the classic ~80:20 fat:protein experiments. These aren't really new ideas - although I'm sure there's a lot more information available on the details.

Anyway, dieting seems like something where, while people fail on average, you do see some individual successes, so it's worth poking around the edges and giving things a go. It's always nice to see results from the coalface.

Ultimately, the new GLP-1 agonist weight-loss drugs seem to be awesome by both data and anecdata, so food-composition experimentation for the express purpose of weight loss might fade away a bit over the next few years.


Good post. 

For Westerners looking to get a palatable foothold on the priorities and viewpoints of the CCP (and Xi), I endorse "The Avoidable War", written last year by Kevin Rudd (former Prime Minister of Australia, speaks Mandarin, has worked in China, loads of diplomatic experience - imo about as well-placed a person as exists to interpret Chinese grand strategy and explain it from a Western viewpoint). The book is (imo, for a politician) impressively objective in its analysis.

Some good stuff in there explaining the nature of Chinese cynicism about foreign motivations that echoes some of what is written in this post, but with a bit more historical background and strategic context. 


Yeah - it's odd, but TC is a self-professed contrarian after all.

I think the question here is: why doesn't he actually address the fundamentals of the AGI doom case? The "it's unlikely / unknown" position is really quite a weak argument, one which I doubt he would make if he actually understood EY's position.

Seeing the state of the discourse on AGI risk makes it ever clearer that the AGI risk awareness movement has failed to express its arguments in terms that non-rationalists can understand.

People like TC should be the first type of public intellectual to grok it, because EY's doom case is highly analogous to market dynamics. And yet.


For example, a major point of disagreement between me and Eliezer is that Eliezer often dismisses plans as “too complicated to work in practice,” but that dismissal seems divorced from experience with getting things to work in practice (e.g. some of the ideas that Eliezer dismisses are not much more complex than RLHF with AI assistants helping human raters). In fact I think that you can implement complex things by taking small steps—almost all of these implementation difficulties do improve with empirical feedback.

EY's counter to this?


@Gerald Monroe On the question of Japan's unique lack of variation, I think it's unlikely to be decisive here. The 'monoculture' argument may have some merit, but even a genetically 'homogeneous' population has plenty of variation - especially one 125m strong like the Japanese.

Fertility related traits are just so fundamental to genetic fitness that selection is guaranteed to wring out the higher fertility alleles where the environment allows.
