angmoh

Comments

angmoh

If your opponent makes bad assumptions or bad decisions, your decisions won't be rewarded properly, and it can take you a very long time indeed to figure out from first principles that that is happening. If you are playing with a player who thinks that "all reds" is a strong hand, it can take you many, many hands to figure out that they're overestimating their hands instead of just getting anomalously lucky with their hidden cards while everyone else folds!

(Is someone who knows more about poker than I do going to tell me that this specific example is wrong-ish? We'll find out!)

I'll take the bait since this is one of the cool meta aspects of poker!

There's a saying in online poker: "move up to where they respect your raises". It pokes fun at the notion that it's possible to play well without modelling your opponents. The idea is that if you lose to a poor player, it's not valid to conclude that you weren't "rewarded properly" - it is in fact your fault for lacking the situational awareness to adjust your strategy.

If you're a good player sitting with someone who thinks "all reds" is a strong hand, it'll be obvious long before you ever see their cards.

 

Anyway, your point is right about the difficulty of learning 'organically' when you only play bad players. A common failure mode in online poker involved players getting stuck at strategic local maxima - they'd adopt an autopilot-style strategy that did very well at lower limits surrounded by 'all reds' types, but get owned when they moved up to higher stakes and failed to adjust.

angmoh

You're right, but I like the chef example anyway. Even if cherry-picked, it gets at a core truth: this kind of intuition evolves in every field. I love the stories of old hands intuitively seeing things a mile away.

angmoh

Sutskever's response to Dwarkesh in their interview was a convincing refutation of this argument for me:

Dwarkesh Patel
So you could argue that next-token prediction can only help us match human performance and maybe not surpass it? What would it take to surpass human performance?

Ilya Sutskever
I challenge the claim that next-token prediction cannot surpass human performance. On the surface, it looks like it cannot. It looks like if you just learn to imitate, to predict what people do, it means that you can only copy people. But here is a counter argument for why it might not be quite so. If your base neural net is smart enough, you just ask it — What would a person with great insight, wisdom, and capability do? Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?

Dwarkesh Patel
Yes, although where would it get that sort of insight about what that person would do? If not from…

Ilya Sutskever
From the data of regular people. Because if you think about it, what does it mean to predict the next token well enough? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like it is statistics but what is statistics? In order to understand those statistics to compress them, you need to understand what is it about the world that creates this set of statistics? And so then you say — Well, I have all those people. What is it about people that creates their behaviors? Well they have thoughts and their feelings, and they have ideas, and they do things in certain ways. All of those could be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely but to a pretty decent degree to say — Well, can you guess what you'd do if you took a person with this characteristic and that characteristic? Like such a person doesn't exist but because you're so good at predicting the next token, you should still be able to guess what that person would do. This hypothetical, imaginary person with far greater mental ability than the rest of us.

angmoh

"Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot."

This is the basic essence of why Dwarkesh does such good interviews. He does the groundwork needed to ask relevant and interesting questions, i.e. actually reading his subjects' books and work, and consistently seems to put real thought into analysing their worldview.

angmoh

The unambitiousness of modern geoengineering in general is dismaying.

From my Australian perspective: in the early 1900s there were people discussing how to make use of the massive tracts of desert wasteland that make up most of the outback (e.g. https://en.wikipedia.org/wiki/Bradfield_Scheme). None of this could be considered today - a single tree getting chopped down is liable to make the news: https://www.bbc.com/news/world-australia-54700074

Hard to escape the conclusion that we should all go lie in a ditch so as to guarantee that no impact to anything occurs ever.

angmoh

This seems about right. Sam is a bit of a cowboy and probably doesn't bother involving the board more than he absolutely has to.

angmoh

Stefánsson's "The Fat of the Land" is not really worth reading for any scientific insight today, but it's entertaining early 1900s anthropology. 

I don't have much of an opinion on any specific diet approach, but I can tell you my own experience with weight loss: I've always been between 15-25% body fat, yo-yoing around. That pattern isn't ideal, so I too am a 'victim' of the weight gain phenomenon.

I have no satisfying answers for "why are we getting fatter" or "what makes caloric deficits so hard to maintain". I appreciate the diet blogging community that tries to tackle these questions with citizen science.

angmoh

I assume you're familiar with Vilhjálmur Stefánsson's work if you're interested in low-protein carnivore diets, but I really was surprised to see how similar 'ex150' sounds to the classic ~80:20 fat:protein experiments. These aren't really new ideas, although I'm sure there's a lot more information available on the details.

Anyway, dieting seems like an area where, although people fail on average, you do see individual successes, so it's worth poking around the edges and giving things a go. It's always nice to see results from the coalface.

Ultimately, the new GLP-1 agonist weight-loss drugs look excellent by both data and anecdata, so food-composition experimentation for the express purpose of weight loss might fade away a bit over the next few years.

angmoh

Good post. 

For Westerners looking to get a palatable foothold on the priorities and viewpoints of the CCP (and Xi), I endorse "The Avoidable War", written last year by Kevin Rudd (former Prime Minister of Australia, speaks Mandarin, has worked in China, has loads of diplomatic experience - imo about as well placed as anyone to interpret Chinese grand strategy and explain it from a Western viewpoint). The book is, imo, impressively objective in its analysis for a politician.

Some good stuff in there explaining the nature of Chinese cynicism about foreign motivations that echoes some of what is written in this post, but with a bit more historical background and strategic context. 

angmoh

Yeah - it's odd, but TC is a self-professed contrarian after all.

I think the question here is: why doesn't he actually address the fundamentals of the AGI doom case? The "it's unlikely / unknown" position is really quite a weak argument which I doubt he would make if he actually understood EY's position. 

Seeing the state of the discourse on AGI risk just makes it more and more clear that the AGI risk awareness movement has failed to express its arguments in terms that non-rationalists can understand. 

People like TC should be the first type of public intellectual to grok it, because EY's doom case is highly analogous to market dynamics. And yet.
