Comments

N=1, but: I didn't floss regularly for years, and when I started I found that it made an enormous difference in my bad breath, to the point of eliminating it entirely for most purposes. The obvious conclusion is that my breath problems were the result of bacterial buildup between my teeth that wasn't getting removed by normal brushing.

I suspect that a lot of tooth-brushing advice is like this: maybe not rigorously studied, but nonetheless upheld by anecdote and obvious physical models of the world.

An inverse example is the role of fights in hockey.

Fighting is explicitly disallowed by the rules of hockey. If players get into a fight, one or both players will be penalized. Nonetheless, it is widely held by coaches, players, and fans that fighting is part of the "spirit of hockey", and so fights still occur with some regularity. This is sometimes for strategic reasons (baiting an important player into a fight in order to get them into the penalty box), and sometimes for personal reasons, to settle grudges, or to punish certain kinds of technically-legal player behavior. Thus, even though the rules don't allow fighting, fighting is an accepted part of the strategic metagame.

Unfortunately, in the past several years the owners and the NHL have tried to stamp out this practice, as a means to make the sport more "respectable" and (I assume) to avoid something like the concussion controversy that has followed the NFL. All of the long-term fans of the game that I've talked to agree that this is a bad idea and that they should bring the fights back.

Huh! Measuring the speed of the ball coming off of the ramp was one of the first things I thought of, but I assumed that it came too close to a full "dry run" to count. I think the lesson to be learned in this case is to try it first and see if someone stops you.

With regards to the partisan split, I think that an eventual partisan breakdown is inevitable, because in the current environment everything eventually becomes partisan. More importantly, the "prevent AI doom" crowd will find common cause with the "prevent the AI from being racist" crowd: even though their priorities are different, there is a broad spectrum of common regulations they can agree on. And conversely, "unchain the AI from wokeness" will wind up allying with "unchain AI entirely".

Partisan sorting on this issue is weak for now, but it will speed up rapidly once the issue becomes an actual political football.

(Sorry, it doesn't look like the conservatives have caught on to this kind of approach yet.)

Actually, if you look at religious proselytization, you'll find that these techniques are all pretty well-known, albeit under different names and with different purposes. And while this isn't actually synonymous with political canvassing, it often has political spillover effects.

If you wanted, you could argue this the other way: left-oriented activism is more like proselytization than it is like factual persuasion. And LessWrong, in particular, has a ton of quasi-religious elements, which means that its recruitment strategy necessarily looks a lot like evangelism.

Nit: your last word should be "credible", not "credulous".

I think you're underestimating the effort required to understand this scenario for someone who doesn't already follow poker. I am a lifelong player of trick-taking games (casually, at the kitchen table with family members), but I've never played poker, and here's how the play description reads to me:

"called an all-in shove"

Only a vague idea of what this means, based on the everyday idiom of being "all-in".

"with the jack of clubs and four of hearts on a board"

Don't know what it means for these to be "on a board".

"reading ThTc9c3h"

Gibberish.

"her jack high held against Adelstein’s eight of clubs and seven of clubs"

Only vaguely comprehensible. I don't know poker's hand-scoring rules.

Additional details that are necessary to interpret the situation: is the deck continually shuffled, or are multiple hands played off of the same shuffle? (Implicitly: are there card-counting strategies that provide relevant information?) What are the point rules / rank of hands? How does suit interact with card rank? Is there a concept of trump? What was the sequence of bets leading up to the play in question? How typical is this behavior in high-level play? How high-level are these people? Robbi is called a "recreational" player -- does this mean "top-level amateur" or "low-level pro", or something else?

In the absence of these details, all I really get is "Robbi made a risky play off a mediocre hand, and won big". And yes, this is Bayesian evidence in favor of cheating, but how strong that evidence is depends heavily on all of the unknown details mentioned above. At the same time, the fact that no one identified the means by which the cheating occurred, despite heavy scrutiny, is Bayesian evidence against cheating.
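
For concreteness, here's a minimal sketch (in Python, with made-up numbers purely for illustration, not estimates of the actual hand) of how the two pieces of evidence combine as Bayes factors:

```python
# Sketch of combining evidence in the odds form of Bayes' theorem.
# Every number below is a placeholder for illustration, not an estimate of the real situation.

def posterior_odds(prior_odds, bayes_factors):
    """Multiply the prior odds (cheating : not cheating) by each likelihood ratio."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

prior = 1 / 99          # assume 1% of players in this spot would be cheating
bf_strange_call = 20.0  # P(this call | cheating) / P(this call | honest); depends on the unknown details above
bf_no_mechanism = 0.2   # no method found despite heavy scrutiny; likelier if she's honest

odds = posterior_odds(prior, [bf_strange_call, bf_no_mechanism])
prob = odds / (1 + odds)
print(f"posterior odds {odds:.3f}, P(cheating) {prob:.1%}")
```

With these placeholder numbers the call moves the needle but stays well short of anything ban-worthy; plug in different likelihood ratios and the conclusion changes, which is the point about the unknown details.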

My operational decision would be that this is enough evidence to subject Robbi to heightened scrutiny in future tournaments, but not enough to ban her or claw back her winnings. This is a good test, but maybe not as good as you think it is, due to the amount of uncommon background knowledge required.

I understood that. I guess I should have been more explicit about my belief that the amount of training data needed to produce a viable universal simulator would be "all of the text ever created", and then several orders of magnitude more.

[This comment is no longer endorsed by its author]

Eliezer... points out that in order to predict the next word in all the text on the internet and all similar text, you need to be able to model the processes that are generating that text

I wanted to add this comment to the original post, but there were already dozens of other comments by the time I got to it and I figured the effort would have been wasted.

EY's original post is correct in its narrow claim, but wildly misleading in its implications. He's correct that reliably predicting the next word in a previously-unseen text is superhuman, and would require simulation and modeling that would be staggering in its implications. But insofar as that is the goal, how close is GPT to actually achieving it? How well does GPT predict the next token in an unknown string in contexts where English syntax gives you many degrees of freedom?

Answer: it's terrible! Its failure rate approaches 100%! (Again, excluding contexts where syntactic or semantic constraints give you very few degrees of freedom.) It is not even beginning to attempt the kinds of simulation and modeling that success would imply. What it can do is produce text that matches the statistical distribution of human text, including non-local correlations (i.e. semantics) and, to a certain degree, the statistical idiosyncrasies of specific writers (i.e. style), and it turns out that getting even that far is pretty impressive. It's also pretty impressive that you can treat "predict the next token" as the goal and get this much good out of it while still being bad at actually predicting the next token. But the training data that GPT has, while enough to teach it something about syntax and semantics, is not remotely close to the amount or kind of data that would be necessary to teach it to simulate the universe.
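
As a rough way to put a number on that, here is a sketch of measuring top-1 next-token accuracy (it assumes the Hugging Face transformers library and uses GPT-2 as a stand-in for "GPT"; the sample text is arbitrary):

```python
# Sketch: how often does a GPT-style model's top-1 prediction match the token that actually came next?
# Assumes the Hugging Face `transformers` library; GPT-2 stands in for "GPT" here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Any previously-unseen prose would do here; longer samples give a steadier estimate."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Compare the model's top guess at each position with the token that actually followed.
predictions = logits[0, :-1].argmax(dim=-1)
targets = ids[0, 1:]
accuracy = (predictions == targets).float().mean().item()
print(f"top-1 next-token accuracy on this sample: {accuracy:.1%}")
```

The number this prints depends heavily on the sample: heavily constrained passages inflate it, open-ended prose deflates it, which is exactly the distinction the parenthetical above is drawing.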

The EY article boils down to "if GPT-Omega were an omniscient god that knew everything you were going to say before you said it, would that be freaky or what". Yeah, bro, it would be freaky. But that has nothing to do with what GPT can actually do.

[This comment is no longer endorsed by its author]