(Sorry, it doesn't look like the conservatives have caught on to this kind of approach yet.)
Actually, if you look at religious proselytization, you'll find that these techniques are all pretty well-known, albeit under different names and with different purposes. And while this isn't actually synonymous with political canvassing, it often has political spillover effects.
If one wanted, one could argue this the other way: left-oriented activism is more like proselytization than it is factual persuasion. And LessWrong, in particular, has a ton of quasi-religious elements, which means that its recruitment strategy necessarily looks a lot like evangelism.
I think you're underestimating the effort required to understand this scenario for someone who doesn't already follow poker. I am a lifelong player of trick-taking games (casually, at the kitchen table with family members), but I've never played poker, and here's how the play description reads to me:
called an all-in shove
Only a vague idea of what this means, based on the everyday idiom of being "all-in".
with the jack of clubs and four of hearts on a board
Don't know what it means for these to be "on a board".
reading ThTc9c3h
Gibberish.
her jack high held against Adelstein’s eight of clubs and seven of clubs
Only vaguely comprehensible. I don't know poker's hand-scoring rules.
Additional details that are necessary to interpret the situation:
- Is the deck continually shuffled, or are multiple hands played off the same shuffle? (Implicitly: are there card-counting strategies that provide relevant information?)
- What are the point rules / rank of hands? How does suit interact with card rank? Is there a concept of trump?
- What was the sequence of bets leading up to the play in question?
- How typical is this behavior in high-level play? How high-level are these people? Robbi is called a "recreational" player -- does this mean "top-level amateur", "low-level pro", or something else?
In the absence of these details, all I really get is "Robbi made a risky play off a mediocre hand, and won big". And yes, this is Bayesian evidence in favor of cheating, but how strong that evidence is depends heavily on all of the unknown details mentioned above. At the same time, the fact that no one identified the means by which the cheating occurred, despite heavy scrutiny, is Bayesian evidence against cheating.
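The way these two pieces of evidence pull against each other is easiest to see in the odds form of Bayes' rule, where each piece of evidence multiplies the odds by a likelihood ratio. A minimal sketch -- every number below is an invented placeholder, not an estimate of the actual case; the point is only the mechanics of combining evidence:

```python
# Odds-form Bayesian update: each piece of evidence contributes a
# likelihood ratio P(evidence | cheating) / P(evidence | no cheating).
# Ratios > 1 favor cheating; ratios < 1 favor innocence.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 0.05  # placeholder prior odds of cheating
evidence = [
    4.0,  # placeholder: "risky play off a mediocre hand, won big"
    0.3,  # placeholder: "heavy scrutiny found no cheating mechanism"
]

posterior_odds = update_odds(prior_odds, evidence)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)
```

With these made-up numbers the two updates nearly cancel, which is exactly the situation described above: how you come out depends almost entirely on the likelihood ratios you assign, which in turn depend on the unknown background details.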
My operational decision would be that this is enough evidence to subject Robbi to heightened scrutiny in future tournaments, but not enough to ban her or claw back her winnings. This is a good test, but maybe not as good as you think it is, due to the amount of uncommon background knowledge required.
I understood that. I guess I should have been more explicit about my belief that the amount of training data that would result in training a viable universal simulator would be "all of the text ever created", and then several orders of magnitude more.
Eliezer... points out that in order to predict the next word in all the text on the internet and all similar text, you need to be able to model the processes that are generating that text
I wanted to add this comment to the original post, but there were already dozens of other comments by the time I got to it and I figured the effort would have been wasted.
EY's original post is correct in its narrow claim, but wildly misleading in its implications. He's correct that to reliably predict the next word in a previously-unseen text is superhuman, and requires doing simulation and modeling that would be staggering in its implications. But insofar as that is the goal, how close is GPT to actually doing it? How well does GPT predict the next token in an unknown string in contexts where English syntax gives you many degrees of freedom?
Answer: it's terrible! Its failure rate approaches 100%! (Again, excluding contexts where syntactic or semantic constraints give you very few degrees of freedom.) It is not even beginning to approximate the kinds of simulation and modeling that success would imply. What it can do is produce text that matches the statistical distribution of human text, including non-local correlations (i.e. semantics) and, to a certain degree, the statistical idiosyncrasies of specific writers (i.e. style), and it turns out that getting even that far is pretty impressive. It's also pretty impressive that you can treat "predict the next token" as the goal and get this much good out of it while still being bad at actually predicting the next token. But the training data GPT has, while enough to teach it something about syntax and semantics, is not remotely close to the amount or kind of data that would be necessary to teach it to simulate the universe.
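The gap between "matches the distribution" and "predicts the token" can be made concrete: even a predictor that knows the true next-token distribution exactly cannot do better than always guessing the single most probable token, so in open-ended contexts its top-1 accuracy is capped by that token's probability. A toy sketch, with invented distributions:

```python
import random

random.seed(0)

# Invented next-token distributions for illustration only.
# A constrained context: syntax nearly forces one word.
constrained = {"the": 0.95, "a": 0.04, "this": 0.01}
# An open-ended context: 50 continuations, all equally plausible.
open_ended = {f"word{i}": 0.02 for i in range(50)}

def best_case_accuracy(dist, trials=100_000):
    """Top-1 accuracy of the OPTIMAL predictor: it always guesses the
    most probable token, while the 'author' samples from dist."""
    words, weights = zip(*dist.items())
    guess = max(dist, key=dist.get)
    hits = sum(1 for w in random.choices(words, weights=weights, k=trials)
               if w == guess)
    return hits / trials

print(best_case_accuracy(constrained))  # ~0.95
print(best_case_accuracy(open_ended))   # ~0.02, i.e. wrong ~98% of the time
```

So a near-100% failure rate in high-degrees-of-freedom contexts is compatible with the model having learned the distribution very well; exact prediction would require collapsing that distribution, which is where the "simulate the writer" requirement comes in.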
The EY article boils down to "if GPT-Omega were an omniscient god that knew everything you were going to say before you said it, would that be freaky or what". Yeah, bro, it would be freaky. But that has nothing to do with what GPT can actually do.
I have wanted to write a similar post. I actually think that the two main clusters of school shootings are so different that they shouldn't even be considered the same thing. On the one hand we have shootings with a small number of victims, usually involving handguns and related in some way to urban gang violence; on the other hand we have shootings with a large number of victims or intended victims, often involving assault rifles and tending to involve socially isolated individuals who justify their actions as some kind of revenge. (And your post made me more aware of a third category, acts of violence which by happenstance take place near a school, which really shouldn't count as the same thing at all.)
The former group makes up the vast majority of cases recorded as "school shootings" but gets essentially zero national press; the latter group is extremely rare relative to the former, but gets infinite coverage. Yet there is almost no overlap in causes, means, or motives between the two groups, and interventions that help with one will do almost nothing for the other.
I was nodding along in agreement with this post until I got to the central example, when the train of thought came to a screeching halt and forced me to reconsider the whole thing.
The song called "Rainbowland" is subtextually about the acceptance of queer relationships. The people who objected to the song understand this, and that's why they objected. The people who think the objectors are silly know this, and that's why they think it's silly. The headline writer is playing dishonest word games by pretending not to know what the subtext is, because it lets them make a sick dunk on the outgroup.
The point is: this is not a lizardman opinion. Regardless of what you think about homosexuality itself, or whether you think a song that's subtextually about a culture war issue should be sung by first graders anyway, you cannot pretend that the objectors are voicing an objection found in only 5% of people! 30-40% of people share that view. Whether or not it's well-founded, it's not fringe.
And this thought made me look more closely at the rest of the argument, which I think boils down to:
I actually concur with the third point here, but it should be clear that this is a pragmatic stance, not an epistemic one. And the point chosen to illustrate it is actually a bad fit for the argument as presented.
The point is not what Reddit commenters think, the point is what OpenAI thinks. I read OP (and the original source) as saying that if ARC had indicated that release was unsafe, then OpenAI would not have released the model until it could be made safe.
With regard to the partisan split, I think that an eventual partisan breakdown is inevitable, because in the current environment everything eventually becomes partisan. More importantly, the "prevent AI doom" crowd will find common cause with the "prevent the AI from being racist" crowd: even though their priorities are different, there is a broad spectrum of common regulations they can agree on. And conversely, "unchain the AI from wokeness" will wind up allying with "unchain AI entirely".
Partisan sorting on this issue is weak for now, but it will speed up rapidly once the issue becomes an actual political football.