trevor

I've spent 4 years researching the intersection between US-China affairs and AI governance.

Formerly known as Trevor1

Not to be confused with the Trevor who works at Open Phil.

My main post is AI safety is dropping the ball on Clown Attacks; please read it and make important life decisions immediately. The 4-minute version is here.

Sequences

AI Manipulation Is Already Here

Wiki Contributions

Comments

Someone in the AI safety community (e.g. Yud, Critch, Salamon, you) can currently, within 6 months' effort, write a 20,000-word document that would pass a threshold for a coordination takeoff on Earth, given that 1 million smart Americans and Europeans would read all of it and intend to try out much of the advice (i.e. the doc succeeds given 1m serious reads; it doesn't need to cause 1m serious reads). Copy-pasting already-written documents/posts would count.


Have you read Yudkowsky's Inadequate Equilibria (physical book)? It made a pretty big mistake with the Bank of Japan (see if you can spot it on your own without help! It's fine if you don't), but that mistake doesn't undermine the thesis of the book at all.

My understanding is that Inadequate Equilibria describes the socio-cultural problems China faces quite well, and stacks very well with the conventional literature (in a way that any strategic analyst would find quite helpful; the value added is so great that it's possibly sufficient for most bilingual people to work as highly successful China Watchers, i.e. a huge source of alpha in the China watcher space). It also describes the effects on cultural nihilism quite well.

The only countries (and territories) that have gone from low-income to high-income in the last 70 years (without oil wealth) are South Korea, Taiwan, Singapore (which does have substantial oil wealth), and Hong Kong, although it seems very likely that Malaysia will join that club in the near future.

Love this analysis! I would like to dive deeper than this; do you have a source? The World Bank claims that 3/4 of the global population live in "middle-income countries", which at a glance I do not trust at all; I like your thinking better.

They want data. They strongly prefer data on elites (and data useful/relevant for analyzing and understanding elite behavior) over data on commoners.

We are not commoners.

These aren't controversial statements, and if they are, they shouldn't be.

Yes, this is a sensible response; have you seen Tristan Harris's Social Dilemma documentary? It's a great introduction to some of the core concepts, though not all of them.

Modelling users' behavior is not possible with normal data science or for normal firms with normal data security, but it is something that very large, semi-sovereign firms like the Big 5 tech companies would have a hard time not doing, given such large and diverse sample sizes. Modelling of minds, sufficient to predict people based on other people, is far less deep and is largely a side effect of comparing people to other people with sufficiently large sample sizes. The dynamic is described in this passage I've cited previously.
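If it helps, here's a toy sketch of the "predict people based on other people" dynamic. Everything in it is invented: the cluster structure, the sample sizes, and plain nearest-neighbour matching on an engagement matrix are stand-ins for whatever the real systems actually do.

```python
# Toy illustration: with enough users, one person's reaction to an item can be
# guessed from the reactions of the users most similar to them, without any
# deep model of that person's mind. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)

# 1,000 users in a few latent "taste clusters"; engagement with 50 items is
# the cluster's base preference plus per-user noise.
n_users, n_items, n_clusters = 1000, 50, 5
cluster_prefs = rng.random((n_clusters, n_items))
cluster_of = rng.integers(n_clusters, size=n_users)
engagement = cluster_prefs[cluster_of] + rng.normal(0, 0.1, size=(n_users, n_items))

# Hide one item for one target user and predict it from everyone else.
target, hidden = engagement[0], 10
others = engagement[1:]
observed = np.delete(np.arange(n_items), hidden)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity to the target on the items we can see, then a weighted average
# of the most similar users' scores on the hidden item.
sims = np.array([cosine(target[observed], u[observed]) for u in others])
top = np.argsort(sims)[-50:]
prediction = np.average(others[top, hidden], weights=sims[top])

print(f"predicted engagement: {prediction:.2f}")
print(f"actual engagement:    {target[hidden]:.2f}")
```

The point is just that the similar users carry the prediction; the target's own psychology barely enters into it.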

Generally, inducing mediocrity while on the site is a high priority, but it's mainly about numbness and suppressing higher thought, e.g. the kinds referenced in Critch's takeaways on CFAR and the sequences. They want the reactions to content to emerge from your true self, but they don't want any of the other stuff that comes from higher thinking or self-awareness.

You're correct that an extremely atypical mental state on the platform would damage the data (I notice this makes me puzzled about "doomscrolling"); however, what they're aiming for is a typical state for all users (plus whatever keeps them akratic while off the platform), and for elite groups like the AI safety community, the typical state for the average user is quite a downgrade.

Advertising was big last decade, but with modern systems stable growth is the priority, and maximizing ad purchases would harm users in a visible way; finding the sweet spot is easy if you just don't put much effort into ad matching (plus, users noticing that the advertising is predictive creeps them out, the same issue as making people use the platform 3-4 hours a day). Acquiring and retaining large numbers of users is far harder and far more important, now that systems are advanced enough to compete more against each other (less predictable) than against the user's free time (more predictable, especially now that so much user data has been collected during scandals, but all kinds of things could still happen).

On the intelligence agency side, the big players are probably more interested by now in public sentiment about Ukraine, NATO, elections/democracy, COVID, etc., than in causing and preventing domestic terrorism (I might be wrong about that, though).

Happy to talk or debate further tomorrow.

@Raemon is the Superintelligence FAQ helpful as a short list of terms for Caruso's readers?

Yes, it means inducing conformity. It means making people more similar to the average person while they are in the controlled environment. 

That is currently the best way to improve data quality when you are analyzing something as complicated as a person. Even if you somehow got secure copies of all the sensor data from all the smartphones, at the current technology level you're still better off controlling for variables wherever possible, including within the user's mind.

For example, the trance state people go into when they use social media. Theoretically, you get more information from smarter people when they are thoughtful, but with modern systems it's best to keep their thoughts simple so you can compare their behavior to that of the simpler people who make up the vast majority of the data (and make them lose track of time until 1-2 hours pass, around when the system makes them feel like leaving the platform, which is obviously a trivial task).
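A toy sketch of what I mean by controlling for variables inside the user's mind (the distributions and coefficients are invented): if behavior depends on a stable preference plus a momentary mental state, then pinning the state to a typical value makes the behavior a much cleaner readout of the preference.

```python
# Toy illustration of "controlling for variables" inside the user: behavior
# here depends on a stable preference plus a momentary mental state. The part
# of behavior not explained by the stable preference is much noisier when the
# state varies freely than when it is pinned near a typical value.
# All distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

preference = rng.normal(0, 1, n)   # stable trait (what the system wants to learn)
state = rng.normal(0, 1, n)        # momentary mental state (mood, focus, etc.)
behavior = preference + 1.5 * state + rng.normal(0, 0.2, n)

# Residual spread around the stable preference when the state varies freely:
free_err = np.std(behavior - preference)

# Residual spread when only near-typical mental states are kept (state ~ 0):
typical = np.abs(state) < 0.1
pinned_err = np.std(behavior[typical] - preference[typical])

print(f"residual spread, state uncontrolled: {free_err:.2f}")    # ~1.5
print(f"residual spread, state held typical: {pinned_err:.2f}")  # ~0.2
```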

EDIT: this was a pretty helpful thing to point out; I replaced every instance of the phrase "regression to the mean" with "mediocrity" and "inducing mediocrity".

Strong upvoted! That's the way to think about this.

I read The Three-Body Problem, but not the rest yet (you've guessed my password; I'll go buy a copy).

My understanding of the situation here on the real, not-fake Earth is that having the social graph be this visible and manipulable by invisible hackers does not improve things.

I tried clean and quiet solutions and they straight-up did not work at all. Social reality is a mean motherfucker, especially when it is self-reinforcing, so it's not surprising to see somewhat messy solutions become necessary.

I think I was correct to spend several years (since early 2020) trying various clean and quiet solutions, and watching them not work, until I started to get a sense of why they might not be working.

Of course, maybe the later stages of my failures were just a case of one more person falling through the cracks of the post-FTX Malthusian environment, which twisted EA and AI safety culture out of shape. That made it difficult for a lot of people to process information about X-risk, even in cases like mine where the price tag was exactly $0.

I could have waited longer and made more tries, but that would have meant sitting quietly through more years of slow takeoff with the situation probably not being fixed.

Spoofing and false flag attacks are the name of the game here. We don't actually know if the election bots in 2016 were Russian, just that American agencies selected Russia as the casting target for the big public accusation. Authoritarian regimes regularly blame Western intelligence agencies for all sorts of domestic problems in order to legitimize their regime and deflect blame for what is actually an embarrassing internal conflict; it wouldn't be surprising if it often goes both ways.

Notably, Microsoft contributed substantially, even though Microsoft itself is a state-affiliated threat actor. Microsoft could have been all 5 of these, and I doubt OpenAI would have had any chance of finding out on its own.

I'm the clown attacks guy. 

I'm not really fond of Connor's current culture-war-esque public persona. Some relatively minor issues with Yud's 2000s personality alone (it's probably a neurotype thing, not unusual in rare extraordinary people since they often failed to conform with other children, and also something he did a great job working on over the years, including the routine strategic jettison of the fedora) resulted in something like a dozen people who are way too fond of spending way too much of their time hating on him. The internet doesn't particularly dunk on Bostrom.

If everything goes well, the culture war types will probably look back at Connor's persona and think he was very based. But that requires everything to go well, and I'm doubtful that Connor's current persona will be net positive towards making things go well. It's not a good look for AI safety; the Open Phil and FHI people aren't consistently friendly and thoughtful because they're EA, it's because it's instrumentally convergent to work on your personality if you're serious about saving a world run by a social primate species.
