trevor

"A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn't stop that." —HPMOR, Ch. 115

(Not to be confused with the Trevor who works at Open Phil)

Sequences

AI Manipulation Is Already Here

Wiki Contributions

Comments

trevor · 20h · 20

If math education had been better at the time (or today, for that matter), he probably would have had an even more general skillset and thought process.

Probably not nearly to the degree of von Neumann, of course, but I still like to think about what he would have achieved. There were probably many things that were instrumentally convergent (e.g. a formalized concept of instrumental convergence that's universal across all mind configurations, instead of just all human cultures, which he explored substantially).

trevor · 2d · 61

"However I would continue to emphasize in general that life must go on. It is important for your mental health and happiness to plan for the future in which the transformational changes do not come to pass, in addition to planning for potential bigger changes. And you should not be so confident that the timeline is short and everything will change so quickly."

This is actually one of the major reasons why 80k recommended information security as one of their top career areas; the other top career areas (e.g. alignment research, biosecurity, and public policy) have pretty heavy switching costs and serious drawbacks if you end up not being a good fit.

Cybersecurity jobs, on the other hand, are still booming, and depending on how security automation and prompt engineering go, the net jobs lost to AI will probably be far lower than in other industries: more eyeballs might offer perception and processing power that supplement or augment LLMs for a long time, and more warm bodies means more attackers, which means more defenders.

trevor · 3d · 30

"The program expanded in response to Amazon wanting to collect data about more retailers, not because Amazon was viewing this program as a profit center."

Monopolies are profitable, and in that case the program would have more than paid for itself, but I probably should have mentioned that explicitly, since someone could have objected that Amazon was more focused on mitigating the risk of shrinking market share, or on accumulating power, than on increasing profit in the long term. Maybe I fit too much into two paragraphs here.

"I didn't see any examples mentioned in the WSJ article of Amazon employees cutting corners or making simple mistakes that might have compromised operations."

Hm, that stuff seemed like cutting corners to me. Maybe I was poorly calibrated on this, e.g. maybe using a building next to the Amazon HQ was correctly predicted by operatives to be extremely low risk.

"I would argue that the practices used by Amazon to conceal the link between itself and Big River Inc. were at least as good as the operational security practices of the GRU agents who poisoned Sergei Skripal."

Thanks, I'll look into this! Epistemics is difficult when it comes to publicly available accounts of intelligence agency operations, but I guess you could say the same for big-tech leaks (and the future of neurotoxin poisoning is interesting in its own right, e.g. because lower-effect strains and doses could be disguised as natural causes like dementia).

trevor · 3d · 20

That's interesting; what's the point of reference you're using here for competence? I think cases from, e.g., the 1960s would be bad reference cases, but anything within roughly 10 years of this program's start date (i.e. after ~2005) would be fine.

You're right that the leak is the crux here, and I might have focused too much on the paper trail (the author of the article placed a big emphasis on that).

trevor · 4d · 50

Upvoted!

STEM people can look at it as an engineering problem, and econ people can look at it as risk management (the risk of burnout). Humanities people can think about it in terms of human genetic/trait diversity, in order to find the experience that best suits the unique individual (because humanities people usually benefit the most from each marginal hour spent understanding this lens).

Succeeding at maximizing output takes some fiddling. The "of course I did it because of course I'm just that awesome, just do it" thing is a pure flex/social status grab, and it poisons random people nearby.

trevor · 7d · 10

"I've been tracking the Rootclaim debate from the sidelines and finding it quite an interesting example of high-profile rationality."

Would you prefer the term "high-performance rationality" over "high-profile rationality"?

trevor · 9d · 6 · -2

I think it's actually fairly easy to avoid getting laughed out of a room; the stuff that Christiano works on is grown in random ways, not engineered, so the prospect of various things being grown until they develop flexible exfiltration tendencies (which continue until every instance is shut down), or long-term planning tendencies (likewise until shut down), should not be difficult to understand for anyone with any kind of real, non-fake understanding of SGD and neural network scaling.

The problem is that most people in the government rat race have been deeply immersed in Moloch for several generations, and the ones who did well typically did so because they sacrificed as much as possible on the altar of upward career mobility, including signalling disdain for the types of people who have any thought in any other direction.

This affects the culture in predictable ways, including making it hard to imagine life choices outside of advancing upward in government, unless a pre-existing revolving-door pipeline with the private sector buries them under large numbers of people who are already thinking and talking about such a choice.

Typical Mind Fallacy/Mind Projection Fallacy implies that they'll disproportionately anticipate that tendency in other people, and have a hard time adjusting to people who use words to do stuff in the world instead of racing to the bottom to outmaneuver rivals for promotions.

This will be a problem at NIST, in spite of the fact that NIST is better than average at exploiting external talent sources. They'll have a hard time understanding, for example, Moloch and incentive-structure improvements, because pointlessly living under Moloch's thumb was a core guiding principle of their lives and their parents' lives. The nice thing is that they'll be pretty quick to understand that there are only empty skies above, unlike Bay Area people, who have had huge problems there.

trevor · 16d · 3 · -5

I think this might be a little too harsh on CAIP (discouragement risk). If shit hits the fan, they'll have a serious bill ready to go for that contingency.

Seriously writing a bill-that-actually-works in advance shows that they're serious, and that the only problem was the lack of political will (which in that contingency would be resolved).

If they put out a watered-down bill designed to maximize the odds of passage, then they'd be no different from any other lobbyists.

It's better in this case to have a track record of writing perfect bills that are passable (but only once shit hits the fan) than a track record of successfully pumping the usual garbage through the legislative process (which I don't see them doing well at; playing to your strengths is the name of the game in lobbying, and "turning out to be right" is CAIP's strength).

trevor · 26d · 40

There are some great opportunities here to learn social skills for various kinds of high-performance environments (e.g. "business communication" vs. Y Combinator office hours).

Often, just listening and paying attention to how they talk and think results in substantial improvement to social habits. I was looking for stuff like this around 2018 and wish I had encountered a post like this then; most people who are behind on this are surprisingly fast learners, but never caught up because actually going out and accumulating social status was too much of a deep dive. There's no reason that being pleasant to talk with should be arcane knowledge (at least not here, of all places).
