Trevor1

Comments

Against Relying on Evolution to Forecast AI Outcomes (Part 1)

5: Evolution could not have succeeded anyways

Evolution had to succeed: for evolution to be noticed and/or modeled by anything at all, the neurons had to line up correctly, even if the chance of neurons randomly forming a working general intelligence, anywhere, ever, was one in a trillion. Because observers only exist where evolution succeeded, the fact that we came from neuron brute-forcing doesn't tell us much about whether neuron brute-forcing can create general intelligence.

Animals and insects aren't evidence at all: given that intelligence evolved somewhere, there would be plenty of less-intelligent offshoots anyway.

DeepMind alignment team opinions on AGI ruin arguments

This looks like it's worth a whole lot of funding.

Jack Clark on the realities of AI policy

Reading this was an absolute fever dream. That isn't something that happens with pieces that are mostly or totally inaccurate, like the various clickbait articles from major news outlets covering AI safety.

One thing it seems to get wrong is the typical libertarian impulse to overestimate the sovereignty of major tech companies. In the business world, they are clearly the big fish, but on the international stage, it's pretty clear that their cybersecurity departments are heavily dependent on logistical/counterintelligence support from various military and intelligence agencies. Corporations might be fine at honeypots, but they aren't well known for being good at procuring agents willing to spend years risking their lives by operating behind enemy lines.

There are similar and even stronger counterparts in Chinese tech companies. Both sides of the Pacific have a pretty centralized and consistent obsession with minimizing the risk of falling behind on AI, starting in 2018 at the latest (see page 10).

Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy

AI engineers seem like a particularly sensitive area to me; they're treated very seriously as a key strategic resource.

Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy

Agreed wholeheartedly. Even thinking about trying to pull something like this could directly cause catastrophically bad outcomes.

Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy

When it comes to global industries with ludicrously large national security implications, such as the tech industry, regulation is more likely to function properly if it is good for national security, and less likely to function properly if it is bad for national security.

Weakening domestic AI industries is bad for national security. This point has repeatedly been made completely clear to the AI governance people with relevant experience in this area, since 2018 at the latest and probably years before. Every year since then, they have kept saying that they're not going to slow down AI via regulation.

This is obviously not the full story, but it's probably the most critical driving factor in AI regulation; at the very least, it's the one with the biggest information undersupply in the LessWrong community, and possibly among AI safety people in general. I'm really glad to see people approaching this problem from a different angle, diving into the details on the ground instead of just making broad statements about international dynamics.

$20K In Bounties for AI Safety Public Materials

I meant reading them and making bullet-pointed lists of all the valuable statements, in order to minimize the risk of forgetting something that could have been a valuable addition. You make a very good point that there are pitfalls with this strategy, like ending up with a summary of too many details when the important thing is a galaxy-brained framing that will demonstrate the problem to different types of influential people with the maximum success rate.

I think actually reading (and taking notes on) most or all of the 7 recommended papers you listed is generally a winning strategy, both for winning the contest and for solving alignment in time. But only for people who can do it without forgetting that they're making something optimized and inspirational for minimizing the absurdity heuristic, not fitting as many cohesive logical statements as they can onto a single sheet of paper.

In my experience, constantly thinking about the reader (and even getting test-readers) is a pretty fail-safe way to get that right.

$20K In Bounties for AI Safety Public Materials

For those who missed it, all of the AI Safety Arguments that won the competition can be found here, randomly ordered.

If you or anyone you know is ever having any sort of difficulty explaining anything about AI safety to anyone, these arguments are a good place to look for inspiration; other people have already done most of the work for you.

But if you really want to win the current contest, I highly recommend using bullet-pointed summaries of the 7 works listed in this post, as well as reading the instructions deeply instead of skimming them (because this post literally tells you how to win).

What do ML researchers think about AI in 2022?

It seems to me that the proportion will take a while to move up much further from 3%, since that 3% is low-hanging fruit. But considering the last 3 years of AI improvement (e.g. GPT-3), it would make sense for that proportion to be vastly higher than 3% in 10-20 years.

What do ML researchers think about AI in 2022?

The median respondent’s probability of x-risk from humans failing to control AI was 10%, weirdly more than the median chance of human extinction from AI in general, at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%.

This absolutely reeks of a pretty common question-wording problem, namely that a fairly small proportion of AI workers have ever cognitively processed the concept that very smart AI would be difficult to control (or at least, they have never processed that concept for the 10-20 seconds they would need in order to slap a probability on it).