SoerenMind's Comments

AGI in a vulnerable world

I'm using the colloquial meaning of 'marginal' = 'not large'.

AGI in a vulnerable world

Hmm, in my model most of the x-risk is gone if there is no incentive to deploy. But I expect actors will deploy systems because those systems are aligned with a proxy, which at least yields short-term gains. Maybe the crux is that you expect these actors to suffer a large private harm (death), whereas I expect a small private harm (for each system, a marginal distributed harm spread across all of society)?

AGI in a vulnerable world

I agree that coordination between mutually aligned AIs is plausible.

I think such coordination is less likely in our example because we can probably anticipate and avoid it for human-level AGI.

I also think there are strong commercial incentives to avoid building mutually aligned AGIs. You can't sell (access to) a system if there is no reason to believe the system will help your customer. Rather, I expect systems to be fine-tuned for each task, as in the current paradigm. (The systems may successfully resist fine-tuning once they become sufficiently advanced.)

I'll also add that two copies of the same system are not necessarily mutually aligned. See for example debate and other self-play algorithms.
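To make this concrete, here's a minimal toy sketch (the judge and policy below are made-up stand-ins, not any real debate implementation): in a zero-sum self-play game like debate, two copies of one policy receive opposite rewards, so neither copy benefits from helping the other.

```python
# Toy sketch of zero-sum self-play: two copies of the same policy
# get opposite rewards, so identical weights != mutual alignment.

def judge(argument_a: str, argument_b: str) -> float:
    """Hypothetical judge: positive score means debater A wins."""
    return 1.0 if len(argument_a) > len(argument_b) else -1.0  # toy stand-in

def debate_round(policy, question: str):
    # Both debaters are the *same* policy: two copies of one system.
    argument_a = policy(question, side="A")
    argument_b = policy(question, side="B")
    score = judge(argument_a, argument_b)
    return score, -score  # zero-sum: copy A's gain is exactly copy B's loss

toy_policy = lambda q, side: f"{side}: my answer to {q!r}" + ("!" if side == "A" else "")
print(debate_round(toy_policy, "Is X safe?"))  # -> (1.0, -1.0)
```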

AGI in a vulnerable world

This reasoning can break if deployment turns out to be very cheap (i.e. low marginal cost compared to the fixed cost); then there will be lots of copies of the most impressive system, and it matters a lot who uses those copies. Are they kept secret and deployed only for internal use? Or are they sold in some form? (E.g. the supplier sells access to its system so that customers can fine-tune it for their own tasks, such as financial trading.)
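To illustrate with made-up numbers: with a large fixed cost and a low marginal cost, the average cost per copy keeps falling as more copies are deployed, which is what pushes toward lots of copies.

```python
# Back-of-the-envelope sketch with made-up numbers: when marginal
# deployment cost is small relative to the fixed cost, per-copy cost
# keeps falling with scale, favoring mass replication.

FIXED_COST = 100e9   # assumed one-off R&D cost ($)
MARGINAL_COST = 1e4  # assumed cost to deploy one more copy ($)

def avg_cost_per_copy(n_copies: int) -> float:
    return FIXED_COST / n_copies + MARGINAL_COST

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9,} copies -> ${avg_cost_per_copy(n):,.0f} per copy")
# 1 copy     -> ~$100,000,010,000 per copy
# 1,000      -> ~$100,010,000 per copy
# 1,000,000  -> ~$110,000 per copy
```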

AGI in a vulnerable world

"And once there is at least one AGI running around, things will either get a lot worse or a lot better very quickly."

I don't expect the first AGI to have that much influence (assuming gradual progress). Here's an example of what fits my model: there is one giant-research-project AGI that costs $10b to deploy (and maybe $100b in R&D), 100 slightly worse pre-AGIs that cost perhaps $100m each to deploy, and 1m again slightly worse pre-AGIs that cost $10k per copy. So at any point in time we have a lot of AI systems that, together, are more powerful than the small number of most impressive systems.
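Spelling out the arithmetic in that example: each tier's total deployment spend comes to the same $10b, so to the extent capability roughly tracks spend (an assumption), the many cheap systems collectively rival the flagship.

```python
# The comment's numbers, spelled out: every tier's aggregate
# deployment spend is $10b, so the cheap systems together
# plausibly match the single most impressive one.

tiers = [
    ("giant-research-project AGI", 1,         10e9),
    ("pre-AGIs (slightly worse)",  100,       100e6),
    ("pre-AGIs (worse still)",     1_000_000, 10e3),
]

for name, count, cost_per_copy in tiers:
    total = count * cost_per_copy
    print(f"{name:<30} {count:>9,} x ${cost_per_copy:,.0f} = ${total / 1e9:.0f}b")
```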

AGI in a vulnerable world

Small teams can also get cheap access to impressive results by buying it from large teams. A large team will be pushed to set a low price if it has competitors who also sell to many customers, since rivals can undercut it.
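A stylized sketch of that undercutting logic (textbook Bertrand-style competition with made-up numbers, not a claim about actual AI pricing):

```python
# Bertrand-style undercutting: identical sellers keep undercutting
# each other until price reaches marginal cost, so access gets cheap.

MARGINAL_COST = 10.0  # assumed cost of serving one more customer
UNDERCUT = 1.0        # assumed size of each undercutting step

def competitive_price(start_price: float) -> float:
    price = start_price
    while price - UNDERCUT >= MARGINAL_COST:
        price -= UNDERCUT  # a rival undercuts; everyone matches
    return price

print(competitive_price(100.0))  # -> 10.0, i.e. price ~ marginal cost
```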

What would be the consequences of commoditizing AI?

I'd be pretty interested in your ideas about how to commoditize AI.

March Coronavirus Open Thread

Right now I expect they just used hospital admission forms. If I were self-reporting 5 pages of medical history while critically ill, I'd probably skip some fields. Interesting that they did find high rates of diabetes etc., though.

March Coronavirus Open Thread

Data point: There were no asthma patients among a group of 140 hospitalized COVID-19 cases in Wuhan.

But nobody had any other allergic disease either. No hay fever? Seems curious.
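A rough way to quantify the surprise, assuming an illustrative (not sourced) background asthma prevalence and independence between cases:

```python
# How surprising is 0 asthma patients out of 140? With background
# prevalence p and independent cases, P(zero) = (1 - p) ** 140.
# The prevalence values below are assumed for illustration.

n = 140
for p in (0.01, 0.04, 0.08):
    p_zero = (1 - p) ** n
    print(f"prevalence {p:.0%}: P(0 of {n}) = {p_zero:.5f}")
# 1% -> ~0.245; 4% -> ~0.003; 8% -> ~0.00001
```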
