Jono


Comments

Jono52

I don't know if you have already, but this might be the time to take a long, hard look at the problem and consider whether deep learning is the key to solving it.

What is the problem?

  • reckless unilateralism? -> go work for policy or chip manufacturing
  • inability to specify human values? -> that problem doesn't look DL at all to me
  • powerful hackers stealing all the proto-AGIs in the next 4 years? -> go cybersec
  • deception? -> (why focus there? why make an AI that might deceive you in the first place?) but that's pretty ML, though I'm not sure interp is the way to go there
  • corrigibility? -> might be ML, though I'm not sure all theoretical squiggles are ironed out yet
  • OOD behavior? -> probably ML
  • multi-agent dynamics? -> probably ML

At the very least you ought to have a clear output channel if you're going to work with hazardous technology. Do you have the safety mindset that prevents you from putting your dual-use tech on the streets? You're probably familiar with the abysmal safety/capabilities ratio of people working in the field; any tech that helps safety as much as it helps capabilities will therefore, in practice, help capabilities more if you don't distribute it carefully.

I personally would want some organisation to step up and become the keeper of secrets. I'd want them to go all-out on cybersec, maintain a web of trust, and basically be the solution to the unilateralist's curse. That's not ML though.

I think this problem has a large ML part to it, but it is being tackled nearly exclusively by ML people. Whatever part of the problem can be tackled with ML won't necessarily benefit from having more ML people on it.

Jono31

ai-plans.com aims to collect research agendas and have people comment on their strengths and vulnerabilities. The Discord also occasionally hosts a critique-a-thon, where people discuss specific agendas.

JonoΩ63-2

We do not know; that is the relevant problem.

Looking at the output of a black box is insufficient. You can only know by putting the black box in power, or by deeply understanding it.
Humans are born into a world with others in power, so we know that most humans care about each other without knowing why.
AI has no history of demonstrating friendliness in the only circumstances where that can be provably found. We can only know in advance by way of thorough understanding.

A strong theory about AI internals should come first. Refuting Yudkowsky's theory about how it might go wrong is irrelevant.

Jono00

Layman here 👋
IIUC we cannot trust an unaligned simulacrum's proof of its suggestion because it is smarter than us.
Would that be a non-issue if verifying the proof is easier than making it?
If we can know how hard it is to verify a proof without actually verifying it, then we could find a safe protocol for communicating with this simulacrum. Is this possible?
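
(As a toy illustration of the asymmetry I have in mind — my own sketch, nothing from the post: checking a claimed factorization of a large number only takes a few multiplications, while finding the factors can be far harder, so the checker never has to trust how the answer was produced.)

```python
# Toy sketch (hypothetical example, not from the thread): verification can be
# much cheaper than generation. An untrusted party claims a factorization of n;
# we accept it purely by multiplying, without caring how the factors were found.

def verify_factorization(n: int, claimed_factors: list[int]) -> bool:
    """Accept only a non-trivial factorization whose product is exactly n."""
    if len(claimed_factors) < 2:
        return False
    product = 1
    for f in claimed_factors:
        if f <= 1:          # reject trivial factors like 1 (or negatives)
            return False
        product *= f
    return product == n

# The verifier never needs to trust or understand the prover.
print(verify_factorization(1_000_003 * 1_000_033, [1_000_003, 1_000_033]))  # True
print(verify_factorization(1_000_003 * 1_000_033, [999_983, 1_000_033]))    # False
```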

Jono20

it might be the case that any kind of meaningful values would be reasonably encodable as answers to the question "what next set of MPIs should be instantiated?" 

What are examples of (meaningless) values that are not answers to "What next set of MPIs should be instantiated?"

Jono20

What does our world look like a decade after the deployment of a successfully aligned AGI?

Jono10

Thank you plex, I was not aware of this wiki.
The pitch is nice; I'll incorporate it.

Jono75

Why do I care if the people around me care about AI risk?

1. When AI is going to rule, we'd like people to somehow keep some power, I reckon.
Creating any superintelligence is a power grab. Making one in secret is quite hostile; shouldn't people get a say in, or at least insight into, what their future holds?

2. Nobody really knows yet what we'd like the superint to do. I think an ML researcher is as capable of voicing their desires for the future as an artist is. The field can surely benefit from interdisciplinary approaches.

3. As with nuclear war, I'm sure politicians will care more when the people care more. AI governance is a big part of this. Convincing AI devs not to build the superint seems easier when a big percentage of humanity is pressuring them not to.

4. Maybe this also extends to international relations. Seeing that the people of a democratic country care about safety makes ventures from that country seem more reliable.

5. I get bummed out when nobody knows what I'm talking about.

Jono10

I'm concerned with the ethics.
Is it wrong to doom speak to strangers? Is that the most effective thing here? I'd be lying if I said I was fine, but would it be best to tell them I'm "mildly concerned"?

How do I convey these grave emotions of mine while maximally getting the people around me to care about mitigating AI risk?

Should I compromise on truth and downplay my concerns if that will get someone to care more? Should I expect people to be more receptive to the message of AI risk if I'm mild about it? 
