
Jono

Comments

Jono's Shortform
Jono · 3mo · 111

Is everyone dropping the ball on cryonics? I'm considering career directions, and my P(doom | no pause) is high and my P(doom | I work against X-risk) is close enough to my vanilla P(doom) that I wonder whether I should pick up this ball instead.

Winning the power to lose
Jono · 3mo · 132

Directed at the rest of the comment section: cryogenic suspension is an option for those who would die before AGI launch.

If you don't like the odds that your local suspension service preserves people well enough, then you still have the option to personally improve it before jumping to other, potentially catastrophic solutions.

The value difference commenters keep pointing out needs to be far bigger than they represent it to be in order to be relevant in a discussion of whether we should increase X-risk for some other gain.

The fact that we don't live in a world where ~all accelerationists invest in cryo suspension makes me think they are in fact not looking at what they're steering towards.

Fake thinking and real thinking
Jono · 4mo · 10

Tangentially, I fear that the psychological model "human agency originates from trying to correct the world such that it fits our false beliefs about it" is correct.

I hate systematically believing false things, but I do notice that my times of greatest labor have a strong sense of "trying to correct the world towards how it should be", where the "should be" is epistemically flavored. Meditations that aim to grok how the world is not how it should be seem to diminish my drive to rectify the flaw and yield acceptance instead.

Fake thinking and real thinking
Jono · 4mo · 10

Thank you. I want to stress (you already spent some words on this) that we don't always think with the aim of finding truth, or of finding some particular truth.

Thinking about how to achieve some goal looks different from thinking about how best to describe some facet of the world. When evaluating your thoughts (or when someone else evaluates yours), I think it paramount to know what motivated the reasoning in the first place.

My tag (not for getting to think real, but for knowing whether your thought was good) is to understand the constraints the thought was under:
- Who can learn the thought?
- In what way do observers look at the thought? (Is there access to the direct thought, or only to its outcome in some form?)
- What consequences would counterfactual thoughts have had?

The failure mode I'm trying to prevent is misdiagnosing failures in your past thinking because you've forgotten the context under which the thinking occurred. You might judge that some past thought wasn't world-saving enough, but at the time that thought might not have been trying to be that. That's not a flaw of the thought-generation mechanism, but of the drive that invoked it.

More generally, "motivated reasoning" is a horrible term for misguided cognition. Have motives, and know them (not that this post used that phrase).

AI Safety Memes Wiki
Jono · 4mo · 10

Very cool. I'm not seeing a table of contents on aisafety.info, however.

Jono's Shortform
Jono · 5mo · 10

Thank you very much.
I imagined the forecasting AI as not being smart enough to simulate a tiler that knows it is being simulated by us.
Perhaps that constraint is so severe that the forecasts cannot be reliable.

Jono's Shortform
Jono · 5mo · 30

Does this help outer alignment?

Goal: tile the universe with niceness, without knowing what niceness is.

Method

We create:
- a bunch of formulations of what niceness is.
- a tiling AI that, given some description of niceness, tiles the universe with it.
- a forecasting AI that, given a formulation of niceness, a description of the tiling AI, a description of the universe, and some coordinates in the universe, generates a prediction of what the part of the universe at those coordinates looks like after the tiling AI has tiled it with that formulation of niceness.

Following that, we feed our formulations of niceness into the forecasting AI, randomly sample some coordinates, and evaluate whether the resulting predictions look nice (sketched in code below).
From this we infer which formulations of niceness are truly nice.

Weaknesses:
- Can we recognize utopia from randomly sampled predictions about parts of it?
- Our forecasting AI is orders of magnitude weaker than the tiling AI. Can formulations of niceness turn perverse when a smarter agent optimizes for them?

Strengths:
- Less need to solve the ELK problem.
- We have multiple tries at solving outer alignment.
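
As a concreteness check, here is a minimal sketch of the evaluation loop from the Method above, in Python. The `forecaster` object with a `.predict(...)` method and the `judge` callable are hypothetical stand-ins for the forecasting AI's interface and the human evaluation step; nothing here is a real API or implementation.

```python
import random

def evaluate_niceness_formulations(formulations, forecaster, judge,
                                   tiler_description, universe_description,
                                   n_samples=100):
    """Score each candidate formulation of niceness by spot-checking forecasts
    of what the universe would look like after the tiling AI tiles it.

    `forecaster` is assumed to expose a .predict(...) method (the forecasting
    AI); `judge` is a callable standing in for the human evaluation of whether
    a predicted region looks nice. Both interfaces are hypothetical.
    """
    scores = {}
    for formulation in formulations:
        nice_count = 0
        for _ in range(n_samples):
            # Randomly sample coordinates in the universe (placeholder format).
            coords = (random.random(), random.random(), random.random())
            prediction = forecaster.predict(
                niceness=formulation,
                tiler=tiler_description,
                universe=universe_description,
                coordinates=coords,
            )
            if judge(prediction):
                nice_count += 1
        scores[formulation] = nice_count / n_samples
    # Formulations whose sampled forecasts consistently look nice are the ones
    # we would tentatively infer to be "truly nice".
    return scores
```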

Finishing The SB-1047 Documentary In 6 Weeks
Jono · 5mo · 10

I have looked around a bit and have not seen any updates since November, which estimated this would be finished in early February.
Could you give another update, or link a more recent one if it exists?

Jono's Shortform
Jono · 8mo · 30

P(doom) can be approximately measured. 
If reality fluid describes the territory well, we should be able to see close worlds that already died off.

For nuclear war we have some examples.
We can estimate the odds that the Cuban missile crisis or Petrov's decision would have gone badly. If we accept that luck was a huge factor in our surviving those events (or in not encountering more events like them), we can see how unlikely our current world is to still be alive.

A high P(doom) implies that we are about to (or already did) encounter some very unlikely events that worked out suspiciously well for our survival. I don't know how public a registry of events like this should be, but it should exist.
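
To illustrate the arithmetic behind that claim, here is a small sketch with entirely made-up per-event odds; the numbers are placeholders, not estimates from this comment.

```python
# Hypothetical per-event probabilities that things went well, assuming survival
# was mostly luck. The values are illustrative, not historical estimates.
near_miss_survival_odds = {
    "Cuban missile crisis": 0.5,
    "Petrov incident": 0.6,
    "other close calls": 0.7,
}

p_still_alive_if_luck_driven = 1.0
for event, p in near_miss_survival_odds.items():
    p_still_alive_if_luck_driven *= p

# ~0.21 with these made-up numbers: under a "mostly luck" model, most nearby
# worlds are already dead, and we are living in one of the unlikely survivors.
print(f"P(a world like ours is still alive | survival was mostly luck): "
      f"{p_still_alive_if_luck_driven:.2f}")
```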

Our self-reporting murderers or murder-witnesses should be extraordinarily well protected from leaks, however, which in part seems like a software question.

Yes, this seems unlikely to happen, but again, if your P(doom) is high, then we only survive in unlikely worlds. Working on this, to me, seems dignified: a way to make those unlikely worlds a bit less unlikely.

What should I do? (long term plan about starting an AI lab)
Jono · 1y · 52

I don't know if you have already, but this might be the time to take a long and hard look at the problem and consider whether deep learning is the key to solving it.

What is the problem?

  • reckless unilateralism? -> go work in policy or chip manufacturing
  • inability to specify human values? -> that problem doesn't look like DL at all to me
  • powerful hackers stealing all the proto-AGIs in the next 4 years? -> go cybersec
  • deception? -> (why focus there? why build an AI that might deceive you in the first place?) but that's pretty ML, though I'm not sure interp is the way to go there
  • corrigibility? -> might be ML, though I'm not sure all the theoretical squiggles are ironed out yet
  • OOD behavior? -> probably ML
  • multi-agent dynamics? -> probably ML

At the very least you ought to have a clear output channel if you're going to work with hazardous technology. Do you have the safety mindset that prevents you from putting your dual-use tech on the streets? You're probably familiar with the abysmal safety/capabilities ratio of people working in the field; any tech that helps safety as much as capability will therefore in practice help capability more, if you don't distribute it carefully.

I personally would want some organisation to step up to become the keeper of secrets. I'd want them to just go all-out on cybersec, have a web of trust, and basically be the solution to the unilateralist's curse. That's not ML though.

I think this problem has a large ML part to it, but the problem is being tackled nearly solely by ML people. I think whatever part of the problem can be tackled with ML won't necessarily benefit from having more ML people on it.

Posts

- Jono's Shortform · 2 karma · 8mo · 7 comments
- Is anyone developing optimisation-robust interpretability methods? [Question] · 6 karma · 1y · 0 comments
- Closed-Source Evaluations · 15 karma · 1y · 4 comments
- AI demands unprecedented reliability · 22 karma · 2y · 5 comments