jacob_cannell

jacob_cannell's Comments

Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world?

The evolution of the human mind did not create a better world from the perspective of most species of the time - just ask the dodo, most megafauna, countless other species, etc. In fact, the evolution of humanity was/is a mass extinction event.

Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument

Agreed the quoted "we found" claim overreaches. The paper does have a good point though: the recalcitrance of further improvement can't be modeled as a constant, it necessarily scales with current system capability. Real world exponentials become sigmoids; mold growing in your fridge and a nuclear explosion are both sigmoids that look exponential at first: the difference is a matter of scale.

Really understanding the dynamics of a potential intelligence explosion requires digging deep into the specific details of an AGI design vs the brain in terms of inference/learning capabilities vs compute/energy efficiency, future hardware parameters, etc. You can't show much with vague, broad-stroke abstractions.
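To make the mold/nuke comparison concrete, here is a minimal toy sketch (the dynamics and numbers are my own illustration, not from the paper): with constant recalcitrance, capability grows exponentially; with recalcitrance that rises as capability approaches some ceiling, you get a sigmoid that is indistinguishable from the exponential early on.

```python
import numpy as np

# Toy illustration with made-up numbers (not from the paper): constant
# recalcitrance gives a pure exponential, while recalcitrance that rises
# with capability gives a sigmoid that only *looks* exponential at first.
def grow(steps=200, dt=0.1, ceiling=None):
    c = 1.0  # current system capability
    history = []
    for _ in range(steps):
        # recalcitrance = resistance to further improvement
        recalcitrance = 1.0 if ceiling is None else 1.0 / (1.0 - c / ceiling)
        c += dt * c / recalcitrance  # dC/dt = C / recalcitrance
        history.append(c)
    return np.array(history)

exp_curve = grow()                # constant recalcitrance -> exponential
sig_curve = grow(ceiling=1000.0)  # rising recalcitrance -> sigmoid
for t in (20, 50, 100, 200):
    # nearly identical early on, wildly different once the ceiling bites
    print(t, exp_curve[t - 1], sig_curve[t - 1])
```

The local growth rate alone can't distinguish the two regimes; what matters is where the ceiling sits relative to the current system.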

Open thread, Feb. 13 - Feb. 19, 2017

The level of misunderstanding in these types of headlines is what is scary. The paper is actually about a single simple model trained for a specific purpose, unrelated to the hundreds of other models various DeepMind researchers have trained. But somehow that all too often gets reduced to "DeepMind's AI", as if it were a monolithic thing. And here it's even worse: the fictional monolithic AI and DeepMind the company are conflated into one.

Choosing prediction over explanation in psychology: Lessons from machine learning

If you instead claim that the "input" can also include observations about interventions on a variable...

Yes - general prediction - i.e. a full generative model - can already encompass causal modelling, avoiding any distinctions between dependent/independent variables: one can learn to predict any variable conditioned on all previous variables.

For example, consider a full generative model of an Atari game, which includes both the video and the control input (from human play, say). Learning to predict all future variables from all previous ones automatically entails learning the conditional effects of actions.

For medicine, the full machine learning approach would entail using all available data (test measurements, diet info, drugs, interventions, etc.) to learn a full generative model, which can then be conditionally sampled on any 'action variables' and integrated to generate recommended high-utility interventions.
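As a toy sketch of what "conditioning a full generative model on action variables" means (my own illustrative example, not any particular system): treat logged actions as just more variables in the sequence, fit the joint model, then query it conditioned on a chosen action.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def environment_step(state, action):
    # hypothetical 1-D world: the action nudges the state, plus some noise
    return int(np.clip(state + action + rng.choice([-1, 0, 1]), 0, 9))

# collect (state, action, next_state) data from random "play"
counts = defaultdict(lambda: defaultdict(int))
state = 5
for _ in range(20000):
    action = int(rng.choice([-1, 1]))
    nxt = environment_step(state, action)
    counts[(state, action)][nxt] += 1
    state = nxt

def predict(state, action):
    # conditional distribution P(next_state | state, action) read off the joint counts
    c = counts[(state, action)]
    total = sum(c.values())
    return {s: n / total for s, n in sorted(c.items())}

# conditioning on the action variable answers "what happens if we do X?"
print(predict(5, +1))
print(predict(5, -1))
```

One caveat: because the logged actions here are random, the learned conditional matches the interventional distribution - with confounded logs you run straight into the RCT objection quoted below.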

then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial)

In any practical near-term system, sure. In theory though, a powerful enough predictor could learn enough of the world's physics to invent de novo interventions whole cloth - ex: AlphaGo playing new moves that weren't in its training set, essentially invented/learned from internal simulations.

Progress and Prizes in AI Alignment

I came to a similar conclusion a while ago: it is hard to make progress in a complex technical field when progress itself is unmeasurable or, worse, ill-defined.

Part of the problem may be cultural: most working in the AI safety field have math or philosophy backgrounds. Progress in math and philosophy is intrinsically hard to measure objectively; success is mostly about having great breakthrough proofs/ideas/papers that are widely read and well regarded by peers. If your main objective is to convince the world, then this academic system works fine - ex: Bostrom. If your main objective is to actually build something, a different approach is perhaps warranted.

The engineering-oriented branches of academia (and I include comp sci in this) have a very different reward structure. You can publish to gain social status just as in math/philosophy, but if your idea also has commercial potential there is the powerful additional motivator of huge financial rewards. So naturally there is far more human intellectual capital going into comp sci than math, and more into deep learning than AI safety.

In a sane world we'd realize that AI safety is a public good of immense value that probably requires large-scale coordination to steer the tech-economy towards solving. The X-prize approach essentially is to decompose a big long term goal into subgoals which are then contracted to the private sector.

The high level abstract goal for the Ansari XPrize was "to usher in a new era of private space travel". The specific derived prize subgoal was then "to build a reliable, reusable, privately financed, manned spaceship capable of carrying three people to 100 kilometers above the Earth's surface twice within two weeks".

AI safety is a huge bundle of ideas, but perhaps the essence could be distilled down to: "create powerful AI which continues to do good even after it can take over the world."

For the Ansari XPrize, the longer term goal of "space travel" led to the more tractable short term goal of "100 kilometers above the Earth's surface twice within two weeks". Likewise, we can replace "the world" in the AI safety example:

AI Safety "XPrize": create AI which can take over a sufficiently complex video game world but still tends to continue to do good according to a panel of human judges.

To be useful, the video game world should be complex in the right ways: it needs to have rich physics that agents can learn to control, it needs to permit/encourage competitive and cooperative strategic complexity similar to that in the real world, etc. So more complex than Pac-Man, but simpler than the Matrix. Something in the vein of a Minecraft mod might have the right properties - but there are probably even more suitable open-world MMO games.

The other constraint on such a test is that we want the AI to be superhuman in the video game world, but not in our world (yet). Clearly this is possible - a la AlphaGo. But naturally the more complex the video game world is in the direction of our world, the harder the goal becomes and the more dangerous it gets.

Note also that the AI should not know that it is being tested; it shall not know it inhabits a simulation. This isn't likely to be any sort of problem for the AI we can actually build and test in the near future, but it becomes an interesting issue later on.

DeepMind is now focusing on StarCraft and OpenAI has Universe, so we are already on a related path. Competent AI for open-ended 3D worlds with complex physics - like Minecraft - is still not quite here, but is probably realizable in just a few years.

[Link] White House announces a series of workshops on AI, expresses interest in safety

Other way around. Europe started the HBP first, then the US announced the BI. The HBP is centered around Markram's big sim project. The BI is more like a bag of somewhat related grants, focusing more on connectome mapping. From what I remember, both projects are long term, and most of the results are expected to be 5 years out or so, but they are publishing along the way.

Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited

We are in a vast, seemingly-empty universe. Models which predict the universe should be full of life should be penalised with a lower likelihood.

The only models we can rule out are those which predict that the universe is full of life which leads to long-lasting civs which expand physically, use lots of energy, and rearrange matter on stellar scales. That's an enormous number of conjunctions/assumptions about future civs. Models where the universe is full of life, but life leads to tech singularities which end physical expansion (transcension), perfectly predict our observations, as do models where civs die out, as do models where life/civs are rare, and so on...

But this is all a bit off-topic now because we are ignoring the issue I was responding to: the evidence from the timing of the origin of life on earth

If we find that life arose instantly, that is evidence which we can update our models on, and it leads to different likelihoods than finding that life took 2 billion years to evolve on earth. The latter indicates that abiogenesis is an extremely rare chemical event that requires a huge amount of random molecular computation. The former indicates otherwise.

Imagine creating a bunch of huge simulations that generate universes, and exploring the parameter space until you get something that matches earth's history. The time taken for some evolutionary event reveals information about the rarity of that event.
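A toy version of that simulation idea, with numbers made up purely for illustration: model abiogenesis as a Poisson process with some rate per billion years on a habitable planet, and compare how probable "life within the first ~0.5 Gyr" is under an 'easy' versus a 'hard' rate.

```python
import numpy as np

# Illustrative assumptions only - the rates and time window are made up.
def p_life_within(t_gyr, rate):
    # P(at least one abiogenesis event by time t) for a Poisson process
    return 1.0 - np.exp(-rate * t_gyr)

easy_rate, hard_rate = 2.0, 0.01  # hypothetical events per Gyr per habitable planet
window = 0.5                      # life appears within ~0.5 Gyr of habitability

print(p_life_within(window, easy_rate))  # ~0.63
print(p_life_within(window, hard_rate))  # ~0.005
```

An early origin is roughly a hundred times more probable under the 'easy' rate than the 'hard' one, which is exactly the sense in which the timing of an event carries information about its rarity.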

Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited

"Anthropic selection bias" just filters out observations that aren't compatible with our evidence. The idea that "anthropic selection bias" somehow equalizes the probability of any models which explain the evidence is provably wrong. Just wrong. (There are legitimate uses of anthropic selection bias effects, but they come up in exotic scenarios such as simulations.)

If you start from the perspective of an ideal Bayesian reasoner - a la Solomonoff - you only consider theories/models that are compatible with your observations anyway.

So there are models where abiogenesis is 'easy' (which is really too vague - so let's define that as a high transition probability per unit time, over a wide range of planetary parameters).

There are also models where abiogenesis is 'hard' - low probability per unit time, and generally more 'sparse' over the range of planetary parameters.

By Bayes' Rule, we have: P(H|E) = P(E|H)P(H) / P(E)

We are comparing two hypotheses, H1 and H2, so we can ignore P(E) - the marginal probability of the evidence - and we have:

P(H1|E) ∝ P(E|H1) P(H1)

P(H2|E) ∝ P(E|H2) P(H2)

(∝ means 'proportional to')

Assume for argument's sake that the model priors are the same. The posterior then just depends on the likelihood P(E|H) - the probability of observing the evidence, given that the hypothesis is true.

By definition, the model which predicts that abiogenesis is rare assigns a lower likelihood to the evidence of life arising early on earth.
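Plugging illustrative numbers into the proportionality above (equal priors by assumption, and likelihoods of the kind a simple timing model would give - all made up for the example):

```python
p_h1 = p_h2 = 0.5             # assumed equal priors: abiogenesis easy vs hard
p_e_h1, p_e_h2 = 0.63, 0.005  # assumed likelihoods of "life arose early" under each
posterior_odds = (p_e_h1 * p_h1) / (p_e_h2 * p_h2)
print(posterior_odds)         # ~126:1 in favour of the 'easy' hypothesis
```

With equal priors, the posterior odds just reduce to the likelihood ratio.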

One way of thinking about this: Abiogenesis could be rare or common. There are entire sets of universes where it is rare, and entire sets of universes where it is common. Absent any other specific evidence, it is obviously more likely that we live in a universe where it is more common, as those regions of the multiverse have more total observers like us.

Now it could be that abiogenesis is rare, but reaching that conclusion would require integrating evidence from more than just earth - enough to overcome the low initial probability of rarity.
