ADifferentAnonymous

Christiano, Cotra, and Yudkowsky on AI progress

An interesting analogue might be a parallel Earth making nanotechnology breakthroughs instead of AI breakthroughs, such that it's apparent they'll be capable of creating gray goo but not apparent they'll be able to avoid creating it.

I guess a slow takeoff could be if, like, the first self-replicators took a day to double, so if somebody accidentally made a gram of gray goo you'd have weeks to figure it out and nuke the lab or whatever, but doubling times shrank as technology improved, and so accidental unconstrained replicators happened periodically but could be contained until one couldn't be.
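(A quick sanity check on 'weeks', my arithmetic rather than the original comment's: Earth's mass is roughly $6\times10^{27}$ g, so a gram doubling once a day needs $\log_2(6\times10^{27}) \approx 92$ doublings, about three months, to consume the planet, and substantially fewer to be unmissably dangerous.)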

Whereas hard takeoff could be if you had nanobots that built stuff in seconds but couldn't self-replicate using random environmental mass, and then the first nanobot that can do that does it in seconds and eats the planet.

Should we consider the second scenario less likely because of smooth trend lines? Does Paul think we should? (I'm pretty sure Eliezer thinks that Paul thinks we should.)

Why Study Physics?

One major pattern of thought I picked up from (undergraduate) physics is respect for approximation. I worry that those who have this respect take it for granted, but the idea of a rigorous approximation that's provably accurate in certain limits, as opposed to a casual guess, isn't obvious until you've encountered it.
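As a concrete example (my illustration, not part of the original comment): Taylor's theorem turns the small-angle approximation into a rigorous one, $\sin\theta = \theta + R(\theta)$ with a provable bound $|R(\theta)| \le |\theta|^3/6$, so at $\theta = 0.1$ rad the error is guaranteed to be under $2\times10^{-4}$. The guarantee, not the substitution itself, is the part that isn't obvious until you've seen it done.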

Yudkowsky and Christiano discuss "Takeoff Speeds"

My question after reading this is about Eliezer's predictions in a counterfactual without regulatory bottlenecks on economic growth. Would it change the probable outcome, or would we just get a better look at the oncoming AGI train before it hit us? (Or is there no such counterfactual well-defined enough to give us an answer?) ETA: Basically trying to get at whether that debate's actually a crux of anything.

Average probabilities, not log odds

Oof, rookie mistake. I retract the claim that averaging log odds is 'the correct thing to do' in this case.

Still—unless I'm wrong again—the average log odds would converge to the correct result in the limit of many forecasters, and the average probabilities wouldn't, making the post title bad advice in such a case?

(Though the median forecast would do just fine.)
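A sketch of why I'd expect that (my reasoning, using the same noise model as my longer comment further down): suppose forecaster $i$ reports $p_i = \sigma(\ell^* + \epsilon_i)$, where $\ell^*$ is the true log odds, $\sigma(x) = 1/(1+e^{-x})$, and the $\epsilon_i$ are i.i.d. with mean and median zero. By the law of large numbers, the average log odds $\frac{1}{n}\sum_i \operatorname{logit}(p_i) = \ell^* + \frac{1}{n}\sum_i \epsilon_i \to \ell^*$, so the pooled estimate converges to the truth. The average probability instead converges to $\mathbb{E}[\sigma(\ell^* + \epsilon)]$, which is pulled toward $1/2$ whenever $\ell^* \neq 0$, because $\sigma$ flattens as you move away from zero. And the median forecast also converges to $\sigma(\ell^*)$, since medians commute with the monotone map $\sigma$.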

Ngo and Yudkowsky on alignment difficulty

+1 to the question.

My current best guess at an answer:

There are easy safe ways, but not easy safe useful-enough ways. E.g. you could make your AI output DNA strings for a nanosystem and then absolutely not synthesize them, just have human scientists study them, and that would be a perfectly safe way to develop nanosystems in, say, 20 years instead of 50, except that you won't make it 2 years without some fool synthesizing the strings and ending the world. And more generally, any pathway that relies on humans achieving deep understanding of the pivotal act will take more than 2 years, unless you make 'human understanding' one of the AI's goals, in which case the AI is optimizing human brains and you've lost safety.

Average probabilities, not log odds

"In contrast, there are no conditions under which average log odds is the correct thing to do"

Taking that as a challenge, can we reverse-engineer a situation where this would be the correct thing to do?

We can first sidestep the additivity-of-disjoint-events problem by limiting the discussion to a single binary outcome.

Then we can fulfill the condition almost trivially by saying our input probabilities are produced by the procedure 'take the true log odds, add Gaussian noise, convert to probability'.

Is that plausible? Well, a Bayesian update is an additive shift to the log odds. So if your forecasters each independently make a bunch of random updates (and would otherwise be accurate), that would do it. A simple model is that the forecasters all have the same prior and a good sample of the real evidence, which would make them update to the correct posterior, except that each one also accepts N bits of fake evidence, each of which has a 50/50 chance of supporting X or ~X (and the fake evidence is independent between forecasters). 
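A minimal simulation of that toy model (my own sketch; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

true_p = 0.9          # true probability of the binary outcome
n_forecasters = 100_000
n_fake_bits = 4       # N bits of fake evidence per forecaster

# Each forecaster starts at the correct log odds, then absorbs N
# independent 50/50 bits of fake evidence; one bit is a factor-of-2
# update to the odds, i.e. a +/- log(2) shift in natural log odds.
fake = rng.choice([-1.0, 1.0], size=(n_forecasters, n_fake_bits)).sum(axis=1)
p_i = sigmoid(logit(true_p) + np.log(2) * fake)

print("true probability:   ", true_p)
print("average probability:", p_i.mean())
print("avg-log-odds pool:  ", sigmoid(logit(p_i).mean()))
print("median forecast:    ", np.median(p_i))
```

With these settings the averaged-log-odds pool and the median both land at about 0.90, while the average probability comes out around 0.84, pulled toward 1/2 just as you'd expect from the nonlinearity of the odds-to-probability conversion.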

That's not a good enough toy model to convince me to use average log odds for everything, but it is good enough that I'd accept it if average log odds seemed to work in a particular domain.

An Unexpected Victory: Container Stacking at the Port of Long Beach

Further update: the Port of Long Beach has made a deal with Union Pacific to haul containers to Salt Lake City that would previously have been picked up by trucks.

An Unexpected Victory: Container Stacking at the Port of Long Beach

I'm curious too. FWIW this news story says "yesterday, there were 74 containerships anchored in San Pedro Bay waiting for berth space at LB or Los Angeles, down from a high of 80 last weekend," so it at least sounds like the queue is getting shorter rather than longer?

An Unexpected Victory: Container Stacking at the Port of Long Beach

As a follow-up to point 8, it looks like the ports are going to start charging money for extended use of their scarce storage capacity: https://polb.com/port-info/news-and-press/san-pedro-bay-ports-announce-new-measure-to-clear-cargo-10-25-2021/

Seems like broadly the right call on an econ 101 level, though the fact that this was ever free is a Chesterton's Fence, $100 seems like a suspiciously round number, and I'm not sure about the differing thresholds for truck-bound vs. rail-bound containers (or about having thresholds at all, really).

Maybe someone who knows anything about the shipping industry can comment?

Self-Integrity and the Drowning Child

Knot-twisting is indeed the outcome I was imagining.

(Your translation spell might be handling the words "convince" and "should" optimistically... maybe try them with the scare quotes?)
