Conor Sullivan

Comments

Christiano, Cotra, and Yudkowsky on AI progress

Right, and history sides with Paul. The earliest steam engines were missing key insights, and so they operated slowly, used their energy very inefficiently, and were limited in what they could do. The first steam engines were used as pumps, and it took a while before they were powerful enough to even move their own weight (locomotion). Each successive invention, from Savery to Newcomen to Watt, dramatically improved the efficiency of the engine, and over time engines could do more and more things, from pumping to locomotion to machining to flight. It wasn't just one sudden innovation after which we had an engine that could do all those things, including even lifting itself against the pull of Earth's gravity. It took time, and progress on smooth metrics, before we had the extremely powerful and useful engines that powered the industrial revolution. That's why the industrial revolution(s) took hundreds of years. It wasn't one sudden insight that made it all click.

human psycholinguists: a critical appraisal

I think Gary Marcus wanted AI research to uncover lots of interesting rules like "in English, you make verbs past tense by adding -ed, except ..." because he wants to know what the rules are, and because engineering following psycholinguistic research is much more appealing to him than the other way around. Machine learning (without interpretability) doesn't give us any tools to learn what the rules are. 
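For concreteness, the kind of explicit, inspectable rule Marcus wants might look something like this (a toy sketch of my own, not anything from his work; the exception list is where the interesting science lives):

```python
# Toy illustration (my own sketch): an explicit, human-readable rule for
# English past tense, with exceptions listed separately. A psycholinguist
# can inspect and test a rule like this; a trained network's weights offer
# no such artifact without interpretability work.
IRREGULAR_PAST = {"go": "went", "eat": "ate", "run": "ran", "be": "was"}

def past_tense(verb: str) -> str:
    if verb in IRREGULAR_PAST:            # exceptions are checked first
        return IRREGULAR_PAST[verb]
    if verb.endswith("e"):                # bake -> baked
        return verb + "d"
    if verb.endswith("y") and verb[-2] not in "aeiou":  # try -> tried
        return verb[:-1] + "ied"
    return verb + "ed"                    # the default rule: add -ed

assert past_tense("walk") == "walked"
assert past_tense("try") == "tried"
assert past_tense("go") == "went"
```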

Yudkowsky and Christiano discuss "Takeoff Speeds"

I think you're 100% right. Most (>>80%) of the bets I see on Long Bets, or predictions on Metaculus, are underspecified to the point where a human mediator would have to make a judgement call that could be considered unfair to someone. I don't expect that to change no matter how much work I do, unless I make bets on specific statistics from well-known sources, e.g. the stock market or the CIA World Factbook.

There are possible futures where prediction (3) is obviously settled. For example, if someone predicted that 50% of trips would be self driving in 2021 (many people did predict that 5 years ago), we can easily prove them wrong without having to debate whether Tesla is L2 or L5 and whether that matters. Teslas are not 50% of the cars on the road, nor are Waymos, so you can easily see that most trips in 2021 are not self driving by any definition. I think there are also future worlds where 95% of cars and trips are L5, most cars can legally drive autonomously anywhere without any humans inside, etc., and in that world there isn't much to debate unless you're really petty. So we could make bets hoping that things will be that obvious, but I don't think either of us wants to do the work to avoid this kind of ambiguity.

I'm happy to consider my bets as paid in Bayes points without any need for future adjudication. So, for all the Bayes points, I'd love to hear what your equivalent predictions are for 2026.

For what it's worth, here's my revised (3): greater than 10% of cars on the road will either be legally capable of L4/L5, or be legally L2/L3 but with disengagements uncommon, less than one in a typical trip. (Meaning, if you watch a video from the AI DRIVR YouTube channel, there's less than one disengagement per 20 minutes of driving time.)

Christiano, Cotra, and Yudkowsky on AI progress

Finally, a definition of The Singularity that actually involves a mathematical singularity! Thank you.

Christiano, Cotra, and Yudkowsky on AI progress

Apologies for my ignorance, does EA mean Effective Altruist?

Christiano, Cotra, and Yudkowsky on AI progress

It seems to me that Eliezer's model of AGI is a bit like an engine, where if any important part is missing, the entire engine doesn't move. You can move a broken steam locomotive as fast as you can push it, maybe 1km/h. The moment you insert the missing part, the steam locomotive accelerates up to 100km/h. Paul is asking "when does the locomotive move at 20km/h" and Eliezer says "when the locomotive is already at full steam and accelerating to 100km/h." There's no point where the locomotive is moving at 20km/h and not accelerating, because humans can't push it that fast, and once the engine is working, it's already accelerating to a much faster speed.

In Paul's model, there IS such a thing as 95% AGI, and it's 80% or 20% or 2% as powerful on some metric we can measure, whereas in Eliezer's model there's no such thing as 95% AGI. The 95% AGI is like a steam engine that's missing its pistons, or some critical valve, and so it doesn't provide any motive power at all. It can move as fast as humans can push it, but it doesn't provide any power of its own.
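To make the contrast concrete, here is a toy formalization (my own framing, not anything from the dialogue) of how the two models differ in the shape of the capability curve as parts are added:

```python
# Toy formalization (my framing): x is the fraction of critical "engine
# parts" in place; the output is motive power on some measurable metric.

def paul_power(x: float) -> float:
    # Continuous model: a 95%-complete system already delivers real,
    # measurable power. The x**2 shape is an arbitrary assumption; any
    # smooth increasing curve makes the same point.
    return x ** 2

def eliezer_power(x: float) -> float:
    # Threshold model: with any critical part missing, the engine
    # supplies no motive power of its own; power appears all at once.
    return 1.0 if x >= 1.0 else 0.0

for x in (0.50, 0.95, 1.00):
    print(f"{x:.0%} complete -> Paul: {paul_power(x):.2f}, "
          f"Eliezer: {eliezer_power(x):.2f}")
```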

Christiano, Cotra, and Yudkowsky on AI progress

Excuse my ignorance, what does a hyperbolic function look like? If an exponential is f(x) = r^x, what is f(x) for a hyperbolic function?
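(For reference, the standard answer: hyperbolic growth is the solution of df/dx = k f², and unlike an exponential it blows up at a finite x, which is the "singularity.")

```latex
% Hyperbolic growth: the solution of df/dx = k f^2 (vs. exponential df/dx = k f)
f(x) = \frac{1}{k\,(x_{*} - x)}
% f(x) -> infinity as x -> x_* from below: a genuine mathematical singularity
% at finite x, whereas r^x is finite for every finite x.
```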

Yudkowsky and Christiano discuss "Takeoff Speeds"

6 and 7 are definitely non-predictions, or a prediction that nothing interesting will happen. 1, 2, 4, and 5 are, in a weak sense, almost true today:

(1) AI Programming -- I heard a rumor (I don't have a source for this) that something like 30% of new GitHub commits involve Copilot. I can't imagine that is really true (it seems so implausible), but my prediction can come true if AI code completion becomes very popular.

(2) Household Robots -- Every year for the last decade or so some company has demoed some kind of home robot at an electronics convention, but if any of them have actually made it to market, the penetration is very small. Eventually someone will make one that's useful enough to sell a few hundred or more units. I don't think a Roomba should qualify as meeting my prediction, which is why I specified a "humanoid" robot.

(3) Self Driving -- I stand by what I said, nothing to expand on. I believe that Tesla and Waymo, at least, already have self driving tech good enough, so this is mostly about institutional acceptance.

(4) DRL learning games from pixels -- EfficientZero essentially already does this, but only for Atari games. My prediction is that there will be an EfficientZero for all video games. (See the sketch after item (5) for what "from pixels" means at the interface level.)

(5) Turing Test -- I think that the Turing test is largely a matter of how long the computer can fool the judge, in addition to how well the judge knows what to look for. Systems from the 70s could probably fool a judge for about 30 seconds. Modern chatbots might be able to fool a competent judge for 10 minutes, and an incompetent judge (a naive casual user) for a couple of hours at the extreme. I think by 2026 chatbots will be able to fool competent judges for at least 30 minutes, and will be entertaining to naive casual users indefinitely (i.e., people can make friends with their chatbots and it doesn't get boring quickly, if ever).
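Here is the minimal sketch promised in item (4): what "learning from pixels" means at the interface level (my own illustration using the gymnasium API, with a random policy standing in for the learner; EfficientZero's actual training loop is far more involved):

```python
# Minimal sketch (my illustration, not EfficientZero): the agent observes
# only the raw RGB frame each step. Assumes `pip install "gymnasium[atari]"`.
import gymnasium as gym

env = gym.make("ALE/Breakout-v5")  # observations are raw 210x160x3 pixels
obs, info = env.reset(seed=0)
episode_return = 0.0
for _ in range(1000):
    action = env.action_space.sample()  # a trained policy would go here
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(f"random-policy return over rollout: {episode_return}")
```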
 

For 6 and 7, I'm going to make concrete predictions.

(6) Some research institute or financial publication of repute will claim that AI technology (not computers generally, just AI) will have "added X Trillion Dollars" to the US or world economy, where X is at least 0.5% of US GDP or GWP, respectively (for scale, roughly $100 billion in the US case). Whether this is actually true might be controversial, but someone will have made the claim. GWP will not be significantly above trendline.

(7) At least two job titles will have fewer than half as many workers as in 2019. The most likely jobs to have been affected are drivers, cashiers, fast food workers, laundry workers, call center workers, factory workers, workers in petroleum-related industries*, QA engineers, and computer programmers. These jobs might shift within the industry such that the number of people working in industry X stays similar, but there has to be some job title that shrank by 50%. For example, the same X million people might still work in customer service, but most of them would be doing something like prompt engineering on AI chatbots, as opposed to taking phone calls directly.

* This one has nothing to do with AI, but I expect it to happen by 2026 nonetheless.

Let me know if you want to formalize a bet on some website. 

Yudkowsky and Christiano discuss "Takeoff Speeds"

I would say I agree more with Christiano.

By 2026:

  • At least 50% of programming work that would have been done by a human programmer in 2019 will be done by systems like Codex or Copilot.
  • Humanoid robotic maids, butlers, and companions will be for sale in some form, although they will be limited and underwhelming, and few people will have them in their homes.
  • Self driving will finally be practical and applied widely. In the USA, between 10% and 70% of automobile trips will be autonomous or in self driving mode. Humans will not be banned from driving anywhere in the world; that's more of a 2030s+ thing.
  • AI will beat human grandmasters at nearly every video game or formal game. There might be 1-5 games which AI still struggles with, and they will be notable exceptions. Or there might be 0 such games. RL systems will be able to learn most games from pixels in less than a GPU-day (using 2026-era GPUs consuming less than 1,000 watts and costing less than $4,000 in 2019 dollars, adjusted for inflation). RL research will be focused on beating humans in sports and physical games like soccer, basketball, golf, etc.
  • Chatbots will regularly pass Turing tests, although it will remain controversial whether that means anything. Publicly available chatbots will be about as good as GPT-3 in grammar and competence, but unlike GPT-3 they will have consistent personalities and memory over time -- i.e., the limitations of the 2048-token context window will be overcome somehow. Good chatbots will be available to the public, and will be ubiquitous in customer service, but whether they become popular as companions or personal assistants will depend on public acceptance. This is the same problem faced by AR: the tech will definitely be there, but the public might not be interested and might be somewhat hostile.
  • I personally am not sure if GWP growth will be significantly above historical baselines. I think AI will have progressed significantly, but we also know that, even going back to the 90s, information technology has made an underwhelming impact on productivity. The world economy is such a weird mess right now for reasons that have nothing to do with AI, so it's hard to make predictions.
  • There won't be significant unemployment due to technology (yet), but some careers will be significantly altered, including drivers and programmers.

 

I consider these predictions to be pretty conservative. I would not be surprised to be surprised by AI progress, but I would be very disappointed if fewer than 5 of my 7 predictions came true.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

It seems to me that this task has an unclear goal. Imagine I linked you a GitHub repo and said "this is a 100% accurate and working simulation of the worm." How would you verify that? If we had a WBE of Ray Kurzweil, we could at least say "this emulated brain does/doesn't produce speech that resembles Kurzweil's speech." What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?
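For instance, here's the kind of behavioral check I have in mind (a hypothetical sketch: simulate_worm and load_tracked_worm are invented names for illustration, since no verified emulator or standard dataset exists):

```python
# Hypothetical verification sketch: score an emulation by comparing its
# behavior to a recorded C. elegans on a standard assay, e.g. chemotaxis
# toward a food source. The function names in comments below are invented.
import math

def chemotaxis_index(positions, food_xy, radius=1.0):
    """Fraction of time steps the worm spends within `radius` of the food."""
    near = sum(
        1 for (x, y) in positions
        if math.dist((x, y), food_xy) <= radius
    )
    return near / len(positions)

# positions_sim = simulate_worm(minutes=10)       # hypothetical emulator call
# positions_real = load_tracked_worm("assay_01")  # hypothetical lab recording
# A passing emulation might require something like:
# abs(chemotaxis_index(positions_sim, food_xy)
#     - chemotaxis_index(positions_real, food_xy)) < 0.1
```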
