Matthew Barnett

Someone who is interested in learning and doing good.

My main blog: https://matthewbarnett.substack.com/

Comments

Biology-Inspired AGI Timelines: The Trick That Never Works

I had mixed feelings about the dialogue personally. I enjoyed the writing style; Eliezer is a great writer with a lot of good opinions and arguments.

But at the same time, it felt like he was taking down a strawman. Maybe you’d label this reaction part of “conflict aversion”, but I tend to react negatively to take-downs of straw versions of people who agree with me.

To give an unfair and exaggerated comparison, it would be a bit like reading a take-down of a straw-rationalist in which the straw-rationalist occasionally insists such things as “we should not be emotional” or “we should always use Bayes’ Theorem in every problem we encounter.” It should hopefully be easy to see why a rationalist might react negatively to reading that sort of dialogue.

Matthew Barnett's Shortform

NVIDIA's stock price is extremely high right now. It's up 134% this year, and up about 6,000% since 2015! Does this shed light on AI timelines?

Here are some notes,

  • NVIDIA is the top GPU company in the world, by far. This source says that they're responsible for about 83% of the market, with the remaining 17% going to their primary competitor, AMD.
  • Its market capitalization is currently $764.86 billion, compared to $2.655 trillion for Apple, the largest company.
  • This analysis estimates their projected earnings based on their stock price on September 2nd and comes up with a projected annual growth rate of 22.5% over the next 10 years. If true, that would imply that investors believe revenue will climb by about 10x by 2031. And the stock price has risen another 37% since then.
  • Unlike in prior cases of tech stocks going up, this rise really does seem driven by AI, at least in large part. From one article:

    CEO Jensen Huang said, "Demand for NVIDIA AI is surging, driven by hyperscale and cloud scale-out, and broadening adoption by more than 25,000 companies."

    During the recent GTC 2021 presentation, Nvidia unveiled Omniverse Avatar, a platform for creating interactive avatars for 3D virtual worlds powered by artificial intelligence.
  • NVIDIA's page for Omniverse describes a plan to roll out AI services that many LessWrongers believe have huge potential, including giant language models.
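As a rough sanity check of the implied-growth arithmetic above (a hypothetical sketch, assuming the 22.5% figure is an annualized revenue growth rate), we can just compound it over the decade:

```python
# Hedged sanity check: assumes the analysis's 22.5% figure is an
# annualized growth rate, compounded over the 10 years to 2031.
annual_growth = 0.225
years = 10
multiple = (1 + annual_growth) ** years
print(f"Implied revenue multiple: {multiple:.1f}x")  # roughly 7.6x
```

Strict compounding gives closer to 7.6x than 10x, so the linked analysis presumably folds in additional assumptions; the figure is best read as order-of-magnitude.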
Shulman and Yudkowsky on AI progress

My tentative conclusion: by quantitative metrics, DeepL is in the same league as Google Translate, and might be better by some metrics. Which is still an impressive achievement by DeepL, considering the fact that they have orders-of-magnitude less data, compute, and researchers than Google.

Do they though? Google is a large company, certainly, but they might not actually give Google Translate researchers a lot of funding. Google gets revenue from translation by offering it as a cloud service, but I found this thread from 2018 where someone said,

Google Translate and Cloud Translation API are two different products doing some similar functions. It is safe to assume they have different algorithms and differences in translations are not only common but expected.

From this, it appears that there is little incentive for Google to improve the algorithms on Google Translate.

Shulman and Yudkowsky on AI progress

My understanding is that the correct line is something like, "The COVID-19 vaccines were developed and approved unprecedentedly fast, excluding influenza vaccines." If you want to find examples of short vaccine development, you don't need to go all the way back to the 1957 influenza pandemic. For the 2009 Swine flu pandemic,

Analysis of the genetic divergence of the virus in samples from different cases indicated that the virus jumped to humans in 2008, probably after June, and not later than the end of November,[38] likely around September 2008... By 19 November 2009, doses of vaccine had been administered in over 16 countries.

And more obviously, the flu shot is modified yearly to keep up to date with new variants. Wikipedia notes that influenza vaccines were first successfully distributed in the 1940s, after development began in 1931.

For vaccines other than the influenza shot, this 2017 EA Forum post from Peter Wildeford is informative. He tracks the development history of "important" vaccines; as he notes,

This is not intended to be an exhaustive list of all vaccines, but is intended to be exhaustive of all vaccines that would be considered "important", such as the vaccines on the WHO list of essential medicines and notable vaccines under current development.

His bottom line:

Taken together and weighing these three sources of evidence evenly, this suggests an average of 29 years for the typical vaccine.

No vaccine on his list was researched, manufactured, and distributed in less than one year. The closest candidate is the rabies vaccine, with a four-year timeline, from 1881 to 1885.

Biology-Inspired AGI Timelines: The Trick That Never Works

It may help to visualize this graph with the line for Platt's Law drawn in.

Overall I find the law to be pretty well empirically validated, at least by the standards I'd expect from a half-in-jest Law of Prediction.

TurnTrout's shortform feed

Reading EY's dath ilan glowfics, I can't help but think of how poor English is as a language to think in.

This is an interesting question, and one that has been studied by linguists.

Tears Must Flow

If someone cries about the Holocaust…, and then also cries about “civilizational inadequacy”, then the reasonable conclusion to draw is that this person’s tears are essentially meaningless

Be careful. You might be modeling other people as too similar to yourself. Just because you don’t experience grief in that way doesn’t mean everyone is like you.

Yudkowsky and Christiano discuss "Takeoff Speeds"

My honest guess is that most predictors didn’t see that condition, and that the distribution would shift right if someone pointed it out in the comments.

Tears Must Flow

I think they see a fairer analogy as driving to something like Arlington National Cemetery for Memorial Day, then going to the car in grief during the service.

Who would get upset that someone needed to go to the car and cry during an Arlington National Cemetery service on Memorial Day!? If that's a fairer analogy, then I'm still confused.

This seems quite disanalogous to deluks917's analogy to me, because (1) it is socially acceptable and often expected that people will cry during a cemetery service, and (2) there is no "call to action" that such behavior is signaling, and no demands are being put on others. The main way it's analogous is that it might just be a mild inconvenience to others.

I would sum up the other commenter's position as "You shouldn't take actions that you have reasonable belief will hurt you and then ask others to change".

Yes, but this argument doesn't follow, at least if you grant that you are morally permitted to act disruptively in my cryonics scenario. In that case, your actions could reasonably be expected to harm your reputation, and possibly insult others, and yet they are still permissible, because your parent's life is on the line. The principle is simple: your parent's life > minor reputational harm.

Similarly, one may believe that {some amount of animal misery > minor reputational harm}. Of course, as highly social and conformist animals ourselves, it may be hard to actually walk the walk and act in accordance with this belief. But that's not an argument against doing so, if we are indeed committed to the principle.

Taking Clones Seriously

More concretely, if we were to clone Paul Christiano what's the chance the clone would work on AGI safety research?

From this study from 1993,

The authors administered inventories of vocational and recreational interests and talents to 924 pairs of twins who had been reared together and to 92 pairs separated in infancy and reared apart. Factor analysis of all 291 items yielded 39 identifiable factors and 11 superfactors. The data indicated that about 50% of interests variance (about two thirds of the stable variance) was associated with genetic variation.
