All of teradimich's Comments + Replies

I can provide several links, and you can choose whichever are suitable, if any are. The problem is that I kept not the most complete justifications, but the most... definite and brief ones. I will try not to repeat those that are already in the answers here.

Ben Goertzel

Jürgen Schmidhuber

Peter J. Bentley

Richard Loosemore

Jaron Lanier and Neil Gershenfeld


Magnus Vinding and his list

Tobias Baumann

Brian Tomasik
 

Maybe Abram Demski? But he probably changed his mind later.
And Stuart Russell, but that is a book; I can quote it.

I do think that I’m an optimist. I think the

... (read more)
1 · Optimization Process · 3mo
Yeah, if you have a good enough mental index to pick out the relevant stuff, I'd happily take up to 3 new bounty-candidate links, even though I've mostly closed submissions! No pressure, though!

I have collected many quotes, with links, about the prospects of AGI. Most of those quoted were optimistic.

2 · Optimization Process · 3mo
Thanks for the collection! I wouldn't be surprised if it links to something that tickles my sense of "high-status monkey presenting a cogent argument that AI progress is good," but didn't see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!

Glad you understood me. Sorry for my English!
Of course, the following examples do not by themselves prove that the entire problem of AGI alignment can be solved! But it seems to me that this direction is interesting and badly underrated. Well, at the least, someone smarter than me can look at this idea and say that it is bullshit.

Partly this is a source of intuition for me that creating an aligned superintelligence is possible, and maybe not even as hard as it seems.
We have many examples of creatures that follow the goals of someone stupider than themselves. An... (read more)

It seems to me that the brains of many animals can be aligned with the goals of someone much stupider than themselves.
People and pets. Parasites and animals. Even ants and fungus.
Perhaps the connection we would like to have with a superintelligence is already observed on a much smaller scale.

3 · Cameron Berg · 1y
I think this is an incredibly interesting point.  I would just note, for instance, in the (crazy cool) fungus-and-ants case, this is a transient state of control that ends shortly thereafter in the death of the smarter, controlled agent. For AGI alignment, we're presumably looking for a much more stable and long-term form of control, which might mean that these cases are not exactly the right proofs of concept. They demonstrate, to your point, that "[agents] can be aligned with the goals of someone much stupider than themselves," but not necessarily that agents can be comprehensively and permanently aligned with the goals of someone much stupider than themselves. Your comment makes me want to look more closely into how cases of "mind control" work in these more ecological settings and whether there are interesting takeaways for AGI alignment.

I apologize for the stupid question. But…

Do we have a better chance of surviving in a world that is closer to Orwell's 1984?
It seems to me that we are moving towards more global surveillance and control. China's regime in 2021 may look extremely liberal to an observer in 2040.

6 · [anonymous] · 1y
Welcome to 2021, where 1984 is Utopian fiction.

I guess I misused the term 'gray goo'. I apologize for this and for my bad English.
Could it be replaced with 'using nanotechnology to attain a decisive strategic advantage'?
I mean the discussion of the prospects of nanotechnology on SL4 20+ years ago, especially this:

My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.

I understand that since then the views of EY have changed in many ways. But I a... (read more)

4 · Rob Bensinger · 1y
Makes sense, thanks for the reference! :)

Nanosystems are definitely possible, if you doubt that read Drexler’s Nanosystems and perhaps Engines of Creation and think about physics. 

Is there anything like a survey of experts on the feasibility of Drexlerian nanotechnology? Is there any consensus among specialists about the possibility of a gray goo scenario?

Both Drexler and Yudkowsky have greatly overestimated the impact of molecular nanotechnology in the past.

9 · PeterMcCluskey · 1y
A survey of leading chemists would likely produce dismissals based on a strawmanned version of Drexler's ideas. If you could survey people who demonstrably understood Drexler, I'm pretty sure they'd say it's feasible, but critics would plausibly complain about selection bias. The best analysis of gray goo risk seems to be Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations [http://www.rfreitas.com/Nano/Ecophagy.htm]. They badly overestimated how much effort would get put into developing nanotech. That likely says more about the profitability of working on early-stage nanotech than it says about the eventual impact.
7 · Rob Bensinger · 1y
I don't think anyone (e.g., at FHI or MIRI) is worried about human extinction via gray goo anymore. Like, they expected nanotech to come sooner? Or something else? (What did they say, and where?)

Not an expert, but I think life is an existence proof of the power of nanotech, even if the specifics of a grey goo scenario seem unlikely. Trees turn sunlight and air into wood, ribosomes build peptides and proteins, and while current-generation models of protein folding are a long way from having generative capacity, it's unclear how many breakthroughs stand between humanity and that general/generative capacity.
 

I do not know the experts' opinions on this issue, and I lack the competence to draw such conclusions, sorry.

AlexNet was the first publication that leveraged graphical processing units (GPUs) for the training run

Do you mean the first of the data points on the chart? GPUs were used for DL long before AlexNet. References: [1], [2], [3], [4], [5].

1 · lennart · 1y
Thanks for the correction and references. I just followed my "common sense" from lectures and other pieces. What do you think made AlexNet stand out? Is it the depth and use of GPUs?
0 · johnlawrenceaspden · 10mo
Those really don't look too bad to me! (It's 2022). We're all starting to think AI transcendence is 'within the decade', even though no-one's trying to do it deliberately any more. And nanowar before 2015? Well we just saw (2019) an accidental release of a probably-engineered virus. How far away can a deliberate release be? Not bad for 1999.

In 2010, I wrote: https://johnlawrenceaspden.blogspot.com/2010/12/all-dead-soon.html

At the time Eliezer was still very optimistic, but I thought that things would take longer than he thought, but also that the AI alignment project was hopeless. As I remember I thought that AI was unlikely to kill me personally, but very likely to kill my friends' children. Updating after ten years, I was less wrong about the hopelessness, and he was less wrong about the timelines.
4 · Daniel Kokotajlo · 2y
Sweet, thanks!

Probably this:

When we didn’t have enough information to directly count FLOPs, we looked at GPU training time and total number of GPUs used and assumed a utilization efficiency (usually 0.33)
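
For illustration, a minimal sketch of that kind of estimate (only the 0.33 utilization comes from the quote above; the device count, duration, and peak-throughput figure below are hypothetical):

```python
def estimate_training_flops(num_devices, training_days, peak_flops_per_device, utilization=0.33):
    """Rough FLOP estimate from device count and wall-clock training time."""
    seconds = training_days * 24 * 3600
    return num_devices * seconds * peak_flops_per_device * utilization

# Hypothetical example: 8 GPUs at an assumed 1e13 FLOP/s peak, trained for 10 days.
print(f"{estimate_training_flops(8, 10, 1e13):.2e} FLOPs")
```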

This can be useful:

We trained the league using three main agents (one for each StarCraft race), three main exploiter agents (one for each race), and six league exploiter agents (two for each race). Each agent was trained using 32 third-generation tensor processing units (TPUs) over 44 days
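
A back-of-the-envelope application of the same approach to those numbers (the per-TPU peak throughput and the 0.33 utilization here are assumptions I am adding, not figures from the paper):

```python
# 3 main + 3 main exploiter + 6 league exploiter agents = 12 agents in the league.
agents = 3 + 3 + 6
tpus_per_agent = 32
training_days = 44

device_days = agents * tpus_per_agent * training_days
print(device_days)  # 16896 TPU-days of training across the league

# Turning device-days into FLOPs still needs an assumed per-TPU peak throughput
# and a utilization factor; both values below are hypothetical placeholders.
assumed_peak_flops = 1e14
utilization = 0.33
total_flops = device_days * 24 * 3600 * assumed_peak_flops * utilization
print(f"{total_flops:.2e} FLOPs")
```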

Perhaps my large collection of quotes about the impact of AI on the future of humanity here will be helpful.

Should we then consider the majority of FHI experts to be extreme optimists, that same 20%? I really tried to find all the publicly available expert forecasts, and very few of the forecasters were confident that AI would lead to the extinction of humanity. But I have no reason not to believe you, or Luke Muehlhauser, who described AI safety experts as even more confident pessimists: ’Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction’. The reason may... (read more)

What about this and this? Here, some researchers at FHI give different probabilities.

4 · Rob Bensinger · 3y
Yeah, I've seen that photo before; I'm glad we have a record of this kind of thing! It doesn't cause me to think that the thing I said in 2017 was false, though it suggests to me that most FHI staff overall in 2014 (like most 80K staff in 2017) probably would have assigned <10% probability to AGI-caused extinction (assuming there weren't a bunch of FHI staff thinking "AGI is a lot more likely to cause non-extinction existential catastrophes" and/or "AGI has a decent chance of destroying the world, but we definitely won't reach AGI this century").

I meant the results of polls like this one: https://www.thatsmags.com/china/post/15129/happy-planet-index-china-is-72nd-happiest-country-in-the-world. Well, it doesn’t matter.
I think I could sleep better if everyone acknowledged that a less free world reduces existential risk.

I’m not sure I can trust news sources that have an interest in portraying China negatively.
In any case, this does not seem to stop the Chinese people from feeling happier than people in the US.
I cited that date just to contrast with your forecast. My intuition points more toward AI in the 2050s or 2060s.
And yes, I expect that by 2050 it will be possible to monitor everyone's behavior 24/7 in some countries. I can’t say that this makes me happy, but I think the vast majority will put up with it. I don't believe in a liberal democratic utopia, but the end of the world seems unlikely to me.

1 · Logan Zoellner · 3y
Lots of happy people [https://www.scmp.com/news/hong-kong/politics/article/3083454/rivals-refuse-budge-hours-ahead-hong-kong-legislative] in China. Call me a crazy optimist, but I think we can aim higher than: Yes, you will be monitored 24/7, but at least humanity won't literally go extinct.

Just wondering: why are some people so often convinced that a Chinese victory in the AGI race would lead to the end of humanity? The Chinese strategy seems to me much more focused on the long term.
The most prominent experts give a 50% chance of AI in 2099 (https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/book-review-architects-of-intelligence). And I can expect the world in 80 years to be significantly different from the present. Well, you can call this a totalitarian hell, but I think that the probability of an existential disaster in t... (read more)

1 · Logan Zoellner · 3y
I don't think a Chinese world order will result in the end of humanity, but I do think it will make stuff like this [https://www.pbs.org/wgbh/frontline/article/how-chinas-government-is-using-ai-on-its-uighur-muslim-population/] much more common. I am interested in creating a future I would actually want to live in. How much would you be willing to bet that AI will not exist in 2060, and at what odds? Are you arguing that a victory for Chinese totalitarianism makes Human extinction less likely than a liberal world order?

How about paying attention to discontinuous progress in tasks related to DL? It is very easy to track with https://paperswithcode.com/sota. And https://sotabench.com/ is showing diminishing returns.

(I apologize in advance for my English.) Well, only the fifth column shows an expert’s assessment of AI's impact on humanity, so any other percentages can be skipped quickly. It took me a few seconds to examine a tenth of the table with Ctrl+F, so it would not take long to go through the whole table this way. Unfortunately, I can't think of anything better.

This may be useful:

’Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it’s only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.’ — Luke Muehlhauser, https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/

’... (read more)

I have collected a huge number of quotes from various experts about AGI: about AGI timelines, the possibility of a fast takeoff, and its impact on humanity. Perhaps this will be useful to you.

https://docs.google.com/spreadsheets/d/19edstyZBkWu26PoB5LpmZR3iVKCrFENcjruTj7zCe5k/edit?fbclid=IwAR1_Lnqjv1IIgRUmGIs1McvSLs8g34IhAIb9ykST2VbxOs8d7golsBD1NUM#gid=1448563947

2 · ioannes · 3y
Awesome.

Then the AI would have to become really smarter than the very large groups of people trying to control the world. And by that time people will surely be better prepared than they are now. I am sure that the laws of physics allow the quick destruction of humanity, but it seems to me that without a swarm of self-replicating nanorobots, the probability of our survival after the creation of the first AGI exceeds 50%.

It seems that this option gives humanity a better chance of victory than the gray goo scenario. And even if we screw up the first time, it can be fixed. Of course, none of this eliminates the need for AI alignment efforts.

3 · cousin_it · 3y
Yeah, if gray goo is impossible, the AI can't use that particular insta-win move. Though I think if the AI is smarter than humans, it can find other moves that will let it win slower but pretty much as surely.

Is AI foom possible if even a godlike superintelligence cannot create gray goo? Some doubt that such rapidly self-replicating nanobots are possible. Without them, an AI's ability to quickly take over the world in the coming years would be significantly reduced.

4 · cousin_it · 3y
Foom is more about growth in intelligence, which could be possible with existing computing resources and research into faster computers. Even if gray goo is impossible, once AI is much smarter than humans, it can manipulate humans so that most of the world's productive capacity ends up under the AI's control.

Is AI foom possible if even a godlike superintelligence cannot create ’gray goo’? Some doubt that such rapidly self-replicating nanobots are possible. Without them, an AI's ability to quickly take over the world in the coming years would be significantly reduced.

[This comment is no longer endorsed by its author]

Indeed, quite a lot of experts are more optimistic than it seems. See this or this. Well, I collected a lot of quotes from various experts about the possibility of human extinction due to AI here. Maybe someone is interested.

1 · MichaelA · 3y
I've started collecting [https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/my-thoughts-on-toby-ord-s-existential-risk-estimates?commentId=DxZbuZXo82jrLQEbj] estimates of existential/extinction/similar risk from various causes (e.g., AI risk, biorisk). Do you know of a quick way I could find estimates of that nature (quantified and about extreme risks) in your spreadsheet? It seems like an impressive piece of work, but my current best idea for finding this specific type of thing in it would be to search for "%", for which there were 384 results...

It seems Russell does not agree with what is considered the LW consensus. From ’Architects of Intelligence: The Truth About AI from the People Building It’:

When [the first AGI is created], it’s not going to be a single finishing line that we cross. It’s going to be along several dimensions.
[...]
I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic
... (read more)
9 · Steven Byrnes · 3y
Can you be more specific what you think the LW consensus is, that you're referring to? Recursive self-improvement and pessimism about AI existential risk? Or something else?