Old man 1: Life is one trouble after another. I'd be better off dead; better yet, I wish I'd never been born.

Old man 2: True, true, but who has such luck? Maybe one in a thousand.


George's Comments

When to assume neural networks can solve a problem
Yes, I was in fact. Seeing where this internet argument is going, I think it's best to leave it here.

So, in that case.

If your original chain of logic is:

1. An RL-based algorithm that could play any game could pass the Turing test

2. An algorithm that can pass the Turing test is "AGI complete", thus it is unlikely that (1) will happen soon

And you agree with the statement:

3. An algorithm did pass the Turing test in 2014

You either:

a) Have a contradiction

b) Must have some specific definition of the Turing test under which 3 is untrue (and more generally, no known algorithm can pass the Turing test)

I assume your position here is b and I'd love to hear it.

I'd also love to hear the causal reasoning behind 2 (maybe it's explained by your definition of the Turing test?).

If your definitions differ from commonly accepted ones, and you rely on causal claims that are not widely accepted, you must at least provide your versions of the definitions and some motivation behind the causality.

When to assume neural networks can solve a problem
Turing test, which is to say AGI-complete

You are aware chatbots have been "beating" the original Turing test since 2014, right? (And arguably even before)

Also, AGI-complete == fooling 1/3 of human judges in an x-minute conversation via text? Ahm, no, just no.

That statement is meaningless unless you define the Turing test, and it remains meaningless even if you do, since there is literally no definition of "AGI-complete". AGI is more of a generic term used to mean "kinda like a human", but it's not very concrete.

On the whole, yes, some games might prove too difficult for RL to beat... but I can't think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1,000 to be won). I'm sure one could design an adversarial game specifically built to not be beatable by RL yet doable by a human... but that's another story.

When to assume neural networks can solve a problem
Also if you read almost anything on the subject, people will be constantly saying how they don't think superhuman intelligence is inevitable or close

If it's "meaningfully close enough to do something about it", I will take that as being "close". I don't think Bostrom puts a number on it, or I don't remember him doing so, but he seems to address a real possibility rather than a hypothetical that is hundreds or thousands of years away.

What do you mean, you've never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question, I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that.

I mean, I don't see a chain of conclusions that leads to the theory being "correct". Vaniver mentioned below how this is not the correct perspective to adopt, and I agree with that... or I would, assuming the hypothesis were Popperian (i.e. that one could do something to disprove AI being a large risk in the relatively near future).

If you are just saying he hasn't got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.

If you could make such a premise-conclusion case, I'd be more than happy to hear it out.

ease of data collection? Cost of computing power? Usefulness of intelligence? -- but all three of these things seem like things that people have argued about at length, not assumed

Well, I have yet to see the arguments.

Also the case for AI safety doesn't depend on these things being probable, only on them being not extremely unlikely.

It depends on being able to put numbers on those probabilities, though; otherwise you are in a Pascal's-wager scenario, where any event that is not almost certainly ruled out should be taken into account with an amount of seriousness proportional to its hypothetical impact.

When to assume neural networks can solve a problem
moreover I think Stuart Russell is too

Yes, I guess I should have clarified that: I don't think Stuart Russell necessarily diverges much from Bostrom in his views. Rather, his most poignant arguments seem not to be very related to that view, so I think his book is a good guide for what I labeled as the second view in the article.

But he certainly tries to uphold both.

However, the article was already too long, and going into that would have made it even longer... in hindsight I've decided to just split it into two, but the version here I shall leave as is.

When to assume neural networks can solve a problem

I will probably be stealing the perspective of the view being disjunctive as a way to look at why it's hard to pin down.

And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect the state of things in 2020 will be only weakly informative about what will be possible in 2030.

This statement I would partially disagree with.

I think the idea of training on a GPU was coming to the forefront by 2010, as was the idea of CNNs for image recognition (both appear in the same 2006 paper).

I'd argue it's fairly easy to look at today's landscape and claim that by 2030 the things likely to happen include:

  • ML playing any possible game better than humans, assuming a team actually works on that specific game (maybe even if one doesn't), with human-like inputs and human-like limitations in terms of the granularity of taking inputs and giving outputs.
  • ML achieving all the things we can do with 2d images right now for 3d images and short (e.g. < 5 minute) videos.
  • Algorithms being able to write e.g. articles summarizing various knowledge gathered from given sources, and possibly even find relevant sources via keyword search (so you could just say "Write an article about Peru's economic climate in 2028" rather than feed in a bunch of articles about Peru's economy in 2028)... the second part is already doable, but I'm mentioning them together since I assume people will be more impressed with the final product.
  • Algorithms being able to translate from and to almost any language about as well as a human, but still not well enough to translate sources which require a lot of interpretation (e.g. yes for translating a biology paper from English to Hindi or vice versa, no for translating a phenomenology paper from English to Hindi or vice versa).
  • Controlling mechanical systems (e.g. robotic arms) via networks trained using RL.
  • Generally speaking, algorithms being used in areas where they already out-perform humans but where regulations and systemic inefficiencies, combined with the stakes involved, don't currently allow them to be used (e.g. accounting, risk analysis, setting insurance policies, diagnosis, treatment planning). Algorithms also being jointly used to help in various scientific fields by replacing the need for humans to use classical statistics and/or manually fit equations in order to model certain processes.

I'd wager points 1 to 4 are basically a given; point 5 is debatable, since it depends on human regulators and cultural acceptance for the most part.

I'd also wager that, other than audio processing, there won't be much innovation beyond those points that will create loads of hype by 2030. You might have ensembles of those things building up to something bigger, but those things will be at the core of it.

But that's just my intuition, partially based on the kind of heuristics above about what is easily doable and what isn't. But alas, the point of the article was to talk about what's doable in the present, rather than what to expect from the future, so it's not really that related.

George's Shortform

I find it interesting what kind of beliefs one needs to question and in which ways in order to get people angry/upset/touchy.

Or, to put it in more popular terms, what kind of arguments make you seem like a smart-ass when arguing with someone.

For example, reading Eliezer Yudkowsky's Rationality: From AI to Zombies, I found myself generally speaking liking the writing style, and to a large extent the book was just reinforcing the biases I already had. Other than some of his poorly thought out metaphysics, on which he bases his ethics argument... I honestly can't think of a single thing from that book I disagree with. Same goes for Inadequate Equilibria.

Yet, I can remember a certain feeling popping up in my head fairly often when reading it, one that can be best described in an image:


One seeming pattern for this is something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.

E.g.: "Arguing about whether or not climate change is a threat, going one level down and arguing that there's not enough proof climate change is happening to begin with."

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Not entertaining an opposite argument about one of your own pillars being shaky.

E.g.: After the previous climate change argument, not entertaining the idea that "maybe acting upon climate change as if it were real and as if it were a threat would actually result in positive consequences even if those two things were untrue".

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Doing so with some evidence that the other party is unaware of or cannot understand.

E.g.: After the previous climate change argument, backing up your point about climate change not being real by citing various studies that would take hours to fact-check and might be out of reach knowledge-wise for either of you.


I think there are other things that come into play.

For example, some specific fields are considered more sacrosanct than others; trying to argue against a standard position in such a field as part of your argument seems to much more easily put you into the "smartass" camp.

For example, arguing against commonly held religious or medical knowledge seems to be almost impossible, unless you are taking an already-approved side of the debate.

E.g. you can argue ibuprofen against paracetamol as the go-to for the common cold, since there are authoritative claims for each; you can't argue for a third, lesser-backed NSAID, or for using corticosteroids or no treatment instead of NSAIDs.

Other fields, such as ethics or physics or computer science, seem to be fair game, and nobody really minds people arguing for an unsanctioned viewpoint.


There's obviously the idea that politics is overall bad, and that the more politicized a certain subject is, the less you can change people's minds about it.

But to some extent I don't feel like politics really comes into play.

It seems that people are fairly open to having their minds changed about economic policy but not about identity politics... no matter which side of the spectrum you are on. This seems counterintuitive, since the issue of "should countries have open borders and free healthcare" seems much more deeply embedded in existing political agendas, and of much more import, than "what gender should transgender people be counted as when participating in the Olympics".


One interesting thing I've observed: I've personally been able to annoy a lot of people when talking with them online. However, IRL, in the last 4 years or so (since I actually began explicitly learning how to communicate), I can't think of a single person that I've offended.

Even though I'm more verbose when I talk. Even though the ideas I talk about over coffee are usually much more niche and questionable in their veracity than the ones I write about online.

I wonder if there's some sort of "magic oratory skill" I've come closer to attaining IRL that either can't be attained on the internet or is very different... granted, it's more likely it's the inherent bias of the people I'm usually discussing with.

The questions one needs not address

Well, not really, since the way they get talked about is essentially searching for a "better" definition or trying to make all definitions coincide.

Even more so, some of the terms do allow for definitions, but those definitions in themselves run into the same problem. For example, try to come up with one or multiple definitions for the meaning of "free will". In my experience it either leads to very boring ones (in which case the subject would be moot) or, more likely, to a definition that is just as problematic as "free will" itself.

The questions one needs not address
We can now say that trying to answer questions like "what is the true nature of god" isn't going to work

I mean, I don't think, and I'm not arguing, that we can do that. I just think the question in itself is mistakenly formulated, the same way "How do we handle AI risk?" is a mistaken formulation (see Jau Molstad's answer to the post, which seems to address this).

All I am claiming is that certain ill-defined questions on which no progress can be made exist, and that they can to some extent be easily spotted, because they would make no sense if de-constructed or if an outside observer were to judge your progress on them.

Celebrating the people who dedicated their lives to building the first steam engine, while mocking people who tried to build perpetual motion machines before conservation of energy was understood, is just pure hindsight

Ahm, I mean, Epicurus and Thales would have had pretty strong intuitions against this, and conservation of energy has been postulated in physics since Isaac Newton, and even before him, when the whole thing wasn't even called "physics".

Nor is there a way to "prove" conservation of energy other than purely philosophically, or in the empirical way of saying: "All our formulas make sense if this is a thing, so let's assume the world works this way, and if there is some part of the world that doesn't, we'll get to it when we find it".

Also, building a perpetual motion machine (or trying to) is not working on an unanswerable problem/question of the sort I refer to.

As in, working on one will presumably lead you to build better and better engines, and/or see your failure and give up. There is a "failure state", and there's no obvious way of getting into "metaphysics" from trying to research perpetual motion.

Indeed, "Can we build a perpetual motion machine?" is a question I see as entirely valid; not worth pursuing, but at worst harm-neutral, as the last 2,000+ years of people trying to answer it have proven.

George's Shortform

Walking into a new country where people speak very little English reminds me of the dangers of over-communication.

Going into a restaurant and saying: "Could I get the Turkish coffee and an omelette with a... croissant, oh, and a glass of water, no ice, and, I know this is a bit weird, but I like cinnamon in my Turkish coffee, could you add a bit of cinnamon to it? Oh, actually, could you scratch the omelette and do poached eggs instead?"

Is a recipe for failure: at best, the waiter looks at you confused and you can be ashamed of your poor communication skills and start over.

At worst, you're getting an omelette, a cinnamon bun instead of a croissant, two cups of Turkish coffee with some additional poached eggs, and a room-temperature bottle of water.

Maybe a far-fetched example, but the point is: the more instructions you give and the more flourishes you put into your request, the higher the likelihood that the core of the request gets lost.

If you can point at the items on the menu and hold a number of fingers in the air to indicate the quantity, that's an ideal way to order.

But it's curious that this sort of over-communication never happens in, say, Japan: a place where people know very little to no English and don't mind telling you that what you just said made no sense (or at least they get very visibly embarrassed, more so than their standard over-the-top anxiety, and the fact that it made no sense is instantly obvious to anyone).

It happens in the countries where people kinda-know English and where they consider it rude to admit to not understanding you.

Japanese and Taiwanese clerks, servers, and random pedestrians I ask for directions know about as much English as I know Japanese or Chinese. But we can communicate just fine via grunts, smiles, pointing, shaking of heads, and taking out a phone for Google Translate if the interaction is nearing the 30-second mark with no resolution in sight.

The same archetypes in India and Lebanon speak close-to-fluent English, though; give them 6-12 months in the UK or US plus a penchant for learning and they'd pass for native speakers (I guess it could be argued that many people in India already speak 100% perfect English, just in their own dialect, but for the intents and purposes of this post I'm referring to English as UK/US city English).

Yet it's always in the second kind of country that I find my over-communicative style fails me. Partially because I'm more inclined to use it, partially because people are less inclined to admit I'm not making any sense.

I'm pretty sure this phenomenon is a very good metaphor for, or instantiation of, a principle that applies in many other situations, especially in expert communication. Or rather, in how expert-layman vs expert-expert vs expert-{almost expert} communication works.

George's Shortform

This just boils down to "showing off", though. But that makes little sense considering:

a) Both genders engage in bad practices. I'd expect to see mostly men doing CrossFit, but that doesn't square with the pretty even gender split. "Showing off health" in a way that's harmful to health is not evolutionarily adaptive for women (for whom it arguably pays off, evolutionarily speaking, to live a long time). This is backed up by other high-risk behaviors being mainly a men's thing.

b) Sports are a very bad way to show off, especially sports that come with a high risk of injury and permanent degradation when practiced at their current extreme (e.g. weightlifting, climbing, gymnastics, rugby, hockey). The highest-payoff sports I can think of (in terms of social signaling) are football, American football, basketball, and baseball... since they are popular, and thus the competition is intense and achieving high rank is rewarding. Other than American football, they are all pretty physically safe as far as sports go... when there are risks, they come from other players (e.g. getting a ball to the head), not from over-training or over-performing.

So basically, if it's a genetic misfiring, then I'd expect to see it misfire almost only in men, and this is untrue.

If it's "rational" behavior (as in, rational from the perspective of our primate ancestors), then I'd expect the more dangerous forms of showing off to bring the most social gains, rather than vice-versa.

Granted, I do think the handicap principle can be partially to blame for "starting" the thing, but I think it continues because of higher-level memes that have little to do with social signaling or genetics.
