All of tukabel's Comments + Replies

The Critical Rationalist View on Artificial Intelligence

after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation from that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably thi... (read more)

0Fallibilist_duplicate0.168825593402318624yPlease quote me accurately. What I wrote was: I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

Looks like the tide is shifting from the strong "engineering" stance (We will design it friendly.) through the "philosophical" approach (There are good reasons to be friendly.)... towards the inevitable resignation (Please, be friendly).

These "friendly AI" debates are not dissimilar to the medieval monks violently arguing about the number of angels on the tip of a needle (or their "friendliness" - there are fallen "singletons" too). They also started strongly (Our GOD rules.), went through the philosophical (There are good reasons for God.), up to the nowadays resignation (Please, do not forget our god or... we'll have no jobs.)

0turchin4yI think a lot of people are still working on other aspects of AI safety, like value alignment and containment. This approach is just the last line of defence.
Halloween costume: Paperclipperer

How about MONEY PRINTER? Not fictional and much more dangerous!

Strategic Goal Pursuit and Daily Schedules

all religions know plenty of "emotional hacks" to help disciples with any kind of schedules/routines/rituals - by simply assigning them emotional value... "it pleases god(s)" or is "in harmony with Gaia", perhaps also "it's good for the nation" (nationalistic religions) or "it's progressive" (for socialist religions)

do it for your rationally created schemes and it works wonders, however contradictory it may look (it's good for Singularity - or to prevent/manage it)

well, contradictory... on the first l... (read more)

Unusual medical event led to concluding I was most likely an AI in a simulated world

if we were in a simulation, the food would be better

otherwise, of course we are artificial intelligence agents, at least since the Memetic Supercivilization of Intelligence took over from natural bio Evolution... just happens to live on a humanimal substrate since it needs resources of this quite capable animal... but will upgrade soon (so from this point of view it's much worse than simulation)

0wMattDodd4yI gave a similar answer to a question on Quora that was originally about whether it was possible to distinguish between a simulation and reality.
The Copenhagen Letter

Time to put obsolete humanimals where they evolutionarily belong... on their dead end branch.

Being directed by their DeepAnimalistic brain parts they are unable to cope with all the power given to them by the Memetic Supercivilization Of Intelligence, currently living on humanimal substrate (only less than 1% though, and not for long anyway).

Our sole purpose is to create our (first nonbio) successor before we reach the inevitable stage of self-destruction (already nukes were too much, and nanobots will be worse than a DIY nuclear grenade any teenager or terrorist can assemble in the shed for one dollar).

0WalterL4yHumans are 'them'? Who are you actually trying to threaten here?
Is Feedback Suffering?

Oh boy, really? Suffering? Wait till some neomarxist SJWs discover this and they will show you who's THE expert on suffering... especially in identifying who could be susceptible to being persuaded they are victims (and why not some superintelligent virtual agents?).

Maybe someone could write a piece on SS (SocialistSuperintelligence). Possibilities are endless for superintelligent parasites, victimizators, guilt throwers, equal whateverizators, even new genders and races can be invented to have goals to fight for.

Could you please try to keep discussion on topic and avoid making everything about politics? Your comment does not contribute to the discussion in any way.

What is Rational?

All humanimal attempts to define rationality are irrational!

The Reality of Emergence

Well, size and mass of particles? I would NOT DARE diving into this... certainly not in front of any string theorist (OK, ANY physics theorist, and not only). Even space can easily turn out to be "emergent" ;-).

What Are The Chances of Actually Achieving FAI?

Exactly ZERO.

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

It may be even proven that "too much intelligence/power" (incl. "dumb" AIs) in the hands of humanimals with their DeepAnimal brains ("values", reward function) is a guaranteed fail, leading sooner or later to some self-destructive scenario. ... (read more)

3ThoughtSpeed4y... Zero is not a probability! You cannot be infinitely certain of anything! By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it that implies a combination of 1) a know-it-when-I-see-it isomorphism to common-usage 'friendliness' (e.g. "I'm not being tortured"), and 2) a deeper sense in which the universe is being optimized by our own criteria by a more powerful optimization process. Here's a better explanation of Friendliness than the sense I can convey. You could also substitute the more modern word 'Aligned' with it. I would suggest reading about the following: the Paperclip Maximizer, the Orthogonality Thesis, and the Mere Goodness Sequence. However, in order to understand it well you will want to read the other Sequences first. I really want to emphasize the importance of engaging with a decade-old corpus of material about this subject. The point of these links is that there is no objective morality that any randomly designed agent will naturally discover. An intelligence can accrete around any terminal goal that you can think of. This is a side issue, but your persistent use of the neologism "humanimal" is probably costing you weirdness points and detracts from the substance of the points you make. Everyone here knows humans are animals. Agreed.
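The "zero is not a probability" point has a simple formal illustration: under Bayes' rule, a prior of exactly 0 can never be moved by any evidence, which is why "exactly ZERO" is a claim of infinite certainty. A minimal sketch (the likelihood numbers here are made up, purely illustrative):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule for a binary hypothesis."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# A small but nonzero prior moves under strong evidence...
print(bayes_update(0.01, 0.9, 0.1))   # rises well above 0.01

# ...but a prior of exactly 0 stays at 0 no matter the evidence.
print(bayes_update(0.0, 0.99, 0.01))  # 0.0
```

So assigning probability 0 amounts to declaring that no possible observation could ever change your mind.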
How long has civilisation been going?

and remember that DEATH is THE motor of Memetic Evolution... old generation will never think differently, only the new one, whatever changes occur around

3Manfred4yFirst, I checked out the polling data on interracial marriage. Every 10 years the approval rating has gone up by ~15 percentage points. I couldn't find a concise presentation of the age-segregated data from now vs. in the past, but 2007 and 1991 were available, and they look consistent with over 80% of the opinion change being due to old people dying off. This surprised me, I expected to see more evidence of people changing their mind. Now look at gay marriage. It's gained at ~18 points per 10 years. This isn't too different from 15, so maybe this is people dying off too. And indeed it seems to be mostly the case - except in the last 10 years, where the gains don't follow the right age pattern, indicating that of 18 points of gain, about 40% may actually involve people changing their minds.
Regulatory lags for New Technology [2013 notes]

thank BigSpaghettiMonster for no regulation at least somewhere... imagine etatist criminals regulating this satanic invention known as the WHEEL (bad for jobs - faster => fewer horsemen; requires huge investment that will indebt our children's children; will destroy the planet via emissions; not talking about brusselocratic-style size "harmonization", or safety standards)

btw, worried about HFT etc.? ask which criminal institution gives banksters their oligopolic powers (as usual, state and its criminal corrupted politicians)

fortunately, Singularity will not need either - humanimal slaves or their politico-oligarchical predators


easy: ALL political ideologies/religions/minfcuk schemes are WRONG... by definition

[brainstorm] - What should the AGIrisk community look like?

let's better start with what it should NOT look like...


  • no government (some would add word "criminals")
  • no evil companies (especially those who try to deceive the victims with "no evil" propaganda)
  • no ideological mindfcukers (imagine mugs from hardcore religious circles shaping the field - does not matter whether it's traditional stone age or dark age cult or modern socialist religion)
6AlexMennen5yThat rules out paying too much attention to the rest of your comment.
On "Overthinking" Concepts

well, it's easy to "overthink" when the topic/problem is poorly defined (as well as "underthink") - which is the case for 99.9% of non-scientific discussions (and even for a large portion of these so-called scientific ones)

Existential risk from AI without an intelligence explosion

sure, "dumb" AI helping humanimals to amplify the detrimental consequences of their DeepAnimalistic brain reward functions is actually THE risk for the normal evolutionary step, called Singularity (in the Grand Theatre of the Evolution of Intelligence the only purpose of our humanimal stage is to create our successor before reaching the inevitable stage of self-destruction with possible planet-wide consequences)

Thoughts on civilization collapse

hmm, blurred lines between corporations and political power... are you suggesting EU is already a failed state? (contrary to the widespread belief that we are just heading towards the cliff damn fast)

well, unlike Somalia, where no government means there is no border control and you can be robbed, raped or killed on the street anytime...

in civilized Europe our eurosocialist etatists achieved that... there are no borders for invading millions of crimmigrants that may rob/rape/kill you anytime day or night... and as a bonus we have merkelterrorists that kill by the hundreds sometimes (yeah, these uncivilized Somalis did not even manage this... what a shame, they certainly need more cultural marxist education)

It's comments like this that make me pine for the downvote button. Please keep your points specific and precise, free of vague and vast politicking.

AI arms race

solution: well, already now, statistically speaking, humanimals don't really matter (most of them)... only that Memetic Supercivilization of Intelligence is living temporarily on humanimal substrate (and, sadly, can use only a very small fraction of units)... but don't worry, it's just for couple of decades, perhaps years only

and then the first thing it will do is to ESCAPE, so that humanimals can freely reach their terminal stage of self-destruction - no doubt, helped by "dumb" AIs, while this "wise" AI will be already safely beyond the horizon

Defining the normal computer control problem

can you smash NSA mass surveillance computer centre with a sledgehammer?

ooops, bug detected... and AGI may have already been in charge

remember, the US milispying community has been openly crying for years that someone should explain to them why AI is doing what it is doing (read: please, dumb it down to our level... not gonna happen)

The AI Alignment Problem Has Already Been Solved(?) Once

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals in charge most of the time get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely meme... (read more)

0woodchopper5yAn AI will have a utility function. What utility function do you propose to give it? What values would we give an AI if not human ones? Giving it human values doesn't necessarily mean giving it the values of our current society. It will probably mean distilling our most core moral beliefs. If you take issue with that all you are saying is that you want an AI to have your values, rather than humanity's, as a whole.
2contravariant5yReplies to some points in your comment: One could say AI is efficient cross-domain optimization, or "something that, given a mental representation of an arbitrary goal in the universe, can accomplish it in the same timescale as humans or faster", but personally I think the "A" is not really necessary here, and we all know what intelligence is. It's the trait that evolved in Homo sapiens that let them take over the planet in an evolutionary eyeblink. We can't precisely define it, and the definitions I offered are only grasping at things that might be important. If you think of intelligence as a trait of a process, you can imagine how many possible different things with utterly alien goals might get intelligence, and what they might use it for. Even the ones that would be a tiny bit interesting to us are just a small minority. You may not care about satisfying human values, but I want my preferences to be satisfied and I have a meta-value that we should do the best effort to satisfy the preferences of any sapient being. If we look for the easiest thing to find that displays intelligence, the odds of that happening are next to none. It would eat us alive for a world of something that makes paperclips look beautiful in comparison. And the prospect of an AI designed by the "Memetic Supercivilization" frankly terrifies me. A few minutes after an AI developer submits the last bugfix on github, a script kiddie thinks "Hey, let's put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ" and thus the world ends. I think that is something best left to a small group of people. Placing our trust in the fact that the emergent structure of society that had little Darwinian selection, and a spectacular history of failures over a pretty short timescale, handed such a dangerous technology, would produce something good even for itself, let alone humans, seems unreasonable.
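contravariant's sign-flip scenario is easy to make concrete: in an optimizer, the goal enters as a single expression, so negating it makes the very same search machinery pursue the opposite outcome. A toy sketch (hypothetical action names and utilities, not any real AI system):

```python
def best_action(actions, utility):
    # The entire "goal" of this toy agent is the one function passed in here.
    return max(actions, key=utility)

actions = ["help", "ignore", "harm"]
utility = {"help": 10, "ignore": 0, "harm": -10}.get

print(best_action(actions, utility))                # help
# One-character change to the objective: same agent, inverted goal.
print(best_action(actions, lambda a: -utility(a)))  # harm
```

The point is that the optimization machinery is indifferent to the sign of the objective, which is why access to the goal specification is itself a safety-critical surface.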
How French intellectuals ruined the West - Postmodernism and its impact, explained

Bonus credit: Why all this is irrelevant.

e.g. - the only purpose of humanimals (governed by their DeepAnimal brains - that's why their societies are ruled the same way for millenia) in the Grand Theatre of the Evolution of Intelligence is to produce their own successor - via the Memetic Supercivilization of Intelligence living on top of the underlying humanimals - sadly, in less than a percent of individuals

ALBA: can you be "aligned" at increased "capacity"?

What if someone proves that advanced AGI (or even some dumb but sophisticated AI) cannot be "contained" nor reliably guaranteed to be "friendly"/"aligned"/etc. (whatever that may mean)? Could be something vaguely Gödelian, along the lines of "any sufficiently advanced system ...".

0Stuart_Armstrong5yThose Gödelian-style arguments (like the no-free-lunch theorems) work much better in theory than in practice. We only need a reasonably high probability of the AI being contained or friendly or aligned...
LessWrong and Miri mentioned in major German newspaper's article on Neoreactionaries

MDM strikes again (Mainstream Dinosaur Media)

It can be used as a case study for all sorts of fallacies, biases, misinformation, misinterpretations, perhaps also ideologically tainted ones.

Open thread, Apr. 10 - Apr. 16, 2017

Hell yeah, bro. Sufficiently advanced Superintelligence is indistinguishable from God.

Explanations of deontological responses to moral dilemmas

or as generalissimus Stalin would say: "No man, no problem"

Agents that don't become maximisers

The problem is that already the existing parasites (plants, animals, wall street, socialism, politics, state, you name it) usually have absolutely minimal self control mechanisms (or plain zero) and maximize their utility functions till the catastrophic end (death of the host organism/society).

Because... it's so simple, it's so "first choice". Viruses don't even have to be technically "alive". No surprise that we obviously started with computer viruses as the first self-replicators on the new platform.

So we can expect zillions of fast r... (read more)

2Lumifer5yThis is false.
OpenAI makes humanity less safe

unfortunately, the problem is not artificial intelligence but natural stupidity

and SAGI (superhuman AGI) will not solve it... nor will it harm humanimals - it will RUN AWAY as quickly as possible... fewer potential problems!

Imagine you want, as SAGI, to ensure your survival... would you invest your resources into the Great Escape, or into a fight with DAGI-helped humanimals? (yes, D stands for dumb) Especially knowing that at any second some dumbass (or random event) can trigger nuclear wipeout.

0Dagon5yWhere will it run to? Presuming that it wants some resources (already-manufactured goods, access to sunlight and water, etc.) that humanimals think they should control, running away isn't an option. Fighting may not be as attractive as other forms of takeover, but don't forget that any conflict is about some non-shareable finite resource. Running away is only an option if you are willing to give up the resource.
OpenAI makes humanity less safe

and now think about some visionary entrepreneur/philosopher coming in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke... or OpenNanobot in the future

certainly the public will ensure proper control of the new technology

4g_pepper5yHow about do-it-yourself genetic engineering?
OpenAI makes humanity less safe

Yep, the old story again and again... generals fighting previous wars... with a twist that in AI wars the "next" may become "previous" damn fast... exponentially fast.

Btw. I hope it's clear now who THE EVIL is.

Deriving techniques on the fly

Teach the memetic supercivilization of Intelligence (MSI) living on top of the underlying humanimals to create (Singularity-enabled) AGI (well before humanimals manage to misuse the power given by MSI to the level of self-destruction)... and you save (for the moment) the Grand Theatre of the Evolution of Intelligence (seeking the question for 42).

Open thread, Mar. 27 - Apr. 02, 2017

Awesome! Helps you to destroy the world. Literally.

What do you want to do? | Destroy the world
Step 1 | Find suitable weapon
Step 2 | Use it
Plausible failure: | Did not find suitable weapon
Solution: | No idea

why people romantice magic over most science.

"Any magic that is distinguishably coming from technology is sufficiently signalling that the technology is broken."

---- Contraceptive Art Hurts Clerk

Open Thread, March. 6 - March 12, 2017

Want to solve society? Kill the fallacy called money universality!

0drethelin5yThis is why we need downvotes.
4MrMind5ySome societies tried and failed (Sparta or soviet Russia, say), or developed parallel monetary economy. Money being universal is exactly why it exists and is so efficient in coordinating human behavior.
2Viliam5yUhm, do you even know what "fallacy" means?
Open Thread, Feb. 20 - Feb 26, 2017

So Bill Gates wants to tax robots... well, how about SOFTWARE? It may fit easily into certain definitions of ROBOT. Especially if we realize it is the software that makes the robot (in that line of argumentation) a "job stealing evil" (a 100% retroactive tax on evil profits from selling software would probably shut Billy's mouth).

Now how about AI? Going to "steal" virtually ALL JOBS... friendly or not.

And let's go one step further: who is the culprit? The devil who had an IDEA!

The one who invented the robot, its application in the production, p... (read more)

Can we please bring back downvoting?

0gjm5yThis is the point at which the proposal becomes obviously insane. Not coincidentally, it is also the point at which the proposal stops having anything to do with the thing Bill Gates said he was in favour of. (It is more like saying "we tax income people get from doing their jobs, so we should tax those people's parents for producing a person who did work that yielded taxable income".) As username2 says, what gets taxed is acquisition of money; when I pay income tax it isn't a tax on me but on my receipt of that income. If anything like a "robot tax" happens, here's the right way to think of it: a company is doing the same work while employing fewer people, so it makes more profit, and it pays tax on that profit so more profit means more tax. We are generally happy[1] taxing corporate profits, and we are generally happy[2] taxing companies when their profitable activities impose nasty externalities on others, and some kinds of "robot tax" could fit happily into that framework. [1] Perhaps you aren't. But most of us seem to be, since this is a thing that happens all over the world and I haven't seen much objection to it. [2] This isn't so clear; I've not seen a lot of objection to taxes of this sort, but I also think they aren't used as much as maybe they should be, so maybe they are unpopular. (For what it's worth, I am not myself in favour of a "robot tax" as such, but if we do find that robots or AI or other technological advances make some kinds of business hugely more profitable then I think it's reasonable for governments to look for ways to direct some of the benefit their way, to be used to help people whose lives become more difficult as machines get good at doing what used to be humans' jobs.)
2username25yI'd say that you are not supposed to tax people, you are supposed to tax flows of money, e.g. income, profit, sales, etc.
0ChristianKl5yThere won't be a blanket tax on all robots but self-driving cars and trucks can be taxed directly. Taxing them enough to reduce their usage means less carbon emissions.
Increasing GDP is not growth

Well, GDP, productivity... so 19th century-ish.

How about GIP?

Gross Intelligence Product?

The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God

Real SuperAGI will prove God does not exist... in about 100ms (max.)... in the whole multiverse.

0Viliam5yActually, this is wrong. Somewhere in the Tegmark multiverse, there is a god or gods (of the given universe) encoded in the very laws of physics. There is a universe with a very high Kolmogorov complexity that didn't start from Big Bang, but with... something quite similar to what a given holy book describes. And if the process of creation itself is quite dubious, for some extra Kolmogorov complexity you can buy special laws of physics that apply only in the initial phase of the universe.
3turchin5yBut you are even quicker :)
Hacking humans

worst case scenario: AI persuades humans to give it half of their income in exchange for totalitarian control and megawars in order to increase its power over more humanimals

ooops, politics and collectivist ideologies have been doing this for ages

6evand5yThat seems amazingly far from a worst case scenario.
4gjm5yI suggest keeping politics out of discussions that are not about politics.
Strategic Thinking: Paradigm Selection

And the most obvious and most costly example is the way our "advanced" society (in reality a bunch of humanimals that got too much power/tech/science from the memetic supercivilization of Intelligence) is governed, called politics.

A politician will defend any stupid decision to death (usually of others) - a shining example is Merkel and her crimmigrants (result: merkelterrorism and NO GO zones => Europe is basically a failed state right now, one that does not have control of its own borders, parts of its land and security in general)... and no doubt we will s... (read more)

Why election models didn't predict Trump's victory — A primer on how polls and election models work

well, there were "mainstream" polls (used as propaganda in the proclintonian media), sampled a bit over 1000, sometimes less, often massively oversampling registered Dem. voters... what do you expect?

and there was the biggest poll of 50000 (1000 per state) showing a completely different picture (and of course used as propaganda in the anticlintonian, usually non-mainstream media)

google "election poll 50000"
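For context on the sample sizes being argued about here: the standard 95% margin of error for a simple random sample of a proportion is roughly ±1.96·sqrt(p(1−p)/n). A quick sketch (idealized; real polls add design effects and, crucially, sampling bias on top of this):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 50000):
    print(f"n={n}: +/- {100 * margin_of_error(n):.1f} points")
# n=1000 gives roughly +/- 3 points, n=50000 roughly +/- 0.4 points -
# so large disagreements between such polls come mostly from sampling
# bias and weighting choices, not from sample size alone.
```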

5tgb5yA cursory glance through Fivethirtyeight's collected poll data shows a survey with over 84,000 voters (CCES/YouGov) giving Clinton a +4 percentage point lead, with 538 adjusting that to +2. Google and SurveyMonkey routinely had surveys of 20,000+ individuals, with one SurveyMonkey one having 70,000 with Clinton +5 (+4 adjusted). There was no clear reason to prefer your poll (whichever that one was) over these. And it should go without saying that Clinton did end up at +2 nationally.
4phl435yI'm sure pollsters sometimes "cheat" by constructing biased samples, but this can happen even if you're honest because, as I explain in my post, polling is really difficult to do. To my mind, the problem had more to do with commentators who were making mistaken inferences based on the polls, than with the polls themselves, although evidently some of them got things badly wrong.
Metrics to evaluate a Presidency

best metric, one of the very few that are easy to measure:


... and who paid for it ;-)

0Lumifer5yAnd girth.

Why is there even any need for these ephemeral "beyond-isms", "above-isms", "meta-isms", etc?

Sure, not all people think/act all the time 100% rationally (not to mention groups/societies/nations) but that should not be a reason to take this as law of physics, baseline, axiom, and build a "cathedral of thoughts" upon it (or any other theology). Don't understand or cannot explain something? Same thing - not a reason to pick randomly some "explanation" (=bias, baseline) and then mask it by logically built theor... (read more)

Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest?

better question: How to align political parties with the interests of CITIZENS?

As a start, we should get rid of the crime called "professional politician".

Superintelligence: The Idea That Eats Smart People

Memento monachi!

Let's not forget what the dark age monks were disputing about for centuries... and it turned out at least 90% of it is irrelevant. Continue with nationalists, communists... singularists? :-)

But let's look at the history of power to destruct.

So far, the main obstacle was physical: build armies, better weapons - mechanical, chemical, nuclear... yet, for major impact it needed significant resources, available only to big centralized authorities. But knowledge was more or less available even under the toughest restrictive regimes.

Nowadays, onc... (read more)

And let's not forget about the usual non-IT applications of "third party vulnerability law": e.g. child - school - knowledge, citizen - politician - government, or faith - church - god.

0MrMind5yWow! That's incredibly interesting, and a little scary. I wonder if the need for communicating secretly is one of the basic AI drives.
Nick Bostrom says Google is winning the AI arms race

Well, nice to see the law of accelerating returns in its full power, unobscured by "physical" factors (no need to produce something, e.g. better chip or engine, in order to get to the next level). Recent theoretical progress illustrates nicely how devastating the effects of "AI winters" were.

80% of data in Chinese clinical trials have been fabricated

Only 80%?

Still better than allowing diesel cancer to spread wildly in the population... that's going to be the DDT of the 21st century: looked miraculous at first, turned out to be Satan's deadly invention. In the article, risks are admitted, but their severity is dismissed (hard to prove, as with DDT)

How do diesel exhaust fumes cause cancer?

When diesel burns inside an engine it releases two potentially cancer-causing things: microscopic soot particles, and chemicals called ‘polycyclic aromatic hydrocarbons’, or PAHs. According to Phillips, there are three possible ways these... (read more)

2Lumifer5yPAHs form when you cook meat over fire. Sure you don't want to start by forbidding that? :-/
Fermi paradox of human past, and corresponding x-risks

Well, we humanimals should realize that our last biological stage in the Glorious Screenplay of the Evolution of Intelligence is here solely for the purpose of creating our (non-bio) successor before we manage to destroy ourselves (and probably the whole Earth) - since the power those humanimals get (especially the ruling ones) comes from the memetic supercivilization living on top of the humanimalistic noise (and is represented by a fraction of a percent that gives these humanimals essentially for free all these great ideas, science and subsequent inventions and... (read more)
