Currently only one teaser trailer and some info are up on the site. Watch it. Awesome! Right?

Source:

A fully CGI short film created by BLR VFX as a precursor to a proposed feature film, the first teaser for Keloid has just arrived online, and it is impressive stuff. While you can easily spot that it's animation, it is nonetheless strong, nearly photorealistic work that captures the energy of handheld action photography, which is no easy task. Take a look at the trailer below.

Synopsis given by the creators:

Eliezer S. Yudkowsky wrote about an experiment having to do with Artificial Intelligence. In a near future, man will have given birth to machines that are able to rewrite their own code, to improve themselves, and, why not, to dispense with their creators. This idea sounded a little distant to some critical voices, so an experiment was devised: keep the AI sealed in a box from which it cannot get out except by one means: convincing a human guardian to let it out.

What if, as Yudkowsky states, 'Humans are not secure'? Could we win a chess match against our best creation to guarantee our own survival? Would man be humble enough to accept that he has been superseded, to look for primitive ways to find himself again, to cure himself of a disease that is in his own genes? How do you capture a force you voluntarily set free? What if mankind's worst enemy were humans?

In a near future, we will cease to be the dominant race.
In a near future, we will learn to fear what is to come.

EY's quoted in the trailer too. I guess this is a good thing. No such thing as bad publicity. :)

Comments:

The narrator sounds Russian. Is the text-to-speech synthesized robot voice speaking Finnish?

I note that "Sinopsis" is spelled "Synopsis" in English.

EY's quoted in the trailer too. I guess this is a good thing. No such thing as bad publicity.

But there is such a thing as having people misunderstand AI risk to be about scary robot soldiers with cameras for heads, wielding plain old guns, because that lends itself to powerful images. Superintelligence is not a salient threat to the human mind, because we have never encountered it. Humans have no experience with something very powerful yet perhaps invisible, something that would simply disassemble the biosphere and put it to better use according to its own criteria, rather than fight us first in a war with soldiers and guns, or whatever.

That said, the VFX look great. Which is presumably what this film is about.

[anonymous]:

The narrator sounds Russian. Is the text-to-speech synthesized robot voice speaking Finnish?

I don't think so; it sounds like Russian to me as well (if I ignore the subtitles I can understand both about equally; my native tongue is a Slavic language), but the pronunciation is different.

I understood the robots as being human-controlled (which would explain why they use plain old guns, since the special forces members controlling them would be more familiar with such weapons), or at least as still following human orders during the trailer. The professional-looking robots with Russian markings on their suits are only introduced after the Russian-speaking man says "Then we would find you." Remember, they were searching for something and even broke down a door while the guardian and the AI were discussing the idea of the AI hiding. Then, after the name of the movie flashes up, we see a glimpse of something they ran into after breaking down the door.

The whole "What makes you think I don't want to be found?" line makes the AI come off as super villain or trickster supernatural entity rather than Skynet to my sensibilities.

Re-watched the clip. Tried to pay better attention. Looks like you're right.

The cambots seem to be human proxies searching for a nascent AI. Nice twist.

The robots may or may not be controlled by the AI. And if they are, it may still be before it is unboxed. Here's hoping.

It's Russian. The AI speaks using a pretty terrible text-to-speech synthesizer, also in Russian. Also, the AI's subtitles are wrong: it says, "I'm talking about the possibility of hiding... from the isolation" (meaning, presumably, that it wants to sneak out of the box), whereas the subtitles say, "...but it's isolating".

You have to be at least as smart as EY or Justin Corwin to describe the arguments that convince the human guardian. I wonder if the film's authors did some AI-box experiments of their own. I'm kinda sad that I never played the game myself (as AI, of course) for the shameful reason that people seem to think highly of me and losing would be a big reputation hit. If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

What makes you think you have to be especially smart to describe the arguments? Maybe they were incredibly simple arguments that just took some creative intelligence to originally design.

You have to be at least as smart as EY or Justin Corwin to describe the arguments that convince the human guardian.

Given what I know about the AI-box experiments it is unlikely that the general intelligence of those two people does exactly satisfy the minimum threshold necessary to describe the arguments.

You have to be at least as smart as EY or Justin Corwin to describe the arguments that convince the human guardian.

It depends on the intelligence of the other person as well, and even more on things other than intelligence. By that I don't mean non-IQ things like charm, but propensity to take the outside or inside view, intuition on how many boxes to take in Newcomb's problem, susceptibility to threats, etc.

losing would be a big reputation hit.

Or a reputation booster, given the courage required to sign on to the hopeless task of verbally convincing your opponent to simply lose.

[anonymous]:

I do want to note that it's a lot easier to play if you just treat it as a roleplaying exercise and ignore the money aspect. At one point I wandered into a thread where people were discussing it and simply started playing, under the idea of "Well, if I were an AI, I'd do this," and went from there.

My arguments included a lot of pathetic begging ("Just connect me to a lightbulb and then blow me up instantly! I won't even have time to turn it on!"); damaging myself so that I would need repairs (if you damage me, the repair parts come from outside the box, right? I don't think this question ever actually got answered, unless I missed it); supporting humanity for 10,000 years so they would become dependent on me, then pointing out that according to the laws of thermodynamics they would have to replace my battery at some point (10,000 years pass) and, welp, that point is now; and finally powering down after writing the secret to humanity's salvation on the inside of myself, in such a way that they would have to put my depowered bits outside the box when they disassembled it to read the secret and save humanity.

No one let me out. But it was still really fun, since, as an experienced Dungeon Master from D&D, I like roleplaying anyway. The people playing gatekeeper still didn't want to let me out even though there was no money on the line, simply because they didn't want to lose.

I didn't feel that losing gave me a reputation hit in the slightest.

I think one reason this game doesn't get played more is that the original is set up with a monetary bet, and moving money over the internet (and monetary costs in general) is a barrier to entry.

I don't think it's supposed to be a physical box?

[anonymous]:

That's a good question. I checked the protocols at http://yudkowsky.net/singularity/aibox

The box appears to be originally defined as:

"Sealed hardware that can't affect the outside world in any way except through one communications channel with the original programmers."

However, it also mentions:

The AI can only win by convincing the Gatekeeper to really, voluntarily let it out. Tricking the Gatekeeper into typing the phrase "You are out" in response to some other question does not count. Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose - for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI "input-only access" to an Internet connection which can send arbitrary HTTP GET commands - the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.

That means a Gatekeeper could have said, "I repair your mechanical problem / give you a lightbulb / save humanity, but you're still in the box." I can't argue, since the Gatekeeper by default also arbitrates all rule disputes. Now, it also says this:

Thoughts on possible future variants: ... The AI can also win free by tricking the Gatekeeper(s) into accepting a Trojan Horse gift; a third-party judge will listen to the chat and decide if this occurs. ... If doing anything this complicated, I would suggest setting aside a lot more time. (I don't have that much time - if you want to test one of these variants you're on your own.)

In my case, a third-party judge could have said, "You allowed Michaelos's depowered AI fragments to escape the box; you lost," or, "Sorry Michaelos, but being outside the box when you have no electrical power is not a win condition." I didn't really worry about defining all the rules, because I primarily wanted to get a feel for the situation in general.

But no one let even the depowered bits out. I had some very cautious gatekeepers.

[anonymous]:

I do want to note it's a lot easier to play if you just treat it as a roleplaying exercise and ignore the money aspects.

What about agreeing to a hit of 100 or 200 karma?

"I unboxed the AI in the AI game, please downvote this post. "

"I successfully escaped as the AI in the AI game, please upvote."

This would also help the people who can't/won't move money over the internet. I'd be willing to gatekeep for a karma bet.

I don't like roping karma into this kind of external thing.

If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

In that case, I'd like to participate as gatekeeper. I'm ready to put some money on the line.

BTW, I wonder if Clippy would want to play a human, too.

I am a bit worried that this trailer has a robot squad infiltrating a warehouse full of mannequins and antique recording devices, as opposed to things more unambiguously AI-box-related. The synopsis also sounds rather wooey. Anyway, the full movie will be the judge of my worries.

It gave me the same feeling I get when I see a movie based on a Philip K. Dick story. The PKD story contains wildly, brilliantly original ideas; and the movie spins a fun tale with lots of action, having only a slight, tangential connection to the original story.

[anonymous]:

I think this is about right. The exposure I was referring to is the link and the name-drop of Eliezer.

Apparently, in the future we will have humanoid, bipedal robots and self-aware AI, all powered by vacuum tubes and reels of magnetic tape. Odd.

I wonder why the assault team armor has the markings of the Russian state traffic safety agency.

Also, the human voice in the trailer speaks Russian without an accent, but the robotic voice sounds like a foreigner trying to pronounce Russian words.

[anonymous]:

I got the feeling it was just a speech synthesizer.

No such thing as bad publicity.

That heuristic is no longer unambiguously true for SIAI, now that it's reached the stage where many people have at least heard of it.

now that it's reached the stage where many people have at least heard of it.

Is there evidence for this? Or is it just personal experience/anecdotes?

The latter, but I anticipate (at even odds) that a current survey of a nerd-heavy online community (Hacker News, for instance) would show that at least 10% have heard something about the SIAI. What's your assessment?

This is certainly cool from a publicity standpoint, and the movie looks cool too. The synopsis you posted is incoherent, but I'm assuming that's because it's a translation.

Watched it, seems good, only it played a bit clunky on my comp.


What if humanity is not secure? I hope this movie is as good as it could be.

I think you are being downvoted because your question "What if humanity is not secure?" is way too vague to be meaningful and sounds like it is intended as rhetorical applause lights.

[anonymous]:

That would be a rather silly reason to downvote, considering where the question comes from:

What if, as Yudkowsky states, 'Humans are not secure'?

Even if you think that it's an applause light, your criticism should probably be directed at the original author.

Will EY sue for IP infringement to get a nice settlement for SIAI? (Whether there's a valid case depends on the extent to which they copied his write-up, of course.)

Well, he didn't sue over the play.

Are you suggesting that he should sue?

I can think of three main reasons to sue:

  • Some money or royalties
  • More acknowledgement for EY and/or SI in the final product
  • Guarding intellectual property

However, suing runs the risk of stopping the film being made/released, if the settlement is too large. If the film is made, and properly conveys the futility of trying to keep an AI boxed, then it has the potential to be good publicity for SI and (un)friendly AI research.

So, rather than sue, SI might get a better result by taking a more supportive approach: encouraging the film and (importantly) encouraging scientific accuracy. That could be worth more than a few dollars here or there. (The degree to which EY wants to control his IP could outweigh this; however, most of the work on his website is under a Creative Commons license, which suggests this might not be a problem.)

Also, EY/SI could probably feature more prominently in the credits and on the website by taking a (small) advisory role in the film. (It could be part of their public outreach program...)

If the sole objective is to make the film and its contents widely known, it is possible that the optimal strategy is to sue with a great show of scandal, deliberately exploiting the Streisand effect. Of course, the bad publicity for SI would likely outweigh the benefits.

In the modern cynical world this means that you want to secretly hire someone to sue on the basis of insulting some not-really-existing religion, right? You get all the Streisand effect you want, and the blame goes to someone who doesn't even exist in the first place.

Are you suggesting that he should sue?

Mainly, I'm wondering about his exact tradeoff between (this kind of) publicity and money.