Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change."

How often do we miss out on connecting to smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk because the people he knows think the "accelerating change" singularity is "a bit mental."

LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.

Here are some possible alternatives, without trying too hard:


  • The Center for AI Safety
  • The I.J. Good Institute
  • Beneficial Architectures Research
  • A.I. Impacts Research


We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)

And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.

159 comments

So I read this, and my brain started brainstorming. None of the names I came up with were particularly good. But I did happen to produce a short mnemonic for explaining the agenda and the research focus of the Singularity Institute.

A one word acronym that unfolds into a one sentence elevator pitch:

Crisis: Catastrophic Risks In Self Improving Software

  • "So, what do you do?"
  • "We do CRISIS research, that is, we work on figuring out and trying to manage the catastrophic risks that may be inherent to self improving software systems. Consider, for example..."

Lots of fun ways to play around with this term, to make it memorable in conversations.

It has some urgency to it, it's fairly concrete, it's memorable.
It compactly combines the goals of catastrophic risk reduction and self-improving systems research.

Bonus: You practically own this term already.

An incognito Google search gives me no hits for "Catastrophic Risks In Self Improving Software" in quotes. Without quotes, the top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there. I don't think he would mind too much.

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

4betterthanwell12y
I don't entirely disagree, but I do think Catastrophic Risks In Self-Improving Systems can be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do think it would raise fewer questions, and be less confusing, than The Singularity Institute for Artificial Intelligence or The Singularity Institute. In particular, there would be little chance of confusion stemming from familiarity with Kurzweil's singularity from accelerating change. There are lessons to be learned from Scientists are from Mars, the Public is from Earth, and first impressions are certainly important.

That said, this description is less exaggerated than it may seem at first glance. The usage can be qualified in that the technical meanings of these words are established, mutually supportive, and applicable. Looking at the technical meaning of the words, the description is (perhaps surprisingly) accurate:

Catastrophic failure: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system.

Catastrophe theory: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, (...) leading to large and sudden changes of the behaviour of the system.

Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed).

Is the CRISIS mnemonic / acronym overly dramatic? Crisis: from Ancient Greek κρίσις (krisis, "a separating, power of distinguishing, decision, choice, election, judgment, dispute"), from κρίνω (krinō, "pick out, choose, decide, judge"). A crisis is any event that is, or is expected to lead to, an un...
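An illustrative aside, not part of the comment above: the simplest worked instance of those "equilibria appear or disappear" definitions is the saddle-node (fold) normal form, sketched here for concreteness.

```latex
% Fold (saddle-node) normal form: a one-dimensional system with parameter r.
\[
  \dot{x} \;=\; r + x^{2}
\]
% For r < 0 there are two equilibria, x = \pm\sqrt{-r}: one attracting
% (x = -\sqrt{-r}) and one repelling (x = +\sqrt{-r}). At r = 0 they collide
% and annihilate; for r > 0 there are none. An arbitrarily small change in r
% across zero thus produces a large, sudden change in the system's long-run
% behaviour -- a catastrophe in the technical sense used above.
```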
0[anonymous]12y
I don't entirely disagree. I think Catastrophic Risks In Self-Improving Systems could be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do perhaps think it would raise fewer questions, and be less confusing, than The Singularity Institute for Artificial Intelligence.
3Michelle_Z12y
I agree. That doesn't sound bad at all.
9betterthanwell12y
After thinking this over while taking a shower:

The CRISIS Research Institute — Catastrophic Risks In Self-Improving Systems

Or, more akin to the old name: Catastrophic Risk Institute for Self-Improving Systems

Hmm, maybe better suited as a book title than the name of an organization.
5faul_sname12y
It would make an excellent book title, wouldn't it.
1thomblake12y
That's brilliant.
0Epiphany12y
Center for Preventing a C.R.I.S.I.S. A.I.

C.R.I.S.I.S. A.I. could be a new term also.

Upvoted for funny, but probably not a great name for a non-profit.

0[anonymous]12y
Clippy's Bane Institute.

It's worth noting that your current name has advantages too; people who are interested in the accelerating change singularity will naturally run into you guys. These are people, some pretty smart, who are at home with weird ideas and like thinking about the far future. Isn't this how Louie found out about SI?

Maybe instead of changing your name, you could spin out yet another organization (with most of your current crew) to focus on AI risk, and leave the Singularity Institute as it is to sponsor the Singularity Summit and so on. My impression is that SI has a fairly high brand value, so I would think twice before discarding part of that. Additionally, I know of at least one person who assumed the Singularity Summit was all you guys did. So having the summit organized independently of the main AI risk thrust could be good.

3Alex_Altair12y
The spin-off sounds a little appealing to me too, but the problem is that the Summit provides a lot of their revenue.
1John_Maxwell12y
Good point. Maybe this could continue to happen though with sufficiently clever lawyering.
0negamuhia12y
I agree. You should change the name iff your current name-brand is irreparably damaged. Isn't that an important decision procedure for org rebrands? I forget. EDIT: Unless, of course, the brand is already irreparably damaged...in which case this "advice" would be redundant!

Center for AI Safety most accurately describes what you do.

To be honest, the I. J. Good Institute sounds the most prestigious.

Beneficial Architectures Research makes you sound like you're researching earthquake safety or something similar. I don't think you necessarily need to shy away from the word "AI."

AI Impacts Research sounds incomplete, though I think it would sound good with the word "society," "foundation," or "institute" tacked onto either end.

IJ Good Institute would make me think that it was founded by IJ Good.

4Viliam_Bur12y
I would suspect that it means "The Good Institute", something related to either philanthropy or religion, with a waving hand and smiling face the webmaster failed to mark properly as a Wingdings font. :D

I really like Center for AI Safety.

The AI Risk Reduction Center

Center for AI Risk Reduction

Institute for Machine Ethics

Center for Ethics in Artificial Intelligence

And I favor this kind of name change pretty strongly.

"Risk Reduction" is very much in the spirit of "Less Wrong".

5Bugmaster12y
I like "Institute for Machine Ethics", though some people could find the name a bit pretentious.
4Kaj_Sotala12y
Machine Ethics is more associated with narrow AI, though.
0Alex_Altair12y
I think the word "machine" is too reminiscent of robots.
  • Center for Helpful Artificial Optimizer Safety (CHAOS)
  • Center for Slightly Less Probable Extinction
  • Friendly Optimisation Of the Multiverse (FOOM)
  • Yudkowsky's Army
  • The Center for World Domination
  • Pinky and The Brain Institute
  • Cyberdyne Systems

The Center for World Domination

We prefer to think of it as World Optimization.

Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods

4wedrifid12y
I commit to donating $20k to the organisation if they adopt this name! Or $20k worth of labor, whatever they prefer. Actually, make that $70k.

You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.

3thomblake12y
There's already a Cyberdyne making robotic exoskeletons and stuff in Japan.
0nshepperd12y
The Sirius Cybernetics Corporation?
0roll12y
What concerns me is the lack of research into artificial optimizers in general... Artificial optimizers are commonplace already; they are algorithms that find optimal solutions to mathematical models, not optimizers of the real world in the manner that SI is concerned with (correct me if I am wrong). Furthermore, the premise is that such optimizers would 'foom', and I fail to see how foom is not a type of singularity.
0[anonymous]12y
Recently published SI work concerns AI safety. They have not recently published results on AGI, to whatever extent that is separable from safety research, for which I am very grateful.

Common optimization algorithms do apply to mathematical models, but that doesn't limit their real-world use; an implemented optimization algorithm designed to work with a given model can do nifty things if that model roughly captures the structure of a problem domain. Or to put it simply, models model things. SI is openly concerned with exactly that type of optimization, and how it becomes unsafe if enough zealous undergrads with good intentions throw this, that, and their grandmother's hippocampus into a pot until it supposedly does fantastic venture-capital-attracting things. The fact that SI is not writing papers on efficient adaptive particle swarms is good and normal for an organization with their mission statement.

Foom was a metaphorical onomatopoeia for an intelligence explosion, which is indeed a commonly used sense of the term "technological singularity".
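An illustrative aside, not part of the comment above: a minimal, hypothetical sketch of the model-versus-world point, with invented depot names and distances (nothing from SI's publications). The optimizer below only ever scores its model; it does useful real-world work exactly to the extent that the modeled distances track real ones.

```python
# A toy illustration (hypothetical numbers): the optimizer searches a MODEL
# of route cost, never the world itself. If the model roughly tracks
# reality, optimizing the model does useful real-world work.
import random

def model_cost(route, distances):
    """Cost according to the model: sum of modeled leg distances."""
    return sum(distances[(route[i], route[i + 1])]
               for i in range(len(route) - 1))

def hill_climb(cities, distances, steps=2000, seed=0):
    """Generic optimizer: improve the model's score by random swaps."""
    rng = random.Random(seed)
    route = list(cities)
    best = model_cost(route, distances)
    for _ in range(steps):
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
        cost = model_cost(route, distances)
        if cost <= best:
            best = cost                              # keep the improvement
        else:
            route[i], route[j] = route[j], route[i]  # undo the swap
    return route, best

# Hypothetical modeled distances between four depots.
cities = ["A", "B", "C", "D"]
distances = {(a, b): abs(ord(a) - ord(b)) * 10 + 3
             for a in cities for b in cities if a != b}

route, cost = hill_climb(cities, distances)
print("best route under the model:", route, "modeled cost:", cost)
```

Nothing in hill_climb ever touches the world; swap in a model that diverges from reality and the same loop will optimize nonsense just as happily, which is the model-world correspondence worry raised in the reply below.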
0roll12y
Any references? I haven't seen anything that is in any way relevant to the type of optimization that we currently know how to implement. SI is concerned with the notion of a 'utility function', which appears very fuzzy and incoherent - what is it, a mathematical function? What does it take as input and what does it produce as output? The number of paperclips in the universe is given as an example of a 'utility function', but you can't have 'the universe' as the input domain of a mathematical function.

In an AI, the 'utility function' is defined on the model rather than the world, and lacking a 'utility function' defined on the world, the work of ensuring correspondence between the model and the world is not an instrumental sub-goal arising from maximization of the 'utility function' defined on the model. This is a rather complicated, technical issue, and to be honest the SI stance looks indistinguishable from the confusion that would result from an inability to distinguish a function of the model from a property of the world, and the subsequent assumption that correspondence between model and world is an instrumental goal of any utility maximizer. (Furthermore, that sort of confusion would normally be expected as a null hypothesis when evaluating an organization so outside the ordinary criteria of competence.)

edit: also, by the way, it would improve my opinion of this community if, when you think that I am incorrect, you would explain your thought rather than click the downvote button. While you may want to signal to me that "I am wrong" by pressing the vote button, that, without other information, is unlikely to change my view on the technical side of the issue. Keep in mind that one cannot be totally certain of anything, and while this may be a normal discussion forum that happens to be owned by an AI researcher who is being misunderstood due to poor ability to communicate the key concepts he uses, it might also be a support ground for pseudoscientific research, and the norm of substance-less disag...
0Michelle_Z12y
Creative and amusing, at least. :]

The obvious change, if "Singularity" has been co-opted, is the Institute for Artificial Intelligence (but IAI is not a great acronym).

Institute for Artificial Intelligence Safety lets you keep the S, but it's in the wrong spot. Safety Institution for Artificial Intelligence is off-puttingly incorrect.

The Institute for Friendly Artificial Intelligence (pron. eye-fay) is IFAI... maybe?

If you go with the Center for Friendly Artificial Intelligence you get CFAI, sort of parallel to CFAR (if that's what you want).

Oh! If associating with CFAR is okay, then what's really lovely is the Center for Friendly Artificial Intelligence Research, acronym as CFAIR. (You could even get to do cute elevator pitches asking people how they'd program their obviously well-defined "fairness" into an AI.)

Edit: I do agree that "Friendly" is not, on the whole, desirable. I prefer "Risk Reduction" to "Safety", because I think Safety might bring a little bit of the same unsophistication that Friendly would bring.

Center for Friendly Artificial Intelligence Research

Including "Friendly" is good for those that understand that it is being used as a jargon term with a specific meaning. Unfortunately it could give an undesirable impression of unsophisticated to the naive audience (which is the target).

I also strongly object to 'Friendly' being used in the name -- it's a technical term that I think people are very likely to misunderstand.

0RichardHughes12y
Agreed that people are very likely to misunderstand it - however, even the obvious, naive reading still creates a useful approximation of what it is you guys actually do. I would consider that misreading to be a feature, not a flaw, because the layman's reading produces a useful layman's understanding.
6Dorikka12y
The approximation might end up being 'making androids to be friends with people', or some kind of therapy-related research. Seriously. Given that even many people involved with AGI research do not seem to understand that Friendliness is a problem, I don't think that the first impression generated by that word will be favorable. It would be convenient to find some laymen to test on, since our simulations of a layman's understanding may be in error.
0RichardHughes12y
I have no ability to do any actual random selection, but you raise a good point - some focus group testing on laymen would be a good precaution to take before settling on a name.
2daenerys12y
upvoted for CFAIR

I hate CFAIR.

3tgb12y
But than Eliezer and co. could be called CFAIRers!
1gwern12y
As long as they don't pledge themselves or emulated instances of themselves for 10 billion man-years of labor.
0Multiheaded12y
So far I like IFAI best; it's concise and sounds like a logical update of SIAI. "At first they were just excited about all kinds of singularities; now they've decided how best to get to one" is what someone who had only ever heard the name "IFAI (formerly SIAI)" would think.

Paraphrasing, I believe it was said by an SIer that "if uFAI wasn't the most significant and manipulable existential risk, then the SI would be working on something else." If that's true, then shouldn't its name be more generic? Something to do with reducing existential risk...?

I think there are some significant points in favor of a generic name.

  • Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.

  • By attaching yourself directly with reducing existential risk, you bring yourself status by connecting with existing high status causes such as climate change. Moreover, this creates debate with supporters of other causes connected to existential risk -- this gives you acknowledgement and visibility.

  • The people you wish to convince won't be as easily mind-killed by research coming from "The Center for Reducing Existential Risk" or such.

Is it worth switching to a generic name? I'm not sure, but I believe it's worth discussing.

3shokwave12y
I feel like you could get more general by using the "space of mind design" concept.... Like an Institute for Not Giving Immense Optimisation Power to an Arbitrarily Selected Point in Mindspace, but snappier.
-8private_messaging12y

I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.

Almost all the suggestions so far attempt to reflect the idea of safety or friendliness in the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).

Center for Advanced AI Research (CAAIR)

2[anonymous]12y
This name might actually sound scary to people worried about AI risks.
0roll12y
Hmm, what do you think would have happened with that someone if the name had been more attractive and that person had spent more time looking into SI? Do you think that person wouldn't ultimately dismiss it? Many of the premises here seem more far-fetched than the singularity. I know that from our perspective it'd be great to have feedback from such people, but it wastes their time, and it is unclear if that is globally beneficial.

The Center for AI Safety

Like it. What you actually do.

The I.J. Good Institute

Eww. Pretentious and barely relevant. Some guy who wrote a paper in 1965. Whatever. Do it if for some reason you think prestigious-sounding initials will give enough academic credibility to make up for having a lame, irrelevant name. Money and prestige are more important than self-respect.

Beneficial Architectures Research

Architectures? Word abuse! Why not go all the way and throw in "emergent"?

A.I. Impacts Research

Not too bad.

0[anonymous]12y
How is it word abuse? "Architecture" is much more informative than "magic" or "thingy"; it conveys that they investigate how putting together algorithms results in optimization. That differentiates them from GiveWell, The United Nations First Committee, the International Risk Governance Council, The Cato Institute, ICOS, Club of Rome, the Svalbard Global Seed Vault, the Foresight Institute, and most other organizations I can think of that study global economic / political / ecological stability, x-risk reduction, or optimal philanthropy.

Sell the naming rights.

If you could sell it to a prestigious tech firm... "The IBM Institute for AI Safety" actually sounds pretty fantastic.

I think this comment is the first that I couldn't decide whether to upvote or downvote, but definitely didn't want to leave a zero.

0Manfred12y
Don't worry, I'll fix it.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Worse, when you try to tell someone who already mainly associates the idea of the singularity with accelerating change curves about the distinctions between different types of singularity, they can, somewhat justifiably from their perspective, dismiss it as just a bunch of internal doctrinal squabbling among those loony people who obsess over technology curves, squabbling that it's really beneath them to investigate too deeply.

The Center for AI Safety-- best of the bunch. It might be clearer as The Center for Safe AI.

The I.J. Good Institute-- I have no idea what the IJ stands for.

Beneficial Architectures Research-- sounds like an effort to encourage better buildings.

A.I. Impacts Research-- reads like a sentence. It might be better as Research on AI Impacts.

6pjeby12y
Indeed - it better implies that you're actually working towards safe AI, as opposed to just worrying about whether it's going to be safe, or lobbying for OSHA-like safety regulations.
5Jayson_Virissimo12y
Irving John ("Jack"). I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

Which is fine; to everyone else, it's some guy's name, with moderately positive affect. I'd be less in favour of this scheme if the idea of intelligence explosion had first been proposed by noted statistician I J Bad.

2Kaj_Sotala12y
Now I have Johnny C Bad playing in my head. (Well, not really, but it made for a fun comment.)
2Paul Crowley12y
Better than Johnny D Ugly.
1Douglas_Knight12y
Did you not understand that "I.J. Good" is a person's name? (Note that in this thread ciphergoth asserts that everyone recognizes the form as a name, despite your comment which I read as a counterexample.)
5NancyLebovitz12y
At this point, I'm not sure what I was thinking. It's plausible that knowing what the initials meant would be enough to identify the person. I'm pretty sure I was thinking "ok, I. J. Good founded a foundation, but who cares?".
4pjeby12y
Until I read the comment thread, I thought maybe it was facetious and stood for "It's Just Good".
1TheOtherDave12y
I can imagine, upon discovering that the "I.J.Good Institute" is interested in developing stably ethical algorithms, deciding that the name was some sort of pun... that it stood for "Invariant Joint Good" or some such thing.

You are worried that the SIAI name signals a lack of credibility. You should be worried about what its people do. No, it's not the usual complaints about Eliezer. I'm talking about Will Newsome, Stephen Omohundro, and Ben Goertzel.

Will Newsome has apparently gone off the deep end: http://lesswrong.com/lw/ct8/this_post_is_for_sacrificing_my_credibility/6qjg The typical practice in these cases, as I understand it, is to sweep these people under the rug and forget that they had anything to do with the organization. This might not be the most intellectually honest thing to do, but it's more PR-minded than leaving them listed, and more polite than adding them to a hall of shame.

And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?

6wedrifid12y
Goertzel is still there? I'm surprised.
0Halfwit11y
And now there are three: http://singularityhub.com/2013/01/10/exclusive-interview-with-ray-kurzweil-on-future-ai-project-at-google/
0novalis11y
Does Kurzweil have anything to do with the Singularity Institute? Because I don't see him listed as a director or advisor on their site.
0Halfwit11y
He was an adviser. But I see he no longer is. Retracted.

...but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization.

Why not just call it that, then? "AI Risk Reduction Institute".

"Safe" is a wrong word for describing a process of rewriting the universe.

(An old tweet of mine; not directly relevant here.)

I think something about "Machine ethics" sounds best to me. "Machine learning" is essentially statistics with a computational flavor, but it has a much sexier name. You think statistics and you think boring tables; you think "machine learning" and you think The Matrix or Terminator.

Joke suggestions: "Mom's friendly robot institute," "Institute for the development of typesafe wishes" (ht Hofstadter).

3i7712y
Singularity Institute for Machine Ethics. Keep the old brand, add clarification about flavor of singularity.
2ChrisHallquist12y
I like this one a lot. Term that has a clear meaning in the existing literature.
0thomblake12y
But Machine Ethics generally refers to narrow AI - I think it's too vague (but then, "AI" might have the same problem).
1[anonymous]12y
Ah yes, "Paperclip Maximizers..."

I think a name change is a great idea. I can certainly imagine someone being reluctant to associate their name with the "Singularity" idea even if they support what SIAI actually does. I think if I was a famous researcher/donor, I would be a bit reluctant to be strongly associated with the Singularity meme in its current degraded form. Yes, there are some high-status people who know better, but there are many more who don't.

Here is a suggestion: Center for Emerging Technology Safety. This name affiliates with the high-status term "emerging t...

1Plasmon12y
I understand that the original name can be taken as overly techno-optimistic/Kurzweilian. IMHO this name errs on the other side; it sets off Luddite-detecting heuristics.

"Singularity Institue? Oh, Kurzweil!" It's as if he has a virtual trademark on the word. Yeah.

-1private_messaging12y
Come to think of it, the SIAI name worked in favour of my evaluation of SI. I sort of mixed up EY with Kurzweil, and thought that EY had created some character recognition software and whatnot. Kurzweil is pretty low status, but it's not zero. What I see instead is a person who, by the looks of it, likely wouldn't even be able to implement belief propagation with loops in the graph, or at least has never considered what's involved (as evident from the rationality/bayesianism stuff here, the Bayes vs. science stuff, and so on). You know, if I were preaching rationality, I'd make a Bayes belief propagation applet with nodes and lines connecting them, also for demonstrating possible failure modes (and investigating how badly incompleteness of the graph breaks it, as well as demonstrating NP-completeness in certain cases). I can do that in a week or two. edit: actually, perhaps I'll do that sometime. Or actually, I think there are such applications for medical purposes.
1David_Gerard12y
A simple open-source one would be an actually useful thing to show people failure modes and how not to be stupid.
3private_messaging12y
Well, it won't be useful for making a glassy-eyed 'we found truth' cult, because it'd actually kill the confidence, in the Dunning-Kruger way where the more competent are less confident. The guys here haven't even wondered how exactly you 'propagate' when A is evidence for B and B is evidence for C and C is evidence for A (or when you only see a piece of a cycle, or several cycles intersecting). Or when there are unknown nodes. Or what happens outside the nodes that were added based on reachability or importance, or selected to be good for the wallet of the dear leader. Or how badly it breaks if some updates land on the wrong nodes. Or how badly it breaks when you ought to update on something outside the (known) graph but pick the closest-looking something inside it. Or how low the likelihood of correctness gets when there's some likelihood of such errors. Or how difficult it is to ensure sane behaviour on partial graphs. Or how all kinds of sloppiness break the system entirely, making it arrive at spurious very high and very low probabilities. People go into such stuff for immediate rewards - 'now I feel smarter than others' kind of stuff.
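An illustrative aside, not part of the comment above: a minimal sketch of the kind of demo such an applet could run, using a standard loopy-belief-propagation toy on a three-node cycle with invented potentials. In a cycle the same evidence travels around the loop and gets recounted, so the converged beliefs typically come out more extreme than the exact marginals, which is one of the failure modes listed above.

```python
# A minimal sketch (invented potentials, purely illustrative): loopy belief
# propagation on the cycle A - B - C - A, compared against exact marginals
# from brute-force enumeration. In a cycle, evidence gets recounted as it
# loops, so converged BP beliefs tend to be overconfident.
import itertools
import numpy as np

# Pairwise "agreement" potential: neighbouring variables prefer equal values.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])
# Local evidence: A mildly favours state 1; B and C are neutral.
phi = {"A": np.array([1.0, 1.5]),
       "B": np.array([1.0, 1.0]),
       "C": np.array([1.0, 1.0])}
edges = [("A", "B"), ("B", "C"), ("C", "A")]
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def exact_marginal(node):
    """True marginal by summing the joint over all 2^3 assignments."""
    p = np.zeros(2)
    for x in itertools.product([0, 1], repeat=3):
        assign = dict(zip("ABC", x))
        w = np.prod([phi[n][assign[n]] for n in "ABC"])
        w *= np.prod([psi[assign[i], assign[j]] for i, j in edges])
        p[assign[node]] += w
    return p / p.sum()

def loopy_bp(iters=50):
    """Synchronous message passing; m[(i, j)] is the message from i to j."""
    m = {(i, j): np.ones(2) for i in "ABC" for j in neighbors[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            incoming = np.prod(
                [m[(k, i)] for k in neighbors[i] if k != j], axis=0)
            msg = psi.T @ (phi[i] * incoming)
            new[(i, j)] = msg / msg.sum()
        m = new
    beliefs = {}
    for n in "ABC":
        b = phi[n] * np.prod([m[(k, n)] for k in neighbors[n]], axis=0)
        beliefs[n] = b / b.sum()
    return beliefs

bp = loopy_bp()
for n in "ABC":
    print(n, "exact:", np.round(exact_marginal(n), 4),
          "loopy BP:", np.round(bp[n], 4))
```

With these toy numbers the distortion is small (the exact P(A=1) works out to 0.6, and the converged BP belief should land slightly above it) but systematic; stronger couplings widen the gap.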

Semi-serious suggestions:

  • Intelligence Explosion Risk Research Group
  • Foundation for Obviating Catastrophes of Intelligence (FOCI)
  • Foundation for Evaluating and Inhibiting Risks from Intelligence Explosion (FEIRIE)
  • Center for Reducing Intelligence Explosion Risks (CRIER)
  • Society for Eliminating Existential Risks (SEERs) of Intelligence Explosion
  • Center for Understanding and Reducing Existential Risks (CURER)
  • Averting Existential Risks from Intelligence Explosion (AERIE) Research Group (or Society or ...)

'A.I. Impact Institute', although that leads to the unfortunate acronym AIII...

Though it is a remarkably accurate imitation of the reactions of those first hearing about it.

7Risto_Saarelma12y
You might get away with using AI3.

Do we actually have rigorous evidence of a need for name change? It seems that we're seriously considering an expensive and risky move on the basis of mere anecdote.

It's quite likely you can solve the problem of people mis-associating SI with "accelerating change" without having to change names.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.

What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, "artificial intelligence beyond some threshold level would snowball, crea...

AI Impacts Research seems to me the best of the bunch, because it's pretty easy to understand. People who know nothing about Eliezer's work can see it and think, "Oh, duh AI will have an impact, it's worth thinking about that." On the other hand:

  • Center for AI Safety: not bad, but people who don't know Eliezer's work might wonder why we need it (same thing with a name involving "risk")
  • The I.J. Good Institute: Sounds prestigious, but gives no information to someone who doesn't know who I.J. Good is.
  • Beneficial Architectures Research: me...
6JonathanLivengood12y
And gives potentially wrong information to someone who does know who I.J. Good is but doesn't know about his intelligence explosion work.

I actually suspect that the word "Singularity" serves as a way of differentiating you from the huge number of academic institutes to do with AI so I'm not endorsing change necessarily.

However, if you do change, I vote for something to do with the phrase "AI Risk" - your marketing spiel is about reducing risk, and I think your name will attract more donor attention if people can see a purpose rather than a generic name. As such, I vote against "I.J. Good Institute".

I also think "Beneficial Architectures Research" is... (read more)

A.I. Safety Foundation

Center for Existential Risk Reduction

Friendly A.I. Group

A.I. Ethics Group

Institute for A.I. Ethics

Why did the "AI" part get dropped from "SIAI" again?

9VincentYu12y
Zack_M_Davis on this:
9wedrifid12y
So essentially the problem with "SIAI" is the letter "f" in the middle.
4Normal_Anomaly12y
The Singularity Institute was for AI before it was against it! :P

Mandate

"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.

... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswatha's experiences of the Apocalypse every night ...

...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."

No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)

0Multiheaded12y
Haha, we're gonna see a lot more of such comparisons as the community grows.

Does this mean it's too late to suggest "The Rationality Institute for Human Intelligence" for the recent spin-off, considering the original may no longer run parallel to that?

Seriously though, and more to the topic, I like "The Center for AI Safety", not only because it sounds good and is unusually clear as to the intention of the organization, but also because it would apparently, well, run parallel with "The Center for Modern Rationality" (!), which is (I think) the name that was ultimately (tentatively?) picked for the spin-off.

[-][anonymous]12y40

Center for AI Safety sounds excellent actually.

AI Ballistics Lab? You're trying to direct the explosion that's already underway.

Center for General Artificial Intelligence Readiness Research

[-][anonymous]12y40

The Last Organization.

Come to think of it, SI have a bigger problem than the name: getting a cooler logo than these guys.

/abg frevbhf

More than that, many people in SU-affiliated circles use the word "Singularity" by itself to mean Singularity University ("I was at Singularity"), or else next-gen technology; and not any of the three definitions of the Singularity. These are smart, innovative people, but some may not even be familiar with Kurzweil's discussion of the Singularity as such.

I'd suggest using the name change as part of a major publicity campaign, which means you need some special reason for the campaign, such as a large donation (see James Miller's excellent idea).

A suggestion: it may be a bad idea to use the term 'artificial intelligence' in the name without qualifiers, as to serious people in the field:

  • 'artificial intelligence' has a much, much broader meaning than what SI is concerning itself with

  • there is very significant disdain for the commonplace/'science fiction' use of 'artificial intelligence'

I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.

  • "AI Safety" is too vague. If I heard it mentioned, I don't think I'd have a good intuition about what it meant. Also, it gives me a bad impression because I visualize things like parents ordering their children to fasten their seatbelts.
  • "Beneficial Architectures" is too vague.
...

I'll focus on "The Center for AI Safety", since that seems to be the most popular. I think "safety" comes across as a bit juvenile, but I don't know why I have that reaction. And if you say the actual words Artificial Intelligence, "The Center for Artificial Intelligence Safety" it gets to be a mouthful, in my opinion. I think a much better option is "The Center for Safety in Artificial Intelligence", making it CSAI, which is easily pronounced See-Sigh.

4mwengler12y
On the one hand, "The Center for AI Safety" really puts me off. Who would want to associate with a bunch of people who are worried about the safety of something that doesn't even exist yet? Certainly you want to be concerned with Safety, but it should be subsidiary to the more appealing goal of actually getting something interesting to work. On the other hand, if I weren't trying to have positive karma, I would have zero or negative karma, suggesting I am NOT the target demographic for this institute. And if I am not the target demographic, changing the name is a good idea because I like SIAI.
-8private_messaging12y

You could reuse the name of the coming December conference, and go for AI Impacts (no need to add "institute" or "research").

Retaining the meaning of 'intelligence explosion' without the word 'singularity':

[-][anonymous]12y10

Center for AI Ethics Research

Center for Ethical AI

Singularity Institute for Ethical AI

The Good Future Research Center

A wink to the earlier I.J. Good Institute idea, it matches the tone of the current logo while being unconfining in scope.

Institute for Friendly Artificial Intelligence (IFAI).

It would be nice if the name reflected the SI's concern that the dangers come not just from some cunning killer robots escaping a secret government lab, or a Skynet gone amok, or a Frankenstein monster constructed by a mad scientist, but from recursive self-improvement ("intelligence explosion") of an initially innocuous and not-very-smart contraption.

I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive s...

0wedrifid12y
The Singularity Institute (folks) does consider the dangers to be from the "artificial" things. They don't (unless I am very much mistaken) consider a human brain to have the possibility to recursively self-improve. Whole Brain Emulation FOOMing would fall under their scope of concern but that certainly qualifies as "artificial".

I agree that something along the lines of "AI Safety" or "AI Risk Reduction" or "AI Impacts Research" would be good. It is what the organization seems to be primarily about.

As a side-effect, it might deter folks from asking why you're not building AIs, but it might make it harder to actually build an AI.

I'd worry about funding drying up from folks who want you to make AI faster, but I don't know the distribution of reasons for funding.

I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.

[-][anonymous]9y00

Going by the google suggest principle, how about the AI Safety Syndicate (ASS)

0gjm9y
Leaving aside the facts that (1) they already changed their name, (2) they probably don't want to be called "ASS", and (3) that webpage looks as sketchy as all hell... what principle exactly are you referring to?

The "obvious" principle is this: if you start typing something that possible customers might start typing into the Google search box, and one of the first autocomplete suggestions is your name, you win. But if I type "ai safety" into a Google search box, "syndicate" is not one of the suggestions that come up. (Not even if I start typing "syndicate".)

(Perhaps you mean that having a name that begins with "ai safety" is a good idea if people are going to be searching for "ai safety", which is probably true but has nothing to do with Google's search suggestions. And are a lot of people actually going to be searching for "ai safety"?)
[-][anonymous]11y00

The Centre for the Development of Benevolent Goal Architectures

[This comment is no longer endorsed by its author]
[-][anonymous]12y00

Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

For what it...

4JGWeissman12y
That misses the point that SIAI only gets the chance to respond in such a way if the potential supporter actually contacts them and tells them the story. It makes you wonder how many potential supporters they never heard from because the supporter themselves, or someone the supporter asked for advice, dismissed SIAI based on a misunderstanding of what it is about.
[-][anonymous]12y00

Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?

You currently have 290 posts on LessWrong and Zero (0) total Karma.

I don't care about opinion of a bunch that is here on LW.

Others: please do not feed the trolls.

[This comment is no longer endorsed by its author]

Heh. It's a pretty rare organisation that does Research in Artificial Intelligence Risk Reduction.

(Artificial Intelligence Risk Reduction by itself might work.)

0thomblake12y
That name reminds me eerily of RAINN.
[-][anonymous]12y00
[This comment is no longer endorsed by its author]

"They Who Must Not Be Named"? Like it.

[-][anonymous]12y00

Here's a few:

  • Protecting And Promoting Humanity In The Future
  • Society For An Ethical Post-Humanity
  • Studies For Producing A Positive Impact Through Deep Recursion
  • Rationality Institute For Self-Improving Information Technology
[This comment is no longer endorsed by its author]
  • Remedial Investigation [or Instruction] of Safety Kernel for AI [or: 'for AGI'; 'for Friendly AI'; 'for Friendly AGI'; 'for AGI Research'; etc.] (RISK for AI; RISK for Friendly AI)
  • Friendly Architectures Research (FAR)
  • Sapiens Friendly Research (SFR - pronounced 'Safer')
  • Sapiens' Research Foundation (SRF)
  • Sapiens' Extinction [or Existential] Risk Reduction Cooperative [or Conglomerate] (SERRC)
  • Researchers for Sapiens Friendly AI (RSFAI)

While the concise summary clearly associates SI with Good's intelligence explosion, nowhere does it specifically say anything about Kurzweil or accelerating change. If people really are getting confused about what sort of singularity you're thinking about, would it be helpful as a temporary measure to put some kind of one-sentence disclaimer in the first couple paragraphs of the summary? I can understand that maybe this would only further the association between "singularity" and Kurzweil's technology curves, but if you don't want to lose the wor...

Ok.

The Center for AI Safety and Centre for Friendly Artificial Intelligence Research sound the most correct as of now.

If you wanted to aim for a more creative name, then here are some:

Centre for Coding Goodness

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

The Artificial Angels Institute / Centre for Machine Angels - the word "angels" directly conveys goodness and superiority over humans, but due to its Christian origins and other associated imagery, it might be walking a tightrope.

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

Naming your research institute after a pet-dog reference, and it's the non-gender-neutral word that seems like the problem?

2blogospheroid12y
They'll come for the dogs, they'll stay for the AI. :)

Wasn't this discussed before?

Center for Applied Rationality Education?

You're thinking of the CfAR naming. CfAR has been spun out as a separate organisation from SI.

2RomeoStevens12y
ah yes.