Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and how effectively those risks are communicated by LessWrong and the SIAI.

Alexander Kruel asked:

What would you do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?

Ben Goertzel replied:

Unsurprisingly, my answer is that I would donate the $100,000 to the OpenCog project which I co-founded and with which I'm currently heavily involved.  This doesn't mean that I think OpenCog should get 100% of everybody's funding; but given my own state of knowledge, I'm very clearly aware that OpenCog could make great use of $100K for research working toward beneficial AGI and a positive Singularity. If I had $100M rather than $100K to give away, I would have to do more research into which other charities were most deserving, rather than giving it all to OpenCog!

What can one learn from this?

  • The SIAI is not the only option for working towards a positive Singularity.
  • The SIAI should try to cooperate more closely with other AGI projects to potentially have a positive impact.

I'm planning to contact various experts who are aware of risks from AI and ask them the same question.


I don't see how either of your Lessons Learned follows from the Goertzel quotes.

0wedrifid13y
We could possibly say the first point scrapes through on a technicality. It perhaps constitutes an option even if the option may be slightly worse than useless.
-6XiXiDu13y

Has anyone here ever tried to contribute to the OpenCog project?

Because I have.

You know what I learned?

This open source code is missing huge components that are proprietary parts of Ben's Novamente system. So if you're a coder, you can't actually compile it, run it, or do anything else with it. Ben's holding all the key components hostage and refuses to release them until he's paid money. If you'd like to pay someone a second time to open source the code they already wrote, OpenCog is an excellent charity. Hopefully after he gets enough money to actu...

5Mitchell_Porter13y
Are you saying that the existing OpenCog source is actually useless unless you have a paid-for copy of Novamente to augment it, or just that there are functionalities which have already been realized in Novamente which will have to be recreated in open source if they are to become part of what OpenCog can do?

I'm one of the leaders of OpenCog, and I can tell you that these accusations are spurious and bizarre. Regarding installing dependencies and compiling the code, detailed instructions are provided on our wiki. All the major features have been released (as they were ported/cleaned up during 2008 and 2009).

Some interesting features were previously implemented in Novamente but during rushed commercial contracts, in a hacky way that means it's easier to re-implement them now. Sometimes people have difficulties compiling the code, but we help them if they show up on IRC (I don't remember Louie though).

5Louie13y
My comment relates to the state of OpenCog when I downloaded it in November 2009. It's entirely possible that things are much improved since then. I thought it was reasonable to assume that things hadn't changed much, though, since the code looked mostly empty at that time and I didn't sense that there was any active development by anyone who wasn't on the Novamente/OpenCog team as an employee or close team member. There were comments in the code at the time stating that pieces were missing because they hadn't yet been released from Novamente. Hopefully those are gone now. Sorry I didn't join you on IRC; I never noticed you had a channel. I could have sent an email to the list. But again, it looked like I couldn't contribute to OpenCog unless I somehow got hired by OpenCog/Novamente or ingratiated myself with the current team and found a way to become part of the inner circle. I was considering whether that would be a good idea at the time, but figured that emailing the list with "Duuuhhhh... I can't compile it. WTF?" would only frustrate internal developers, get condescending replies from people who had unreleased code that made their versions work, or get requests for funding to help open source the unreleased code. Hopefully things have improved in the last 1.5 years. I would love to support OpenCog. The vision you guys have looks great.
4ferrouswheel13y
Well, we get a lot of the "I can't compile it" emails, and while we are not especially excited to receive these, we usually reply and guide people through the process with minimal condescension. There have been progressive additions to OpenCog from closed source projects, but they've never prevented the core framework from compiling and working in and of itself. Apologies for my tone too. We occasionally get people trolling or trash-talking us without taking any time to understand the project... sometimes they just outright lie, and that's frustrating. Of course, we're not perfect as an OSS project, but we are constantly trying to improve.
4nilg13y
Ah, OK. Thanks for clearing that up. Sorry for my perhaps harsh tone; I didn't imagine your comment would be based on an old/incomplete version of OpenCog. You should have mentioned that in your post, or better yet updated your knowledge before posting! There's been a lot of work since then. You can use it to run a virtual pet under Multiverse (although you need either 2 machines or a virtual box, one with Linux and the other with Windows, because OpenCog isn't completely ported to Windows and Multiverse runs under Windows). It is also used to control the Nao robot in a lab in China. Soon it will be possible to connect it to the Unity3D game engine with a much improved tool kit to code your own bot (because currently the API is really tough to understand and use). Just for playing around with the various components (except MOSES, which is a standalone executable for now) there is a Scheme binding, and there will soon be a Python binding. It's really a lot of work, and except for the HK team, who got a grant to focus entirely on it for the next 2 years, and some students in the BLISS lab in China, we only manage to contribute via loosely related contracts that do not always help advance OpenCog itself (though we're trying our best to direct our efforts toward it). So any help is very welcome! http://wiki.opencog.org/w/Volunteer http://wiki.opencog.org/w/The_Open_Cognition_Project
0ferrouswheel13y
Yeah, you've tried to contribute, huh? Who are you again, and why is there no mention of you in my complete archive of the OpenCog mailing lists?
0nilg13y
Louie, where did you get this nonsense? OpenCog doesn't need any proprietary add-ons and is better and cleaner (and keeps getting better and better) than the Novamente code from which it was seeded. You are either hiding your identity or making up the fact that you've tried to contribute, because I've never heard of you on the email list or IRC.

Any possibility of asking a follow-up question about what he would do with $100M? With all due respect to Ben, there's a good chance he'd overestimate the importance of his own project, so I'd be more interested to see how he thinks other projects compare with each other.

-1XiXiDu13y
Upvoted because someone downvoted you without explaining himself. I think that is sufficiently outweighed by the fact that the same could be said about the SIAI. He said he would have to do more research into it. I really don't want to bother him even more with questions right now. But you can find his e-mail on his homepage.
2benelliott13y
The fact that SIAI is just as likely to be biased is exactly why I want to hear what those outside of it think of it. Fair enough.

What can one learn from this?

  • The SIAI should try to cooperate more closely with other AGI projects to potentially have a positive impact.

This is not something that can be learned from what you have mentioned, particularly if prior observation of Goertzel left you unimpressed. A self-endorsement does not 'teach' you that cooperation with him on AGI would be beneficial.

1XiXiDu13y
What information I learnt from Ben's answer:

  • There is an experimental AGI project doing research towards a positive Singularity.
  • An AGI researcher believes $100K is better spent with that project.

What I concluded:

  • There are other options to work towards a positive Singularity (experimental).
  • The SIAI might benefit from cooperating with them rather than competing.

I'm more impressed than I am with the SIAI right now. At least he is doing something, while most of what the SIAI has achieved is some science-fictional idea called CEV and a handful of papers, most of which are just survey papers and none of which are peer-reviewed, as far as I know. And the responses to this post seem completely biased and in some cases simply motivated by a strong commitment to the SIAI.

Too bad we can't judge Friendly AI charity effectiveness as "easily" as we can judge the effectiveness of some other charities, like those which distribute malaria nets and vaccines.

If one assumes that giving toward solving the Friendly AI problem offers the highest marginal return on investment, which project do you give to? Yudkowsky / SIAI? OpenCog / Goertzel? Gert-Jan Lokhorst? Stan Franklin / Wendell Wallach / Colin Allen?

My money is on SIAI, but I can't justify that with anything quick and easy.

6ferrouswheel13y
As I see it, OpenCog is making practical progress towards an architecture for AGI, whereas SIAI is focused on the theory of Friendly AI. I specifically added "consultation with SIAI" in the latter part of OpenCog's roadmap to try to ensure the highest odds of OpenCog remaining friendly under self-improvement. As far as I'm aware there is no software development going on at SIAI; it's all theoretical and philosophical commentary on decision theory etc. (this might have changed, but I haven't heard anything about them launching an engineering or experimental effort).
3XiXiDu13y
Indeed, that is another reason for me to conclude that the SIAI should seek cooperation with projects that follow an experimental approach.
-8timtyler13y
0Vladimir_M13y
I don't know much about AI specifically, but I do know something about software in general. And I'd say that even if someone had a correct general idea how to build an AGI (an assumption that by itself beggars belief given the current state of the relevant science), developing an actual working implementation with today's software tools and methodologies would be sort of like trying to build a working airplane with Neolithic tools. The way software is currently done is simply too brittle and unscalable to allow for a project of such size and complexity, and nobody really knows when and how (if at all) this state of affairs will be improved. With this in mind, I simply can't take seriously people who propose a few years long roadmap for building an AGI.
5Vladimir_Nesov13y
This is the sort of sentiment that has people predict that AGI will be built in 300 years, because "300 years" is how difficult the problem feels like. There is a lot of uncertainty about what it takes to build an AGI, and it would be wrong to be confident one way or the other just how difficult that's going to be, or what tools are necessary. We understand both airplanes and Neolithic tools, but we don't understand AGI design. Difficulty in basic understanding doesn't straightforwardly translate into the difficulty of solution.
1Vladimir_M13y
That is true, but a project like OpenCog can succeed only if: (1) there exists an AGI program simple enough (in terms of both size and messiness) to be doable with today's software technology, and (2) people running the project have the right idea how to build it. I find both these assumptions improbable, especially the latter, and their conjunction vanishingly unlikely. Perhaps a better analogy would be if someone embarked on a project to find an elementary proof of P != NP or some such problem. We don't know for sure that it's impossible, but given both the apparent difficulty of the problem and the history of the attempts to solve it, such an announcement would be rightfully met with skepticism.
0Vladimir_Nesov13y
You appealed to inadequacy of "today's software tools and methodologies". Now you make a different argument. I didn't say it's probable that solution will be found (given the various difficulties), I said that you can't be sure that it's Neolithic tools in particular that are inadequate.
1Vladimir_M13y
It's hard to find a perfect analogy here, but both analogies I mentioned lend support to my original claim in a similar way. It may be that with the present state of math, one could cite a few established results and use them to construct a simple proof of P != NP, only nobody's figured it out yet. Analogously, it may be that there is a feasible way to take present-day software tools and use them to implement a working AGI. In both cases, we lack the understanding that would be necessary either to achieve the goal or to prove it impossible. However, what insight and practical experience we have strongly suggests that neither thing is doable, leading to conclusion that the present-day software tools likely are inadequate. In addition to this argument, we can also observe that even if such a solution exists, finding it would be a task of enormous difficulty, possibly beyond anyone's practical abilities. This reasoning doesn't lead to the same certainty that we have in problems involving well-understood physics, such as building airplanes, but I do think it's sufficient (when spelled out in full detail) to establish a very high level of certainty nevertheless.
3ferrouswheel13y
Well, if you bothered looking at our/OpenCog's roadmap you'll see it doesn't expect AGI in a "few years". What magical software engineering tools are you after that can't be built with the current tools we have? If nobody attempts to build these then nothing will ever improve - people will just go "oh, that can't be done right now, let's just wait a while until the tools appear that make AGI like snapping lego together". Which is fine if you want to leave the R&D to other people... like us.
1Vladimir_M13y
ferrouswheel: The roadmap on opencog.org has among its milestones: "2019-2021: Full-On Human Level AGI." Well, if I knew, I'd be cashing in on the idea, not discussing it here. In any case, surely you must agree that claiming the ability to develop an AGI within a decade is a very extraordinary claim.
4JaredWigmore13y
As in "extraordinary claims demand extraordinary evidence". A summary of the evidence can be found on Ben's blog. Adding some more info: basically the evidence can be divided into two parts.

1) Evidence that the OpenCog design (or something reasonably similar) would be a successful AGI system when fully implemented and tested. The OpenCog design has been described in considerable detail in various publications (formal or otherwise); see http://opencog.org/research/ for an incomplete list. A lot of other information is available in other papers co-authored by Ben Goertzel, talks/papers from the AGI Conferences (http://agi-conf.org/), and the AGI Summer School (http://agi-school.org/), amongst other places. These resources also include explanations for why various parts of the design would work. They use a mix of different types of arguments (i.e. intuitive arguments, math, empirical results). It doesn't constitute a formal proof that it will work, but it is good evidence.

2) Evidence that the OpenCog design can be implemented and tested within a decade. The design is realistic to achieve with current software/hardware and doesn't require any major new conceptual breakthroughs. Obviously it may take years longer than intended (or even years less); it depends on funding, project efficiency, how well other people solve parts of the problem, and various other things. It's not realistic to estimate the exact number of years at this point, but it seems unlikely that it needs to take more than, say, 20 years, given adequate funding.

By the way, the two-year project mentioned in that blog post is the OpenCog Hong Kong project, which is where ferrouswheel (Joel Pitt) and I are currently working. We have several other people here as well, and various other people working right now (including Nil Geisweiller, who posted before as nilg).
1ferrouswheel13y
Not particularly; people have been claiming a decade to human-level intelligence since the dawn of the AI field, so why should now be any different? ;p And usually people would consider a decade to be more than a "few years", which was sort of my point.
0timtyler13y
Eyeballing my own graph, I give it about a 12% chance of being true. Ambitious, but not that extraordinary. People are usually overoptimistic about the timescales of their own projects; it is typically an attempt to signal optimism and confidence.
0lukeprog13y
We agree on that point, I just didn't have the balls to say it. :)
0[anonymous]13y
The OpenCog Roadmap does say that they will collaborate with SIAI at some point: