All of drcode's Comments + Replies

That works for me too... anyone here have enough karma so that we can break this out as a separate top level post? :-)

0Benquo12y
The convention seems to be to post something 1-2 weeks in advance. I can make a post on May 1.

I'd attend a DC meetup, but maybe we should push it out at least a month or so; otherwise it causes confusion about the Baltimore meeting, which has already been fully organized. There's no need to split attendance by holding two meetings at the same time in two places so close to each other.

0Benquo12y
I agree.

I doubt that simply donating money to charity is an efficient way to make the world a better place. There are studies that question, for instance, how much good all the money has done that we've given to developing nations.

It's definitely possible, I think, that creating a great video game might bring more happiness to the world than simply writing a check to a charity.

I am not saying, by the way, that being charitable is a bad idea. However, I do think you need to be strategic for it to be effective. For instance, it might be better to help a struggling ... (read more)

9MBlume12y
Most charities suck. A few don't. Finding the ones that suck least and then pumping money into them is actually a pretty efficient way to make the world a better place.
[anonymous]12y16

Or, if you have a specific skill that could be helpful to a charity organization, offering it may be a better investment than just giving them money.

How does this make sense? By donating your labor you're effectively giving the charity money, since now they don't have to pay someone to do said labor. Since it's rare that your skills and an organization's needs are going to line up, it's almost always going to be more efficient to just make a donation.

If you think the organization is going to waste your donation, you shouldn't offer them labor instead - you should find a better organization.

Any time we want to perform a complex activity, we need to balance our time between evaluating different strategies for performing it and carrying out the mundane steps of the activity itself. If we jump right into the activity without adequate planning (and without periodically reevaluating our plan), we may perform it inefficiently. On the other hand, if we invest too much time in planning, we never actually "do it."

At its simplest level, your idea can be thought of as getting stuck in a local maximum of efficiency, when additional time spent strategizing could uncover a higher one.
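This planning-versus-doing tension maps onto the classic explore/exploit tradeoff. A minimal sketch (the toy objective, the 20% exploration rate, and all names here are my own illustration, not anything from the thread): pure greedy hill-climbing, all "doing," stalls at a local maximum, while occasionally spending a step "re-strategizing" from a random starting point can find the global one.

```python
import random

def f(x):
    # Toy objective on the integers 0..10: a local maximum at x=2 (f=5)
    # and a global maximum at x=8 (f=10).
    return max(5 - abs(x - 2), 10 - 2 * abs(x - 8))

def greedy(x):
    # Pure "doing": repeatedly move to the best neighbor; stop when
    # no neighbor improves on the current point.
    while True:
        best = max((n for n in (x - 1, x + 1) if 0 <= n <= 10), key=f)
        if f(best) <= f(x):
            return x  # stuck: a local maximum
        x = best

def epsilon_greedy(x, steps=200, eps=0.2, seed=0):
    # Mostly "doing", but with probability eps spend a step on
    # "strategizing" -- jumping to a random point -- instead of climbing.
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        if rng.random() < eps:
            x = rng.randint(0, 10)   # explore: try a fresh strategy
        else:
            x = greedy(x)            # exploit: climb from where we are
        if f(x) > f(best):
            best = x
    return best

print(greedy(0))          # stalls at the local maximum, x = 2
print(epsilon_greedy(0))  # exploration eventually reaches x = 8
```

With no exploration the climber settles at x=2 and never leaves; spending roughly a fifth of its steps on random restarts is enough to find x=8 while still exploiting most of the time.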

I interpreted the question PG was asking as, "why is it worth considering Newcomb-like problems?"

(Of course, any philosophical idea is worth considering, but the question is whether this line of reasoning has any practical benefits for developing AI software)

3AlephNeil13y
Ah, I see. I'm not really qualified to give an answer (as I don't have any background in AI) but I'll try anyway: The strategies which succeed in newcomblike problems are in a certain sense 'virtuous'. By expanding the scope of their concern from the immediate indexical 'self' to the 'world as a whole' they realise that in the long run you do better if you're 'honest', and fulfil your 'obligations'. So a decision theory which can deduce and justify the 'right' choices on such problems is desirable. UDT reminds me of Kant's categorical imperative "Act only according to that maxim whereby you can at the same time will that it should become a universal law." I think the way in which moral behaviour gradually emerges out of 'enlightened self-interest' is profoundly relevant to anyone interested in the intersection of ethics and AI.

Hmm... that list of projects worries me a little...

It uncomfortably reminds me of preachers on TV/radio who spend all their air time trying to convert new people, as opposed to answering the question "OK, I'm a Christian, now what should I do?" The fact that they don't address any follow-up questions really hurts their credibility.

Many of these projects seem to address peripheral/marketing issues instead of addressing the central, nitty-gritty technical details required for developing GAI. That worries me a bit.

3SoullessAutomaton13y
I expect that developing AI to the desired standards is not currently a project that can be moved forward by throwing money at it (at least not money at the scale SIAI has to work with). I can't speak for SIAI, but were I personally tasked with "arranging the creation of an AI that will start a positive singularity," my strategy for the next several years at least would center on publicity and recruiting.

Working on papers submitted to peer-reviewed scientific journals is not marketing but research.

If SIAI wants to build credibility, it needs publications in scientific journals. Doing so could help ensure further funding and the development of actual implementations.

I think that it is a very good idea to first formulate and publish the theoretical basis for the work they intend to do, rather than just saying: we need money to develop component X of our friendly AI.

Of course a possible outcome will be that the scientific community will deem the ... (read more)

I'd say my most valuable skill derives from the fact that I had very unusual parents with whom I also moved a lot, so that they had a strong influence on me. Consequently, the environment of my childhood was pretty unique, giving me neural patterns that deviate significantly from those of many other people.

This means I sometimes behave in ways that seem "dumb", but in other instances act in ways that seem unusually intelligent.

I excel in areas where unique neural patterns are rewarded: This includes (naturally) the stock market, some types of pro... (read more)