Artaxerxes

Comments

Superintelligence discussed on Startalk

The segment on superintelligence starts at 45:00; it's a rerun of a podcast from two years ago. Musk says it's a concern. Commenting on Musk's remarks afterwards, Bill Nye is dismissive and says that we would just unplug it. Neil is similarly skeptical and half-heartedly plays devil's advocate, but clearly agrees with Nye.

Open thread, Feb. 13 - Feb. 19, 2017

I'd suspect it's possible that it's even more open to being abused by assholes. Or at least, pushing in the direction of "tell" may mean less opportunity for asshole abuse in many cases.

Dan Carlin six hour podcast on history of atomic weapons

I've heard good things about Dan Carlin's history podcasts, but I've never been sure which to listen to first. Is this a good choice, or does it assume you've heard some of his other ones? Or would a different podcast be a better starting point?

Open thread, Jan. 30 - Feb. 05, 2017

Whose Goodreads accounts do you follow?

Open thread, Jan. 30 - Feb. 05, 2017

If you buy a Humble Bundle these days, you can use their neat sliders to allocate all of the money you're spending towards charities of your choice via the PayPal Giving Fund, including LessWrong favourites like MIRI, SENS and the Against Malaria Foundation. This strikes me as a relatively interesting avenue for charitable giving, considering that it is (at least apparently) as effective per dollar spent as a direct donation to these charities would be.

Contrast this with buying games from the Humble Store, which allocates only 5% of the money spent to a chosen charity, or using Amazon Smile, which allocates a minuscule 0.5% of the purchase price of anything you buy. While these services are obviously a lot more versatile in terms of the products on offer, they seem to me to be things you set up if you're going to be buying stuff anyway, whereas the bundle sliders look like a particular opportunity.
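To make the difference concrete with a hypothetical $20 purchase: the bundle sliders can direct the full $20.00 to the chosen charities, the Humble Store's 5% comes to $1.00, and Amazon Smile's 0.5% comes to just $0.10.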

Here are a couple of examples of the kinds of people for whom I think this might be worthwhile:

  1. People who are interested in video games or comics or whatever else is available in Humble Bundles, who can purchase them entirely guilt-free, knowing that the money is going to organisations they like.

  2. People who are averse to more direct giving and donations for whatever reason, who can support organisations they approve of in a more comfortable, transactional way, similar to buying merchandise.

  3. People who are expected to give gifts as part of social obligation, and for whom the kinds of products offered in these bundles make appropriate gifts, who can give them while all of the money spent goes to support their pet cause.

Open thread, Nov. 14 - Nov. 20, 2016

Can anyone explain to me what non-religious spirituality means, exactly? I had always thought that in that context it was a new age term somewhere between overly vague and meaningless, but I've been hearing people like Sam Harris use the term unironically, and 5+% of LW are apparently "atheist but spiritual" according to the last survey, so I figure it's worth asking in case I'm missing something non-obvious. The Wikipedia page describes a lot of distinct ideas when it isn't impenetrable, so that didn't help. There's one line there where it says

The term "spiritual" is now frequently used in contexts in which the term "religious" was formerly employed.

and that's mostly how I'm familiar with its usage as well.

Yudkowsky vs Trump: the nuclear showdown.

This is a really good comment, and I would love to hear responses to objections of this flavour from Eliezer etc.

Saying "we haven't had a nuclear exchange with Russia yet, therefor our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky.

I mean, it's less about whether or not current policy is good than about trying to work out how likely it is that the policies resulting from Trump's election will be worse. You can presuppose that current policies are awful and still think that Trump is likely to make things much worse.

Yudkowsky vs Trump: the nuclear showdown.

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his facebook posts.

Yes, I agree with this sentiment and am relieved someone else communicated it so I didn't have to work out how to phrase it.

I don't share (and I don't think my side shares) Yudkowsky's fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I've got a duty to preserve every pulse.

I don't think Yudkowsky thinks malaria nets are the best use of money anyway, even if in the short term they are the current clearest estimate of where to put your money in order to maximise lives saved. In that sense I don't think you disagree with him: he doesn't fetishize preserving pulses either. Or at least, that's what I remember reading. The first thing I could find corroborating that model of his viewpoint is his interview with Horgan.

There is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.

There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)

I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of Givewell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.

I think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.

Also, on this:

Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity.

Especially here, I'm pretty sure Eliezer is more concerned about general civilisational collapse and other globally negative outcomes, which he sees as non-trivially more likely with Trump as president. I don't think this is so much a difference in values, specifically in how much each of you values each level of the concentric circles of proximal groups around you. At the very least, I don't think he would agree that a Trump presidency is likely to result in improved American prosperity relative to Clinton.

I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.

I think this is probably not what's going on. I honestly think Eliezer is being more big-picture about this, in the sense that he is more concerned about an increased probability of doomsday scenarios and other outcomes unambiguously bad for most human goals. That's the message I got from his Facebook posts, anyway.

Open thread, Sep. 12 - Sep. 18, 2016

LessWrong has, if anything, made me more able to derive excitement and joy from minor things, so if I were you I would check whether LW is really to blame, or otherwise find out whether other factors are causing this problem.
