How does personality vary across US cities?

However, this likely understates the magnitude of the differences in underlying traits across cities, since people answering the questions probably anchor on the people they know rather than on the national population.

I think this is a major problem. This is mainly based on taking a brief look at this study a while back and being very suspicious of how explicitly it contradicted so many of my models (e.g. South America having lower Extraversion than North America, and East Asia being the least Conscientious region).

“Flinching away from truth” is often about *protecting* the epistemology

The causal chain feels like a post-justification and not what actually goes on in the child's brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least well enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognize and remember the applicability of the technique).

Lesswrong 2016 Survey

I just remembered that I still haven't finished this. I saved my survey response partway through, but I don't think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?

I realize this is my fault, and understand if you don't want to do anything extra to fix it.

Open Thread Feb 16 - Feb 23, 2016

I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected communities, even if those were in the middle of nowhere.

Open Thread Feb 16 - Feb 23, 2016

Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.

[Link] Introducing OpenAI

Yes, robust cooperation is not worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize the values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.

After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?

[Link] Introducing OpenAI

It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted as one, but as a less threatening one). Read the interview with Musk and Altman and note how they talk about sharing data and collaborations. This will bring some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or by a more safety-focused mission statement, would impede their efforts at gathering such a strong coalition.

It's easy to moan about civilizational inadequacy and moodily conclude that the above shows how (as a species) we're so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think about the actual effects for a few minutes:

If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV, and stable reasoning under self-modification and about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm of everyone just doing their own thing and (acausally?) trading with each other.

If we can't quite solve everything we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to secure some minimum level of the cosmic endowment while possibly failing to get a substantial constant fraction of our potential, because the guarantee comes at the cost of important values or freedoms.

Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost perfect superintelligence getting too much relative power, and (2) the risk of bad multipolar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks involving consciousness: either mistakenly endowing algorithms with it and creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well onto the ideological split between left and right.

[Link] Introducing OpenAI

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now: avoiding increased speedups later might be the most important thing, since a one-time additive boost to an exponential process matters far less than a change to its growth rate (e^x vs 2+x, etc.).
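The e^x vs 2+x point can be sketched numerically. This is a toy illustration with made-up curves, not a model of actual research progress: it just shows that a one-time head start washes out against an exponential baseline, while a small persistent change to the growth rate compounds.

```python
import math

# Toy comparison (made-up numbers): a one-time additive head start
# to exponential progress vs. a small persistent increase in the
# growth rate itself. The compounding change eventually dominates.

def head_start(t, boost=2.0):
    """Baseline exponential progress e^t plus a one-time additive boost."""
    return math.exp(t) + boost

def faster_growth(t, rate=1.1):
    """The same curve with a slightly higher exponent: e^(1.1 * t)."""
    return math.exp(rate * t)

for t in (1, 5, 10, 20):
    # Ratio of each boosted curve to the unboosted baseline e^t:
    # the head start's ratio decays toward 1, the faster-growth
    # ratio grows without bound.
    print(t, head_start(t) / math.exp(t), faster_growth(t) / math.exp(t))
```

By t = 20 the head start is invisible (ratio ≈ 1) while the faster-growth curve is ahead by a factor of e^2, which is the sense in which a current one-off speedup may be an acceptable price for preventing later compounding ones.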

Note that if the deep learning/ML field is talent-limited rather than funding-limited (which seems likely, given how much funding it has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). Since some of this connectedness might come through collaboration with MIRI, this could very well advance AI safety research relative to AI research (via tighter integration of the research programs and of choices of architecture and research direction; this seems especially important in how it plays out in the endgame).

In summary, this could actually be really good; it's just too early to tell.

Apptimize -- rationalist startup hiring engineers

Does Java (the good parts) refer to the O'Reilly book of the same name? Or is it some proper subset of the language, like what Crockford describes for JavaScript?
