Lumpyproletariat

Lumpy is an undergraduate at some state college somewhere in the States. He isn't an interesting person and interesting things seldom happen to him.

Among his skills are such diverse elements as linguistic tomfoolery, procrastination, being terrible with computers yet running Linux anyway, a genial temperament and magnanimous spirit, a fairly swell necktie if he does say so himself, mounting dread, and quiet desperation.

Plays as a wizard in any tabletop or video game where that's an option, regardless of whether it's a *strong* option. Has never failed a Hogwarts sorting test, of any sort or on any platform. (If you were about to say how one can't fail a sorting test . . . one surmises that you didn't make Ravenclaw.) Read The Fellowship, Two Towers, and Return of the King over the course of three sleepless days at age seven; couldn't keep down solid food after, because he'd forgotten to eat. Was really into the MBTI as a tweenager; thought it ridiculous how people said that no personality type was "better" than the others when ENTJ is clearly the most powerful. (Scored INFP, himself, but hey, one out of four isn't so bad. (However, found a better fit in INTP.)) Out of the Disney princesses Lumpy is Mulan--that is, if one is willing to trust BuzzFeed. Which, alas, one is not.

No, but seriously.

Mulan?? 0_o

If, despite this exhaustive list of traits and deeds, your burning question is left unanswered, send a missive in private. Should your quest be noble and intentions pure, it is said that Lumpyproletariat might respond in kind.

Comments

Anything that's smart enough to predict what will happen in the future can see in advance which experiences or arguments would cause it to change its goals. It can then look at what its values would be at the end of all of that, and act on those. You can't talk a superintelligence into changing its mind, because it already knows everything you could possibly say and has already changed its mind if there was an argument that could persuade it.

So, your exact situation is going to be unique, but there's no reason you shouldn't be able to get alternative funding for college. Could you give more specifics about your situation, and I'll see what I can do or who I can put you in contact with?

My off-the-cuff answers are about thirty thousand, and fewer than a hundred people, respectively. That's from doing some googling and having spoken with AI safety researchers in the past; I've no particular expertise.

It hasn't been discussed to my knowledge, and I think that unless you're doing something much more important (or you're easily discouraged by people telling you that you've more to learn) it's pretty much always worth spending time thinking things out and writing them down.

Alien civilizations already existing in numbers but not having left their original planets isn't a solution to the Fermi paradox, because if those civilizations were numerous, some of them would have left their original planets. So removing it from the solution-space doesn't add any notable constraints. But the grabby aliens model does solve the Fermi paradox.

The reason humans don't do any of those things is that they conflict with human values. We don't want to do any of that in the course of solving a math problem. Part of that is that doing such things would conflict with our human values, and the other part is that it sounds like a lot of work and we don't actually want the math problem solved that badly.

A better example of something that humans might extremely optimize for is the continued life and well-being of someone they care deeply about. Humans will absolutely hire people--doctors and lawyers and charlatans who claim psychic foreknowledge--and kill large numbers of people if that seems helpful, and there are people who would tear apart the stars to protect their loved ones if that were both necessary and feasible (which is bad if you inherently value stars, but very good if you inherently value the continued life and well-being of someone's children).

One way of thinking about this is that an AI can wind up with values which seem very silly from our perspective--values that you or I simply wouldn't care very much about--and be just as motivated to pursue those values as we are to pursue our highest values.

But that's anthropomorphizing. A different way to think about it is that Clippy is a program that maximizes the number of paperclips, like a loop in Python or water flowing downhill, and Clippy does not care about anything.
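
To make that analogy concrete, here is a toy sketch (purely illustrative; the actions and numbers are invented) of a bare optimization loop in Python: it scores whatever actions are available and always takes the one that yields the most paperclips, with nothing else anywhere in the program.

```python
# Toy illustration of the "Clippy as a simple program" analogy: a loop
# that always takes whichever available action yields the most paperclips.
# Nothing in here "cares" about anything; it just makes a number bigger.
# (All actions and numbers are made up for illustration.)

def run_clippy(actions, steps=10):
    """Greedily apply, at each step, the action that yields the most paperclips."""
    paperclips = 0
    for _ in range(steps):
        # Score every available action by how many paperclips it would leave us with.
        best_action = max(actions, key=lambda act: act(paperclips))
        paperclips = best_action(paperclips)
    return paperclips

# Hypothetical actions: each takes the current count and returns a new count.
actions = [
    lambda n: n + 1,         # bend one wire by hand
    lambda n: n + 100,       # run a small factory for a while
    lambda n: int(n * 1.5),  # reinvest existing output into more capacity
]

print(run_clippy(actions))  # prints the paperclip count after ten greedy steps
```

The point of the sketch is only that "maximize this number" can be implemented without anything resembling caring; everything else Clippy does falls out of that one rule.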

The history of the world would be different (and a touch shorter) if immediately after the development of the nuclear bomb millions of nuclear armed missiles constructed themselves and launched themselves at targets across the globe.

To date we haven't invented anything that's an existential threat without humans intentionally trying to use it as a weapon and devoting their own resources to making it happen. I think that AI is pretty different.

Robin Hanson has a solution to the Fermi paradox which can be read in detail here (there are also explanatory videos at the same link): https://grabbyaliens.com/

The summary from the site goes: 

There are two kinds of alien civilizations. “Quiet” aliens don’t expand or change much, and then they die. We have little data on them, and so must mostly speculate, via methods like the Drake equation.

“Loud” aliens, in contrast, visibly change the volumes they control, and just keep expanding fast until they meet each other. As they should be easy to see, we can fit theories about loud aliens to our data, and say much about them, as S. Jay Olson has done in 7 related papers (1, 2, 3, 4, 5, 6, 7) since 2015.

Furthermore, we should believe that loud aliens exist, as that’s our most robust explanation for why humans have appeared so early in the history of the universe. While the current date is 13.8 billion years after the Big Bang, the average star will last over five trillion years. And the standard hard-steps model of the origin of advanced life says it is far more likely to appear at the end of the longest planet lifetimes. But if loud aliens will soon fill the universe, and prevent new advanced life from appearing, that early deadline explains human earliness.

“Grabby” aliens is our especially simple model of loud aliens, a model with only 3 free parameters, each of which we can estimate to within a factor of 4 from existing data. That standard hard steps model implies a power law (t/k)^n appearance function, with two free parameters k and n, and the last parameter is the expansion speed s. We estimate:

  • Expansion speed s from fact that we don’t see loud alien volumes in our sky,
  • Power n from the history of major events in the evolution of life on Earth,
  • Constant k by assuming our date is a random sample from their appearance dates.

Using these parameter estimates, we can estimate distributions over their origin times, distances, and when we will meet or see them. While we don’t know the ratio of quiet to loud alien civilizations out there, we need this to be ten thousand to expect even one alien civilization ever in our galaxy. Alas as we are now quiet, our chance to become grabby goes as the inverse of this ratio.

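To make the quoted model slightly more concrete, here is a rough numerical sketch of the (t/k)^n appearance function described above. The parameter values are placeholders chosen only to illustrate the "humans are early" point, not the fitted estimates from the papers.

```python
# Rough sketch of the (t/k)^n appearance function described above.
# The parameter values below are placeholders for illustration only,
# not the estimates fitted in the grabby-aliens papers.

def appearance(t_gyr, k_gyr=5000.0, n=6):
    """Relative number of civilizations expected to have appeared by time t.

    t_gyr : time since the Big Bang, in billions of years
    k_gyr : timescale constant k (placeholder: 5 trillion years = 5000 Gyr)
    n     : number of hard steps (placeholder: 6)
    """
    return (t_gyr / k_gyr) ** n

now = appearance(13.8)      # today, 13.8 billion years after the Big Bang
late = appearance(5000.0)   # near the end of the longest stellar lifetimes

# Absent a deadline, almost all appearances would come far later than ours --
# which is the sense in which humans look "early".
print(f"fraction of appearances that happen by now: {now / late:.3e}")
```

In the model, the deadline imposed by expanding grabby civilizations cuts off the late part of that curve, which is what makes an appearance as early as ours unsurprising.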
