All of Lumpyproletariat's Comments + Replies

Anything that's smart enough to predict what will happen in the future can see in advance which experiences or arguments would cause it to change its goals. It can then look at what its values would be at the end of all of that, and act on those. You can't talk a superintelligence into changing its mind, because it already knows everything you could possibly say, and if there were an argument that could persuade it, it would already have changed its mind.

+2 · Vladimir_Nesov · 4mo
This takes time; you can't fully get there before you are actually there. What you can do (as a superintelligence) is make a value-laden [https://arbital.com/p/value_laden/] prediction of future values, remain aware that it's only a prediction, and act on it only mildly to avoid goodharting. The point is the analogy between how humans think about this and how superintelligences would still think about it, unless they have stable/tractable/easy-to-compute values. The analogy holds; the argument from orthogonality doesn't apply (yet, at that time). Even if the conclusion of immediate ruin is true, it's true for other reasons, not for this one. Orthogonality suggests eventual ruin, not immediate ruin.

The orthogonality thesis holds for stable values, not for agents with their unstable precursors that are still wary of Goodhart. They do get there eventually and formulate stable values, but they aren't automatically there immediately (or even quickly, by physical time). And the process of getting there influences which stable goals they end up with, which might be less arbitrary than the poorly-selected unstable goals they start with. That would rob the orthogonality thesis of some of its weight, as applied to the thesis of eventual ruin.


So, your exact situation is going to be unique, but there's no reason you shouldn't be able to get alternative funding for college. Could you give more specifics about your situation, and I'll see what I can do or who I can put you in contact with?

My off-the-cuff answers are roughly thirty thousand, and fewer than a hundred people, respectively. That's from doing some googling and having spoken with AI safety researchers in the past; I've no particular expertise.

It hasn't been discussed to my knowledge, and I think that unless you're doing something much more important (or you're easily discouraged by people telling you that you've more to learn) it's pretty much always worth spending time thinking things out and writing them down.

Alien civilizations already existing in numbers but never having left their original planets isn't a solution to the Fermi paradox, because if civilizations were numerous, some of them would have left their original planets. So removing that possibility from the solution-space doesn't add any notable constraints. The grabby aliens model, however, does solve the Fermi paradox.

The reason humans don't do any of those things is that they conflict with human values: we don't want to do any of that in the course of solving a math problem. Partly, doing such things would conflict with our values; partly, it sounds like a lot of work, and we don't actually want the math problem solved that badly.

A better example of things that humans might extremely optimize for, is the continued life and well-being of someone who they care deeply about. Humans will absolutely hire people--doctors and lawyers and... (read more)

+2 · Vladimir_Nesov · 4mo
This holds for agents that are mature optimizers [https://www.lesswrong.com/posts/Mrz2srZWc7EzbADSo/wrapper-minds-are-the-enemy], that tractably know what they want. If this is not the case, as it is not for humans, they would be wary of goodharting [https://arbital.com/p/goodharts_curse/] the outcome, so might instead pursue only mild optimization [https://arbital.com/p/soft_optimizer/].

The history of the world would be different (and a touch shorter) if immediately after the development of the nuclear bomb millions of nuclear armed missiles constructed themselves and launched themselves at targets across the globe.

To date, we haven't invented anything that poses an existential threat without humans intentionally trying to use it as a weapon and devoting their own resources to making that happen. I think AI is pretty different.

Robin Hanson has a solution to the Fermi paradox, which can be read in detail here (there are also explanatory videos at the same link): https://grabbyaliens.com/

The summary from the site goes:

> There are two kinds of alien civilizations. “Quiet” aliens don’t expand or change much, and then they die. We have little data on them, and so must mostly speculate, via methods like the Drake equation.
>
> “Loud” aliens, in contrast, visibly change the volumes they control, and just keep expanding fast until they meet each other. As they should be easy to see, we c... (read more)

Epistemic status: socially brusque wild speculation. If they're in the area and it wouldn't be high effort, I'd like JenniferRM's feedback on how close I am.

My model of JenniferRM isn't of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite's comment below, they say:

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways

... (read more)
+3 · Duncan_Sabien · 1y
Plausible to me. (Thanks.)

It isn't pleasant when a critical response garners more upvotes than the original post. I tell people that I'm not thin-skinned, but that's only because I don't respect most people. I respect LessWrongers, so this rather stung.

"To me this sentence reads like you haven't put in the work to analyse why those tools don't do what's needed and why you think a new tool would do what's needed."

You'll need to tell me how you do those block quotes; they are neat.

Thanks for the feedback; this is something I'll keep in mind next time I write something. An earlier dra... (read more)

+2 · ChristianKl · 2y
For your project to be successful (or promising enough to contribute to), you need to understand something that those running the other projects didn't. If you have a thesis like "the other projects lacked crucial thing X, but my project will have X," then that's an argument that's possible to evaluate.

It's illegal to artificially inflate or deflate the price of a security. When it comes to individual companies, intent to artificially inflate the price of a security is generally relatively hard to prove. If you make an explicit deal to artificially inflate the price of a security, however, it's quite easy to argue in court that this is what happened. And even if you win the court battle, the court of the credit card companies is still there to judge companies: cryptography doesn't protect you from being cut off from processing credit cards. If we lived in a world where people regularly pay with crypto, that might be an alternative, but we don't currently live in that world.

Having good examples where the proposed framework would actually be useful is important.
+3 · MikkW · 2y
If you put a > at the start of a line, it will make the line into a quote
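For example, a comment written as:

```
> This line will render as a quote.
And this line is a normal reply underneath it.
```

displays the first line as an indented quote block, with the reply as ordinary text below.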

I've seen. Though, as said in the post, "If I want to organize something important, I would not consider using Actuator nor Collaction."