How are you saving the world? Please, let us know!

Whether it is solving the problem of death or teaching rationality, one of the correlated phenomena of being less wrong is making things better. Given the value many of us place on altruism, this extends beyond just ourselves and into that question of, “How can I make The Rest better?” The rest of my community. The rest of my country. The rest of my species. The rest of my world. To word it in a less other-optimizing way: How can I save the world?

So, tell us how you are saving the world. Not how you want to save the world. Not how you plan to. How you are, actively, saving the world. It doesn’t have to be “I invented a friendly AI,” or “I reformed a nation’s gender politics” or “I perfected a cryonics reviving process.” It can be a simple goal (“I taught a child how to recognize when they use ad hominem” or “I stopped using as much water to shower”) or a simple action as part of a larger plan (such as “I helped with a breakthrough on reducing gas emissions in cars by five percent”).

If we accept this challenge of saving the world, then let us be open and honest with our progress. Let us put our successes on display and our shortcomings as well, so that both can be recognized, recommended, and, if need be, repaired.

If you are not doing anything to save the world, not even something as simple as “learning about global risks” or “encouraging others to research a topic before deciding on it”, then find something. Find a goal and work for it. Find an act that needs doing and do it.

 

Then tell us about it. 

 


Perhaps 'improving the world' or some other phrasing could appease the people who don't like the sound of "saving the world".

Then again, perhaps not.

Good point, though it's more important than just "appeasing" some people.

To think of "saving the world" is to adopt a god's-eye view, which tempts one to think of solutions appropriate only for a god, or tyrant. I'm not against ever looking at things from a god's eye view, but too many things, like globes, maps and "International News" columns tend to make us imagine we are really good at it when we aren't, and in any event, if we do, we should very deliberately climb down and remind ourselves of our true dimensions.

I don't mind the question though, if asked in a tongue-in-cheek way.


But that's so generic that it misses the point. Saving the world is extraordinary. What things are you doing towards extraordinary outcomes?

OP mentions "I used less water in the shower", so is obviously not only looking for extraordinary outcomes. So "saving the world" does indeed sound silly.

I'm not saving the world.

I'm not planning to start saving the world.

I am highly suspicious of people who claim to be saving the world because they tend to grow very intolerant of other people who don't think in the same ways.

I feel you've painted me with an unfair brush, and lots of other nice people too.

I feel you've painted me with an unfair brush, and lots of other nice people too.

First, if you were to read my post more carefully you'd discover that it's all about myself -- my plans, my opinions, and my reasons for my opinions. I'm not painting anyone.

Second, I'm curious what you think of as "fairness" in this context. Why did the brush feel "unfair" to you?

And speaking about words, I feel some tension between "nice people" and "saving the world". Do you think these two expressions go well together?

You made a generalisation about all people who say they want to save the world. When you say your comment was all about you, that's not true, because that was a description of other people, including me. And I have read a fair amount on here by people who state that as one of their goals, and I've never thought they were very intolerant of other people because of this - in fact, I remember EY once writing about how he holds other people to a very different standard than himself.

So generalising negatively about me and lots of other people who have formed a goal of trying to do a lot of good in this world didn't feel justified, and thus felt a little mean. If you didn't mean it to come across that way, that's cool.

Benito, it's not about you.

Nor is it about other people on LW who said they are "trying to do a lot of good in this world". The point is, saving the world isn't something that LW invented or first started practicing. People have been trying to save the world, that is, to do a lot of good in the world, for a really long time and in many, many different ways. The real consequences of that... vary.

People who were burning witches were trying to save the world from Satan's influence. The communists were trying to save the world from the rapacious maw of capitalism. Anti-gay protesters were trying to save the world from moral decay.

To make things explicit, the problems are twofold. First, "saving the world" can and does mean very different things to different people. Second, for a goal as vital and compelling as saving the world, no sacrifice is too great.

I see you didn't mean what it appeared you meant. Nonetheless, your statements seem... to be unnecessary.

The people on this forum were thinking about trying to help as many people as possible, and tend to use the phrase 'save the world' to describe this aim. You said "People who try to 'save the world' generally become intolerant of those who aren't", I said "I don't, that seems an unfair assessment of the people on this forum" and now you seem to be saying "Oh, I wasn't talking about you guys."

You seem to be saying that a lot of other people who use the phrase have done lots of bad things, and therefore... we should be really cautious? Look, trying to maximise good is a fine aim. It's really good! And if you think someone's making a mistake, then help them. Heck, people are listing the things they're doing. I think what everyone's doing to help the world is good, and giving them positive feedback is a kind and helpful thing to do. This isn't an echo chamber; it's trying to help people do good.

If you think that someone is doing a specific thing wrong, then I'm sure they'd be interested in that. This is a site for becoming more in line with reality and attaining our goals most effectively.

and now you seem to be saying "Oh, I wasn't talking about you guys."

I wasn't talking about you guys personally. But do you believe that you are radically different from all those people who were saving the world before?

Look, trying to maximise good is a fine aim.

That really depends on what you consider good and what you are willing to sacrifice for it.

I have a feeling there is some typical-mind thinking going on here.

Let's say Alice appears and says "I want to do good and save the world!" How are you going to respond? Are you going to encourage Alice, give her positive feedback? You are assuming that Alice is like you and shares the important chunks of your value system. But what if she does not? She is entirely sincere; it's just that what she thinks of as "good" does not match your ideas.

The world at large does not necessarily share your values. I feel it's an important point that gets overlooked in the sheltered and cloistered LW world. Some guy's idea of doing good might be slitting the throats of everyone from the enemy tribe.

That also works in reverse -- you say you want to do good, but why should I blindly trust you? Was the Unabomber trying to save the world from the perils of high technology? Would you save the world by bombing an AI lab on the verge of an uncontrolled breakthrough? X-D

You just seem to be talking past us. I think there are a lot of shared values on this forum and furthermore, we're not celebrating the abstract goal but the particular acts, which means we're open to discussion on the specifics of what we're doing. Your talk about the Unabomber is just inappropriate on this thread.


So? The positive context was pretty clear here. Why be antagonistic about it?

The positive context was pretty clear here.

LOL. Echo chambers are full of positive context, aren't they?

But to answer your question, hubris is not usually a good state of mind.


I'm developing a low-level, strongly typed virtual machine suitable for running thought processes with provable program properties, one which produces checkable partial execution traces with cryptographically strong bounds on honest versus fraudulent work. Such a framework meets the needs of a strong AI boxing setup.

If this is for proving useful work in crypto currencies, then awesome!

If it's primarily for AI boxing, then I assume this is useful for a massively parallel 'folding at home' style AI. In which case, is it possible (given sufficient technical knowledge) that any computer running a node could let the AI out of the box, or is each node a black box from the point of view of the person running the node?

checkable partial execution traces with cryptographically strong bounds on honest vs fraudulent work

You may want to talk to these guys: http://www.scipr-lab.org/pub (SNARKs for C)

And to me as well, in due time.


I am aware of their work, being as I am a bitcoin core developer working on the related areas of privacy enhancement and script extensions. What I was alluding to is not pairing cryptography, though, as that is much too slow for this application and ultimately not interesting in the context of self-modifying programs. It is something much simpler, having to do with one-way hashes of the result and then pruning the execution trees in such a way that cannot be cheated within the bounds of execution time, which can be compared with energy usage. In other words, have the program give a concise summary of the execution thought processes which led to the result it gives, in such a way that it can't be hiding things.
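A rough sketch of what such a hash-committed execution trace might look like, purely for illustration: the chained SHA-256 commitment, the toy operations, and all names (TracedExecution, step, audit) are assumptions made for this sketch, not the actual design being described.

    # Illustrative sketch only: a chained hash commitment over execution steps,
    # so a verifier can replay a claimed trace and spot hidden or altered work.
    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class TracedExecution:
        digest: bytes = hashlib.sha256(b"genesis").digest()
        ops: int = 0
        log: list = field(default_factory=list)

        def step(self, opcode: str, operand: int) -> int:
            """Perform one toy operation and fold it into the running commitment."""
            result = operand * 2 if opcode == "dbl" else operand + 1
            record = f"{opcode}:{operand}:{result}".encode()
            self.digest = hashlib.sha256(self.digest + record).digest()
            self.ops += 1
            self.log.append((opcode, operand, result))
            return result

    def audit(log, claimed_digest, claimed_ops) -> bool:
        """Recompute the chain from the claimed log; any hidden or altered step
        changes the final digest, and padding the log inflates the op count."""
        replay = TracedExecution()
        for opcode, operand, _ in log:
            replay.step(opcode, operand)
        return replay.digest == claimed_digest and replay.ops == claimed_ops

    vm = TracedExecution()
    vm.step("dbl", 21)
    vm.step("inc", 41)
    assert audit(vm.log, vm.digest, vm.ops)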

What are you working on?

Interesting. Can you say more about how your work compares to existing VMs, such as the JVM, and what sorts of things you want to prove about executions?


Can you say more about how your work compares to existing VMs, such as the JVM?

Most commercially used VMs are not designed to the level of security required to protect humanity from an UFAI. VM escape bugs are routinely found in various JVM implementations and the complexity and structure of the VM itself precludes existing or theorized analysis tools from being able to verify the implementation to a level of detail sufficient for high-assurance systems.

To be useful in this context, a VM needs to be designed with simplicity and security as the two driving requirements. One would expect, for example, an interpreter for such a VM to occupy no more than 1,000 lines of MISRA C, which could have its security assertions proven with existing tools like Coq. The goal is to get into the core VM the simplest set of opcodes and state transitions which still allows a nearly Turing-complete (total functional, to be specific) program description language, and one which is still capable of concisely representing useful programs.

The other way in which it differs from many VM layers is that is has a non-optional strong typing system (in the spirit of Haskell, not C++).
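To give a concrete flavour of these constraints, here is a toy sketch of a minimal, strictly typed stack VM. It is a few lines of Python rather than MISRA C, and the opcode set, type rules, and names are invented for illustration; they are not the design being described.

    # Toy illustration of a minimal, strictly typed stack VM.
    # Every opcode declares the operand types it consumes and the type it produces,
    # and ill-typed programs are rejected before the operation runs.
    OPCODES = {
        "PUSH_INT": ((), "int"),
        "PUSH_BOOL": ((), "bool"),
        "ADD": (("int", "int"), "int"),
        "AND": (("bool", "bool"), "bool"),
    }

    def run(program):
        stack = []  # list of (type, value) pairs
        for op, *args in program:
            consumed, produced = OPCODES[op]
            popped = [stack.pop() for _ in consumed]
            if [t for t, _ in popped] != list(consumed):
                raise TypeError(f"{op}: expected operands of types {consumed}")
            if op == "PUSH_INT":
                value = int(args[0])
            elif op == "PUSH_BOOL":
                value = bool(args[0])
            elif op == "ADD":
                value = popped[1][1] + popped[0][1]
            else:  # "AND"
                value = popped[1][1] and popped[0][1]
            stack.append((produced, value))
        return stack

    print(run([("PUSH_INT", 2), ("PUSH_INT", 3), ("ADD",)]))   # [('int', 5)]
    # run([("PUSH_BOOL", True), ("PUSH_INT", 3), ("ADD",)])    # raises TypeError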

and what sorts of things you want to prove about executions?

Type checking, mostly. Actually, that aspect of the system has less to do with boxing than with the design of the artificial intelligence itself. You can imagine, for example, a core evolutionary search algorithm which operates over program space by performing type-safe mutation of program elements, or achieves creative combination by substituting type-equal expressions.
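A hypothetical sketch of what such type-safe mutation could look like, with an invented expression representation: subexpressions are only ever replaced by fresh expressions of the same type, so every mutant is well typed by construction.

    # Hypothetical sketch of type-preserving mutation for an evolutionary search.
    # Expressions are tuples: ("lit", type, value) or ("op", type, name, args...).
    import random

    def expr_type(e):
        return e[1]

    def random_expr(t):
        """Generate a random literal of the requested type."""
        if t == "int":
            return ("lit", "int", random.randint(0, 9))
        return ("lit", "bool", random.choice([True, False]))

    def mutate(e):
        """Replace one randomly chosen subexpression with a fresh one of the SAME type."""
        if e[0] == "lit" or random.random() < 0.5:
            return random_expr(expr_type(e))
        _, t, name, *args = e
        i = random.randrange(len(args))
        new_args = list(args)
        new_args[i] = mutate(args[i])
        return ("op", t, name, *new_args)

    parent = ("op", "int", "add", ("lit", "int", 2), ("lit", "int", 3))
    child = mutate(parent)
    assert expr_type(child) == expr_type(parent)  # the type is always preserved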

It is also important to prove some properties, e.g. bounds on running time. Particularly when interacting with other untrusted agents ("sharing minds" -- machine-to-machine communication is likely to be literally uploading memories from one mind-space to another).

Type checking, mostly.

You're building a box to contain a strong AI out of type checking? I suggest that unless your types include "strings that can be displayed to a human without causing him to let the AI out of the box", this is unlikely to succeed.

bounds on running time

Won't you run into Halting Problem issues? Although perhaps that's why it's only nearly Turing-complete. Incidentally, can you quantify this and say how your machine is less powerful than a Turing-complete one but more powerful than regexes?


You're building a box to contain a strong AI out of type checking?

No, where did I say that? A VM layer with strong type checking is just one layer.

I suggest that unless your types include "strings that can be displayed to a human without causing him to let the AI out of the box", this is unlikely to succeed.

I don't want to debate this here because (a) I haven't the time, and (b) prior debates I've had on LW on the subject have been 100% unproductive, but in short: there hasn't been a single convincing argument against the feasibility of boxing, and there is nothing of value to be learnt from the AI box "experiments". So you don't want to give a single isolated human a direct line to a UFAI running with no operational oversight. Well, duh. It's a total strawman.

Won't you run into Halting Problem issues?

No.

Incidentally, can you quantify this and say how your machine is less powerful than a Turing-complete one but more powerful than regexes?

I did: http://en.wikipedia.org/wiki/Total_functional_programming
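Roughly speaking, a total language guarantees termination by restricting recursion, for example to structurally smaller arguments or to an explicitly decreasing bound, which is how it avoids Halting Problem issues while remaining far more expressive than regular expressions. A toy Python illustration of the bounded-recursion variant (the expression format and function name are invented for this sketch):

    # Rough illustration of bounded (total) evaluation: recursion carries an explicit
    # fuel counter that strictly decreases, so evaluation provably halts.
    def eval_bounded(expr, fuel: int):
        """expr is an int or a nested tuple like ("add", a, b) / ("mul", a, b)."""
        if fuel <= 0:
            raise RuntimeError("fuel exhausted: exceeded the declared running-time bound")
        if isinstance(expr, int):
            return expr
        op, left, right = expr
        l = eval_bounded(left, fuel - 1)
        r = eval_bounded(right, fuel - 1)
        return l + r if op == "add" else l * r

    print(eval_bounded(("add", 1, ("mul", 2, 3)), fuel=10))  # 7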

there is nothing of value to be learnt from the AI box "experiments".

I would disagree: even if the experiments do not deter us from running boxed AI, they might allow us to know what arguments to prepare for.

Do we know whether anyone has let the AI out of the box in experiments without the AI using simulation warfare? Because AFAICT there is a considerable amount of processing power & information required in this case, and limiting these factors while precommitting not to let the AI out could minimise the risk.

they might allow us to know what arguments to prepare for

... if the people conducting the experiments ever revealed enough about what happened in them to be useful for anyone else preparing. Which they don't.


even if the experiments do not deter us from running boxed AI, they might allow us to know what arguments to prepare for.

You're missing the point. If you ever find yourself in a situation where a human operator with the ability to let the AI out is listening to arguments from the AI, or even in a situation where the AI is doing unaudited computation, you are doing it wrong and already failed. The entire setup of the AI box game is bogus to begin with, in that it doesn't reflect in any way a realistic oracle AI box construction or breakout scenario.

AFAICT there is a considerable amount of processing power & information required in this case, and limiting these factors while precommitting not to let the AI out could minimise the risk.

Yes, that is basically the pathway towards a realistic AI box. UFAI need not be treated as black boxes: they are performing computation, and that computation is energetically bounded and can be audited.

If the AI is meant to be much smarter than us, what auditing can we usefully do? I wouldn't think much of our chances of getting any useful information from auditing the computation done by an ordinary human brain, never mind a possibly superhuman AI. (I can imagine that auditing might enable to tell it hadn't been corrupted by viruses or something, but surely that's far from the biggest concern.)


That's a sensible-sounding intuition, but it's wrong. So what if the AI is smarter than us? We still have full access to its inner workings, and have the ability to design its internal structures in such a way as to remain transparently sensible to auditors. It turns out you can audit just about anything to whatever required level of detail with probabilistic assurances. You can use tools without general intelligence to perform the auditing task free of influence from the boxed AI, and there are cool tricks from cryptography that make it impossible for the AI to fake computational traces without doing an energetically detectable multiple of the purported work.

To better prime your intuition, think about it this way: to make a recommendation of a course of action which achieves X requires searching the space of actions for one which satisfies condition X. This has some inherent difficulty related to the size of the answer space relative to the size of the search space (as well as the structure of the two spaces, but let's simplify for now). Assuming random distributions, you can say that a course of action requires 40 bits of identifying information, meaning that you need to perform 2^40 operations to find an instance of a solution.

Now let's say that the answer space which contains innocuous-looking, but actually evil, solutions requires 60 bits. In other words, for every evil solution there are 2^20 good solutions. Then an AI which wants to deceive you into doing an action which purports to meet good requirement X, but actually meets the AI's evil hidden requirement Y, requires 2^60 operations.

You can tell whether a machine did 2^40 or 2^60 operations, and if it does the latter it had better have a damn good explanation for why. (And faking that explanation would require further computation... I hope you see where this is going.)
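A back-of-the-envelope restatement of that argument, using the same illustrative bit counts from the comment above:

    # Back-of-the-envelope version of the argument, with the illustrative numbers above.
    honest_bits, deceptive_bits = 40, 60

    honest_ops = 2 ** honest_bits        # ~1.1e12 operations to find some good solution
    deceptive_ops = 2 ** deceptive_bits  # ~1.2e18 operations to find a good-looking evil one

    ratio = deceptive_ops / honest_ops
    print(f"deception costs ~{ratio:.0f}x more work (a factor of 2^{deceptive_bits - honest_bits})")
    # If the audited energy/time corresponds to far more than ~2^40 operations,
    # the agent owes an explanation for the extra work.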

Is there somewhere I can find this stuff written up, and discover exactly what is known to be achievable by auditing (provably, or at least that's the impression I get from what you write)?

I remain gently skeptical despite your very confident tone because, e.g., we don't currently know how to make any sort of superintelligent machine, and I at least would be rather surprised by any theorem along the lines of "given any reasonable class of superintelligent agents, there is a reasonably straightforward way to make a superintelligent agent in this class that can be shown to be innocuous by means of auditing that ordinary human beings are capable of doing reliably".

For the avoidance of doubt, I have no difficulty at all in believing e.g. that there are auditing techniques that will guarantee (or very very nearly guarantee) that a particular agent is performing a particular computational process; I would be only modestly surprised to find that there are techniques that will verify that a particular agent is in some sense optimizing a particular objective function; but the difficulties of keeping a superintelligent AI from doing terrible things are much more complicated and include e.g. tremendous difficulty in working out what it is we really want optimized, and what computational processes we really want carried out.

Perhaps it would be useful to get a bit more concrete. Could you give an example of the sort of thing we might want a superintelligent AI to do for us, that we can't "obviously" make it do safely without the techniques you have in mind, and explain how those techniques enable us to make it do that thing safely?

Designing meta-level ideas for safer AIs, and analysing how the insurance industry models risk.

I organize LW meetups in my country, and have translated the Sequences. Sometimes I correct people on the internet when they are wrong, but I try to focus on those cases where the chance of updating seems highest.

This all probably will not have much impact, but it's as much as I seem to be able to do now. I wish I were stronger, or even better: more strategic. Maybe I'll get there someday.


Being strategic is nothing more than taking literally 5 minutes to examine, with an open mind, the problem of achieving your goal, and translating that into next actions. The trouble is that most people don't bother linking goals to actions at all; they'd apparently rather wander aimlessly through life, hoping to end up at a goal state by random chance.

Fixing this requires two things: (1) an ability to admit you're wrong (meaning what you are doing now, and what you have done in the past, is/was not in fact effective at achieving your goals, and you should be doing something else instead), and (2) an ability to avoid bias in the brainstorming process.

Suggested exercise: drop all preconceived notions of what you should be doing, and think for a literal five minutes -- set a timer on your phone or something -- doing nothing but enumerating possible pathways to achieving your goal. Do not evaluate; simply enumerate with pen and paper. When the buzzer goes off, evaluate and organize the options, then repeat, this time focusing on what tactics are necessary to implement the strategic pathways. After thoroughly brainstorming at that level, make some decisions about which strategies and tactics to follow, for now, and repeat the brainstorming session one more time, this time coming up with next actions.

This can take less than an hour, no matter the size of the goal. For example, it took me only 40 minutes to reduce "permanent and sustainable expansion of human settlements into the cosmos" to a next action related to Bitcoin commodity markets.

I suspect people actually have defined goals but are not specific enough about actions.


Oh I agree. As simple as it sounds, people lack procedures for turning goals into actions. It's a major failing of our educational system.

I save the world each day at work in obvious (and not so obvious) ways. For the sake of space & time, I'll elaborate on the "obvious" bit. I work for a company that provides near-real-time (updated with new data every 5-15 minutes) information on how well paramedics, call takers, and dispatchers do their jobs compared to medically sound protocols. By "protocols", I'm referring to things like the Medical Priority Dispatch System (which has peer-reviewed articles backing it up), those created by the medical director for a given ambulance system/911 call center (unfortunately, not everyone's custom protocols are that great), and comparisons against basic expected requirements for doing a task (e.g. after sticking in a breathing tube, did the paramedics check to make sure the patient started getting oxygen?).

In addition to providing a constantly updated view, we also send e-mail/text message alerts when things look weird (e.g. lots of respiratory related problems all of a sudden) or when things aren't doing so well (e.g. an ambulance took longer than 15 minutes to arrive).
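As a toy illustration of that kind of rule-based alerting (the field names, thresholds, and notification stub below are invented for the sketch, not the actual system):

    # Toy sketch of threshold-based alerting on dispatch records.
    # Field names, thresholds, and the notify() stub are invented for illustration.
    from collections import Counter

    RESPONSE_TIME_LIMIT_MIN = 15
    RESPIRATORY_SPIKE_THRESHOLD = 10  # complaints per refresh window

    def notify(message: str) -> None:
        print("ALERT:", message)  # stand-in for email/SMS delivery

    def check_batch(records: list) -> None:
        """records: dicts from one 5-15 minute refresh window of incident data."""
        for r in records:
            if r["response_minutes"] > RESPONSE_TIME_LIMIT_MIN:
                notify(f"unit {r['unit']} took {r['response_minutes']} min to arrive")
        complaints = Counter(r["complaint"] for r in records)
        if complaints["respiratory"] >= RESPIRATORY_SPIKE_THRESHOLD:
            notify(f"{complaints['respiratory']} respiratory calls in one window")

    check_batch([{"unit": "M12", "response_minutes": 22, "complaint": "cardiac"}])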

Finally, we even deal with the allocation of dollars (or, as LW would put it, "utilons"). Bluntly put, ambulances and doctors require money, and less money = less/worse services. So, when we help people get paid, we increase the number of utilons floating around for providing patient care.

I have built at least 50 such things and am working on improving the map that is used to display the information (which affects all our triggers and alerts). Unfortunately, I don't know of a neat, easy formula for converting that to lives saved.

Since my company has customers in nearly every state in the USA (including Alaska and Hawaii) + several provinces in Canada, I guess that's the geographic scope of my work too.


Trying to decouple my mind, and the minds of those I know, from the 'black magic' of advertisements and social norms designed to funnel money and material through wasteful, useless economic activity rather than productive activity -- by denying advertisements access to my nervous system as much as possible, and by calling out every time something or someone tries to implicitly tie something to a basal social-primate urge it does not actually belong tied to.

Exposing more people to the idea that the endless-progress-versus-apocalypse dichotomy of conceptions of the future is a false dichotomy, that neither will actually occur, and that the broader 'mythology of progress' is destructive and limiting.

Continuing to read up on the work of some very interesting scientists creating synthetic metabolic pathways in tanks that do not exist in nature and thinking about the sorts of research I want to do after grad school.

Exposing more people to the idea that the endless-progress-versus-apocalypse dichotomy of conceptions of the future is a false dichotomy, that neither will actually occur, and that the broader 'mythology of progress' is destructive and limiting.

Can you elaborate on what you mean?

Pretending that:

  • My ideas are unique and useful (because I've assimilated an unusual mix of information)
  • I still have a realistic shot at becoming Elon Musk (I mean, copying his education and work style)
  • It's possible to change the world without becoming a narcissistic sociopath
  • Immortality-seeking won't cause decades of sanity-draining failure experiences

In other words: going to grad school, learning C++ and accounting, and loving someone sane and inspiring


In other words: going to grad school, learning C++ and accounting, and loving someone sane and inspiring

Where is the connection to that?

There are two sentient biological agents who are not yet ready to fully care for themselves, and I provide the means for them to enjoy food, shelter, education, entertainment, cultural enrichment, personal connections, and other life experiences while they develop. As part of a larger organization, I organize matter into a form that my fellow beings find far more useful than the original formation of said matter.

In sum: 2 beings, great positive impact each. Millions of beings, small positive impact each.

Just that.

As part of a larger organization, I organize matter into a form that my fellow beings find far more useful than the original formation of said matter.

Pretty much everyone who can find work for pay can say the same.

Saving the world sounds self-important and hubristic to my ears. Does the world really need saving? But trying to make the world a slightly better place on the margin is a worthy goal for any of us. My contributions are as follows:

  • Producing high quality software that people want to buy.
  • Persuading colleagues not to contribute to an unworthy open source project.
  • Reducing the amount I recycle.
  • Making a modest donation to a worthy political party.
  • Helping a friend with a work problem that has been overwhelming her.

Would you care to elaborate on this one?

Reducing the amount I recycle

Do you mean (1) "Being more efficient, so that less stuff needs recycling or discarding" or (2) "Putting stuff into landfill rather than recycling it"? To me, #1 seems like an obviously good thing but "Reducing the amount I recycle" seems like an odd way to describe it; #2 seems like the obvious reading of what you wrote but it's hard to see how it could be a very good thing (though it might be a way of Making a Statement, if you consider that effort spent recycling things rather than just chucking them out is wasted).

Despite his usual rudeness ("Because he's Salemicus"? Really?), the link gwern provides goes some way to explaining my views on the subject. On many margins we recycle too much. Unfortunately, where I live this is made worse by the government, because the local council charges you more for rubbish collection if you don't recycle as much as they deem proper. However, because I care about the environment and future generations, I am willing to incur that cost in order to help society on the margin.

Note incidentally that this is typical of government intervention; in the textbooks they adjust prices to correct market failures based on some miraculous, a priori knowledge of the "true social costs." In reality, they intervene in purely private transactions and break functioning markets based not on any considered measurement of alleged externalities, but just on the free-standing moralising of self-important do-gooders, no doubt aided by cynical rent-seekers.

Which brings me back nicely to why saying you want to save the world sets off my alarm bells.

By what mechanism do you expect recycling less to help society on the margin?

(Are you thinking of instances where in order to recycle more you would have to do things that harm the environment more than sending the stuff to landfill would have, or that cost more than recycling would have saved -- e.g., where you'd have to wash stuff thoroughly before recycling it? Or is this about message-sending, and if so how does the causal chain go?)

In this instance -- though of course I don't know where you live and in any case haven't investigated deeply -- it seems to me that the claims on both sides are rather dubious. On the one hand we have the excessively moralized Recycle All The Things drive; on the other we have, e.g., your statement that "[government interventions] break functioning markets" without, so far as I can see, any evidence that there was or would have been a well functioning market without the government intervention. It seems fairly clear to me that the optimal amount of recycling is greater than zero, and so far as I know it's generally only been in response to government intervention that that's happened. (Not necessarily coercive intervention; e.g., where I live, local government provides recycling services and makes them easy to use but doesn't punish you for not using them -- though if that means you send a lot more stuff to landfill then you might have to pay more for extra collection.)

(It also seems plausible to me -- but I have only weak, indirect evidence -- that there is a local optimum, better than where most of the US and UK currently sits, where more stuff gets recycled using facilities that cost money to set up but once in place make recycling more effective, and which isn't easily accessible to the free market untouched by government intervention because the benefits are spread out but some individual or corporation would need to make the facilities actually get built.)

He means neither, because he's Salemicus. What he means is roughly http://www.cato-unbound.org/2013/06/18/editors/comments

That's an essay and three responses. I'm not sure how to turn that into an interpretation of what S. wrote. Nor, I'm afraid, does "because he's Salemicus" enlighten me much. My apologies if I'm being dim.

(The position taken by the main essay at that link looks to me a lot like "effort spent recycling things rather than just chucking them out is (often) wasted", and the only way I can see for this to make recycling less be a contribution to making the world a better place, worthy of calling out explicitly in a context like this, is for the sake of Making a Statement. So this looks a lot like my #2. But again I may well be being dim.)

is for the sake of Making a Statement.

Indeed.