All of Kyre's Comments + Replies

Very nice. This is the cleanest result on cognitive (or rationality) costs in co-operative systems that I've seen. Modal combat seems kind of esoteric compared to, say, iterated prisoners' dilemma tournaments with memory, but it pays off nicely here. It gives you the outcomes of a set of other-modelling agents (without e.g. doing a whole lot of simulation), and the box-operator depth then plugs in as a natural modelling-cost measure.

Did you ever publish any of your modal combat code? (I have a vague recollection that you had some Haskell code.)

Scott Garrabrant:
There is this:

Don't humans have to give up on doing their own science then (at least fundamental physics)?

I guess I can have the FAI make me a safe "real physics box" to play with inside the system; something that emulates what it finds out about real physics.

If unfriendly AI is possible, making a safe physics box seems harder than the rest of my proposal :-) I agree that it's a drawback of the proposal though.

If you failed you'd want to distinguish between (a) rationalism sucking, (b) your rationalism sucking, or (c) EVE already being full of rationalists.

Whether or not success in Eve is relevant outside Eve is debatable, but I think the complexity, politics and intense competition means that it would be hard to find a better online proving ground.

Good advice, but I would go further. Don't use your inbox as a to-do list at all. I maintain a separate to-do list for roughly three reasons.

(1) You can't have your inbox in chronological and priority order. Keeping an inbox and email folders in chronological order is good for searching and keeping track of email conversations.

(2) Possibly just my own psychological quirk, but inbox emails feel like someone waiting for me and getting impatient. I can't seem to get away from my inbox fundamentally representing a communications channel with people on the othe... (read more)

Not just the environment in which you share your goals, but also how you suspect you will react to the responses you get.

When reading through these two scenarios, I can just as easily imagine someone reacting in exactly the opposite way. That is, in the first case, thinking "gosh, I didn't know I had so many supportive friends", "I'd better not let them down", and generally getting a self-reinforcing high when making progress.

Conversely, say phase 1 had failed and got the responses stated above. I can imagine someone thinking "hey ... (read more)

Certainly - when I described the environment, that was a factor I assumed fit the label. Thank you for making it explicit. Yes, this is important and should be known generally.

My five minutes' worth of thoughts.

Metrics that might be useful (on the grounds that in hindsight people would say that they made bad decisions): traffic accident rate, deaths due to smoking, bankruptcy rates, consumer debt levels.

Experiments you could do if you could randomly sample people and get enough of their attention: simple reasoning tests (e.g. confirmation bias), getting people to make some concrete predictions and following them up a year later.

Maybe something measuring people's level of surprise at real vs fake Facebook news (on the grounds that people should be more surprised at fake news)?

Doing theoretical research that ignores practicalities sometimes turns out to be valuable in practice. It can open a door to something you assumed to be impossible, or save a lot of wasted effort on a plan that turns out to have an impossible sub-problem.

A concrete example of the first category might be something like quantum error correcting codes. Prior to that theoretical work, a lot of people thought that quantum computers were not worth pursuing because noise and decoherence would be an insurmountable problem. Quantum fault tolerance theorems did nothi... (read more)

But it might not be a useful line of thinking? Is it possible for the LessWrong/AI risk community to talk about the two universes at the same time? To be concrete, I mean the universe where very formal models are useful for AI development, and the other where the only practical solutions are ad hoc and contextual. Just as AIXI seems to have been discarded for not being naturalistic enough (hoorah), you might want to entertain the idea that software-based decision theories aren't naturalistic enough.

Will second "Good and Real" as worth reading (haven't read any of the others).

Maybe translating AI safety literature into Japanese would be a high-value use of your time ?

Yeah, that would be great indeed. Unfortunately my Japanese is so rudimentary that I can't even explain to my landlord that I need a big piece of cloth to hang it in front of my window (just to name an example). :-( I'm making progress, but getting a handle on Japanese is about as time-consuming as getting a handle on ML, although more mechanical.

That's true, 20 years wouldn't necessarily bring to light a delayed effect.

However the GMO case is interesting because we have in effect a massive scale natural experiment, where hundreds of millions of people on one continent have eaten lots of GMO food while hundreds of millions on another continent have eaten very little, over a period of 10-15 years. There is also a highly motivated group of people who bring to the public attention even the smallest evidence of harm from GMOs.

While I don't rule out a harmful long-term effect, GMOs are a long way down on my list of things to worry about, and dropping further over time.

Not really, because the two groups differ in many attributes. You can't draw any reliable conclusions from that if you don't know individual consumption. If you could draw that conclusion, we could conclude from US bee deaths that GMOs are bad.

But there is also no reason to assume that risk from GMOs would be equally distributed among different GMO foods. Letting plants produce poisons so that they won't get eaten by insects is likely more risky than doing something to improve drought resistance. Our ability to manipulate organisms increases as time goes on. Organisms where multiple genes are added might be more risky than organisms where only a single gene was added. Valid arguments against early GMOs, such as that they spread antibiotic resistance genes, also don't hold against newer GMOs.

Bioengineered pandemics frequently top the LW census as an X-risk concern. Commercial usage of GMOs pays for technology development that produces more capabilities on that front.

Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that König's Lemma applied :-)

If I were to try to make a more serious argument it would go something like this.

Defining identity - whether two entities are 'the same person' - is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states (mind states would probably have... (read more)

So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked.

Because I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't meld with intuition. For example, a person's brain is a complex machine; imagine it were (using some extremely advanced technology) modified seriously while a person was still conscious. So, it's being modified all the time as one learns new information, has new experiences, takes new substances, etc, but let's imagine it was very dramatically modified. So much so that over the course of a few minutes, one person who once had the personality and memories of, say, you, ended up having the rough personality and memories of Barack Obama. Could it really be said that it's still the same identity?

Why is an uploaded mind necessarily linked by an edge to the original mind? If the uploaded mind is less than perfect (and it probably will be; even if it's off by one neuron, one bit, one atom) and you can still link that with an edge to the original mind, what's to say you couldn't link a very, very dodgy 'clone' mind, like for example the mind of a completely different human, via an edge, to the original mind/vertex?

Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense. So you can get away from issues like 'if I clone your mind, but then torture the clone, do you feel it?' Well, if you've modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that me cloning your mind in a far away world and then torturing the clone does not make you experience anything.

If we take "immortality" to mean "infinitely many distinct observer moments that are connected to me through moment-to-moment identity", then yes, by König's Lemma.

(König's Lemma: every infinite, connected graph in which every vertex has finite degree contains an infinite path.)

(edit: hmmm, does many-worlds give you infinite branching into distinct observer moments?)

Can you elaborate on the concept of a connection through "moment-to-moment identity"? Would for example "mind uploading" break such a thing?

Procedural universes seemed to see a real resurgence from around 2014, with e.g. Elite Dangerous, No Man's Sky, and quite a few others that have popped up since.

I love a beautiful procedural world, but I think things will get more interesting when games appear with procedural plot structures that are cohesive and reactive.

Then multiplayer versions will appear that weave all player actions into the plot, and those games will suck people in and never let go.

Artificial storytelling has some promising directions for games, and there may be some reason to think that this can benefit value-aligned AI research -- see "Beyond Adversarial: The Case for Game AI as Storytelling". Also, storytelling may be the secret to creating ethical artificial intelligence, but alas, storytelling is hard.

For 5 minutes suspension versus dreamless deep sleep - almost exactly the same person. For 3 hours dreamless deep sleep I'm not so sure. I think my brain does something to change state while I'm deeply asleep, even if I don't consciously experience or remember anything. Have you ever woken up feeling different about something, or with a solution to a problem you were thinking about as you dropped off? If that's not all due to dreaming, then you must be evolving at least slightly while completely unconscious.

Would a slow cell-by-cell, or thought-by-thought / byte-by-byte, transfer of my mind to another medium work: one at a time, every new neural action potential is received by a parallel processing medium which takes over? I want to say the resulting transfer would be the same consciousness as is typing this, but then what if the same slow process were done to make a copy and not a transfer? Once a consciousness is virtual, is every transfer from one medium or location to another not essentially a copy, and therefore representing a death of the originating version?

... (read more)
I definitely agree that incremental change (which gets stickier with incremental non-destructive duplication) is a sticky point. What I find the most problematic to my thesis is a process where every new datum is saved on a new medium, rather than the traditionally-cited cell-by-cell scenario. It's problematic, but nothing in it convinces me to step up to Mr Bowie-Tesla's machine under any circumstances. Would you? How about if instead of a drowning pool there was a team of South America's most skilled private and public sector torture experts, who could keep the meat that falls through alive and attentive for decades? Whatever the other implications, the very eyes seeing these words would be the ones pierced by needles. I don't care if the copy gets techno-heaven / infinite utility.

Your thought experiment doesn't really hit the point at issue for me. My answer is always "I want to stay where I am". For silicon to choose meat is for the silicon to cease to exist; for meat to choose silicon is for the meat to cease to exist. I only value the meat right now because that is where I am right now. My only concern is for ME, that is, the one you are talking to, to continue existing. Talk to a being that was copied from me a split second ago and that guy will throw me under the bus just as quickly (allowing for some altruistic puzzles where I do allow that I might care slightly more about him than a stranger, but mostly because I know the guy and he's alright and I can truly empathize with what he must be going through - i.e. if I'm dying tomorrow anyway and he gets a long happy life - but I may do the same for a stranger). The scenario is simply Russian roulette if you won't accept my "I want to stay put" answer.

Shit, if I came to realize that I was a freshly-minted silicon copy living in a non-maleficent digital playground I would be eternally grateful to Omega, my new God whether It likes it or not, and that meat shmuck who chose to drown his monkey ass just before he reali

Not sure if it's a scientific or engineering achievement, but this Nature letter stuck in my mind:

An aqueous, polymer-based redox-flow battery using non-corrosive, safe, and low-cost materials

Neat! I might leave it here in the comments.

Oh, I think I see what you mean. No matter how many or how detailed the simulations you run, if your purpose is to learn something from watching them, then ultimately you are limited by your own ability to observe and process what you see.

Whoever is simulating you only has to run the simulations that you launch to the level of fidelity such that you can't tell if they've taken shortcuts. The deeper the nested simulation people are, the harder it is for you to pay attention to them all, and the coarser their simulations can be.

If you are running simulations... (read more)

Exactly - and you expressed it better than I could.

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

You're right - branch (2) should be "we don't keep running more than one". We can launch as many as we like.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation,

... (read more)
My thought was that if a simulation that centered around a single individual had a simulation running within it, the simulation would only need to be convincing enough to appear real to that one person. Even if the nested simulation runs a third level simulation within it, or if the one individual runs two simulations, aren't you still basically exploring the idea space of that one individual?

That is, me running a simulation and experiencing it through virtual reality is limited in cognitive/sensory scope and fidelity to the qualia that I can experience and the mental processes that I can cope with... which may still be very impressive from my point of view, but the computational power required to present the simulation can't be much more complex than the computational power required to render my brain states in the base simulation. I may simulate a universe with very different rules, but these rules are by definition consistent with a full rendering of my concept space; I may experience new sensory inputs (if I use VR), but I won't be experiencing new senses... and what I experience through VR replaces, rather than adds to, what I would have experienced in the base simulation.

Even in the worst case scenario that I build 1000+ simulations, they only have to run for the time that I check on them. The more time I spend programming them and checking that they are rendering what they should, the less time I have to do additional simulations. This seems at worst an arithmetic progression.

Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory, and the fact that if it worked, I would never know, I'm not sure that that is a signi

That's the unbounded computation case.

It seems like there is a lot of room between "one simulation" and "unbounded computational resources"

Well the point is that if we are running on bounded resources, then the time until it runs out depends very sensitively on how many simulations we (and simulations like us) launch on average. Say that our simulation has a million years allocated to it, and we launch simulations starting a year back from the time when we launch a simulation.

If we don't launch any, we get a million years.

If we launch one, but that one doesn't launch any... (read more)
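The sensitivity can be illustrated with a toy calculation. This is my own sketch, under assumptions the comment doesn't make explicit: every simulation costs one host-year per level and launches the same average number of children.

```python
# Toy model (illustrative assumptions, not the original commenter's maths):
# each simulation costs one host-year and launches `branching` children,
# nested `depth` levels deep.

def total_sim_years(branching: int, depth: int) -> int:
    """Total host-years consumed: level d contains branching**d simulations."""
    return sum(branching ** d for d in range(depth + 1))

# With an average of one launch per simulation, cost grows linearly:
print(total_sim_years(1, 10))   # 11 host-years for 11 levels
# With two launches per simulation, cost grows exponentially:
print(total_sim_years(2, 19))   # 1048575 - a million-year budget is gone in ~20 levels
```

So a budget that lasts a million years with no launches supports only about twenty nesting levels if each simulation launches two children on average - which is the sensitivity being described.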

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can. The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid.)

Here is a second Simulation Trilemma.

If we are living in a simulation, at least one of the following is true:

1) we are running on a computer with unbounded computational resources, or

2) we will not launch more than one simulation similar to our world, or

3) the simulation we are in will terminate shortly after we launch our own simulations.

Here 'short' is on the order of the period between the era we start the simulation at and when the simulation reaches our stage.

Why would us launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing and all we are doing is manipulating that in constructive ways. Running a computer doesn't process more information than the wind blowing against a tree does; in fact, it processes far less.
The computer could just halve our clock speed every time we launch a new simulation. No matter how many simulations we launch, our clock speed never reaches zero, so everything continues as normal inside our simulation. Problem solved! Suggested reading: "Hotel Infinity" followed by "Permutation City". If you wanted to launch a higher-order-of-infinity number of simulations from inside our simulation, that would be another story...
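The arithmetic behind this trick is just a geometric series; here is a minimal sketch (function names and the one-child-per-level assumption are my own illustration):

```python
# Clock-halving sketch: the host halves a simulation's clock speed each
# time that simulation launches a child. The speed stays positive after
# any finite number of launches, and the host's total work rate is bounded.

def speed_after_launches(n: int) -> float:
    """Clock speed relative to the host after n launches."""
    return 0.5 ** n

def total_host_work_rate(levels: int) -> float:
    """Host work per unit of real time if every level has spawned one
    child: the sum of 2**-k for k = 0..levels, which never exceeds 2."""
    return sum(0.5 ** k for k in range(levels + 1))

print(speed_after_launches(10))    # 0.0009765625 - tiny, but never zero
print(total_host_work_rate(50))    # just under 2, however deep we nest
```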
It seems like there is a lot of room between "one simulation" and "unbounded computational resources". Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean... that is an extremely primitive response, and one that suggests that our simulation was pretty close to worthless (at least at the end of its run). It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given.

I heard strawberry jam can be made with just strawberries, water and sugar on a frying pan on the radio.

I'd use a stove.

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similar minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have an interesting, solid arguments ... please think carefully about whether such publicity helps the existential risk movement more than it... (read more)

I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc. - not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem.

That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in International NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them.

I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it into the future. Especially policy/law types, who are going to be needed at some point in

It's a trade-off. The example is simple enough that the alignment problem is really easy to see, but it also means that it is easy to shrug it off and say "duh, just use the obvious correct utility function for B".

Perhaps you could follow it up with an example with more complex mechanics (and or more complex goal for A) where the bad strategy for B is not so obvious. You then invite the reader to contemplate the difficulty of the alignment problem as the complexity approaches that of the real world.

Nitpick: we have equations for (special) relativistic quantum physics. Dirac was one of the pioneers, and the Standard Model, for instance, is a relativistic quantum field theory. I presume you mean that it is the combination of general relativity (gravity) and quantum mechanics that is the problem.

(Douglas_Knight) Moreover, the predictions that QFT makes about chemistry are too hard. I don't think it is possible with current computers to compute the spectrum of helium, let alone lithium. A quantum computer could do this, though.

In the spirit of what Viliam suggested, maybe you could... (read more)

Things that are unsexy but I can actually verify as having been useful more than once:

In wallet, folded up tissue. For sudden attack of sniffles (especially on public transport), small cuts, emergency toilet paper.

In the bag I carry every day: small pack of tissues, multitool, tiny torch, ibuprofen, pad and pencil, USB charging cable for phone, plastic spork, wet wipe thing from KFC (why do they always shovel multiples of those things in with my order?).

Very rough toy example.

Say I've started a project which I can definitely see 5 days worth of work. I estimate there'll be some unexpected work in there somewhere, maybe another day, so I estimate 6 days.

I complete day one but have found another day's work. When should I estimate completion now? Taking the outside view, finishing in 6 days (on day 7) is too optimistic.

Implicit in my original estimate was a "rate of finding new work" of about 0.2 days per day. But, now I have more data on that, so I should update the 0.2 figure. Let's see, 0.2 is... (read more)
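The update that gets cut off above might be sketched like this. Note that the pseudo-count weighting and the geometric-series completion formula are my own illustrative choices, not necessarily the original calculation.

```python
# Hedged sketch of updating a "rate of finding new work" estimate.
# Prior: 0.2 days of new work per day worked, weighted as if based on
# 5 days of past experience (an assumed pseudo-count).

def updated_rate(prior_rate, prior_weight, new_work_found, days_worked):
    """Pseudo-count update: blend the prior rate with the observed rate."""
    return (prior_rate * prior_weight + new_work_found) / (prior_weight + days_worked)

def expected_days_left(known_work, rate):
    """Each day of work uncovers `rate` days of new work on average, so the
    total remaining forms a geometric series: known / (1 - rate)."""
    assert rate < 1, "work is appearing faster than it is being finished"
    return known_work / (1 - rate)

# Day 1: worked one day, found one extra day of work (5 known days remain).
rate = updated_rate(prior_rate=0.2, prior_weight=5, new_work_found=1, days_worked=1)
print(round(rate, 3))                          # 0.333
print(round(expected_days_left(5, rate), 1))   # 7.5 days remaining
```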

Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.

Thanks for replying ! Sorry if the bit I quoted was too short and over-simplified.

That does clarify things, although I'm having difficulty understanding what you mean by the phrase "causal structure". I take it you do not mean the physical shape or substance, because you say that a different computer architecture could potentially have the right causal structure.

And I take it you don't mean the cause and effect relationship between parts of the computer that are representing parts of the brain, because I think that can be put into one-to-one corr... (read more)

Thanks for the replies. I will try to answer and expand on the points raised. There are a number of reductio ad absurdums that dissuade me from machine functionalism, including Ned Block's China brain, and also the idea that a Turing machine running a human brain simulation would possess human consciousness. Let me try to take the absurdity to the next level with the following example: does an animated GIF possess human consciousness?

Imagine we record the activity of every neuron in a human brain at every millisecond; at each millisecond, we record whether each of the 100 billion neurons in the human brain is firing an action potential or not. We record all of this for a 1 second duration. Now, for each of the 1000 milliseconds, we represent the neural firing state of all neurons as a binary GIF image of about 333,000 pixels in height and width (this probably exceeds GIF format specifications, but who cares), where each pixel represents the firing state of a specific neuron. We can make 1000 of these GIFs, one for each millisecond over the 1 second duration. With these 1000 GIFs, we concatenate them to form an animated GIF and then play the animated GIF on an endless loop.

Since we are now "simulating" the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness... But this view is absurd, and this exercise suggests there is more to consciousness than reproducing neural activities in different substrates.

To V_V: I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped. About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

To Kyre: you hit the crux i

That is very interesting; there does seem to be quite rapid progress in this area.

From the blog entry:

... the reason for this is because simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

Can anyone explain what that means? I can't see how it can be correct.

Shawn Mikula here. Allow me to clear up the confusion that appears to have been caused by being quoted out of context. I clearly state in the part of my answer preceding the quoted text the following: "2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious?". So this is not a question of misunderstanding universal computation and whether a computer simulation can mimic, for practical purposes, the computations of the brain. I am already assuming the computer simulation is mimicking the brain's activity and computations.

My point is that a computer works very differently from a brain, which is evident in differences in its underlying causal structure. In other words, the coordinated activity of the binary logic gates underlying the computer running the simulation has a vastly different causal structure than the coordinated activity and massive parallelism of neurons in a brain.

The confusion appears to result from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation. Anyway, I hope this helps.

Well, the simplest explanation may be: it's not correct.

He doesn't believe in functionalism (or at least he probably doesn't):

The question of uploading consciousness can be broken down into two parts: 1) can you accurately simulate the mind based on complete structural or circuit maps of the brain?, and 2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious? I think the answer is probably ‘no’ to both.

Perhaps he doesn't really understand the implications of universal computability. I've found that a... (read more)

Tattoo private key on inside of thigh.

What's to stop the AI from instead learning that "good" and "bad" are just subjective mental states or words from the programmer, rather than some deep natural category of the universe? So instead of doing things it thinks the human programmer would call "good", it just tortures the programmer and forces them to say "good" repeatedly.

The pictures and videos of torture in the training set that are labelled "bad".

It is not perfect, but I think the idea is that with a large and diverse training set the hope... (read more)

I'm not sure succeeding at number 4 helps you with the unattractiveness and discomfort of number 3.

Say you do find some alternative steel-manned position on truth that is comfortable and intellectually satisfying. What are the odds that this position will be the same position as that held by "most humans", or that understanding it will help you get along with them?

Regardless of the concept of truth you arrive at, you're still faced with the challenge of having to interact with people who have not-well-thought-out concepts of truth in a way that is polite, ethical, and (ideally) subtly helpful.

I thought CLARITY was an interesting development - a brain preservation technique that renders tissue transparent. I imagine in the near future there are likely to be benefits going both ways between preservation and imaging research.

Buffy / Xander, Motoko / Batou, Deunan / Briareos

(although I'm not sure "Sidekick" is exactly right here)

Hah, thanks for pointing this out. I must have read or heard of this before and then forgotten about it, except in my subconscious. Looks like they have done the math, too, and it figures. Cool!

Ah, my mistake, thanks again.

Downvoted for bad selective quoting in that last quote. I read it and thought, wow, Yudkowsky actually wrote that. Then I thought, hmmm, I wonder if the text right after that says something like "BUT, this would be wrong because ..." ? Then I read user:Document's comment. Thank you for looking that up.

Roko wrote that, not Yudkowsky. But either way, yes, it's incomplete.
The last quote isn't from Yudkowsky.

I believe this is incorrect. The required proportion of the population that needs to be immune to get a herd immunity effect depends on how infectious the pathogen is. Measles is really infectious, with an R0 (the number of secondary infections caused by a typical infectious case in a fully susceptible population) of over 10, so you need 90 or 95% vaccination coverage to stop it spreading - which is why it didn't take much of a drop in vaccination coverage before we saw new outbreaks.

R0 estimates for seasonal influenza are around 1.1 or 1.2. Vaccinating 100% of the population with... (read more)
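The threshold here follows directly from R0: spread stops once each case infects, on average, fewer than one susceptible person, i.e. once the immune fraction exceeds 1 - 1/R0. A minimal sketch (the R0 values are the rough figures from the comment, not precise estimates):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so that each case
    infects, on average, fewer than one susceptible person."""
    return 1.0 - 1.0 / r0

# Rough R0 figures as quoted above.
for name, r0 in [("measles", 12.0), ("seasonal influenza", 1.2)]:
    print(f"{name}: R0 = {r0}, threshold ≈ {herd_immunity_threshold(r0):.0%}")
```

This is why a small dip in measles coverage matters (the threshold is over 90%), while influenza's threshold is far lower.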

My current rationalisation for my level of charitable giving is "if, say, the wealthiest top billion humans gave as much as me, most of the world's current problems that can be solved by charity would be solved in short order".

I use this as a labor-saving angst prevention device.

Me: "Am I a good person ? Am I giving too little ? How should I figure out how much to give ? What does my giving reveal about my true preferences ? What would people I admire think of me if they knew ?"

Me: "Extra trillions thing. Get back to work."

Also, do the wealthiest top billion humans share your values? If you asked them what the "world's current problems" are, would they give the same answer as you? For instance, there are hundreds of millions, if not billions, of people who want me to burn in hell forever for not believing in a god. I don't mean "who think I will burn", I actually do mean "want me to burn". Needless to say, their values are opposed to my values.

That's interesting, but how much money is needed to solve "most of the world's current problems"?

1 - All but one of our ships BUILT for space travel that have gone on to escape velocity have failed after a few decades and less than 100 AUs. Space is a hard place to survive in.

Voyagers 1 and 2 were launched in 1977, are currently 128 and 105 AU from the Sun, and are both still communicating. They were designed to reach Jupiter and Saturn - Voyager 2 had mission extensions to Uranus and Neptune (interestingly, it was completely reprogrammed after the Saturn encounter, and now makes use of communication codes that hadn't been invented when it was ... (read more)

The Voyagers are 128 and 104 AU out upon my looking them up - looks like I missed Voyager 2 hitting the 100 AU mark about a year and a half ago. I still get what you are saying. Still not convinced that all that much has been done in the realm of spacecraft reliability recently aside from avoiding moving parts and having lots of redundancy; they have major issues quite frequently. Additionally, all outer solar system probes are essentially rapidly catabolizing the plutonium pellets they bring along for the ride, with effective lifetimes in decades before they are unable to power themselves and before their instruments degrade from lack of active heating and other management that keeps them functional.
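The power constraint is easy to quantify: the Voyagers' RTGs run on Pu-238, whose half-life is about 87.7 years, so their thermal output decays exponentially (electrical output falls faster in practice because the thermocouples also degrade). A rough sketch:

```python
# Pu-238 half-life in years; thermal output decays exponentially with it.
# Actual electrical output degrades faster due to thermocouple aging.
HALF_LIFE_YEARS = 87.7

def rtg_fraction_remaining(years_since_launch: float) -> float:
    """Fraction of launch-time thermal power still available."""
    return 0.5 ** (years_since_launch / HALF_LIFE_YEARS)

for years in (10, 40, 80):
    print(f"{years:>2} years: {rtg_fraction_remaining(years):.0%} of launch thermal power")
```

Even with most of the thermal power left after a few decades, the shrinking electrical margin is what forces instruments to be shut down one by one.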

That's not why it's useful. It's useful because it provides liquidity and reduces the costs of trading.

Absent other people getting their trades completed slightly ahead of you, getting your trades completed in a millisecond instead of a second is that valuable ? I'm not being rhetorical - I know very little about finance. What processes in the rest of the economy are happening fast enough to make millisecond trading worthwhile ?

I would have guessed a failure to solve a co-ordination problem. That is, at one time trades were executed on the timescale of ... (read more)

getting your trades completed in a millisecond instead of a second is that valuable ?

The benefit to the small investor is not really faster execution -- it is lower bid-ask spread and lower trading costs in general.

For example, there was a recent "natural experiment" in Canada (emphasis mine):

a recent natural experiment set off by Canada’s stock market regulators. In April 2012 they limited the activity of high-frequency traders by increasing the fees on market messages sent by all broker-dealers, such as trades, order submissions and c

... (read more)

I don't know if there's a name for it. In general, consequentialism is over the entire timeline.

Yes, that makes the most sense.

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.

No no, I understand that you're not talking about killing people off and replacing them, I was just trying (unsuccessfully) to give the clearest example I could.

And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility ... unless "playing god events" have negative utility.

Is there a separate name for "consequentialism over world histories" in comparison to "consequentialism over world states" ?

What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say "don't do it, killing people is bad". Consequentialism over world states would say "do it, utility will increase" (maybe with provisos that no-one notices or remembers the killing). Consequentialism over world histories would say "the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don't do it".
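The distinction can be made concrete with a toy calculation (the utility numbers below are invented purely for illustration):

```python
# Invented utilities for the "replace A with a happier B" scenario.
U_FINAL_WITH_A = 10    # utility of the final world state containing person A
U_FINAL_WITH_B = 15    # utility of the final world state containing happier B
U_KILLING_EVENT = -20  # disutility assigned to the killing event itself

# Consequentialism over world states: compare only the end states.
replace_per_states = U_FINAL_WITH_B > U_FINAL_WITH_A

# Consequentialism over world histories: events along the way count too.
replace_per_histories = (U_FINAL_WITH_B + U_KILLING_EVENT) > U_FINAL_WITH_A

print(replace_per_states, replace_per_histories)
```

With these numbers the state view endorses the replacement while the history view rejects it, which is exactly the divergence described above.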

I don't know if there's a name for it. In general, consequentialism is over the entire timeline. You could value events that have a specific order, or value events that happen earlier, etc. I don't like the idea of judging based on things like that, but it's just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of you remembering it is instantaneous, and you'd have no way of knowing if the instants happened in a different order, or even if some of them didn't happen.)

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed. Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn't something that can be justified on consequentialist grounds.

OK now I have to quote this:

Bernard Woolley: What if the Prime Minister insists we help them?

Sir Humphrey Appleby: Then we follow the four-stage strategy.

Bernard Woolley: What's that?

Sir Richard Wharton: Standard Foreign Office response in a time of crisis.

Sir Richard Wharton: In stage one we say nothing is going to happen.

Sir Humphrey Appleby: Stage two, we say something may be about to happen, but we should do nothing about it.

Sir Richard Wharton: In stage three, we say that maybe we should do something about it, but there's nothing we can do.

Sir Humph

... (read more)

Current beliefs on climate change: I would defer to the IPCC.

I would have first come across the subject while I was at school about 25 years ago (probably not at school, or at least only in passing). I think I accepted the idea as plausible based on a basic understanding of the physics and on scientific authority (probably of science popularisers). I don't remember anyone mentioning quantitative warming estimates, or anyone being particularly alarmist or alarmed.

My current views aren't based on detailed investigation. I would say they are based mostly on (... (read more)

My impression (which is only a handwavy impression, and I'll be happy to be corrected) is that climate-change "skeptics" used to say that there was no global warming, then that there was some but it probably wasn't anthropogenic, then that there was some and some of it was anthropogenic but that it was a good thing rather than a bad thing, and now that there is some and some of it is anthropogenic and it's probably a bad thing but the cost of stopping it would outweigh the benefits. In other words, that what's remained constant is the bottom line (we shouldn't make any changes to our industrial practices, economic policies, etc., to mitigate anthropogenic climate change), but the justification for it has become more and more modest over time, perhaps in response to strengthening evidence or to popular opinion. (Of course not everyone who can be categorized as a "climate-change skeptic" holds the exact same opinions; the above is intended as a rough characterization of what I think the typical "respectable skeptic" position has been.)

Another text file user. My current system is a log.txt file that has to-do lists at the top, followed by a big list of ideas waiting to be filed, followed by datestamped entries going downwards. That way the next thing to do is right at the top, and I can just cat the date to the bottom to write an entry. I keep this in the home directory on my notebook, but regularly copy it up to my Dropbox. When it gets really long I cut the log part off and save it.

I have another set of files with story ideas, productivity notes, personal thoughts, wildlife sightings, ... (read more)
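The append step in the workflow above is trivial to automate; a minimal sketch (the filename and entry text are placeholders, and this is just the scripted equivalent of cat-ing the date to the bottom):

```python
from datetime import date

LOG_PATH = "log.txt"  # hypothetical path; the comment keeps it in the home directory

def append_entry(text: str) -> None:
    """Append a datestamped entry to the bottom of the log file."""
    with open(LOG_PATH, "a") as f:
        f.write(f"\n{date.today().isoformat()}\n{text}\n")

append_entry("Example entry.")
```

The to-do list stays at the top of the file untouched; only the datestamped entries grow downwards.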

Rather flogs a dead horse, but highlights an important difference in perspective. You tell your AI to produce paperclips, and eventually it stops and asks if you would like it to do something different.

You could think "hey, cool, it's actually doing friendly stuff I didn't ask for", or you could think "wait ... how would knowing what I really want help it produce more paperclips ... "

Yeah. Though actually it's more of a simplified version of a more serious problem. One day you may give an AI a precise set of instructions which you think would do good - like finding a way of curing diseases, but without harming patients, without harming people for the sake of research, and so on. And you may find that your AI is perfectly friendly, but that wouldn't yet mean it actually is. It may simply have learned human values as a means of securing its existence and gaining power. EDIT: And after gaining enough power it may just as well help improve human health even more - or reprogram the human race to think unconditionally that diseases were eradicated.

I agree, I think there is a common part of the story that goes "once connected to the internet, the AI rapidly takes over a large number of computers, significantly amplifying its power". My credence that this could happen has gone way up over the last 10 years or so. Also my credence that an entity could infiltrate a very large number of machines without anyone noticing has also gone up.

Staniford, Paxson & Weaver 2002

Whenever you see the words "Internet of things", think "unfixable Heartbleed everywhere forever".

Hasn't something much like this already happened?