All of RolfAndreassen's Comments + Replies

Mu: Question cannot be answered because "win" is not defined. Does winning require 

a) Dictating terms, in the style of Versailles?

b) As above, but also not burning to cinders the social technology that allowed you to fight such a war in the first place? (As happened to the OTL victors.)

c) As above, but also getting some kind of actual net benefit either in geopolitical-power terms or in goods for your citizens? (As very noticeably did not occur for the OTL "victors".)

d) A negotiated peace in which it's generally recognised that you had the upper hand ... (read more)

That's a good point. I will clarify. I mean [a] - you win, the enemy surrenders.

Does that make sense?


In a word, no.

I believe you are thinking of infinity as a number, and that's always a mistake. I think that what you're trying to say with your left-hand graph is that, given infinite utility, probability is a tiebreaker, but all infinite-utility options dominate all finite utilities. But this treats "infinity" as a binary quality that an option either has or lacks.

Consider two different Pascal's muggers: One offers you a 1% probability of utility increasing linearly in time, the other, a 1% chance of utility increasing exponentia... (read more)
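A small numeric sketch of the two muggers may help (all probabilities, growth rates, and horizons below are invented for illustration): neither payoff is ever "infinite" at a finite horizon, yet the exponential offer dominates at every late enough horizon, regardless of the probabilities attached.

```python
# Toy comparison of the two muggers' offers. All numbers are made up:
# the point is that "dominates eventually" needs no binary notion of
# infinity, only a comparison of growth rates.

def expected_utility(p, growth, t):
    """Expected utility of a gamble paying growth(t) with probability p."""
    return p * growth(t)

linear = lambda t: 1_000.0 * t      # utility grows linearly in time
exponential = lambda t: 1.001 ** t  # utility grows exponentially in time

for t in (1_000, 50_000, 200_000):
    lin = expected_utility(0.01, linear, t)          # 1% chance, linear
    exp_ = expected_utility(0.0001, exponential, t)  # 0.01% chance, exponential
    print(t, "exponential wins" if exp_ > lin else "linear wins")
```

Even a hundredfold smaller probability on the exponential offer loses only at early horizons; no option here "has infinity" and yet one strictly dominates in the limit.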

Thanks @RolfAndreassen.  I'm reconsidering and will post a different version if I get there.  I've marked this one as [retracted].

This needs

a) proofreading and

b) unpacking for inferential distance.

Illustration of point a): "Infinity is low but not zero." This does not seem to make sense as written. Plausibly you missed out a "the probability of" or "on the curve" somewhere; which goes back to needing proofreading.

Illustration of b): "I'm trying to map it to a probability on my indifference curve of utility value versus probability (0 to 1) and pick the the highest expected value (probability * utility)." What is the difference (if any) between this, and just picking the highest expec... (read more)

Thanks for the response! I really appreciate it. a) Yes, I meant "the probability of". b) Thinking about how to plot this on graphs is helping me to clarify my thinking, and I think adding these may help to reduce inferential distance. (The X axis is probability. For the case where we consider infinite utilities, as opposed to the human case, the graph would need to be split into two graphs. The one on the left is just a horizontal line at infinity, but there is still a probability range. The one on the right has an actual curve and covers the rest of the probability range, but doesn't matter since its utility values are finite. Considering only the infinite utilities is a fanatical decision procedure but doesn't generally lead to weird decisions. Does that make sense?)

The Wiki link on Operation Bernhard does not very obviously support the assertions you make about the Germans flinching. Do you have a different source in mind?

I cannot quickly find a clean "smoking gun" source, nor a well-summarized defense of exactly my thesis by someone else. (Neither Google nor the Internet seems to be as good as they used to be, so I no longer take "can't find it on the Internet with Google" as particularly strong evidence that no one else has had the idea and tested and explored it in a high-quality way that I can find and rely on if it exists.)

In place of a link, I wrote 2377 more words than this, talking about the quality of the evidence I could find and remember, how I process it, and which larger theories of economics and evolution I connect to the idea that human governance capacity is an evolved survival trait of humans, that our forms of government rely on it for their shape to be at all stable or helpful, and that this "neuro-emotional" trait will probably not be reliably installed in AI. The AI will also be able to attack the anthropological preconditions of it, if that is deemed likely to get an AI more of what that AI wants, as AI replaces humans as the Apex Predator of Earth.

It doesn't seem entirely prudent to publish all 2377 words, now that I'm looking at them. Publishing is mostly irreversible, and I don't think that "hours matter" (probably even "days matter" is false), so I want to sit on them for a bit before committing to being in a future where those words have been published...

Is there a big abstract reason you want a specific source for that specific part of it? I don't see that example as particularly central, just as a proposal that anyone can use as a springboard (one that isn't "proliferative" to talk about in public, because it is already in Wikipedia and hence probably cognitively accessible to all RLLLMs already) where the example: (1) is real and functions as a proof-by-existence that the class of "planning-capacity-attacking ideas" is non-empty in a non-fictive context, (2) while mostly emotionally establishing that "at least some of the class of tactics is

Did you, by any chance, predict this result anywhere? Explanations after the result are a dime a dozen.

Here is Stephan Guyenet writing about the diet 10 years ago:

There's some sort of "out of sight, out of [their] minds" pun here.

That aside, isn't this the actual purpose of mental institutions? Like the attics of previous generations, they are where we stow people we prefer not to think about. And you have to admit they do that job very well indeed.

The question is why does the attic work so well. Why does no one talk about the attic?

The free market can't always be pushing down the price of all goods (measured in other goods); that's a logical impossibility.

And yet that seems to be precisely what has happened.

However, supposing we hold tech progress and capital investment constant, then yes, we'll reach a steady state in which prices as a whole cannot fall further. But that still does not demonstrate that it is possible to maintain the sort of high-value-extraction transactions you outline for any great length of time. If the profit of bread is high then it will fall as people enter... (read more)

It seems like you have just reinvented the criticism "if you can extract almost all the value from each transaction (aka 'exploitation'), you will shortly be rich". Well, yes, but the point is that a market with competition generally prevents you from doing that. As someone pointed out, if you make 100 loaves then you have created 100 dollars of value; the question is how those 100 dollars are distributed. You construct an example where the baker is able to capture 99% of the value he created; good for him, but it relies on your construction of t... (read more)
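To make the accounting concrete (the numbers here are invented, not from the original example): suppose each of 100 loaves costs the baker $0.10 to make and is worth $1.00 to its buyer. Each trade then creates $0.90 of surplus, and the price only decides how that fixed total is split.

```python
# Surplus accounting for a hypothetical bread market. The totals are
# the point: price moves surplus between parties but creates none.
LOAVES, COST, VALUE = 100, 0.10, 1.00

def surplus_split(price):
    """Return (baker's profit, buyers' gain) at a given price per loaf."""
    producer = (price - COST) * LOAVES
    consumer = (VALUE - price) * LOAVES
    return producer, consumer

for price in (0.11, 0.55, 0.99):  # near-cost, even split, near-value
    p, c = surplus_split(price)
    print(f"price ${price:.2f}: baker ${p:.2f}, buyers ${c:.2f}, total ${p + c:.2f}")
```

At $0.99 the baker captures nearly all of the value he created; competition is the force that ordinarily keeps the price away from that end.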

This kind of "stuff gets cheaper, everyone benefits" advocacy is why I wrote that comment to begin with. The free market can't always be pushing down the price of all goods (measured in other goods); that's a logical impossibility. There's no magic force acting on one conveniently chosen side of each transaction. Why isn't the same force pushing down the price of labor then, making labor cheap in terms of bread, instead of making bread cheap in terms of labor? Oh wait, maybe it is. Maybe all these forces are acting at once and going into weird feedback loops, and there's no reason why the end result would be moral in any way. That's my point.

It's disrespectful to people who don't have any food to eat, much less play with. Food is important, and this fact is easily forgotten.

Pretty much everything you do in the first world is disrespectful from that point of view. You pick clothes on the basis of how fashionable they are? You play games on a computer? You have a pet, YOU GIVE FOOD TO AN ANIMAL?!!

Idea 2 seems very vague. Can you give an example of how I would use it?

To be honest, I'm not too sure myself. I was thinking about times where, say, TIME writes a favorable piece on AI, then we can coordinate to get lots of people to upvote it on HN/reddit, or things like that, where having lots of people do a thing could be useful. Maybe it'll be more relevant for people in the same geographical areas?

There seems to be some implicit premise along these lines: "When contemplating the 'arrow of time' we should not consider anything that doesn't explicitly appear in the laws of physics." but I don't see any reason to accept such a premise.

I would say "explicitly or implicitly", and then it seems to me that we have every reason to accept that premise, because where the Devil else are you going to look? Noting that entropy does not appear in the laws of physics even implicitly; it's a heuristic, not a derived quantity.

If I talked to

... (read more)

I don't understand what, if anything, you would consider non-arbitrary.

I'm not sure this is actually an important disagreement; I'm ok with dropping it if you want. However, you are the one who suggested that entropy could be calculated in a non-arbitrary way; but I don't think you've offered an example of such a calculation.

And why does that conflict with what anyone says about the "arrow of time"?

It conflicts with the notion that entropy is a good way to consider the problem; entropy is a non-full-information heuristic that doesn't appea... (read more)

All I actually said was "not-so-arbitrary". I think that's pretty much all one can say about anything, which is why I asked what if anything you would consider non-arbitrary.

I don't see the connection between the two halves of that sentence. There seems to be some implicit premise along these lines: "When contemplating the 'arrow of time' we should not consider anything that doesn't explicitly appear in the laws of physics." but I don't see any reason to accept such a premise.

If you mean that that's enough to appreciate that in principle something of that sort is not entirely ruled out -- yeah, I agree. If you mean that your intuition tells you that weak parity violation really is the reason why we can fry eggs but not un-fry them then, well, I'm afraid I don't trust your intuition as much as you might.

If I talked to a bunch of theoretical physicists -- a group whose intuition in such things I think we should probably trust more than that of either experimentalists like you or pure mathematicians like me -- would you expect them to agree with you, to say "yes, of course, weak parity violation is probably the cause of the familiar macroscopic time-asymmetries we see in the world"? My impression -- which I admit is not based on actually finding lots of theoretical physicists and asking them -- is that they mostly would not say any such thing.

As one example, I'll cite Sean Carroll again; although he is an author of pop-science books he is also a working scientist and this is pretty much in his field of expertise. And he says: Time reversal violation is not the arrow of time.

Yes, a notion of entropy depends on some state of knowledge and observational ability. But that doesn't mean it depends on picking ours in particular, and there are not-so-arbitrary ways to do it.

I don't understand how your suggested calculation is non-arbitrary; you still seem to be picking some criterion and then doing math. My point is that the laws of physics don't do any such thing; they just apply the exact laws of motion to the exact particle locations at every time step. Picking a different criterion for the entropy doesn't help - it's still not... (read more)

I don't understand what, if anything, you would consider non-arbitrary. And why does that conflict with what anyone says about the "arrow of time"? So you actually are suggesting that weak-interaction parity violation is responsible for the asymmetry between frying and un-frying eggs. OK, then. Do you have any actual evidence that it's so? It seems awfully implausible on the face of it, to me, but since (1) neither of us is a quantum field theorist and (2) so far as I know no one knows how to do the QFT calculations on anything like the scale required to understand what's happening when you fry an egg, I'm not sure that either my intuition or yours is to be trusted. So, I dunno: has anyone done the back-of-envelope calculations to figure out whether this works in some sort of toy model? have any actual quantum field theory experts given opinions on how plausible this is?

I keep coming back to entropy because the asymmetry in entropy is one of the things that needs explaining

Again, why bother with entropy as such? Just say "the initial conditions need explaining" and be done.

Given any criterion for distinguishing macrostates, you can (in principle) compute entropy relative to that criterion.

I do not understand how these two paragraphs are a response to what I said. Can you elucidate?

So far as I am aware, there is no reason to think that weak parity violation is responsible for the familiar macro-scale t

... (read more)

Because it's one of the more obvious descriptive statistics to look at and it shows the difference nice and clearly. If we just say "the initial conditions need explaining" (or: the differences between initial and final) then the obvious question is what about the initial conditions, and part of the answer to that is going to be the entropy. (Or maybe some other thing that's essentially equivalent.) Also, because it's a statistic that not only is different between the distant past and the distant future, but also varies in a consistent way at present.

I can try, but if they aren't then my best guess is that I didn't correctly understand what you were saying (which was less than 100% clear to me). So I'll be brief about the elucidation, and then whichever of us turns out to have been misunderstood first can do the next round of elucidating :-).

It looked to me as if you were saying, more or less, that entropy is a silly thing to be looking at at all, because it describes only our state of ignorance and not the actual universe; that when we say "the universe seems to be evolving from a low-entropy state to a high-entropy state" all we really mean is something like "we know a lot more about the past of the universe than about its future".

I, on the other hand, think that is a wrong (i.e., a less than maximally useful) way to look at it. Yes, a notion of entropy depends on some state of knowledge and observational ability. But that doesn't mean it depends on picking ours in particular, and there are not-so-arbitrary ways to do it.

Noun phrase! Would you like to make your argument a little more explicit? Do you think that weak parity violation is responsible for the familiar macro-scale time asymmetries everyone notices?

Only in so far as it's plausible that the asymmetry-in-the-laws that we found actually causes the asymmetry-in-our-observations that we're trying to explain. I don't see that it is plausible, but perhaps the words "electroweak unification" should

If weak parity violation really explains anything here, I don't see what. Do you have any grounds for suspecting that weak parity violation explains why we see a very dense low-entropy universe in one direction and a very sparse high-entropy universe in the other? Do you have any grounds for suspecting that weak parity violation explains why smashing an egg is easier than putting it together?

So first let me note that the weak parity violations cannot explain the observed matter/antimatter asymmetry; it follows that there is a source of CP violation that... (read more)

I keep coming back to entropy because the asymmetry in entropy is one of the things that needs explaining, and because some of the other things that need explaining seem to be explicable in terms of entropy.

Given any criterion for distinguishing macrostates, you can (in principle) compute entropy relative to that criterion. E.g., if you care only about macroscopic thermodynamic parameters when distinguishing macrostates, you get the classical Boltzmann entropy. These parameters presumably stop making sense when you consider the early enough universe, but we can still say that the thermodynamic entropy of the universe appears to be surprisingly small early on and much larger later on.

(If the universe is infinite in extent, there are some technical difficulties here. I don't know exactly how they are addressed, but I note that cosmologists who accept the possibility that the universe may be infinite don't thereupon seem to stop talking about entropy, and I infer that the current best way of addressing them doesn't make the time asymmetry of entropy go away. If there are experts in the field reading this who would like to enlighten me further, I'm all ears.)

I'm pretty sure this is just plain wrong, unless you have already established that the microlevel asymmetry is responsible for the macrolevel asymmetry. So far as I am aware, there is no reason to think that weak parity violation is responsible for the familiar macro-scale time asymmetries everyone notices.
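The "entropy relative to a criterion" point can be made concrete with a toy coarse-graining (the setup is mine, invented for illustration): take N particles in a box and let the macrostate be just "k particles in the left half". The Boltzmann entropy relative to that criterion is the log of the number of compatible microstates.

```python
from math import comb, log

N = 100  # particles in the box

def boltzmann_entropy(k):
    """Entropy of the macrostate 'k of the N particles are in the left half'."""
    return log(comb(N, k))  # log of the number of compatible microstates

print(boltzmann_entropy(0))   # all particles on one side: entropy 0
print(boltzmann_entropy(50))  # evenly spread: maximal entropy (~66.8)
```

A different criterion (finer spatial cells, thermodynamic parameters) gives a different but equally well-defined number; the choice of criterion is what is "not-so-arbitrary", not absent.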

Reversed spatial particles look the same to us as unreversed

No they don't; the neutrinos would change their handedness. (So would our amino acids, but that wouldn't affect their functioning, so far as I know, since everything else would as well.) And chiral-reversed neutrinos don't interact with anything. The laws of physics are in fact just about as P-violating as they can possibly be!

and the names "matter" and "anti-matter" are arbitrary

The names are arbitrary, but the functions aren't; matter consists of particles favoured by ... (read more)

In any case, flipping those things would definitely not result in a universe that goes from high entropy to low entropy, which is enough to show that you have not explained the arrow of time by those things.

This seems to me to be moving the goalposts, and additionally to put a lot of work into that word 'simple'. Suppose the symmetry were CPXYZT instead; would the required CPXYZ transformation still be simple? Is there a criterion for deciding other than "Sean Carroll thinks so"?

I agree that this definition is fuzzy. (So does Carroll, as he makes clear in the text immediately following the bit I quoted.) But no, I don't think it's moving the goalposts, though it may not be putting them where you would prefer them to be.

I take the basic arrow-of-time problem to be something like this: The universe appears to be dramatically asymmetric in time: it is expanding in one time direction and contracting in the other; if we trace its evolution in the direction we call "past" according to our best understanding of the physics, we find a "big bang"; if we go in the direction we call "future" we find a "big freeze". These are distinguished not only by density/scale but also by entropy: the big bang is a much lower-entropy state than the big freeze. Furthermore, we see a similar dramatic asymmetry in our everyday lives: it's easy to break an egg or fry one, not so easy to put it together or turn it raw again.

But in the fundamental laws of physics as we currently know them, we find nothing to explain any of this. Weak interactions do indeed show a slight violation of CP-symmetry, hence of T-symmetry, but frying eggs doesn't appear to have much to do with weak interactions; CPT-symmetry would appear to turn our universe into one that "looks just the same" but has time running "the other way"; and nothing in all of this shows any sign of explaining why (the history of) the universe should be so dramatically asymmetric in time.

If weak parity violation really explains anything here, I don't see what. Do you have any grounds for suspecting that weak parity violation explains why we see a very dense low-entropy universe in one direction and a very sparse high-entropy universe in the other? Do you have any grounds for suspecting that weak parity violation explains why smashing an egg is easier than putting it together?

I'm not sure whether this question is really directed at Sean Carroll (complaining that the passage I quoted is vague) or at me (complaini

Your second paragraph is simply incorrect: there is no known asymmetry in the laws of physics that might explain the arrow of time.

The laws of physics are CPT-invariant, as /u/gjm pointed out; CP symmetry is known to be broken; consequently T symmetry is also broken. The effect has been measured directly:

This is not helpful for explaining the arrow of time, for reasons that Sean Carroll points out in the post I linked.

[You can] reverse T without violating any laws providing you also (1) replace particles with antiparticles and (2) reverse all the spatial coordinates.

Well, yes, but we in fact have a universe with a bunch of particles in particular coordinates! Given the particles there is an arrow of time, that is, you can tell the difference between forward and backwards evolution.

I do not think what most people mean by an explanation of the arrow of time is a way of distinguishing the history of the universe from a T-reversed version, given that under CPT symmetry this is equivalent to a way of distinguishing the history of the universe from a CP-reversed version. By way of illustration, here's an actual cosmologist: Sean Carroll, in his book "From eternity to here".

Reversed spatial particles look the same to us as unreversed; and the names "matter" and "anti-matter" are arbitrary. So those differences are not helpful in explaining an arrow of time. They will not make any large scale difference in how the universe evolves.

the arrow of time follows from the laws given a low entropy at the beginning of the universe.

That is not correct. Entropy is a statistical tendency over ensembles of states, which we use to make probabilistic predictions because we do not know the single true state with precision. But the actual physical world has exactly one state, and it evolves deterministically. There is no reason within Newton's and Maxwell's laws for the world to go from low to high entropy; it could just as well evolve in the other direction.

The correct answer is to notice that ... (read more)

gjm's response to this is correct. "It could just as well evolve in the other direction." If you mean that you could, if you wanted, call the past "the future," and call the future "the past," you can do that if you want: but you will remember things in the direction of lower entropy and expect things in the direction of higher entropy. Which, as gjm said, is what we mean by talking about an arrow of time.

In other words, saying that this can happen "just as well" is like saying that when you flip a coin a thousand times, you can just as easily get a thousand heads as any other sequence. But you will actually get a random-looking sequence, and you will actually get increasing entropy, not decreasing entropy.

Your second paragraph is simply incorrect: there is no known asymmetry in the laws of physics that might explain the arrow of time. It is explained (in terms of experience) by the fact that time in one direction is vastly different in entropy from the other. We call the low end "the past," because we necessarily remember the low-entropy side of time. If you assume random initial conditions for the universe, you will get neither low entropy to high nor high to low (which is really the same thing), but high entropy on both sides, and any low-entropy situation like conscious experience would be explained as Boltzmann brains.
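The coin-flip analogy can be made quantitative (1000 flips and the 450-550 window are arbitrary choices of mine): the all-heads sequence is exactly as probable as any other single sequence, but sequences with roughly half heads outnumber it so overwhelmingly that a "random-looking" outcome is what you will actually see.

```python
from math import comb

n = 1000
total = 2 ** n  # every specific sequence has probability 1 / total

# Fraction of all sequences whose head-count lands in 450..550:
near_half = sum(comb(n, k) for k in range(450, 551))
print(near_half / total)  # > 0.99: "random-looking" outcomes dominate
```

The same counting argument is why increasing entropy is what you actually observe: high-entropy macrostates simply contain almost all of the microstates.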

So far as anyone knows, the true laws of physics are "CPT-symmetric", meaning that you can reverse T without violating any laws providing you also (1) replace particles with antiparticles and (2) reverse all the spatial coordinates. I don't think there is an explanation for the arrow of time here. Entropic considerations can't explain (even if one could find a good way of stating it precisely) the alleged observation that time "runs" one way rather than the other; but they can explain why we remember the past and not the future, which is plausibly what's actually meant by saying that there's an arrow of time. An explanation of this sort, of course, leaves open the question of why there's a very-low-entropy state at all to serve as the "beginning" of the universe.

I think you are conflating "is overly rational and insufficiently pragmatic" with "doesn't do what ArisC wants, on demand, in the way they want it done".

All three things are quantised and should take 'fewer': Fewer pictures, fewer links, fewer words. Less is for things that aren't countable; less liquid, less wrong.

Ah, less vs. fewer. Another arrow in the prescriptivist's quiver of pointless pedantry.

This is awesome. Please write Week Two.

I'm currently out of decent ideas to keep Weirdtopia appropriately weird, so I'm focusing on other writing projects until such time as I can write a sequel that does justice to Week One. I can't promise it'll ever happen, but it's not out of the realm of possibility.

To the extent that some SJWs seem to want to say “I really, really want X,” and leave their argument at that, then rationality is irrelevant to them.

Rationality is also irrelevant to my daughter, and for the same reason, as for example in this exchange:

Daughter: I want TV.
Me: No more TV now.
Daughter: But I want it!

This is rather a common 'argument' of hers; from the outside it looks like she models me as not having understood her preference, and tries to clarify the preference. To be sure, she has the excuse of being four.

I'm not saying that a little more rationality wouldn't be helpful. I'm saying that pointing to this and saying it's irrational and maybe stupid is not the most interesting thing that can be said about it. It is more fascinating to look for what is incentivizing the irrationality.

There's a very rational (in the sense of effective for getting what you want) negotiating tactic I heard about in one of Eric Flint's books. The negotiator points at Crazy Joe muttering to himself in the corner and says: "He and his friends are saying that if you don't double their salaries they are not only going to strike, they're going to be throwing rotten eggs at you when you come into the office and showing up at your house at midnight with bullhorns. But I know these people and can talk their language. I can get them to calm down. But you have to give me something in exchange. A ten percent raise doesn't sound so crazy compared to double the salaries, does it? And besides, if you don't make a deal with me, you'll end up having to negotiate with Crazy Joe." And the business manager is so aghast at Crazy Joe's demands that he ends up agreeing that ten percent is not as extreme as he first thought.

The fact that there is a crazy extreme getting a lot of attention can make the merely extreme seem moderate. This sort of tactic doesn't have to be disingenuous, or even conscious. In fact, if you eject (or steamroll into moderation) too many people for being more extreme than you are, not only do you weaken your negotiating position, you can get evaporative cooling in the direction of agreeing with your opponent.

Of course the people of the extreme extreme don't need to know, and certainly can't admit in public, that they are just being used to make other positions look good. When you combine that with the fact that SJWs have trouble coming to terms with the fact that you can't make everyone happy all of the time, the extreme extreme can spiral out of control in a way that is very hard to stop.

Right, which is why I don't postulate a simulated universe as the explanation for existence.

Positing a hyper-powerful creative entity seems not that epistemologically reckless

How about epistemologically useless? What caused your hyper-powerful creative entity? You haven't accomplished anything, you've just added another black box to your collection.

Can you explain how a simulated universe, for instance, is more useful than deism? Doesn't it also simply move the question of ultimate origins back a step?

It is progress from "here is a black box and I don't know what is inside" to "here is a black box and I believe there is a magical fairy inside".

Cynical, but is it actually true? It seems to me that a lot of people are actually quite strongly committed to the cause of the environment, or defense against terrorists. They do not necessarily take effective action for those causes, but they would certainly vote for someone who signalled similar commitment.

I think it is true. So true. People whom I have upbraided for selling rare flowers or digging vegetable gardens on protected territories immediately began to talk about oligarchs having private residences in our beloved forests and why am I not doing anything about that?..

How many slaves were there in the Paleolithic?

See my other comment in this subthread.

Unfortunately I cannot communicate why I think Christianity is true; it's a gestalt thing - it just makes sense, it can't be any other way in the light of all the evidence.

-- Any number of quite successful CEOs, neurosurgeons, writers.

I think you can differentiate between people who say that about a skill, and people who say it about a concept. Consider driving: I recall driving being a System 2 activity for at least a year while I was learning. It was certainly stressful enough to induce tears on a regular basis (give me a pass, I was a teenager). Slowly, driving under normal conditions became integrated into System 1, and now I don't feel like crying when I have to change lanes. Sufficient practice of any skill can turn it into a System 1 activity.

Currently, programming is a System 2 activity for me. My husband, however, has more than a decade of experience programming. When he helps debug my code, he doesn't painstakingly go over every line, first thing. He glances, skims, says "This doesn't look right..." and then uses a combination of instinct and experience to find my error. I can't imagine being a professional programmer until it's a System 1 activity at least half of the time.

So: the difference between saying "System 1 is integral to my profession and execution of a skill" and "System 1 is all the evidence I need for the existence of a deity" is very large. In the first case, we can take the statement as evidence, appropriately weighted against the speaker's track record with that skill, that System 1 has been beneficial. In the second case, people are saying the equivalent of "My instinct = God, and that's the only test I need!" The weight that bears in your Bayesian calculation should be nothing, or almost nothing, because there is no way to develop a God-detecting skill and integrate it into System 1.

Surgery to replace the bones with rubber things.

Oh wait, you had some constraints on the problem?

Downvoted for being a stream of consciousness.

There are two options: Either we have terminal goals that include "having a good time" and "living enjoyable lives", so that a pleasant life is good in itself. Or else we have terminal goals that are finitely achievable, and when we've achieved them we should shut down humanity as useless. In the latter case, we can throw out anything that doesn't advance us towards those finite goals; not in the former.

I think one may hold the first belief without advocating wireheading, in that our terminal goal may be "enjoy a wide variety of pleasant things that exist outside your skull".

It is possible to have 'infinite' goals that don't include "having a good time". Although speaking for myself, my goals certainly do include that.

Yes, but it may be true without being provable.

But if it's true that there doesn't exist a proof that it halts, then it will run forever searching for one.

No; provable and true are not the same thing. It may be the case that the program halts, but it is nevertheless impossible to prove that it halts except by "run it and see", which doesn't count.
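A standard illustration of that gap (the choice of odd perfect numbers is mine, not the commenter's): the loop below, run with no limit, halts if and only if an odd perfect number exists. Whether it halts is a definite mathematical fact, yet that fact could conceivably be true without being provable in a given formal system, and "run it and see" only ever settles the case where it does halt.

```python
def is_perfect(n):
    """True if n equals the sum of its proper divisors (e.g. 6 = 1+2+3)."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def search_odd_perfect(limit=None):
    """With limit=None, this halts iff an odd perfect number exists."""
    n = 3
    while limit is None or n < limit:
        if is_perfect(n):
            return n
        n += 2  # check odd numbers only
    return None

# Bounded run for demonstration only; no odd perfect number is known.
print(search_odd_perfect(2_000))  # None
```

If the unbounded search never halts but no proof of non-halting exists, "this program runs forever" is exactly a truth without a proof.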

I admit I was using the word 'torture' rather loosely. However, unless the AI is explicitly instructed to use anesthesia before any cutting is done, I think we can safely replace it with "extended periods of very intense pain".

As a first pass at a way of safely boxing an AI, though, it's not bad at all. Please continue to develop the idea.

If the excellent simulation of a human with cancer is conscious, you've created a very good torture chamber, complete with mad vivisectionist AI.

I have to be honest: I hadn't considered that angle yet (I tend to create ideas first, then hone them and remove issues). The first point is that this was just an example, the first one to occur to me, and we can certainly find safer examples or improve this one. The second is that torture is very unlikely - death, maybe painful death, but not deliberate torture. The third is that I know some people who might be willing to go through with this, if it cured cancer throughout the world. But I will have to be more careful about these issues in the future, thanks.
I'm unsettled by the tags he gave the article. You could say the person with cancer was just an example, and we could make them brain dead, etc. But the article has the tags "emulation", "upload", "whole_brain_emulation", and "wbe". It's very disturbing that anyone would even consider feeding a simulated human to an unfriendly AI. Let alone in this horrifying torture chamber scenario.

I sold out to the Dark Side in 2014. This was a move between industry jobs. But, actually, the new one is somewhat more in the direction of data-gathering than the old one was.

Nu, but a method that has already been used on five problems seems to be pretty good at converting problems into nails. :)

Not sure that generalises outside of math. Is it really better to solve one problem really, really thoroughly, than to have a good-enough fix for five? Depends on the problems, perhaps - but without knowing anything else, I'd rather solve five than one.

I don't know the exact context of this particular quote, but George Pólya wrote a few books about how to become a better problem solver (at least in mathematics). In that context the quote is very reasonable.
I think the point of the quote is that in the first case you have five methods you can use to attack different problems. In the second case you only have one method, and you have to hope every problem is a nail.

Bentham is using Enlightenment shorthand; he means "good, just, natural-law-following legislation". He's not talking about the actual sausages that we get from real legislatures.

I got a new job! Which pays better than the old one.

Do you still get to do some science?
How did you go about finding a new one?

I opine that you are equivocating between "tends to zero as N tends to infinity" and "is zero". This is usually a very bad idea.
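A toy illustration of that distinction (my example, not the original poster's): the sequence 1/N tends to zero as N tends to infinity, yet no term of it is zero.

```python
def term(n):
    """The n-th term of the sequence 1/N."""
    return 1 / n

# The terms get arbitrarily small...
for n in (10, 1_000, 1_000_000):
    print(n, term(n))

# ...but no individual term is actually zero.
assert all(term(n) != 0 for n in (10, 1_000, 1_000_000))
```

Treating "the limit is zero" as "the value is zero" is exactly the equivocation in question.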

You take the probability of A not happening and multiply by the probability of B not happening. That gives you P(not A and not B). Then subtract that from 1. The probability of at least one of two events happening is just one minus the probability of neither happening.

In your example of 23% and 48%, the probability of getting at least one is

1 - (1-0.23)*(1-0.48) = 0.60.

Only if A and B are independent.
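A minimal sketch of the calculation above (assuming, per the caveat, that A and B are independent):

```python
def p_at_least_one(p_a, p_b):
    """P(A or B) for INDEPENDENT events A and B:
    one minus the probability that neither happens."""
    return 1 - (1 - p_a) * (1 - p_b)

# The 23% / 48% example from the comment: about 0.60.
print(round(p_at_least_one(0.23, 0.48), 4))
```

For dependent events you would need the joint probability instead: P(A or B) = P(A) + P(B) - P(A and B).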

Repeating MattG's question: What do you expect to do that MTurk and the others don't already do? Why is your project an improvement on what already exists?

MTurk employs a lot of people in developed countries. I have read that they are starting to reject India-based workers because of poor work quality. I can find employment for people who can provide a similarly high standard of work relative to workers in more developed countries, but who need the income more. Member participants would otherwise have had difficulty joining, say, MTurk because of a lack of computers, internet access, proper guidance, training... I don't think there are any companies helping such freelancers find work, because it's not very profitable, and yet there is a great need to reach people who are not working to their potential.

Magical powers are not the same as powers divinely granted by a being that has your best interests at heart and whose servants have no agenda of their own. And, going genre savvy for a moment, the incident you refer to is pretty strong evidence that Mellie's powers tend to the less-luminous side.


The lymph node is connected to the... central nervous system! The central nervous system is connected to the... brain lobes! The brain lobes are connected to the... Descartian ghost! Doing the consciousness dance!

If mathematicians measure randomness with probability, then there must be some things that have a 100% occurrence probability

Er... what? I think you need to state your train of thought in more detail; at the moment it doesn't seem precise enough to engage with.

Yes, I think I am confusing randomness, with probability and certainty. I will try to clarify above by editing my post.

Therefore it is reduced impact to output the correct x-coordinates, so I shall.

This seems to me to be a weak point in the reasoning. The AI must surely assign some nonzero probability to our getting the right y-coordinate through some other channel? In fact, why are you telling the X AI about Y at all? It seems strictly simpler just to ask it for the X coordinate.


I'll observe that cold vessels fail gradually; pressure vessels may fail catastrophically.

Actually, cryogenic vessels do not really fail, in the sense I think you mean, over time - with the notable exception of liquid helium and liquid hydrogen storage vessels. Liquid helium has bizarre effects on metals (in addition to quantum tunneling), causing high-strength steel to embrittle over time. It is thought that this occurs due to the presence of helium in solid solution in the metal subjected to loading, and being present at a temperature sufficiently low to form grain boundary cracks as a result of sliding along grain boundaries (which contain s... (read more)

There may be a better one; Moldbug's financial ideas are spread over so many words that I gave up on finding the perfect link and just posted one that at least gestures in the right direction.
