# All of bcoburn's Comments + Replies

My first idea is to use something based on cryptography. For example, using the parity of the pre-image of a particular output from a hash function.

That is, the parity of x in this equation:

f(x) = n, where n is your index variable and f is some hash function assumed to be hard to invert.

This does require assuming that the hash function is actually hard, but that seems both reasonable and is at least something actual humans can't provide a counterexample for. It's also very fast to go from x to n, so this scheme is easy to verify.
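A toy sketch of the scheme (all names here and the choice of SHA-256 as the assumed-hard hash are mine, not from the comment):

```python
import hashlib

def f(x: int) -> str:
    """The hash assumed hard to invert; SHA-256 is my stand-in choice."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def commit(x: int) -> str:
    """Publish n = f(x); the hidden bit is the parity of x."""
    return f(x)

def verify(n: str, x: int, claimed_parity: int) -> bool:
    """Fast to check: recompute f(x) and the parity of x."""
    return f(x) == n and x % 2 == claimed_parity

x = 123456789            # secret pre-image; its parity is the "coin flip"
n = commit(x)
assert verify(n, x, x % 2)          # honest reveal passes
assert not verify(n, x + 1, x % 2)  # a different pre-image fails
```

Going from x to n is one hash evaluation, while recovering x's parity from n alone requires inverting f.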

-2Watercressed10y
Hash functions map multiple inputs to the same hash, so you would need to limit the input in some other way, and that makes it harder to verify.

Obligatory note re: standing desk ergonomics: http://ergo.human.cornell.edu/CUESitStand.html

The lesson seems to be to mostly sit, but stand and walk around every 30-45 minutes or so.

0D_Malik10y
Thanks for the link! The page's arguments don't seem to strongly support its recommendation to spend most of the day sitting, though; my takeaway is that you should look at ergonomics, and you shouldn't stand all day.

I think that the main difference between people who do and don't excel at SC2 isn't that experts don't follow algorithms, it's that their algorithms are more advanced/more complicated.

For example, Day[9]'s build-order-focused shows are mostly about filling in the details of the decision tree/algorithm to follow for a specific "build". Or, if you listen to professional players talking about how they react to beginners asking for detailed build orders, the response isn't "just follow your intuition", it's "this is the order you build ...

I actually see a connection between the two: One of the points in the article is to buy experiences rather than things, and Alicorn's post seems to be (possibly among other things) a set of ways to turn things into experiences.

Yes, that is exactly what they are saying. It happens to be the case that this thing works for you. That is only very weak evidence that it works for anyone else at all. All humans are not the same.

We recommend getting over being insulted and frustrated when things that work for you specifically turn out to be flukes. It's not a surprising thing, and sufficiently internalizing how many actual studies turn out to be flukes would make it the obvious result. Reality shouldn't be strange or surprising or insulting!

-3denisbider11y
It doesn't only work for me. It's how most people I know, who are into fitness, manage their weight. The "Calories In" part is not eating too much. The "Calories Out" part is maintaining your metabolism by eating small meals regularly, exercising, and eating lots of protein to gain and preserve muscle mass.

It works. It works for a lot of people. In fact, aside from gastric bypass surgery, it's the only reliable way to lose weight that I know. And gastric bypass surgery is a form of CI:CO!

And then we have a bunch of people on Less Wrong, all of whom appear to be convinced that human bodies can somehow violate the rules of thermodynamics. Or that the calorie content of foods varies so wildly no one can ever track it well enough to lose weight. Then when challenged, you resort to arguments like this:

* The sun is dark green.
* No, it's bright yellow, I saw it this morning.
* That's anecdotal evidence. It's no good as science. It's green, stop spreading your bullshit.
* I'm pretty sure that it was yellow every time I saw it in my life. It was never green.
* More anecdotal evidence. What you see is not what other people see. Learn to science, man!

Ad hominems are the last thing to resort to, but this conversation has become so ridiculous, I am left with no more credible explanations for this denialism than that you guys are chronically fat, and hiding behind excuses because you lack the will power to stop slurping Double Diet Mountain Dew. Then, you make endless posts about beating akrasia.

I'm not sure about the rest of the app, but the bookmarklet seems like a ridiculously good idea. The 'trivial inconvenience' of actually making cards for things is really brutal, anything that helps seems like a big deal.

Is there a good book/resource in general for trying to learn the meta-model you mention?

0pjeby11y
There is a brief overview of the concept here [http://web.archive.org/web/20110309041520/http://en.wikipedia.org/wiki/Meta-model_%28NLP%29], but the original and IMO definitive work on the subject (it was Bandler's masters thesis IIRC) is The Structure of Magic, Volume I. It's not too hard to find a copy electronically if you can't find one physically. As the above-linked page says: In the book, IIRC, there was more of a discussion about how the maps in our heads are created by distorting, deleting, and generalizing information from the territory. The meta-model is an attempt to codify how these distortions, deletions, and generalizations are reflected in our language, and provide a set of tools to allow someone to reconnect their map with the territory, to identify where the map needs updating in relation to a problem.

Of course, this is a straightforward problem to fix in the mechanism design: Just make responses to downvoted comments start at -5 karma, instead of having a direct penalty, as suggested elsewhere. I think that suggestion was for unrelated reasons, but it also fixes this little loophole.

0[anonymous]11y
This would discourage me much more than the current mechanism: I care very little about my total karma score. (I'm not saying it would be a good thing.)

It doesn't give many actual current details, but http://en.wikipedia.org/wiki/Computational_lithography implies that as of 2006 designing the photomask for a given chip required ~100 CPU-years of processing, and presumably that has only gone up.

Etching a 22nm line with 193nm light is a hard problem, and a lot of the techniques used certainly appear to require huge amounts of processing. It's close to impossible to say how much of a bottleneck this particular step in the process is, but based on how much really knowing what is going on in even just simple...

Also generates free time! Generally, just trying to walk between classes as fast as possible is probably good, if sprinting seems too scary.

Me as well.

Because it signals that you're the sort of person who feels a need to get certifications, or more precisely that you thought you actually needed the certification to get a job. (And because the actual certifications aren't taken to be particularly hard, so completing one isn't strong evidence of actual skill.)

2[anonymous]11y
OK, I get it now. I don't list my ECDL (which I took in high school) on my CV because I think it's so basic that potential employers (who have any kind of clue) would think "huh? so what?", but I assumed that Java/Microsoft/etc. certifications were nontrivial to get.

More concisely than the original/gwern: the algorithm used by the mugger is roughly:

1. Find your assessed probability of the mugger being able to deliver whatever reward, being careful to specify the size of the reward in the conditions for the probability.

2. Offer an exchange such that U(payment to mugger) < U(reward) * P(reward).

This is an issue for AI design because if you use a prior based on Kolmogorov complexity, then it's relatively straightforward to find such a reward: even very large numbers can have relatively low complexity, and therefore relatively high prior probabilities.
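A toy illustration of the "large numbers can have low complexity" point (my own sketch, using the length of a known Python expression as a crude stand-in for Kolmogorov complexity, which is uncomputable):

```python
def expr_prior(expr: str) -> float:
    """Crude prior weight ~ 2^-(description length), using the length of
    a Python expression as a stand-in for Kolmogorov complexity."""
    return 2.0 ** (-len(expr))

# A gigantic but simply describable number: a googolplex needs only
# 13 characters...
huge = expr_prior("10**(10**100)")

# ...while a "typical" 40-digit number with no short description needs
# all 40 digits spelled out.
typical = expr_prior("1837465920374651829304857261038475610293")

# The astronomically larger number gets the higher prior weight.
assert huge > typical
```

So a promised reward of 10^(10^100) utilons is not penalized nearly enough by a complexity-based prior to outweigh its sheer size, which is the heart of the mugging.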

0private_messaging11y
When you have a bunch of other data, you should not be interested in the Kolmogorov complexity of the number alone; you are interested in the Kolmogorov complexity of the other data concatenated with that number. E.g. you should not assign a higher probability to Bill Gates having made precisely $100,000,000,000 than to some random-looking value, as given the other sensory input you got (from which you derived your world model) there are random-looking values that give even lower Kolmogorov complexity of the total sensory input, but you wouldn't be able to find those because Kolmogorov complexity is uncomputable. You end up mis-estimating Kolmogorov complexity when you don't have it given to you on a platter pre-made. Actually, what you should use is algorithmic (Solomonoff) probability, like AIXI does, on the history of sensory input, to weighted-sum among the world models that present you with the marketing spiel of the mugger. The shortest ones simply have the mugger make it up; then there will be the models where the mugger will torture beings if you pay and not torture if you don't. It's unclear what's going to happen out of this and how it will pan out, because, again, uncomputable. In the human approximation, you take what the mugger says as a privileged model, which is strictly speaking an invalid update (the probability jumps from effectively zero, for never thinking about it, to nonzero), and invalid updates come with a cost of being prone to losing money. The construction of a model directly from what the mugger says the model should be is a hack; at that point anything goes and you can have another hack, of the strategic kind, to not apply this string->model hack to ultra-extraordinary claims without evidence. edit: I meant weighted sum, not 'select'.

So I don't know about anyone else, but as far as I can tell my own personal true rejection is: it's just too hard to remember to click over to predictionbook.com and actually type something in when I make a prediction. I've tried the things that seem obvious to help with this, but the small inconvenience has so far been too much.

Do you have a specific recommendation for what the minimum for longevity actually is? Three days doing three different high-intensity weight-bearing activities isn't the best overall workout program, but it is certainly viable and far more minimal. It would give acceptable (but less) muscle growth and far better cardio improvements. Comes pretty close, but still leaves a little room for guesswork.

Just as an exercise, and mostly motivated by the IRC channel: can anyone find a way to turn this post into a testable prediction about the real world? In particular, it would be nice to have a specific way to tell the difference between "understanding the opposite sex is impossible", "understanding the opposite sex is harder than the same sex", and "understanding types of people you haven't been in enough contact with is hard/impossible".

5Eugine_Nier11y
What do we mean by "understand"?

You could also try dissolving the whole capsule in water, which might make measuring out specific fractions easier.

I think it's pretty likely this is just a joke, not really some clever tactic.

0[anonymous]11y
Most of wedrifid's karma sinks are witty. (Sometimes I'm tempted to upvote them only because of that.)

Just for the record, and in case it's important in experiment planning, caffeine isn't actually tasteless at all. It has a fairly bitter and certainly easy-to-recognize taste dissolved in just water. It is, however, really easy to mask in, for example, orange juice, so the taste shouldn't make the experiments hard as such. Just another design constraint to be aware of. I'd also recommend adding some sort of n-days-on, m-days-off cycling to your tests, mostly because that's what I do and I want to take advantage of other people's research.

5wedrifid11y
Sounds like it is time for cheap gelatin capsules.
1gwern11y
There's a lot of possible schedules; you need to start somewhere.
Why does it need to be aimed along the planet? Use orbital mechanics: send your spacecraft on an orbit such that it hits the planet it launched from at the fast point of a very long elliptical orbit. Or even just at the far side of the current planet's orbit, whatever. It can't be that hard to get an impact at whatever angle you'd prefer with most of the Orion vehicle's energy; launching direction barely seems to matter.

0wedrifid11y
No particular reason. It's just that the arbitrary task of planetary self-destruction that Multipartite specified [http://lesswrong.com/lw/ajo/60m_asteroid_currently_assigned_a_022_chance_of/60gq] happens to be that of destroying the planet with a bomb on the surface. If you were just trying to destroy the planet then doing so from the surface seems like a terrible idea.

In a situation this specific, it seems to me to be worthwhile to reply exactly once, in order to inform other readers. Don't expect to change the troll's opinion, but making one comment in order to prevent them from accidentally convincing other people seems worthwhile.

2shminux11y
"I believe I said that, Doctor" -- Spock

Does anyone know of a place to just buy one of those belts that tells you which way north is? I've looked and can't find such a thing. Am therefore probably going to just make one; are there other things that it'd be useful to sense in a similar way? The first thing I think of is just the time, but maybe there's something better?

1[anonymous]11y
You mean the North Paw [http://sensebridge.net/projects/northpaw/]?

"Improvement" is probably the literal translation, but it's used to mean the "Japanese business philosophy of continuous improvement", the idea of getting better by continuously making many small steps.

Two things: What sort of time commitment/week would you expect for this?
The link in edit2 points to http://lesswrong.com/evidenceworksremote.com/courses instead of http://evidenceworksremote.com/courses, which is presumably what it should be.

Following up on this, I wondered what it'd take to emulate a relatively simple processor with as many normal transistors as your brain has neurons, and when we should get to that assuming Moore's Law holds. Also assuming that the number of transistors needed to emulate something is a simple linear function of the number of transistors in the thing you're emulating. This seems like it should give a relatively conservative lower bound, but is obviously still just a napkin calculation. The result is about 48 years, and the math is:

$T_{\text{needed}} = T_{\text{brain}} \cdot \frac{T_{\text{current}}}{T_{6502}} = 80 \cdot 10^9 \cdot \frac{1.16 \cdot 10^9}{4000} = 2.32 \cdot 10^{16}$

$\text{years} = \log_2\left(\frac{T_{\text{needed}}}{T_{\text{current}}}\right) \cdot 2 = 48.5$

Where all numbers are take...

I don't know for sure either way, and can't think of an experimental way to check offhand. I don't think that heating is likely to do anything to the other components of most drinks, and you might be able to make a better guess with domain knowledge I don't have. I think ethanol will generally evaporate more quickly than water, so you might also be able to get a similar test by simply closing one portion into a container with only a little air, and leaving another open for a long enough time, overnight maybe. This will still lose some water, which is I guess ...

2taryneast9y
AFAIK, it will utterly destroy many of the volatile components of wine that make it taste so complex and interesting. That's why alcohol-free wine tends to be so bland and uninteresting. I'd be willing to do a taste-test on alcohol-free wine vs wine that I already know that I like...
If you hide the non-alcoholic one in a sufficient number of normal ones I probably wouldn't guess which one it was (I'm not good enough at telling which wine is which that I'd spot a particular wine by taste, just whether I like them or not).

2[anonymous]9y
Maybe you could try adding a little more ethanol to one of the two glasses.

It's not quite trivial to actually measure, but total tabs opened in the last, say, hour is probably a better measurement than how many you have open right now. After writing that I started thinking "maybe a large number of tabs open with a slow turnover/new-tab-opening rate doesn't even correlate at all with procrastination", but I suspect that's just me coming up with excuses for things and isn't actually true. Could try measuring both if the survey actually works, shrug.

5JenniferRM11y
I generally have lots of tabs open (to the point of being made fun of) and my tendency is to open and close them swiftly in the course of multi-pronged subject exploration, with a small handful of "best of exploration" that I retain so that they prime me in subsequent hours or days or weeks with reminders, re-reading opportunities, and the possibility of being folded into longer-term projects. Every so often I clean them up by transcribing URLs and notes into text files that accumulate in an idea-archive. I endorse some of this behavior, but suspect that it could become problematic in the long term... not because of "akrasia", but as part of a more specific problem called "hoarding". Hoarding [http://www.mayoclinic.com/health/hoarding/DS00966/DSECTION=symptoms] appears to be a mental disorder that can start in one's teens, but really starts to become visible in one's 30's or 40's, growing with time until you're an 80-year-old living in a pile of useless trash. My current working model for it is that retention behavior is the default behavior, mostly driven by positive emotions triggered by objects. To throw something away, a hoarder needs to consciously override this default using fluid intelligence (calculating that expected use-value in realistic plans is less than inventory costs?). As aging progresses, fluid mental abilities decline, and you're less able to decide that something isn't worth keeping, until there are tiny trails between the bed, the toilet, and the microwave, and the rest of the house is full of piles of boxes full of sorted boxes of crap. Amusingly, I found out about hoarding via my tab-heavy information searches and left the tabs open for days, and cleaned them up into a TODO file to have a conversation with family about hoarding, which is part of why I'm aware of this. Within five years I plan to do some debugging of space-management habits and policies to make sure they're solid and clean, but I expect it to take 30 minutes per day of thin...

Also really badly needs to be applied to itself. So many words!

0sketerpot11y
I disagree. The symmetry of the "nothing left to add / nothing left to take away" phrasing is important to the poetry of the phrase. That matters.
0Nic_Smith11y
Warrigal previously suggested "Perfection is lack of excess [http://lesswrong.com/lw/53k/rationality_quotes_april_2011/3uog?context=2#comments]."

It does dissolve reasonably into water, but tastes pretty terrible. Can dilute it with fruit juice if that's a problem, or just ignore it.

I don't know how well it works in games with only 1 scum player, but with at least two, just the fact that there are two players who know they each have a partner changes their behavior enough that the game isn't random. There's also some change in what people say just because each side has a different win condition, although again this is less true with just one scum player. As just a simple example, when you're playing as the scum it can be really hard (at least for me) to make a good argument that someone I know is a normal villager isn't one, which can be enough for another player to deduce my role.

1Normal_Anomaly12y
That's interesting; I haven't played enough mafia to really study it. And in all the games I have played, the town always lynches the first player someone bothers to accuse--there aren't any actual arguments.

You could. Or you could just refuse to get into arguments about politics/philosophy. Or you could find a social group such that these things aren't problems. I certainly don't have amazing solutions to this particular problem, but I'm fairly sure they exist.

2atucker12y
The solutions that I have so far are just finding groups of people who tend to be open-minded, and then discussing things from the perspective of "this is interesting, and I think somewhat compelling". When I get back from vacation I intend to do more wandering around and talking to strangers about LWy type stuff until I get the impression that I don't sound like a crackpot. When I get good at talking about it with people with whom social mistakes are relatively cheap, I'll talk about it more with less open-minded friends.

To everyone who just read this and is about to argue with the specific details of the bullet points or the mock argument: don't bother, they're (hopefully) not really the point of this. Focus on the conclusion and the point that LW beliefs have a large inferential distance. The summary of this post which is interesting to talk about is "some (maybe most) LW beliefs will appear to be crackpot beliefs to the general public" and "you can't actually explain them in a short conversation in person because the inferential distance is too large". Therefore, we should be very careful to not get into situations where we might need to explain things in short conversations in person.
4orthonormal12y
This comment makes the OP's point effectively, in a fraction of its length and without the patronizing attitude. Comment upvoted, OP downvoted.
10[anonymous]12y
"Therefore, we should be very careful to not get into situations where we might need to explain things in short conversations in person." Should I start staying indoors more?

This is, indeed, exactly what happened.

6David_Gerard12y
I'm eagerly awaiting years-later responses to my own early comments :-D

"Sweat" here is a stand-in for generic effort; whether it's actual physical sweat or not depends on what exactly you're training for.

A relatively simple way to test whether you actually like the taste of alcohol specifically: take a reasonable quantity of your favorite alcoholic beverage, beer/wine/mixed drink/whatever, and split it into two containers. Close one, and heat the other slightly to evaporate off most of the actual ethanol. Then just do a blind taste test. This does still require not lying to yourself about which you prefer, but it removes most of the other things that make knowing whether you like the taste hard. I personally don't care enough to try this, but just the habit of thinking "how could I test this?" is good.

2[anonymous]11y
How do I know that the heating doesn't evaporate or otherwise affect stuff other than ethanol?

Mandatory link on cryonics scaling that basically agrees with Eliezer: http://lesswrong.com/lw/2f5/cryonics_wants_to_be_big/

2handoflixue12y
Unless modern figures have drifted dramatically, free storage would give you a whopping 25% off coupon. This is based on the 1990 rates I found for Alcor. And based on Alcor's commentary [http://www.alcor.org/Library/html/CostOfCryonics.html] on those prices, this is an optimistic estimate.
Source: http://www.alcor.org/Library/html/CostOfCryonicsTables.txt
Cost of cryogenic suspension (neuro-suspension only): $18,908.76
Cost of fund to cover all maintenance costs: $6,600
Proportional cost of maintenance: 25.87%

I'd also echo ciphergoth's request for any sort of actual citation on the numbers in that post; the entire post strikes me as making some absurdly optimistic assumptions (or some utterly trivial ones, if the author was talking about neuro-suspension instead of whole-body...)

Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn't "is it more helpful to try to find the simplest boundary around a list or the simplest coherent explanation of intuitions?" a much better question?

Focus on what matters, work on actually solving problems instead of trying to just win arguments.

0Will_Sawin12y
The answer to your question is "it depends on the situation". There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists. But, regardless, it is simply the case that when Eliezer says "Perhaps you come to me with a long list of the things that you call "art" and "not art"" and "It feels intuitive to me to draw this boundary, but I don't know why - can you find me an intension that matches this extension? Can you give me a simple description of this boundary?" he is not talking about "our intuitions", but about a single list provided by a single person. (It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)

This isn't even related to the law of large numbers, which says that if you flip many coins you expect to get close to half heads and half tails. This is as opposed to flipping 1 coin, where you expect to always get either 100% heads or 100% tails.

I personally expected that P(AI) would drop-off roughly linearly as n increased, so this certainly seems counter-intuitive to me.
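The coin-flip claim above is easy to check with a quick simulation (a throwaway sketch of mine, not from the original comment):

```python
import random

random.seed(0)  # deterministic for the example

def heads_fraction(n_coins: int) -> float:
    """Fraction of heads in one batch of fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins

# Flipping 1 coin always gives an extreme result: 0% or 100% heads.
assert heads_fraction(1) in (0.0, 1.0)

# Flipping many coins clusters tightly around one half; this is the
# actual content of the law of large numbers.
fractions = [heads_fraction(10_000) for _ in range(20)]
assert all(abs(f - 0.5) < 0.03 for f in fractions)
```

The average fraction is ~0.5 in both cases; what the law of large numbers says is that the spread around 0.5 shrinks as the number of coins grows.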

It depends on what you're trying to do, working in bad conditions/under pressure is good for training but bad for actually getting things done. Ironically this seems to mean that you should work harder to have good conditions when you're under more time pressure/in a worse situation overall.

This one really needs to have been applied to itself, "short is good" is way better.

(also this was one of EY's quotes in the original rationality quotes set, http://lesswrong.com/lw/mx/rationality_quotes_3/ )

4[anonymous]12y
Perfection is lack of excess.
5dares12y
Also, "short is good" would narrow this quote's focus considerably.
0dares12y
New here, sorry for the redundancy. I probably should have guessed that such a popular quote had been used.
1CronoDAS12y
Maybe it's shorter in French?

More people confirming a story is certainly epsilon more evidence that the story is correct (because more people confirming a story being evidence that it is false would be absurd).

A more interesting question is, what is the magnitude of epsilon in a case like the one described here? This is in principle testable, but I certainly don't know exactly how to go about testing it.

1gwern11y
Intuitively, I'd say it's some sort of logarithmic or quadratic curve: if one person tells me they saw a black dog the next street over, that bumps up my belief a lot; if two people tell me, it still increases, but not nearly as much; and so on to the point where if 2 billion people tell me that, I begin to think this is part of some cult and start lowering credence.
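One toy way to make the diminishing-returns intuition precise (my own sketch; it models witnesses as independent, so it cannot capture the 2-billion-person cult reversal, which would require modeling correlated testimony):

```python
def posterior(prior: float, n_witnesses: int, lr: float = 4.0) -> float:
    """Posterior probability the story is true after n independent
    confirmations, each with likelihood ratio `lr`
    (= P(confirms | true) / P(confirms | false), assumed constant)."""
    odds = prior / (1.0 - prior) * lr ** n_witnesses
    return odds / (1.0 + odds)

# Start from a skeptical 1% prior; each witness adds a constant amount
# of log-odds, so on the probability scale the gains eventually shrink.
gains = [posterior(0.01, n + 1) - posterior(0.01, n) for n in range(8)]

assert posterior(0.01, 10) > 0.999       # many witnesses -> near certainty
assert gains[6] < gains[5] < gains[4]    # late confirmations add less
```

Under independence each confirmation multiplies the odds by the same factor, so the posterior saturates toward 1 and the per-witness increment goes to zero; the `lr = 4.0` value is an arbitrary illustrative choice.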

It's slightly better to specifically connect the other end of the cable connected to the black side of the dead battery last, and to connect it to the frame of the car with the live battery instead of to the black terminal in that car.

The goal here is to make the last connection, the one that completes the circuit and can generate sparks, away from either battery, because lead-acid batteries can sometimes release hydrogen gas, which can cause fires or explode. The chances of this actually happening are pretty low, but there's no reason not to be careful. The end of the black cable connected to the running car is the only one that can be attached away from batteries, so that's the one used.

That kind of comparison just completely ignores opportunity costs, so it will result in mistakes any time they are significant.

0Tyrrell_McAllister12y
Making the comparison is not the last step before decision. The comparison itself ignores opportunity costs, but it doesn't keep you from going on to perform an opportunity-cost check. The output of the comparison can then be combined with the output of the opportunity-cost check to determine a decision.

You should try asking people to send smaller amounts of money at once, it's slightly more likely to work.

Voted down because this is a really bad way to make a point.

On the other hand, the basic point is a good one: "they'll learn from it" is not in general a good reason for doing things that hurt people in whatever sense.

1shokwave12y
"They'll learn from it" is most definitely a good reason for doing things that hurt people in the specific case of people trying to hurt you (and learning not to). That is why I specified zero-sum games above.

The reasonable way to interpret this seems to be "don't trust something you don't understand/cannot predict." Not sure how seeing where it keeps its brain helps with that, though.

It's interesting that you both seem to think that your problem is easier, I wonder if there's a general pattern there.

9Paul Crowley13y
What I find interesting is that the pattern nearly always goes the other way: you're more likely to think that a celebrated problem you understand well is harder than one you don't know much about. It says a lot about both Eliezer's and Scott's rationality that they think of the other guy's hard problems as even harder than their own.