All of MBlume's Comments + Replies

Nyoom

I believe Alicorn meant to claim that the larger class of electric vehicles for ~1 person -- scooters, tricycles, skateboards, ebikes, etc. -- is about to take off in a big way, because there are a lot more people who would buy them if they knew about them or saw their friends using them than there are using them now

'oy, girls on lw, want to get together some time?'

And our first child, Merlin Miles Blume, was born October 12th, 2016 =)

The correct response to uncertainty is *not* half-speed

I think for me the problem is that I'm not being Bayesian. I can't make my brain assign 50% probability in a unified way. Instead, half my brain is convinced the hotel's definitely behind me, half is convinced it's ahead, they fail to cooperate on the epistemic prisoner's dilemma and instead play tug-of-war with the steering wheel. And however I decide to make up my mind, they don't stop playing tug-of-war with the steering wheel.

8SatvikBeri6y
My brain often defaults to thinking of these situations in terms of potential loss, and I find the CFAR technique of reframing it as potential gain helpful. For example, my initial state might be "If I go ahead at full speed and the hotel is behind me, I'll lose half an hour. But if I turn around and the hotel is ahead of me, I'll also lose time." The better state is "By default, driving at half speed might get me to the hotel in 15 minutes if I'm going in the right direction, and I'll save ~8 minutes by going faster. Even if the hotel is behind me, I'll save time by driving ahead faster."
Instrumental vs. Epistemic -- A Bardic Perspective

...amusingly enough, "sit in my room and write posts on Less Wrong" turned out to be a pretty good move, in retrospect.

0Crux6y
Did you gain any skills that helped you achieve the position you have now other than by "sitting in your room and writing posts on Less Wrong"? (I assume your point is that you met your partner through Less Wrong.)
'oy, girls on lw, want to get together some time?'

Further update for future biographers: we got married on September 21st at the UC Berkeley botanical garden, Kenzi officiated and YVain gave a toast =)

And our first child, Merlin Miles Blume, was born October 12th, 2016 =)

7Good_Burning_Plastic7y
You might want to link "a toast" to http://slatestarcodex.com/2014/09/22/ssc-gives-a-wedding-speech/
On Straw Vulcan Rationality

Thank you for causing me to read that =)

Traditional Capitalist Values

"Anyone who wants to make disturbing the peace a crime is probably guilty of 'hating freedom'"

No, they have different priorities from you.

2blacktrance8y
Not mutually exclusive, and can be a nicer way of saying the same thing.
127chaos8y
It would be slightly interesting to read a fic in which Naming was a mechanism of magic, and Voldemort chose that specific name for very good reasons. Reasons which explained why people feared the name. Maybe he stole the Grim Reaper's power for his very own, somehow becoming Master of Death or Flight from Death or something similar, something involving an actual title with power invested into it. Neat thoughts in this area, easy for the picking. French is kind of a silly language for it, of course.
Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96

Harry frowned. "Well, I could listen to it, or the Dark Lord... oh, my parents. Those who had thrice defied him. They were also mentioned in the prophecy, so they could hear the recording?"

"If James and Lily heard anything different from what Minerva reported," Albus said evenly, "they did not say so to me."

"You took James and Lily there? " Minerva said.

"Fawkes can go to many places," Albus said. "Do not mention the fact."

Frankly, this reads like a non-answer to me.

1Fermatastheorem9y
I think Dumbles is trying to tell McGonagall that he took the Potters there while letting her keep plausible deniability.
Prisoner's dilemma tournament results

Fantastic work, thank you =)

For anyone else unpacking the zip file, note you'll want to create a new directory to unzip it into, not just unzip it in the middle of your home folder.

Just One Sentence

Physicalism is the radical notion that people are made of things that aren't people.

Learning critical thinking: a personal example

Er, walking on a narrow ledge 300 feet off the ground is still a bad idea because, y'know, even with something simple like walking, sometimes you roll a natural 1 and trip.

Ugh fields

This seems to be a serious problem. What do you do when you have enough vague procrastinatory ugh-fields that just reading good advice about procrastination makes you deeply afraid that you're going to have to think about one of them, so you wind up afraid to read/process it?

0TheOtherDave9y
The most reliable basic strategy for flattening out "ugh-fields" I know of is to decide on a single thing I want to un-ughify, and set up a schedule of reinforcement for myself for that thing. If I wanted to un-ughify myself around reading advice articles about procrastination, for example, I would treat myself every time I read such an article for a while, then switch to an intermittent reinforcement schedule (e.g., treat myself for every third article), or still better a differential reinforcement schedule (e.g., treat myself for the 30% best articles I read each week). That said, it's unlikely that reading advice articles about procrastination is a particularly high-value activity in the first place. Indeed, many people procrastinate that way.
New censorship: against hypothetical violence against identifiable people

I mean, assuming that sea piracy to fund efficient charity is good, media piracy to save money that you can give to efficient charity is just obviously good.

[anonymous]9y13

media piracy to save money that you can give to efficient charity

Is so incredibly obviously good that I'm mystified no one is promoting it. I think the main reason is that it is "illegal".

LW Women- Minimizing the Inferential Distance

I skimmed the options too quickly -- I'd have picked "not offensive" if I'd noticed it.

Group rationality diary, 11/13/12

I feel like there is a bias against reproduction on LessWrong.

Is there? I kinda hope not.

1[anonymous]9y
Let's see how people answered the “have children” and “want (more) children” questions when the survey results come out...
7cata9y
Our demographics skew young (75% under 30, 90% under 38 [http://lesswrong.com/lw/8p4/2011_survey_results/]) and unmarried (18%, surely way below at least the American average) so that could explain a perceived bias without having to resort to more elaborate arguments.
Uncritical Supercriticality

I'm not sure what "supernatural" means. Out of the ordinary? But isn't deep rationalism out of the ordinary? What are we talking about?

In the local parlance, "supernatural" is used to describe theories that have mental thingies in them whose behavior can't be explained in terms of a bunch of interacting non-mental thingies. Pretty sure the definition originates with Richard Carrier.

0Abd10y
I read Carrier. Interesting. Reality, for me, is either Theostoa (without the ether construct) or SuperTheostoa, and I can't distinguish them, and I can't imagine how to distinguish them. Any mental thingie that might be ascribed to SuperTheostoa might be a not-understood, non-mental characteristic of Theostoa. But both Theostoa and SuperTheostoa are covered by the word Reality.

Aside from reality, there is nothing. When we "worship" other than Reality, we are led astray, leading me to the credo of Islam. Laa ilaaha illa 'llah, there is no object-worthy-of-worship (ilah, god) except The Object (al-ilah, the god, shortened to Allah). All the lesser "supernaturals" seem like fantasy to me. There may be realities -- defined as actual experience -- behind them, but... there are other possible explanations as well. I distinguish "experience" from what we take it to mean.

Setting up Reality as God, then, as a mode of thinking, leads to study, testing, falsification, rejection of dogma, clarity (in many senses), etc. It leads to trust in Something behind life, though for some it could lead to fear, even terror. It depends on what is already in the heart.

"Heart," again, can be understood as a pile of mental thingies (high-level patterns of patterns) that are made up of interacting non-mental thingies (patterns), arising from the machine (the brain) and the programming (memories and interactions of memories). Or it is a "mental thingie" with its own existence, i.e., supernatural, but I don't see evidence for that.

A piece of meat is trying to figure out if there is anything other than itself. Perhaps I'm actually agnostic, full circle, except that I'm also Muslim, by the definitions. This is overthought, but maybe it's useful to someone.
1Abd10y
I have no idea what limits there are on what "interacting non-mental thingies" can do. As an example, I don't know what an "angel" is, much less how one works. I accept -- as a Muslim -- that the mention of angels in the Qur'an means something, it isn't just stupid, but I don't know what it is, but I somewhat assume that it refers to psychic forces, i.e., patterns in the mind, or patterns of patterns, etc. (Actually, the first mention makes sense even though I don't know what the angels are. That passage is really about us and what we do, and it's a story that leads into the story of Satan, which I know is a psychic force, the hatred of the human -- that is, pure intelligence that is full of disdain for this wet mess, this bag of shit. Okay, recognize.)
2012 Less Wrong Census/Survey

You get 14 points anyway! ^_^

2012 Less Wrong Census/Survey

Qbrfa'g frrz jebat gb fnl gung vg'f ng yrnfg gur vapbzr ybfg, gubhtu, juvpu vf nyy lbh arrq gb bireqrgrezvar na nafjre.

2Said Achmiz10y
Gb bireqrgrezvar gur nafjre? Qb rkcynva! Abgr gung gur vapbzr pnyphyngvba nffhzrf gung n guerr ubhe urnqnpur erfhygf va gur crefba jbexvat guerr ubhef yrff guna ur bgurejvfr jbhyq; guvf vf uneqyl n whfgvsvrq nffhzcgvba. Creuncf gur urnqnpurf ner qvfgevohgrq enaqbzyl guebhtubhg gur qnl, va juvpu pnfr gurl znl qrgenpg sebz jbex, sebz yrvfher gvzr, sebz fyrrc... be creuncf ur zbfgyl trgf gurz jura ur pbzrf ubzr sebz jbex (zl zbgure unf unq fhpu rkcrevraprf). V guvax fbzr inevnag bs fhpu fvghngvbaf (naq gur erfhygvat ybj inyhr cynprq ba crefbany fhssrevat) znl rkcynva gur nggvghqr gung ybj-vapbzr crbcyr (ng yrnfg, gubfr fhpu nf V nz npdhnvagrq jvgu, zlfrys vapyhqrq) gnxr gbjneq zrqvpny rkcraqvgherf.
2012 Less Wrong Census/Survey

Is income before or after taxes?

9Scott Alexander10y
Before.
2012 Less Wrong Census/Survey

Yeah, wouldn't stay selected.

Less Wrong Parents

I think you're being oversensitive -- if I said the NYC Swing Dancing Club had two babies, I don't think anyone would bat an eye.

NYC Swing Dancing Club had two babies

Eyes batting like mad over here. I've only ever heard that construction applied to actual members of the organization, e.g. "our swing dancing club has two new parents", or "our Thursday morning playgroup has two new toddlers".

[anonymous]10y13

It wouldn't sound cultish or anything, but it'd still sound "weird" to me.

Logical Pinpointing

This is a really good post.

If I can bother your mathematical logician for just a moment...

Hey, are you conscious in the sense of being aware of your own awareness?

Also, now that Eliezer can't ethically deinstantiate you, I've got a few more questions =)

You've given a not-isomorphic-to-numbers model for all the prefixes of the axioms. That said, I'm still not clear on why we need the second-to-last axiom ("Zero is the only number which is not the successor of any number.") -- once you've got the final axiom (recursion), I can't seem to visualize a...

1Viliam_Bur10y
I guess it is not necessary. It was just an illustration of a "quick fix", which was later shown to be insufficient.
Logical Pinpointing

I've seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, "Because otherwise you can't talk about numbers as opposed to something else," so AFAIK it's theoretically possible that I'm the first to spell out that idea in exactly that way, but it's an obvious-enough idea and there's been enough debate by philosophically inclined mathematicians that I would be genuinely surprised to find this was the case.

If memory serves, Hofstadter uses roughly this explanation in GEB.

0DavidS10y
This is pretty close to how I remember the discussion in GEB. He has a good discussion of non-Euclidean geometry. He emphasizes that originally the negation of the Parallel Postulate was viewed as absurd, but that now we can understand that the non-Euclidean axioms are perfectly reasonable statements which describe something other than the plane geometry we are used to. Later he has a bit of a discussion of what a model of PA + NOT(CON(PA)) would look like. I remember finding it pretty confusing, and I didn't really know what he was getting at until I read some actual logic theory textbooks. But he did get across the idea that the axioms would still describe something, but that something would be larger and stranger than the integers we think we know.
0Peterdjones10y
??? IIRC, Hofstadter is a firm formalist, and I don't see how that squares with EY's apparent Correspondence Theory. At least I don't see the point in correspondence if what is being corresponded to is itself generated by axioms.
Constructing fictional eugenics (LW edition)

Central planning means pushing the planners' goals into everyone's individual incentives. Humans aren't IGF maximizers, and will respond to financial incentives.

Constructing fictional eugenics (LW edition)

With central planning, more women than men makes sense, and this system has central planning. Everyone isn't just trying to maximize IGF.

-2Eugine_Nier10y
Agreed, however, Eliezer's phrasing of #9 made it sound like he was referring to individual incentive.
(Moral) Truth in Fiction?

Fable of the Dragon Tyrant would make a good animated short, I think.

The Fabric of Real Things

OK, let's say you're looking down at a full printout of a block universe. Every physical fact for all times specified. Then let's say you do Solomonoff induction on that printout -- find the shortest program that will print it out. Then for every physical fact in your printout, you can find the nearest register in your program it was printed out of. And then you can imagine causal surgery -- what happens to your program if cosmic rays change that register at that moment in the run. That gives you a way to construe counterfactuals, from which you can get ca...
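The causal-surgery idea in that comment can be illustrated with a toy (an editor's sketch, not from the original comment; the "universe program" and its variable names are invented for illustration): treat the intermediate variables of a small straight-line program as the "registers", force one of them to a different value mid-run, and see which downstream facts change.

```python
# Toy illustration of counterfactual "causal surgery" on a program.
# The universe is a straight-line program; each variable is a "register".
# An intervention overrides one register mid-run (the "cosmic ray").

def run_universe(intervene_on=None, forced_value=None):
    state = {}
    steps = [
        ("a", lambda s: 2),
        ("b", lambda s: s["a"] + 3),
        ("c", lambda s: s["a"] * 10),
        ("d", lambda s: s["b"] + s["c"]),
    ]
    for name, rule in steps:
        state[name] = rule(state)
        if name == intervene_on:
            state[name] = forced_value  # the cosmic ray flips this register
    return state

factual = run_universe()
counterfactual = run_universe(intervene_on="b", forced_value=100)
print(factual["d"], counterfactual["d"], counterfactual["c"])  # → 25 120 20
```

Facts downstream of the intervened register change (`d`), while facts that don't depend on it (`c`) stay fixed -- which is exactly the dependency structure a causal graph would record.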

The Fabric of Real Things

This question seems decision-theory complete. If you can reify causal graphs in situations where you're in no state of uncertainty, then you should be able to reify them to questions like "what is the output of this computation here" and you can properly specify a wins-at-Newcomb's-problem decision theory.

How To Have Things Correctly

I am still trying to figure out how to Have Computers correctly, because they suffer from this weird constraint where they're only really useful if I can carry them all over, but if I do that I lose them all the time.

(Symptomatically, I'm typing this on your broken/cast-off macbook =P)

3DavidTC10y
The way I keep from leaving my laptop anywhere is to put my car keys in the laptop bag. Barring rides with other people and mass transit, it's impossible to leave your car keys somewhere. And if you travel mass transit, you could leave your wallet in the laptop bag instead. But even if you do travel sans car, you will notice your lack of keys/computer the second you get home, instead of figuring it out hours or days later when you try to use the computer.

I do this trick with things besides my laptop. If I'm helping move furniture and don't want to endanger the phone in my pocket, I make sure my car keys are one of the things I remove also. If I'm somewhere else, and there is anything I might leave, my keys are with it. (And this rule also requires that all of my things are in the same place, another good rule in general.)

I also do this at home, in a way... I put things I need to remember to take with me on top of my car keys, so I can't take the keys without picking that thing up. (This is obviously not a good plan if you can't keep track of where you leave your keys at home, as it will make them harder to find. But I don't have that problem; I only have one place they ever get left.)
0arundelo10y
If you want to attack this from the "quit losing them" angle, one way is to use spaced-repetition software [http://www.gwern.net/Spaced%20repetition] to train you to notice when you're in circumstances where you might be about to walk away without your computer (or whatever your failure mode is). I do something like this to train myself to be mindful whenever I get out of my car. (For context, I drive almost every day.) Specifically:

* I have an Anki [http://ankisrs.net/] card that says, "When's the last time you got out of a car? Did you check what you should have?" The back of the card says "Dome light, keys, headlights, lock."
* When Anki gives me this card, I score myself well only if I remember getting out of the car and checking the things on my checklist while doing so.

I've been doing this for less than three months and it has not fixed my brain yet. But I'm pretty sure it works: currently I might go up to three days without doing my getting-out-of-car ritual, while previously I might go for months and months at a time without doing it. (I lock my keys in my car around once a year.)

I have similar cards for checking the parking brake, doing my leaving-the-house checklist, and putting my car keys in my pocket when I turn the car off but do not immediately get out. (This last is a specific locking-the-keys-in-the-car failure mode for me.)
2Decius10y
Physically attach the computer to something which is impossible to leave behind, or which provides a physical cue when you walk away from it? I keep my smartphone in a belt holster; it is (almost) always either on the charger, in the holster, or in my hand.
2drethelin10y
There exist wallet-finders [http://www.safetybasement.com/RFID-Wallet-Alarm-System-p/sb-ar103.htm] and so on but I haven't used one. Attaching it to a laptop may keep you from forgetting it places, though it might be the sort of thing that's inconvenient enough that you end up not using it.
0MixedNuts10y
Smartphone or tablet, with really snazzy synchronization with your main box?
Chaotic Inversion

a 15 minute break every 90 minutes

People can work for 90 minutes?! Like... without stopping?

1Epiphany10y
For me, it depends on what I'm doing. Give me something tedious, and I can barely focus to save my life. If it's something I'm well suited to, I can do it for hours and hours, resenting even the short breaks my body forces me to take in order to get something edible from the refrigerator. Maybe you just haven't really thought about which activities you have the most stamina for? I'd find it hard to do math for a whole 90 minutes, but I can write, do visual art or do emotional support for hours at a time. Not sure how long I can flow while programming - the boss said I have to take breaks. I think I've gone at least two hours.
2Nick_Tarleton10y
You've never flow-stated on a piece of code for 90 minutes? (I'm not absolutely sure I ever have, but I'd be surprised if not.)
The noncentral fallacy - the worst argument in the world?

Sorry, what do you mean by "pass an ideological Turing test"? The version I'm familiar with gets passed by people, not definitions.

2MileyCyrus10y
I just meant that a non-feminist trying to pass a feminist Turing test would get nicked if they used the "unfair treatment of a woman based on her sex" definition, but would probably get away with "unfair treatment of a person based on their sex, but it only counts if their sex has been historically disadvantaged." There's a difference between the definitions a well-read feminist would pick up on.
The noncentral fallacy - the worst argument in the world?

"Sexism" is a short code. Not only that, it's a short code which has already been given a strong negative affective valence in modern society. Fights about its definition are fights about how to use that short code. They're fights over a resource.

That code doesn't even just point to a class of behaviors or institutions -- it points to an argument, an argument of the form "these institutions favor this gender and that's bad for these reasons". Some people would like it to point more specifically to an argument that goes something like...

0TheOtherDave10y
Well, I certainly agree that a word can have the kind of rhetorical power you describe here, and that "sexism" is such a word in lots of modern cultures. And while modeling such powerful labels as a fixed resource isn't quite right, insofar as such labels can be applied to a lot of different things without necessarily being diffused, I would agree with something roughly similar to that... for example, that if you and I assign that label to different things for mutually exclusive ends, then we each benefit by denying the other the ability to control the label.

And I agree with you that if I want to attach the label to thing 1, and you want to attach it to mutually exclusive thing 2, and thing 1 is strictly worse than thing 2, then it's better if I fail and you succeed.

All of that said, it is not clear to me that caring about fairness is always strictly worse than caring about optimality, and it is not clear to me that caring about fairness is mutually exclusive with caring about optimality.

Edit: I should also say that I do understand now why you say that using "sexism" to refer to unfair systems cuts off the use of "sexism" to refer to suboptimal systems, which was the original question I asked. Thanks for the explanation.
The noncentral fallacy - the worst argument in the world?

I'll take a shot.

What we choose to measure affects what we choose to do. If I adopt the definition above, and I ask a wish machine to "minimize sexism", maybe it finds that the cheapest thing to do is to ensure that for every example of institutional oppression of women, there's an equal and opposite oppression of men. That's...not actually what I want.

So let's work backwards. Why do I want to reduce sexism? Well, thinking heuristically, if we accept as a given that men and women are interchangeable for many considerations, we can assume that any...

6TheOtherDave10y
I certainly agree that telling a wish machine to "minimize sexism" can have all kinds of negative effects. Telling it to "minimize cancer" can, too (e.g., it might ensure that a moment before someone would contract cancer, they spontaneously disintegrate). It's not clear to me what this says about the concepts of "cancer" or "sexism," though.

I agree that optimizing the system is one reason I might want to reduce sexism, and that insofar as that's my goal, I care about sexism solely as a pointer to opportunities for optimization, as you suggest. I would agree that it's not necessarily the best such pointer available, but it's not clear to me how the given definition cuts off that use.

It's also not clear to me how any of that causes the violent reaction DaFranker describes. If you can unpack your thinking a little further in those areas, I'd be interested.
The Useful Idea of Truth

In a Truman Show situation, the simulators would've shown us white pin-pricks for thousands of years, and then started doing actual astrophysics simulations only when we got telescopes.

The Useful Idea of Truth

The other day Yvain was reading aloud from Feser and I said I wished Feser would read The Simple Truth. I don't think this would help quite as much.

The Simple Truth sought to convey the intuition that truth is not just a property of propositions in brains, but of any system successfully entangled with another system. Once the shepherd's leveled up a bit in his craftsmanship, the sheep can pull aside the curtain, drop a pebble into the bucket, and the level in the bucket will remain true without human intervention.

0thomblake10y
Good point.
Rationality Quotes October 2012

Paths are made by walking

-Franz Kafka (quoted in Joy of Clojure)

Caminante, son tus huellas
el camino, y nada más;
caminante, no hay camino,
se hace camino al andar.
Al andar se hace camino,
y al volver la vista atrás
se ve la senda que nunca
se ha de volver a pisar.
Caminante, no hay camino,
sino estelas en la mar.

-Antonio Machado

Translation:

Wanderer, your footsteps are
the road, and nothing more;
wanderer, there is no road,
the road is made by walking.
By walking one makes the road,
and upon glancing back
one sees the path
that must never be trod again.
Wanderer, there is no road—
Only wakes upon the sea.

Rationality Quotes October 2012

I can pick up a mole (animal) and throw it. Anything I can throw weighs one pound. One pound is one kilogram.

--Randall Munroe, A Mole of Moles

[anonymous]10y32

… if anyone asks, I did not tell you it was ok to do math like this.
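For the curious, the quote's deliberately sloppy unit chain is easy to check in a few lines (an editor's toy Fermi sketch, not from the quote; the ~75 g figure for a real mole and the Moon-mass comparison are added assumptions):

```python
# Toy Fermi check of the quote's unit chain.
# Avogadro's number and the Moon's mass are standard figures; the ~75 g
# mass of a European mole is a rough assumed value the quote never states.

AVOGADRO = 6.022e23          # things per mole
MOON_MASS_KG = 7.35e22       # approximate mass of the Moon

# The quote's version: "anything I can throw weighs one pound. One pound
# is one kilogram." So a mole of moles weighs about 6e23 kg.
sloppy_mass_kg = AVOGADRO * 1.0

# A slightly more careful version, assuming a mole (animal) is ~75 g.
careful_mass_kg = AVOGADRO * 0.075

print(sloppy_mass_kg / MOON_MASS_KG)    # roughly eight Moons' worth
print(careful_mass_kg / MOON_MASS_KG)   # a bit over half a Moon
```

The "one pound is one kilogram" step overshoots the more careful figure by about a factor of thirteen -- close enough for a what-if, which is rather the point of the follow-up comment.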

Rationality Quotes October 2012

Sometimes magic is just someone spending more time on something than anyone else might reasonably expect

--Teller (source)

0D_Malik8y
My experience has been that when people try to understand what went into a magic trick, they usually come up with explanations more complex than the true mechanism. Oftentimes a trick can be done either through an obvious but laborious method, or through an easy method, and people don't realize that the latter exists. (For instance, people posit elaborate mirror setups, or "moving the hand quicker than the eye", or armies of confederates, when in fact simple misdirection, forcing, palming, etc. suffice.)
Less Wrong Polls in Comments

Your confidence is inspiring, but I'd bet some false trichotomies are more obvious than others. (Though I can't immediately think of any examples of subtler false trichotomies to rattle off, so yeah)

-2Epiphany10y
An example of something that is NOT a false *otomy would be the shorter list. (See other comment [http://lesswrong.com/lw/ekw/less_wrong_polls_in_comments/7hb5])
Less Wrong Polls in Comments

Well, no one's voting for her anyway.

2Fyrius10y
I beg to differ.
5OpenThreadGuy10y
Psh, of course rationalists think Twilight Sparkle is the best pony.
1pleeppleep10y
Best background pony [pollid:23]
0[anonymous]10y
Rainbow Dash is the official spokespony of #lesswrong. She's been in the /topic for several months, now.
0David_Gerard10y
Geeks and goths, man.
2Alicorn10y
I think it's "Pinkie".
The noncentral fallacy - the worst argument in the world?

At risk of failing to JFGI: can someone quickly summarize what remaining code work we'd like done? I've started wading into the LW code, and am not finding it quite as impenetrable as last time, so concrete goals would be good to have.

5Eliezer Yudkowsky10y
http://code.google.com/p/lesswrong/issues/list
The raw-experience dogma: Dissolving the “qualia” problem

...that really should have occurred to me first.

0selylindi10y
Yes, my experience of redness can come not only from light, but also from dreams, hallucinations, sensory illusions, and direct neural stimulation. But I think the entanglement with light has to be present first, and the others depend on it in order for the qualia to be there.

Take, for example, the occasional case of cochlear implants for people born deaf. When the implant is turned on, they immediately have a sensation, but that sensation only gradually becomes "sound" qualia to them over roughly a year of living with that new sensory input. They don't experience the sound qualia in dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation) until after their brain is adapted to interpreting and using sound.

Or take the case of tongue-vision systems for people born blind. It likewise starts out as an uninformative mess of a signal to the user, but gradually turns into a subjective experience of sight as the user learns to make sense of the signal. They recognize the experience from how other people have spoken of it, but they never knew the experience previously from dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation).

In short, I think the long-term potentiation of the neural pathways is a very significant kind of causal entanglement that is not present in the program under discussion.