All of SoullessAutomaton's Comments + Replies

Open Thread: June 2010

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (i.e., paying interest) is not a good revenue source, any more than someone who fails to pay entirely. The ideal borrower is someone who reliably and consistently makes payments with a maximal interest-to-principal ratio.
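The point can be made concrete with a toy expected-profit model (all numbers invented purely for illustration; real underwriting models are far more elaborate):

```python
# Toy model of yearly lender profit for three borrower profiles.
# All figures are made up for illustration only.

def expected_profit(balance_carried, apr, p_default, principal):
    """Interest revenue on the carried balance minus expected default loss."""
    return balance_carried * apr - p_default * principal

# Pays in full every month: no interest revenue, negligible default risk.
transactor = expected_profit(balance_carried=0, apr=0.20, p_default=0.001, principal=5000)

# Reliably carries a balance: steady interest revenue, low default risk.
revolver = expected_profit(balance_carried=4000, apr=0.20, p_default=0.02, principal=5000)

# High default risk: interest revenue swamped by expected losses.
risky = expected_profit(balance_carried=4000, apr=0.20, p_default=0.30, principal=5000)

print(transactor, revolver, risky)
```

On these made-up numbers the reliable revolver, not the debt-avoider, maximizes expected profit, which is the comment's point.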

This is another one of those Hanson-esque "X is not about X-ing" things.

2Douglas_Knight12yExpected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").
3NancyLebovitz12yI think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that. There may also be a weirdness factor if relatively few people have no debt history. (1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.
Open Thread: June 2010

So if you're giving examples and you don't know how many to use, use three.

I'm not sure I follow. Could you give a couple more examples of when to use this heuristic?

Blue- and Yellow-Tinted Choices

Seems I'm late to the party, but if anyone is still looking at this, here's another color contrast illusion that made the rounds on the internet some time back.

For anyone who hasn't seen it before, knowing that it's a color contrast illusion, can you guess what's going on?

Major hint, in rot-13: Gurer ner bayl guerr pbybef va gur vzntr.

Full answer: Gur "oyhr" naq "terra" nernf ner gur fnzr funqr bs plna. Lrf, frevbhfyl.
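For the curious, the hints decode mechanically; Python's standard codecs module includes a rot13 transform:

```python
import codecs

hint = "Gurer ner bayl guerr pbybef va gur vzntr."
# rot13 is its own inverse, so decode and encode are interchangeable here.
print(codecs.decode(hint, "rot13"))  # → There are only three colors in the image.
```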

The image was created by Professor Akiyoshi Kitaoka, an incredibly prolific source of crazy visual perception illusions.

Aspergers Poll Results: LW is nerdier than the Math Olympiad?

Commenting in response to the edit...

I took the Wired quiz earlier but didn't actually fill in the poll at the time. Sorry about that. I've done so now.

Remarks: I scored a 27 on the quiz, but couldn't honestly check any of the four diagnostic criteria. I lack many distinctive autism-spectrum characteristics (possibly to the extent of being on the other side of baseline), but have a distinctly introverted/antisocial disposition.

Open Thread: April 2010, Part 2

A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week's Finds. The most recent of these mentions topics such as using icosahedra to solve quintic equations, an isomorphism between processes in chemistry, electronics, thermodynamics, and other domains described in terms of category theory, and some speculation about applications of category-theoretic constructs to physics.

Which is all well and …

Compartmentalization as a passive phenomenon

Ah, true, I didn't think of that, or rather didn't think to generalize the gravitational case.

Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.

Compartmentalization as a passive phenomenon

Similarly, my quick calculation, given an escape velocity high enough to permit walking and an object 10 meters in diameter, was about 7 * 10^9 kg/m³. That's roughly the density of electron-degenerate matter; I'm pretty sure nothing will hold together at that density without substantial outside pressure, and since we're excluding gravitational compression here I don't think that's likely.
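The arithmetic can be sketched under stated assumptions (I take "high enough to walk" to mean an escape velocity of roughly 10 m/s, so a brisk jump doesn't leave the surface, and a radius of 5 m for a 10 m diameter body):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
v_esc = 10.0    # assumed escape velocity, m/s
r = 5.0         # radius of a 10 m diameter object, m

# v_esc^2 = 2*G*M/r with M = (4/3)*pi*r^3*rho
# solving for density:  rho = 3*v_esc^2 / (8*pi*G*r^2)
rho = 3 * v_esc**2 / (8 * math.pi * G * r**2)
print(f"{rho:.1e} kg/m^3")  # roughly 7e9, the figure quoted in the comment
```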

Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.

5wnoise12yThis won't work for spherical shells and uniformly distributed charge for the same reason that a spherical shell has no net gravitational force on anything inside it. You'll need active counterbalancing.
Compartmentalization as a passive phenomenon

I don't think you'd be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so at the surface will make a negligible difference in gravitational force, so we're talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (and about 71% of escape velocity). So for most practical purposes, the boots would be in orbit as well, along with most of the moon's surface.

Of course, since the centrifugal force at …
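For reference, the relation between circular orbital speed and escape speed at a given radius is fixed: v_orb = √(GM/r) and v_esc = √(2GM/r), so v_orb/v_esc = 1/√2 ≈ 0.71 for any body. A one-line check:

```python
import math

# v_orb / v_esc = sqrt(G*M/r) / sqrt(2*G*M/r) = 1/sqrt(2), independent of M and r.
ratio = 1 / math.sqrt(2)
print(f"{ratio:.3f}")  # 0.707
```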

0JamesAndrix12yHmm, I suppose it's too much handwaving to say it's only a few meters wide and super dense.
The mathematical universe: the map that is the territory

It's an interesting idea, with some intuitive appeal. Also reminds me of a science fiction novel I read as a kid, the title of which currently escapes me, so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though--I guess one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any other justification.

0Yoreth12ySuppose we had a G.O.D. that takes N bits of input, and uses the input as a starting-point for running a simulation. If the input contains more than one simulation-program, then it runs all of them. Now suppose we had 2^N of these machines, each with a different input. The number of instantiations of any given simulation-program will be higher the shorter the program is (not just because a shorter bit-string is by itself more likely, but also because it can fit multiple times on one machine). Finally, if we are willing to let the number of machines shrink to zero, the same probability distribution will still hold. So a shorter program (i.e. more regular universe) is "more likely" than a longer/irregular one. (All very speculative of course.)
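Yoreth's counting claim can be brute-forced for small N with a toy stand-in, treating "contains a simulation-program" as substring containment: a k-bit pattern occurs (N−k+1)·2^(N−k) times in total across all 2^N inputs, so shorter patterns dominate. A sketch (the patterns are arbitrary placeholders, not actual programs):

```python
from itertools import product

def total_occurrences(pattern, n):
    """Count occurrences of `pattern` as a substring across all n-bit strings,
    with multiplicity (a string containing it twice counts twice)."""
    count = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        count += sum(s.startswith(pattern, i) for i in range(n - len(pattern) + 1))
    return count

N = 12
print(total_occurrences("01", N))    # (12-2+1) * 2**10 = 11264
print(total_occurrences("0101", N))  # (12-4+1) * 2**8  = 2304
```

The 2-bit pattern turns up about five times as often as the 4-bit one, matching the "shorter program, more instantiations" argument.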
More thoughts on assertions

The example being race/intelligence correlation? Assuming any genetic basis for intelligence whatsoever, for there to be absolutely no correlation at all with race (or any distinct subpopulation, rather) would be quite unexpected, and I note Yvain discussed the example only in terms as uselessly general as the trivial case.

Arguments involving the magnitude of differences, singling out specific subpopulations, or comparing genetic effects with other factors seem to quickly end up with people grinding various political axes, but Yvain didn't really go there.

5Johnicholas12yWhat about casual use of poorly chosen examples reinforcing cultural concepts such as sexism? I'm referencing this paper. Summary: Example sentences in linguistics far more often have males verbing females than females verbing males. There are a lot of questions which (to the best of my understanding) are still up in the air. Yvain's casual use of the controversial race/intelligence connection as an example at best glosses over these questions, and at worst subtly signals presumed answers to the questions without offering actual evidence. (Just like the males verbing females examples subtly signal some sort of cultural sexism.) Questions like: Is intelligence a stable, innate quality? Is intelligence the same thing as IQ? Is intelligence a sharp, rigid concept, suitable for building theory-structures with? Is intelligence strongly correlated to IQ? Is race a sharp, rigid concept, suitable for building theory-structures with? Is IQ strongly correlated to self-identified race? Is race strongly correlated to genetics? Is the best explanation of these correlations that genetics strongly influences intelligence? Is the state of the scientific evidence settled enough that people ought to be taking the research and applying it to daily lives or policy decisions? My take on it is that intelligence is a dangerously fuzzy concept, sliding from a general "tendency to win" on the one hand, to a simple multiple-choice questionnaire on the other, all the while scattering assumptions that it's innate, culture-free and unchangeable through your mind. Race is a dangerous concept too, with things like the one-drop rule confusing the connection to genetics and the fact that (according to the IAT) essentially everyone is a little bit racist, which has to affect your thinking about race. Thirdly, there's a very strong tendency for the racist/anti-racist politicals to hijack tentative scientific results and use them as weapons, which clouds the wat…
The scourge of perverse-mindedness

The laws of physics are the rules, without which we couldn't play the game. They make it hard for any one player to win.

Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:

  1. You can't win the game.
  2. You can't break even.
  3. You can't stop playing.
The scourge of perverse-mindedness

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."

"Why is that?" th

…
The scourge of perverse-mindedness

Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.

There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not…

What would you do if blood glucose theory of willpower was true?

Really, does it actually matter that something isn't a magic bullet? Either the cost/benefit balance is good enough to warrant doing something, or it isn't. Perhaps taw is overstating the case, and certainly there are other causes of akrasia, but someone giving disproportionate attention to a plausible hypothesis isn't really evidence against that hypothesis, especially one supported by multiple scientific studies.

From what I can see, there's more than sufficient evidence to warrant serious consideration for something like the following propositions:

  • App
…
6NancyLebovitz12yI'm wondering if akrasia is partially caused by inefficient use of willpower.
The scourge of perverse-mindedness

I thought the mathematical terms went something like this:

  • Trivial: Any statement that has been proven
  • Obviously correct: A trivial statement whose proof is too lengthy to include in context
  • Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes
  • Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult
  • Interesting: Unproven, despite many attempts
The scourge of perverse-mindedness

It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!

I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.

4ktismael12yI recall reading (One of Tyler Cowen's books, I think) that happiness is highly correlated with capacity for self-deception. In this case, positive / negative events would have little impact, but not necessarily because people accepted them, but more because the human brain is a highly efficient self-deception machine. Similarly, a tendency toward depression correlated with an ability to make more realistic predictions about one's life. So I think it may in fact be a particular aspect of human psychology that encourages self-deception and responds negatively to reality. None of this is to say that these effects can't be reduced or eliminated through various mental techniques, but I don't think it's sufficient to just assert it as cultural.
0CronoDAS12yThat's a pretty good line!
The scourge of perverse-mindedness

Which is fair enough I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.

The scourge of perverse-mindedness

It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?

3Rain12yI believe we have a duty to attempt to predict the future as far as we possibly can. I don't see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
8orthonormal12yThere are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.
Open Thread: March 2010, part 3

A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g.,

Think Before You Speak (And Signal It)

Also, few ways are more effective at discovering flaws in an idea than to begin explaining it to someone else; the greatest error will inevitably spring to mind at precisely the moment when it is most socially embarrassing to admit it.

The Price of Life

My interpretation was to read "value" as roughly meaning "subjective utility", which indeed does not, in general, have a meaningful exchange rate with money.

The Graviton as Aether

You know, this really calls for a cartoon-y cliche "light bulb turning on" appearing over byrnema's head.

It's interesting the little connections that are so hard to make but seem simple in retrospect. I give it a day or so before you start having trouble remembering what it was like to not see that idea, and a week or so until it seems like the most obvious, natural concept in the world (which you'll be unable to explain clearly to anyone who doesn't get it, of course).

3JGWeissman12ySeriously. Apparently, I wrote the key insight she needed (not knowing that it was the missing insight), but she didn't click on it the first time, and then, as I am asking questions to try to narrow down what the confusion is, something I said, as a side effect, prompted her to read that insight again and she got it. Now, how can one systematically replicate a win like that?
Open Thread: March 2010

SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don't immediately make you yawn just by looking at them.

Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.

Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...

Individual vs. Group Epistemic Rationality

Sorry for the late reply; I don't have much time for LW these days, sadly.

Based on some of your comments, perhaps I'm operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner's dilemma are clearly not, in any coherent sense, rationally…

Open Thread: March 2010

Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)

And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
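For anyone who wants to play along, Church numerals translate directly into any language with first-class functions. A Python rendering of the encodings above, with successor and addition as warm-ups (subtraction is deliberately left out, since it's the puzzle):

```python
# Church numerals: n is a function that applies f to z exactly n times.
zero = lambda f: lambda z: z            # λf.λz.z
one  = lambda f: lambda z: f(z)         # λf.λz.(f z)
two  = lambda f: lambda z: f(f(z))      # λf.λz.(f (f z))

succ = lambda n: lambda f: lambda z: f(n(f)(z))           # one extra application of f
add  = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda x: x + 1)(0)

three = succ(two)
print(to_int(add(two)(three)))  # → 5
```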

Open Thread: March 2010

It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation.

It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
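The Y combinator as usually written loops forever under strict evaluation; in a strict language like Python one uses the eta-expanded variant (the Z combinator). A sketch, defining factorial with no explicit self-reference:

```python
# Z combinator: the strict-evaluation variant of Curry's Y combinator.
# Z = λf.(λx.f(λv.x(x)(v)))(λx.f(λv.x(x)(v)))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial built by handing the function a handle to itself:
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # → 120
```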

Open Thread: March 2010

Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.

Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing …

2Tyrrell_McAllister12yI'm pretty sure that Eliezer meant that Turing machines are better for giving novices a "model of computation". That is, they will gain a better intuitive sense of what computers can and can't do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.
Open Thread: March 2010

I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.

Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time wi…

0wedrifid12yOf course, but I'm more considering 'languages to learn that make you a better programmer'. Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move. I don't agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma 'multiple inheritance is bad' and don't allow generics enforce bad habits while at the same time insisting that they are the True Way. I think I agree on this note, with certain restrictions on what counts as 'civilized'. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.
0wedrifid12yThe thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn't do it in Java or .NET (except Eiffel.NET).
Open Thread: March 2010

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

C+…

0wedrifid12yI'm sure I could manage 1k before I considered the point settled and moved on to a language that isn't a decades old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals then C++ will give you that over a broader area of nuts and bolts. This is the one point I disagree with, and I do so both on the assertion 'almost uniformly' and also the concept itself. As far as experts in Object Oriented programming goes Bertrand Meyer is considered an expert, and his book 'Object-Oriented Software Construction' is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design and not inherent to the mechanism. In fact, (similar, inheritance based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction. Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
Open Thread: March 2010

C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though, I don't think that's really what XiXiDu is looking for.

1wedrifid12yAgree on where C is useful and got the same impression about the applicability to XiXiDu's (where on earth does that name come from?!?) goals. I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn't meet your 'minimalist' ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I've converted to primarily using a language that relies on duck-typing.
Open Thread: March 2010

Dijkstra's quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn't actually a bad language at all. On the other hand, it also lacks much of the "easy to pick up and experiment with" aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.

Open Thread: March 2010

Well, they're computer sciencey, but they are definitely geared to approaching from the programming, even "Von Neumann machine" side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.

Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!

3Douglas_Knight12yYou probably should have spelled out that SICP is on the λ-calculus side.
Individual vs. Group Epistemic Rationality

I'm inclined to agree with your actual point here, but it might help to be clearer on the distinction between "a group of idealized, albeit bounded, rationalists" as opposed to "a group of painfully biased actual humans who are trying to be rational", i.e., us.

Most of the potential conflicts between your four forms of rationality apply only to the latter case--which is not to say we should ignore them, quite the opposite in fact. So, to avoid distractions about how hypothetical true rationalists should always agree and whatnot, it may be helpful to make explicit that what you're proposing is a kludge to work around systematic human irrationality, not a universal principle of rationality.

0wedrifid12yWell said. I agree with Wei's point with the same clarifications you supply here. Looking at any potential desirability of individual calibration among otherwise ideal rationalists may be an interesting question in itself but it is a different point and it is important not to blur the lines between 'applicable to all of humanity' and 'universal principle of rationality'. When things are presented as universal I have to go around disagreeing with their claims even when I totally agree with the principle.
3Wei_Dai12yIn conventional decision/game theory, there is often conflict between individual and group rationality even if we assume idealized (non-altruistic) individuals. Eliezer and others have been working on more advanced decision/game theories which may be able to avoid these conflicts, but that's still fairly speculative at this point. If we put that work aside, I think my point about over- and under-confidence hurting individual rationality, but possibly helping group rationality (by lessening the public goods problem in knowledge production), is a general one. There is one paragraph in my post that is not about rationality in general, but only meant to apply to humans, but I made that pretty clear, I think:
Open Thread: March 2010

All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".

Open Thread: March 2010

Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra.

Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.

2XiXiDu12yI want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, not being able to mechanical write and parse code to solve certain problems. The big picture and underlying nature is what I'm looking for. I agree that many people do not understand, they really only learnt how to mechanical use something. How much does the average person know about how one of our simplest tools work, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work? We all know how to use this particular tool. We think it is obvious, thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms for that we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
Open Thread: March 2010

Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.

More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.

Open Thread: March 2010

adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.

Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.

6Alicorn12yI was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.
Open Thread: March 2010

I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about.

I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It h…

1wedrifid12yAgree with where you place Python, Scheme and Haskell. But I don't recommend C. Don't waste time there until you already know how to program well. Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
0XiXiDu12yYeah, C is probably mandatory if you want to be serious with computer programming. Thanks for mentioning Scheme, haven't heard about it before... Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.
5sketerpot12yYou make some good points, but I still disagree with you. For someone who's trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I've always thought that the quickest way to learn programming was to do programming, and until you've been doing it for a while, you won't understand it.
Open Thread: March 2010

Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.

The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.

So you end up with newcomers to Haskell trying to simultaneously:

  • Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
  • Unlearn existing habits from other la
... (read more)
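To make the "extremely simple concept" claim concrete, here's a minimal sketch of the idea in Python rather than Haskell (the names `Maybe`, `unit`, and `bind` mirror Haskell's `Maybe`, `return`, and `>>=`, but this is an illustrative toy, not the real Haskell machinery): a monad is just a wrapped value plus a chaining operation.

```python
class Maybe:
    """A toy Maybe monad: a value that may be absent."""
    def __init__(self, value, present=True):
        self.value = value
        self.present = present

    @staticmethod
    def unit(value):
        # Haskell's 'return': wrap a plain value in the monad.
        return Maybe(value)

    @staticmethod
    def nothing():
        # The "absent" case.
        return Maybe(None, present=False)

    def bind(self, f):
        # Haskell's '>>=': apply f to the wrapped value,
        # or propagate absence without calling f at all.
        return f(self.value) if self.present else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.unit(x / y)

# Chaining: any failure short-circuits the rest of the pipeline.
result = Maybe.unit(12).bind(lambda a: safe_div(a, 3)).bind(lambda b: safe_div(b, 2))
print(result.present, result.value)   # True 2.0

failed = Maybe.unit(12).bind(lambda a: safe_div(a, 0)).bind(lambda b: safe_div(b, 2))
print(failed.present)                 # False
```

That's the whole trick: the scary part is not the mechanism but the level of abstraction at which Haskell talks about it.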
Open Thread: March 2010

Interesting article, but the title is slightly misleading. What he seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning the subject, but I rather doubt they'd learn any more in school than by themselves.

Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.

Rationality quotes: March 2010

Due to not being an appropriately-credentialed expert, I expect. The article does mention that he got a very negative reaction from a doctor.

The Last Days of the Singularity Challenge

Scraping in just under the deadline courtesy of a helpful reminder, I've donated a modest amount (anonymously, to the general fund). Cheers, folks.

Savulescu: "Genetically enhance humanity or face extinction"

I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.

Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g., extreme hyperbolic discounting, or being cognitively impaired. In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.

If they really do know wha... (read more)

3mattnewport12yExplaining to them why you believe they're making a mistake is justified. Interfering if they choose to continue anyway, not. I don't recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement. This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective. I don't want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual's choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example. This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn't owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times. I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don't think the case is strong enough to justify compulsion. It's not something I have a great deal of interest in however so I haven't looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. 
I'm somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children - I lean that way only because th
Savulescu: "Genetically enhance humanity or face extinction"

presumably you refer to the violation of individuals' rights here - forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?

Out of curiosity, what do you have in mind here as "participate in society"?

That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?

The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to ... (read more)

Savulescu: "Genetically enhance humanity or face extinction"

On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.

Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force... (read more)

5mattnewport12yI find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people's stated goals are not in line with their own 'best interests'. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious. There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to 'help' them against their will. Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion. In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others. Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. 
The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose r
Rationality Quotes January 2010

Linus replies by quoting the Bible, reminding Charlie Brown about the religious significance of the day and thereby guarding against loss of purpose.

Loss of purpose indeed.

Charlie Brown: Isn't there anyone who knows what Christmas is all about?

Linus: Sure, Charlie Brown, I can tell you what Christmas is all about. Lights, please?

Hear ye the word which the LORD speaketh unto you, O house of Israel:

Thus saith the LORD, Learn not the way of the heathen, and be not dismayed at the signs of heaven; for the heathen are dismayed at them. For the customs of t

... (read more)
Case study: Melatonin

If you're actually collecting datapoints, not just using the term semi-metaphorically, it may help to add that I've been diagnosed with (fairly moderate) ADHD; if my experience is representative of anything, it's probably that.

Case study: Melatonin

The former category would include not experiencing, or noticing that you're experiencing, 'tiredness', even when your body is acting tired in a way that others would notice (e.g. yawning, stretching, body language).

I'm not sure if this is what you're talking about, but I've long distinguished two aspects of "tiredness". One is the sensation of fatigue, exhaustion, muddled thinking, &c.--physical indicators of "I need sleep now".

The second is the sensation of actually being sleepy, in the sense of reduced energy, body relaxation, ... (read more)

1AdeleneDawner12yThanks for the datapoint. That doesn't sound like the experience I was trying to describe, which is of not noticing sleepiness or fatigue at all, even when not doing something engaging. The 'not noticing' caveat is there because some autistics won't automatically notice those sensations, but can consciously check to see if they're occurring, and get into the habit of doing so. (The issue can apply to hunger, too. [] )
Case study: Melatonin

This is my experience as well, for the most part.

The only times I recall "going to bed" feeling like a good idea are when I've been so far into exhausted sleep deprivation that base instincts took over and I found myself doing so almost involuntarily.

Even in those cases, my conscious mind was usually confabulating wildly about how I wasn't actually going to sleep, just lying down for half a moment, not sleeping at all... right up until I pretty much passed out.

It's rather vexing.

4AdeleneDawner12yWould you guys mind terribly if I picked your brains? The kind of experience you're describing is described fairly often in autistic communities. There's a few variations, generally falling into the categories of sensory processing or executive dysfunction issues. The former category would include not experiencing, or noticing that you're experiencing, 'tiredness', even when your body is acting tired in a way that others would notice (e.g. yawning, stretching, body language). The second case involves not being able to stop whatever activity you're engaged in and go to bed, even though you recognize (perhaps briefly, before being drawn back into what you're doing) that you are tired and it would be a good idea. (This isn't quite the same as 'I'll do one more part, and then go to bed' in that it's less conscious and therefore harder to break out of - in many cases it takes a significant effort of will to stop your body from automatically taking the next step in what you're doing, even if you've actually decided not to take that next step.) I'm curious to find out if those issues are also experienced by people who aren't autistic - perhaps to a lesser degree, or with different explanations than the ones that I mentioned. Do the issues I described sound like what you're experiencing? Are they close, or similar in some interesting way?