All of Kazuo_Thow's Comments + Replies

Here on Less Wrong there are a significant number of mathematically inclined software engineers who know some probability theory, meaning they've read/worked through at least one of Jaynes and Pearl but may not have gone to graduate school. How could someone with this background contribute to making causal inference more accessible to researchers? Any tools that are particularly under-developed or missing?

I am not sure I know what the most impactful thing to do is, by edu level. Let me think about it.

--------------------------------------------------------------------------------

My intuition is that the best thing for "raising the sanity waterline" is what the LW community would do with any other bias: just preaching association/causation to the masses that would otherwise read bad scientific reporting and conclude garbage about e.g. nutrition. Scientists will generally not outright lie, but they are incentivized to overstate a bit, and reporters are incentivized to overstate a bit more. In general, we trust scientific output too much; so much of it is contingent on modeling assumptions, etc.

Explaining good, clear examples of gotchas in observational data is valuable: e.g. doctors give sicker people a pill, so it might look like the pill is making people sick. It's the causality version of "rare cancer => it's likely you have a false positive, by Bayes' theorem." Unlike Bayes' theorem, this is the kind of thing people immediately grasp if you point it out, because our native causal processing is good, unlike our native probability processing, which is terrible. Association/causation is just another bias to be aware of; it just happens to come up a lot when we read scientific literature.

--------------------------------------------------------------------------------

If you are looking for some specific stuff to do as a programmer, email me :). There is plenty to do.
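The "doctors give sicker people a pill" gotcha is easy to demonstrate with a few lines of code. Here's a minimal simulation sketch (the numbers are made up for illustration; by construction the pill has no causal effect at all, yet it ends up strongly associated with sickness):

```python
import random

random.seed(0)

# Confounding by indication: doctors give a (causally inert) pill
# preferentially to patients who are already sick.
n = 100_000
sick_given_pill = 0
pill_count = 0
sick_given_no_pill = 0
no_pill_count = 0

for _ in range(n):
    sick = random.random() < 0.2                      # 20% of patients are sick
    # The pill does nothing, but sick patients are far more likely to get it.
    pill = random.random() < (0.8 if sick else 0.1)
    if pill:
        pill_count += 1
        sick_given_pill += sick
    else:
        no_pill_count += 1
        sick_given_no_pill += sick

p_sick_pill = sick_given_pill / pill_count
p_sick_no_pill = sick_given_no_pill / no_pill_count
# P(sick | pill) ~ 0.67 vs P(sick | no pill) ~ 0.05:
# the pill "looks" harmful, purely through selection.
print(p_sick_pill, p_sick_no_pill)
```

Conditioning on treatment assignment, rather than randomizing it, is exactly what makes the association point the wrong way.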

It's only been about 6 months since I started consciously focusing my attention on the subtle effects of abandonment trauma. Although I've done a fair amount of reading and reflecting on the topic I'm not at the point yet where I can confidently give guidance to others. Maybe in the next 3-4 months I'll write up a post for the discussion section here on LW.

What's frustrating is that signs of compulsive, codependent and narcissistic behavior are everywhere, with clear connections to methods of coping developed in childhood, but the number of people who pay ... (read more)

Please do. This seems like an important part of "winning" to some people, and it is related to thinking, therefore it absolutely belongs here.

Complex PTSD: From Surviving To Thriving by Pete Walker focuses on the understanding that wounds from active abuse make up the outer layers of a psychological structure, the core of which is an experience of abandonment caused by passive neglect. He writes about self-image, food issues, codependency, fear of intimacy and generally about the long but freeing process of recovering.

As with physical abuse, effective work on the wounds of verbal and emotional abuse can sometimes open the door to de-minimizing the awful impact of emotional neglect. I sometimes

... (read more)
Now that is quite some text to read. Thank you very much. My request was aimed at more general books, though this is still useful. You seem very knowledgeable on this specific topic. Am I right in assuming you are knowledgeable about emotional issues more generally? Would you be willing to write a post about these topics?

I recognize this in myself and it's been difficult to understand, much less get under control. The single biggest insight I've had about this flinching-away behavior (at least the way it arises in my own mind) is that it's most often a dissociative coping mechanism. Something intuitively clicked into place when I read Pete Walker's description of the "freeze type". From The 4Fs: A Trauma Typology in Complex PTSD:

Many freeze types unconsciously believe that people and danger are synonymous, and that safety lies in solitude. Outside of fantasy, m

... (read more)
Interesting, thanks. I had a pretty happy childhood in general, but I was a pretty lonely kid for large parts of the time, and I've certainly experienced a feeling of being abandoned or left alone several times since that. And although my memories are fuzzy, it's possible that the current symptoms would have started originally developing after one particularly traumatic relationship/breakup when I was around 19. Also, meaningful social interaction with people seems to be the most reliable way of making these feelings go away for a while. Also, I tend to react really strongly and positively to any fiction that portrays strong, warm relationships between people. Most intriguing.
I would also like to see more such discussion, but, as with rationality, more from the viewpoint of rising above base level average than of recovering only to that level.
If people on LW put half the effort into emotional issues that they put into rationality topics, we'd be a whole lot further along. Thank you very much for this quote. Any insight explosion books I should read?

I plan on transcribing all those video answers soon (within the next few days).

[This comment is no longer endorsed by its author]

I think this adaptation is much more precise than the original.

Apathy on the individual level translates into insanity at the mass level.

-- Douglas Hofstadter

Not when apathy and insanity are correlated. See, e.g., The Myth of the Rational Voter.

Insanity will prevail when sane men do nothing? (Apologies to Edmund Burke)

I recall seeing another poster say that they were from the University of Washington.

Maybe that was me? Even better if it wasn't!

I would definitely be interested in a meetup. As for a low-preparation (but still likely to be useful) discussion topic: day-to-day productivity / fighting akrasia.

I don't think any language or culture currently has a turn of phrase which is actually adequate for events like this - for expressing exactly what was lost.

I've also lost a grandparent, and an uncle. Wasn't extremely close to either of them, but I understand that sickening feeling which goes along with knowing that someone played a role in your development as a person, and that you'll never be able to talk to them again. And I can't be the only person among those who occasionally hang out in the #lesswrong IRC channel to have such an experience. Pop in ... (read more)

Thank you.

On the problem of distinguishing between Turing machines of the kinds you mentioned, does Jürgen Schmidhuber's idea of a speed prior help at all? Searching for "speed prior" here on Less Wrong didn't really turn up any previous discussion.

I discuss that concept here.
Hmm, I had not seen the speed prior before. It seems to make strong testable predictions about how the universe functions. I'll have to look into it.
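For other readers who hadn't seen it either, here is a rough statement (my own paraphrase of Schmidhuber, not anything from this thread): Solomonoff's universal prior weights each program $p$ that outputs $x$ by its length alone, whereas the speed prior additionally discounts by running time:

```latex
% Solomonoff prior: length-only penalty over programs producing x
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Speed prior (roughly): an extra penalty of about \log_2 t(p) bits,
% i.e. each program's contribution is divided by its running time t(p)
S(x) \;\propto\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} \,/\, t(p)
```

Hence the testable flavor: histories that are expensive to compute get correspondingly little prior weight.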

The splitting of the atom has changed everything save the way men think, and thus we drift toward unparalleled catastrophe.

-- Albert Einstein

I can't seem to find any talk of an experiment with 80% / 20% frequency options, but XiXiDu mentioned one where pigeons were found to out-perform humans at the iterated Monty Hall problem. Here's the paper itself.

That paper was a great read, thank you.
Sadly, that isn't the paper I was looking for. I found a vague reference here but it looks like I either made the experiment up entirely, it uses rats or mice instead of pigeons (I could have sworn it was pigeons though!), or it was on another website (unlikely - this kind of topic is far more likely to be on LW than anything else I read).
ETA: On further thought I think that's the paper I was looking for after all; I was just thrown off by the reference to Monty Hall for some reason. My thanks.
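For anyone new to the underlying puzzle the paper tests, here's a quick simulation of the basic (non-iterated) Monty Hall game, showing why always-switching wins about 2/3 of the time. This is just an illustrative sketch, not anything from the paper:

```python
import random

random.seed(0)

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(stay_wins, switch_wins)  # roughly 1/3 vs 2/3
```

The asymmetry exists because the host's choice of door leaks information: staying only wins when the initial pick was right (probability 1/3).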

Presumably a reference to this post.

Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.

When charitable services can be gained in exchange for money, our default failure mode is to purchase moral satisfaction instead of choosing an allocation of money that will maximize expected benefit. Maybe there's something similar going on when the exchangeable resource is time? We have some built-in facilities for tasting fatty foods and for processing that "I'm diligently working long hours" feeling; tasting healthiness and feeling like a wise spender of time don't come as easily.

Here's the Open Thread comment where Daniel Varga made the page and its source code public. I don't know how often it's updated.

Note that the page in question collects only comments on Rationality Quotes pages.
Yay, thank you! Also, that page is large, large enough to make my brand new computer lag horrendously.

How do you get new ideas? That you do by analogy, mostly, and in working with analogy you often make very great errors. It's a great game to try to look at the past, at an unscientific era, look at something there, and say have we got the same thing now, and where is it?

-- Richard Feynman, The Meaning of It All: Thoughts of a Citizen-Scientist, page 114

... the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ...

Eh... "inevitably" is one of those words that takes a very high degree of confidence to use correctly - a degree of confidence we really don't have with current cosmology, especially if the simulation hypothesis is true.

(By the way, here's the quote from last month's thread which Apprentice was repurposing.)

Kazuo, I agree; given our current knowledge that quote is open to criticism on several points of fact (most obviously its focus on the solar system rather than whatever passes for the universe these days). That's why I said I admire it mainly for its courage and style.

Ignoring the trees to see the forest doesn't mean that one is more important than the other - it just gives a different perspective.

-- Michael Sipser, Introduction to the Theory of Computation (2nd ed., page 257)

Will a correct answer to this question give you significant help toward maximizing the number of paperclips in the universe?


... the history of mathematics is a history of horrendously difficult problems being solved by young people too ignorant to know that they were impossible.

-- Freeman Dyson, "Birds and Frogs"

Did you mean for this post to have a writing style similar to that of Peter Watts' Blindsight (which explores the notion of non-sentient optimizers), or was that an unintentional thing?

(The above isn't intended as a meta-level question, by the way. But I'd also be interested to know if the George Clooney in your head wanted the team to signal approval of the ideas presented in Blindsight. Because that would be kind of ironic.)

No; I never read it.

I actually voted this up because the instrumental value of growing our offline/in-person community seems to outweigh the slight noise contributed by top-level posts of the "SIAI is calling for visiting fellows / volunteers / donations" or "Meetup at location X" variety.

This seems to be much more of a noise post than a meetup post.

I, for one, would be very interested in seeing a top-level post about this.

Thanks, but neither of these are the one I remember.

It seems like the focus of this post is not to do public outreach directly. The comparative advantage we have here at LW (in the particular domain of promoting cryonics) probably lies further upstream than that: coming up with ideas behind business strategy rather than hashing out marketing campaigns to make cryonics seem less "creepy" and more acceptable to the general public.

This is what fascinates me most in existence: the peculiar necessity of imagining what is, in fact, real.

-- Philip Gourevitch

Could we standardize on using the whole-book-as-one-PDF version, at least for the purposes of referencing equations?

ETA: So far I've benefited from checking the relevant parts of Kevin Van Horn's unofficial errata pages before (and often while) reading a particular section.

while these evals might be less biased they are more than proportionately less accessible.

How so?

Compared with ratemyprofessors, which is available to everyone online, I don't think the evaluations written by students (at least in California) are publicly available at all. I could be wrong, but I don't know anyone who has ever seen one (other than the person being evaluated).

I've gotten into the habit of pointing out, whenever other students at my university make reference to ratemyprofessors, that the selection bias on that site is huge. It's not uncommon to see professors with dozens of extremely positive reviews, dozens more highly negative reviews, and very few - if any - neutral reviews. Naturally, the negative reviews appear most frequently because "grr, I feel like this professor graded too harshly" provides the strongest motivation for posting a disgruntled comment.

I don't know of any other place that d... (read more)

CSUs and UCs do this (or at least where I've been they do); while these evals might be less biased, they are more than proportionately less accessible. Ratemyprofessors also has separate ratings for "easiness", "enthusiasm", etc., so instead of looking at the "highest rated" professors, looking at the actual reviews would be a bit more informative.

Does anyone know a good IRC infrastructure that allows for quickly entering and displaying TeX formulas?

There's a plugin for Pidgin called pidgin-latex which handles just that.

ETA: If people start using this plugin (or, more generally, if we use TeX/LaTeX in any capacity for this study group), it might occasionally be helpful to use the detexify handwritten symbol recognizer - for when you want to use a symbol and can't quite remember the command that produces it.
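For instance, here's the kind of formula from Jaynes that a study-group member might want to render mid-discussion (pidgin-latex, as I understand it, picks up text between $$ markers):

```latex
$$ P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)} $$
```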

I will also be in the vicinity of the Bay Area from June 12 to late September, and would be quite happy to give the study group a try. I attempted a full read of Jaynes' book about a year ago, and realized about 70% of the way through that I didn't have all the mathematical background necessary to fully appreciate it.

A zipped archive of all the chapters, which seemed to be missing on the pages linked in the top-level post, is available here.

Eliezer has been outright lying about cost of cryonics in the past.

We would find it helpful if you could provide some insight into why you think this.

I wonder whether there are similar brain differences between top mathematicians and everyone else, and if such a simple method could make people better at math.

It would be worth trying, but given that the process of doing original mathematics feels to top mathematicians like it involves a lot of vague, artistic visualization (i.e. mental operations much more complicated than the cursor-moving task), I'd put a low prior probability on simple electrical stimulation having the desired effect.

I'd give it a medium prior probability-- it's impossible to operate at a high level if the simple operations are clogged by inefficiency.

... wherein I'm trying to talk an escaped AI back into its box.

Yeah... good luck with that.

Sorry for directly breaking the subjunctive here, but given the number of lurkers we seem to have, there's probably some newcomers' confusion to be broken as well, lest this whole exchange simply come off as bizarre and confusing to valuable future community members.

A brief explanation of "Clippy": Clippy's user name (and many of his/her posts) are a play on the notion of a paperclip maximizer - a superintelligent AI whose utility function can roughly be described as U(x) = "the total quantity of paperclips in universe-state x". The i... (read more)

Curious lurkers might also want to read up on what an AI-box experiment is, since this is kind of evolving into a reverse AI box experiment, wherein I'm trying to talk an escaped AI back into its box.

From the article:

"When we are in the public arena we tell people we're working on the aging process, the first thing they think is that we want to make a 100-year-old person live to be 250 -- and that's actually the furthest from the truth," he [Andrew Dillin, Salk Institute / Howard Hughes Medical Institute] said.

I wonder how many appearances of this idea ("making 70-80 year lives healthy would be awesome, but trying to vastly extend lifespans would be weird") are due to public relations expediency, and how many are due to the speakers actually believing it.

Well, in fairness, so far we've had a lot of trouble handling general aging. Also, note that what Dillin said is having a 100-year-old person live to be 250 - not someone born today living to 250. That's a very different circumstance: the first is much more difficult than the second, since all the aging has already taken place.

[...] but we have no guarantee at all that our formal system contains the full empirical or quasi-empirical stuff in which we are really interested and with which we dealt in the informal theory. There is no formal criterion as to the correctness of formalization.

-- Imre Lakatos, "What Does a Mathematical Proof Prove?"

ETA: When I first read this remark, I couldn't decide whether it was terrifying, or just a very abstract specification of a deep technical problem. I currently think it's both of those things.

Link appears to be broken.
-- Alan Perlis, "Epigrams on Programming"

Count me as "having an intention to do that in the future". Although I'm currently just an undergraduate studying math and computer science, I hope to (within 5-10 years) start doing everything I can to help with the task of FAI design.

I'm in favor of both the grace period and "karma coward" option. In my own experience, anxiety about being downvoted acted as a deterrent against posting comments; reading and responding to posts by new members is relatively cheap, while missing opportunities to make them feel included in the community (and thus potentially missing out on their future contributions) seems comparatively expensive.

Would it be useful - maybe as something to be incorporated with the discussion forum - to have a (semi-)formalized system of study partners/groups? A w... (read more)

Are you making this as a statement of personal preference, or general policy? What if it becomes practically impossible for a person to give informed consent, as in cases of extreme mental disability?

General policy. For example, if Wei Dai chooses the wirehead route, I might think he's missing out on a lot of other things life has to offer, but that doesn't give me the right to forcibly unwirehead him, any more than he has the right to do the reverse to me.

In other words, he and I have two separate disagreements: of value axioms - whether there should be more to life than wireheading (a matter of personal preference) - and of moral axioms - whether it's okay to initiate the use of armed force (whether in person or by proxy) to impose one's preferred lifestyle on another (a matter of general policy). (And this serves as a nice pair of counterexamples to the theory I have seen floating around that there is a universal set of human values.)

In cases of extreme mental disability, we don't have an entity that is inherently capable of giving informed consent, so indeed it's not possible to apply that criterion. In that case (given the technology to do so) it would be necessary to intervene to repair the disability before the criterion can begin to apply.

but cognitive dissonance is supposed to be a private thing, like going to the bathroom or popping a zit.

I see no compelling reason to care about another person's mundane, unavoidable bodily functions. But I can see a number of compelling reasons to care about another person's sanity.


One that I sometimes forget, usually by encountering a potential path to an answer and quickly switching into short-term investigation mode:

Estimate the value of obtaining an answer and consider whether that would be worth the time/energy investment. The hard question may sound interesting in an attention-grabbing way, but one's level of fascination moments after hearing it may be a poor indicator of a solution's actual value.

P(H is true | H is not represented in my mind)

How would this probability be assigned?

LOL. :(

Page 136 (in Chapter 5 - "Queer Uses for Probability Theory"), in the first full paragraph.

Wow, that was fast, I can see that you definitively did your homework. :)

He's currently the technical director at Bitphase AI. From talking to him, it seems that his strategy is to make tools for speeding up eventual FAI development/implementation and also commercialize those tools to gain funding for FAI research.

Could it be that pain-filled stories carry literary value exactly because (to a reader) they're filled with bearable pain? But I have little idea as to how we'd go about setting the threshold for "tolerable pain."
