All of mtaran's Comments + Replies

Axis oriented programming

As it is now, this post seems like it would fit in better on Hacker News rather than LessWrong. I don't see how it addresses questions of developing or applying human rationality, broadly interpreted. It could be edited to talk more about how this is applying more general principles of effective thinking, but I don't really see that here right now. Hence my downvote for the time being.

People can write personal posts on LW that don't meet Frontpage standards. For example, plenty of rationalists use LW as their personal blog in one way or another and those posts don't make it to the frontpage but they also end up fitting in here.

Russia has Invaded Ukraine

Came here to post something along these lines. One very extensive commentary with reasons for this is in (warning: long thread). Will summarize when I can get to laptop later tonight, or other people are welcome to do it.

Mildly Photochromic Lenses?

Have you considered LASIK? I got it about a decade ago and have generally been super happy with the results. Now I just wear sunglasses when I expect to benefit from them, and that works a lot better than photochromic glasses ever did for me.

The main real downside has been slight halos around bright lights in the dark, but this is mostly something you get used to within a few months. Nowadays I only notice it when stargazing.

The true downside of LASIK is the nontrivial risk of permanent eye dryness; this hurts like hell and doesn't really have a cure apart from constantly using eye drops. The bad cases are basically life-destroying: my mom had a moderate case of chronic dry eyes, and it made her life significantly more unpleasant (she couldn't sleep well and was basically in constant pain during the day).
I'm very nervous about my eyes, and deeply unsettled by the idea of eye surgery. Relatedly, I like having a protective layer between my eyes and the world.

This seems like something that would be better done as a Google form. That would make it easier for people to correlate questions + answers (especially on mobile) and it can be less stressful to answer questions when the answers are going to be kept private.

Those are great points! Google forms added.
Consume fiction wisely

How is it that authors get reclassified as "harmful, as happened to Wright and Stross"? Do you mean that later works become less helpful? How would earlier works go bad?

What I mean: the author's name on the cover can't be used anymore as an indicator of the book's harmfulness / helpfulness. An extreme example is the story of a certain American writer. He wrote some of the most beautiful transhumanist science fiction ever. But then he crashed his car and almost died, and he came back wrong. He is now a religious nutjob who writes essays on how transhumans are soul-less children of Satan. And in his new fiction books, transhumanists are stock villains opposed by glorious Christian heroes.
Has anyone had weird experiences with Alcor?
Answer by mtaran · Jan 11, 2022

Given that you didn't actually paste in the criteria emailed to Alcor, it's hard to tell how much of a departure the revision you pasted is from it. Maybe add that in for clarity?

My impression of Alcor (and CI, who I used to be signed up with before) is that they're a very scrappy/resource-limited organization, and thus that they have to stringently prioritize where to expend time and effort. I wish it weren't so, but that seems to be how it is. In addition, they have a lot of unfortunate first-hand experience with legal issues arising during cryopreservat... (read more)

Thank you for your kind and thoughtful reply; I really appreciate it. Here's the quote: As you can see, it's a pretty big departure. Given the excellent points you and others raise, I think I will try giving them the benefit of the doubt, and simplify my criteria for Alcor, putting the decision solely in my wife's hands, with the provision that I should be preserved if she is not present and cannot be immediately reached. If I do not get any more seemingly underhanded pushback (them pushing back a little/stating their concerns is fine, but any more sneakily making huge changes would increase my concerns), then I'll write this off to the factors you suggest, and proceed. Thank you!

+1 on the wording likely being because Alcor has dealt with resistant families a lot; generally, you stand a better chance of being preserved if Alcor has as much legal authority as possible to make that happen. You may have to explain that you're okay with your wife potentially doing something that would be against your wishes (yes, I realize you don't expect that, but there's a more-than-0% chance it will happen) and result in no preservation when Alcor thinks you would have liked one.

This is actually why I went with Alcor: they have a long record of going to court to fight for patients in the face of families trying to do something else.

Downvoted for lack of standard punctuation, capitalization, etc., which makes the post unnecessarily hard to read.

New Year's Prediction Thread (2022)

Do you mean these to apply at the level of the federal government? At the level of that + a majority of states? Majority of states weighted by population? All states?

More accurate models can be worse

Downvoted for burying the lede. I assumed from the buildup that this was something other than what it was, e.g. how a model that contains more useful information can still be bad if, say, you run out of resources for efficiently interacting with it. But I had to read to the end of the second section to find out I was wrong.

Added TL;DR to the top of the post.
A good rational fiction about IT-inspired magic system?

Came here to suggest exactly this, based on just the title of the question. has some similar themes as well.

The Natural Abstraction Hypothesis: Implications and Evidence

Re: looking at the relationship between neuroscience and AI: lots of researchers have found that modern deep neural networks actually do quite a good job of predicting brain activation data (e.g. fMRI), suggesting that they are finding some similar abstractions.


Teaser: Hard-coding Transformer Models

I'll make sure to run it when I get to a laptop. But if you ever get a chance to set it up to run on Heroku or something, that'll increase how accessible this is by an order of magnitude.

Igor Ostrovsky · 5mo
I (not the OP) put it up here for now: [] I'll take it down if MadHatter asks me or once there is an official site.
Teaser: Hard-coding Transformer Models

Sounds intriguing! You have a GitHub link? :)

It's very, very rough, but:
What have your romantic experiences with non-EAs/non-Rationalists been like?
Answer by mtaran · Dec 05, 2021

The biggest rationalist-ish issue for me has been my partners not being interested (or actively disinterested) in signing up for cryonics. This has been the case in three multi-year relationships.

Randomized, Controlled · 6mo
Oh yeah, my mom, sister, and GF have all been entirely uninterested in cryo, but it hasn't caused any issues for me yet. Has it been actively problematic?
Did EcoHealth create SARS-CoV-2?

You'd be more likely to get a meaningful response if you sold the article a little bit more. E.g. why would we want to read it? Does it seem particularly good to you? Does it draw a specific interesting conclusion that you particularly want to fact-check?

It's a long read, but you can skim it. Nicholas Wade is a serious science writer, and he has smoking-gun evidence. EcoHealth was getting US grants and subcontracting work out to Dr. Shi at the Wuhan Institute of Virology to insert spike proteins into bat viruses to see what makes them more infectious to humans. Also, here's another more detailed EcoHealth proposal to DARPA that discusses gain-of-function research (making bat viruses more infectious to humans) and proposes subcontracting work out to Dr. Shi at the Wuhan Institute of Virology. The DARPA proposal was rejected, but EcoHealth just got similar proposals funded through NIAID. This is really smoking-gun evidence. They did it.
The Blackwell order as a formalization of knowledge

I really loved the thorough writeup and working of examples. Thanks!

I would say I found the conclusion section the least generally useful, but I can see how it is the most personal (that's kinda why it has a YMMV feel to it for me).

Alex Flint · 8mo
Thank you for the kind words and feedback. I wonder if the last section could be viewed as a post-garbling of the prior sections...
A Contamination Theory of the Obesity Epidemic

Reverse osmosis filters will already be more common in some places that have harder water (and decided that softening it at the municipal level wouldn't be cost-effective). If there were fine-grained data available about water hardness and obesity levels, that might provide at least a little signal.

Can someone help me understand the arrow of time?

There's a more elaborate walkthrough of the last argument at

It's part of a statistical mechanics textbook, so a couple of words of jargon may not make sense, but this section is highly readable even without those definitions. To me it's been the most satisfying resolution to this question.

There are at least two questions here.
"Decision Transformer" (Tool AIs are secret Agent AIs)

Nice video reviewing this paper at

In my experience it's reasonably easy to listen to such videos while doing chores etc.

Toy Problem: Detective Story Alignment

The problem definition talks about clusters in the space of books, but to me it’s cleaner to look at regions of token-space, and token-sequences as trajectories through that space.

GPT is a generative model, so it can provide a probability distribution over the next token given some previous tokens. I assume that the basic model of a cluster can also provide a probability distribution over the next token.

With these two distribution generators in hand, you could generate books by multiplying the two distributions when generating each new token. This will bia... (read more)
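As a minimal sketch of this "multiply the two distributions" idea — using a toy four-token vocabulary and made-up probabilities, not real model outputs:

```python
import random

vocab = ["the", "detective", "dragon", "clue"]

# Stand-ins for the two next-token distributions described above:
# one from the generative model (GPT), one from the cluster model.
p_gpt     = [0.40, 0.20, 0.30, 0.10]
p_cluster = [0.30, 0.40, 0.05, 0.25]

# Multiply elementwise, then renormalize so it's a distribution again.
raw = [g * c for g, c in zip(p_gpt, p_cluster)]
total = sum(raw)
p_combined = [x / total for x in raw]

# Sample the next token from the combined (biased) distribution.
next_token = random.choices(vocab, weights=p_combined, k=1)[0]
print(p_combined)  # ≈ [0.5, 0.333, 0.0625, 0.104]
```

Tokens that both distributions favor ("the", "detective") get boosted relative to tokens only one of them favors ("dragon"), which is the biasing effect the comment describes.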

If anyone tries this, I'd be interested to hear about the results. I'd be surprised if something that simple worked reliably, and it would likely update my thinking on the topic.
[Link] Reddit, help me find some peace I'm dying young

Ok, I misread one of gwern's replies. My original intent was to extract money from the fact that gwern gave (from my vantage point) too high a probability of this being a scam.

Under my original version of the terms, if his P(scam) was .1:

  • he would expect to get $1000 .1 of the time
  • he would expect to lose $100 .9 of the time
  • yielding an expected value of $10

Under my original version of the terms, if his P(scam) was .05:

  • he would expect to get $1000 .05 of the time
  • he would expect to lose $100 .95 of the time
  • yielding an expected value of -$45

In the s... (read more)
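For concreteness, the expected-value arithmetic in the bullets above can be sketched in a few lines of Python (the `bet_ev` helper is purely illustrative; it just encodes the terms as stated):

```python
def bet_ev(p_scam, win=1000, lose=100):
    """Expected value of the bet, from the bettor's side, under the
    original terms: win $1000 with probability p_scam, lose $100 otherwise."""
    return p_scam * win - (1 - p_scam) * lose

print(bet_ev(0.10))  # 10.0  -> positive EV if P(scam) = .1
print(bet_ev(0.05))  # -45.0 -> negative EV if P(scam) = .05
```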

Alright then, I accept. The wager is thus:

  • On 1 January 2013, if CI confirms that she is really dying and has signed up, or is in the process of signing up, with membership & life insurance, then I will pay you $52; if they confirm the opposite, confirm nothing, or she has made no progress, you will pay me $1000.
  • In case of a dispute, another LWer can adjudicate; I nominate Eliezer, Carl Shulman, Luke, or Yvain (I haven't asked, but I doubt all would decline).
  • For me, paying with Paypal is most convenient, but if it isn't for you we can arrange something else (perhaps you'd prefer I pay the $52 to a third party or charity). I can accept Paypal or Bitcoin.
[Link] Reddit, help me find some peace I'm dying young

Well I still accept, since now it's a much better deal for me!

Um, the way I'm reading this it looks like gwern is taking the position you were originally trying to take?
[Link] Reddit, help me find some peace I'm dying young

Done. $100 from you vs $1000 from me. If you lose, you donate it to her fund. If I lose, I can send you the money or do with it what you wish.

Wait, I'm not sure we're understanding each other. I thought I was putting up $100, and you'd put up $10; if she turned out to be a scam (as judged by CI), I lose the $100 to you - while if she turned out to be genuine (as judged by CI), you would lose the $10 to me.
[Link] Reddit, help me find some peace I'm dying young

There are a lot of things I'd like to say, but you have put forth a prediction

It's probably a scam

I would like to take up a bet with you on this ending up being a scam. This can be arbitrated by some prominent member of CI, Alcor, or Rudi Hoffman. I would win if an arbiter decides that the person who posted on Reddit was in fact diagnosed with cancer essentially as stated in her Reddit posts, and is in fact gathering money for her own cryonics arrangements. If none of the proposed arbiters can vouch for the above within one month (through September 18), then you will win the bet.

What odds would you like on this, and what's the maximum amount of money you'd put on the line?

Genius, I should have thought of that ^_^
As I said in my other comment [] , I'm now giving 5-10% for scam. I'd be happy to take a 1:10 bet on the CI outcome risking no more than $100 on my part, but I think 1 month is much too tight; 1 January 2013 would be a better deadline with the bet short-circuiting on CI judgment.
[Link] Reddit, help me find some peace I'm dying young

I have donated $1000, and I really do believe that our community can get her fully funded. I understand how CI has to be cautious about these sorts of things, but I've seen enough evidence to be more than convinced.

She posted her first vlog on youtube: [] Also per facebook [], she's working on setting up a more official donation route with CI or Alcor. I expect donations will pour in once that gets established.
What's the best way to rest?

I understand getting enough sleep, but what for example is your version of "eating right"?

Well I think the common understanding of it is good enough. Fruits, vegetables, fish, whole grains, etc. One thing I've noticed is that avoiding junk food has powerful effects on cognition for instance. There's even evidence [] for this. Seriously, don't underestimate its importance.
Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82

HP:MoR 82

The two of them did not speak for a time, looking at each other; as though all they had to speak could be said only by stares, and not said in any other way.

Wizard People, Dear Readers

He gives up on using his words and tries to communicate with only his eyes. Oh, how they bulge and struggle to convey unthinkable meaning!

Was there any inspiration?

HA I love that I read that in Neely's voice.
For others who didn't catch the allusion, and didn't notice the googlepation [], here is the relevant "movie" [].
Presents for improving rationality or reducing superstition?

Thanks a lot for the suggestion. We already own the game and really enjoy playing it :)

Auckland meetup, Thursday May 26th

I'll be attending, since it just so happens I'm in this corner of the world right now :)

April 10 2011 Southern California Meetup

The general plan for this month's meetup is to try to get more people unfamiliar with LW and x-rationality (particularly other HMC students) to come. I'm not sure to what extent this will be successful, but if it is, it would be nice to have some introductory talks about how rationality can have good practical benefits and help you achieve your goals.

I'd encourage people who are planning on coming to have some examples from their own lives of how rationality has been particularly useful.

Claremont Meetup

I've reserved the Platt Conference Room (same place as the previous HMC LW meetings have been) from 2 to 8 on Sunday. Staying later than that wouldn't be a problem, and we can either get some food from one of the cafeterias around here or order takeout from somewhere.

Eep. Admitted Student's Reception doesn't let me off until 8:30 PM. I guess I'll just meet up with you guys towards the end of it, and then maybe go to some of the student things (I'm eying those dance lessons). Can you PM me a cell phone number or something so that we can get in touch?
February 27 2011 Southern California Meetup

Being at Harvey Mudd, I'll definitely attend, though I doubt I can help anyone with transportation :)

January 2011 Southern California Meetup

Six to eight of us from the Claremont colleges will be there.

Goertzel on Psi in H+ Magazine

Out of curiosity I did this for the first experiment (anticipating erotic images). He had 100 people in the experiment, 40 of them did 12 trials with erotic images, and 60 did 18 trials. So there were 1560 trials total.

You can get a likelihood by taking P(observed results | precognitive power is 53%)/P(observed results | precognitive power is 50%). This ends up being (.53^827 * .47^733) / (.5^1560) = ~17

So if you had prior odds 1:100 against people had precognitive power of 53%, then after seeing the results of the experiment you should have posterior odds... (read more)
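The likelihood-ratio arithmetic above can be double-checked with a short script; working in log space avoids underflow from terms like .5^1560:

```python
import math

# 827 "hits" and 733 "misses" out of the 1560 trials described above.
hits, misses = 827, 733

# Log-likelihood under H1 (hit rate 53%) minus under H0 (hit rate 50%).
log_lr = (hits * math.log(0.53) + misses * math.log(0.47)
          - (hits + misses) * math.log(0.5))
likelihood_ratio = math.exp(log_lr)
print(round(likelihood_ratio, 1))  # ≈ 17, matching the figure above

# Updating 1:100 prior odds against by this likelihood ratio:
posterior_odds = likelihood_ratio * (1 / 100)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # ≈ 0.145
```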

...I don't think this calculation would be right even if we actually factored in all the psi studies that didn't achieve any statistically significant result. Shifting your belief in psi from 1% to something like 16% based on one lousy study, while ignoring every single respectable study that didn't show any result, is madness. To be more specific: at first you didn't know whether psi existed or not (50/50), but then, for hopefully good reasons, you corrected your prior odds down to 1/100 (which is still ridiculously high). Now one lousy study comes along, and you give this one lousy datapoint the same weight as every single datapoint combined that, up until now, you considered to be evidence against psi. The mistake should be obvious. The effect of this new evidence on your expected likelihood of the existence of psi should be infinitesimal, and your odds should stay right where they are until these dubious findings can be shown to be readily replicated... which, by virtue of my current prior odds, I confidently predict most surely won't happen.
META: Meetup Overload

Also, you don't even need people to manually enter their locations. IP addresses are usually enough to narrow your location down to a city or metropolitan area, and we wouldn't need much higher resolution than that.

October 2010 Southern California Meetup

Edit: Looks like I/we at Harvey Mudd don't really have a car (or person to drive it), so unless someone is going to be driving by Claremont, I don't think I'll be able to make it.

Rationality quotes: October 2010

From a hacker news thread on the difficulty of finding or making food that's fast, cheap and healthy.

"Former poet laureate of the US, Charles Simic says, the secret to happiness begins with learning how to cook." -- pfarrell

Reply: "Well, I'm sure there's some economics laureate out there who says that the secret to efficiency begins with comparative advantage." -- Eliezer Yudkowsky

I just saw this. I figured out a food with said qualities: chicken. 1) Cheap and healthy. 2) Fast to prepare if you do it my way: buy chicken thighs, wash them, put them into a pan with water, and cook for 18 minutes. Eat. They don't taste that good, but you can't beat the price and convenience.
I don't understand this one. A poetry guy says something practical (and completely unrelated to poetry) is a valuable thing, and Eliezer replies that an economics guy would say something about economics? The message eludes me.
Eliezer Yudkowsky · 12y
thanks, but no quoting LWers in this post
Correlated decision making: a complete theory

Also, in the very beginning, "turning with probability p" should really be "going straight with probability p".

Damned! fixed.