All of lmm's Comments + Replies

How to get cryocrastinators to actually sign up for cryonics

Looks like their website has been taken over by spam. Which in turn gives me very little confidence in an organization that's supposed to be around until my death and for many years afterwards.

Do you know anything about the current state of play in the UK? Are you still covered?

Paul Crowley (+2, 5y): Longevity is much less of a concern with CUK; they don't do storage, only standby and transport. I live in the Bay Area now.
January 2016 Media Thread

Shirobako: The most realistic portrayal of ordinary working life (at least, like mine) I've seen, in any fictional medium. Warm-hearted (perhaps to a fault), straightforward, very much writing what they know and love. I recommend it to anyone interested in animation, but especially to students or similar interested in seeing a day in the working life.

Hibike Euphonium. Teenage drama (not actually a romance show, but it felt like one) that again felt very true, and with KyoAni's usual high production values.

A Farewell to Arms (one of the Short Peace shorts).... (read more)

January 2016 Media Thread

I hated Three-Body Problem. I found it incredibly slow and unrewarding (and I never got how ~200 intelligent people in a game called three-body problem somehow don't figure out that the game's about the three-body problem). Partly the dangers of hype, but I really don't think it's very good.

Rationality Quotes Thread December 2015

I don't see the point. The whole point of "motivation doesn't last" is "you will only be able to sustain effort if there is something in your day-to-day that motivates you to continue, not some distant ideal."

LessWrong 2.0

There's a reason why CFAR has workshops instead of writing articles and books.

Is there? Given that this community seems to be quite skeptical about the value of e.g. university over self-teaching from textbooks, what's the rationale for that format?

Vaniver (+4, 6y): The social proof effect of physically attending a workshop and spending a weekend around similarly inclined people is not to be underestimated. In-person instruction also provides better feedback for the instructors, allowing for more rapid iteration.
ChristianKl (-1, 6y): University isn't a workshop environment. There might be a few MBA programs that do actual workshop-type exercises, but a STEM program generally ignores emotional engagement. A textbook can only give you knowledge. A workshop can touch you much more deeply.
LessWrong 2.0

Discourage/ban Open threads. They are an unusual thing to have on an open forum. They might have made sense when posting volume was higher, but right now they further obfuscate valuable content.

I'd say the opposite: the open threads are the part that's working. So I'd rather remove main/discussion and make everything into open thread, i.e. move to something more like a traditional forum model. I don't know whether that's functionally the same thing.

V_V (+3, 6y): I think it is, except that having different stuff in open threads makes it less visible.
Omega's Idiot Brother, Epsilon

In other words, gaining $1M has to be no more than about 25% better than gaining $1k.

Interesting. My thought process was that it's worth losing $8000 in EV to avoid a 1% chance of losing $1000. I think my original statement was true, but perhaps poorly calibrated; these days I shouldn't be that risk-averse.

Omega's Idiot Brother, Epsilon

I would two-box on this problem because of diminishing returns, and one-box on the original problem.

Your returns must be very rapidly diminishing. If u is your kilobucks-to-utilons function then you need [7920u(1001)+80u(1)]/8000 > [3996u(1000)+4u(0)]/4000, or more simply 990u(1001)+10u(1) > 999u(1000)+u(0). If, e.g., u(x) = log(1+x) (a plausible rate of decrease, assuming your initial net worth is close to zero) then what you need is 6847.6 > 6901.8, which doesn't hold. Even if u(x) = log(1+log(1+x)) the condition doesn't hold.

If we fix our origin by saying that u(0)=0 (i.e., we're looking at utility change as a result of the transaction) and s... (read more)
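The inequality in the comment above can be checked numerically. A quick sketch (the helper name is mine; amounts are in kilobucks, as in the comment):

```python
import math

def two_box_preferred(u):
    """The comment's condition for two-boxing:
    990*u(1001) + 10*u(1) > 999*u(1000) + u(0)."""
    return 990 * u(1001) + 10 * u(1) > 999 * u(1000) + u(0)

u1 = lambda x: math.log(1 + x)                 # u(x) = log(1+x)
u2 = lambda x: math.log(1 + math.log(1 + x))   # even faster-diminishing

print(990 * u1(1001) + 10 * u1(1))                 # ~6847.6
print(999 * u1(1000) + u1(0))                      # ~6901.8
print(two_box_preferred(u1), two_box_preferred(u2))  # False False
```

Both candidate utility functions reproduce the numbers in the comment: the two-boxing condition fails even under quite aggressive diminishing returns.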

Rationality Quotes Thread November 2015

I think there's an underlying truth that most software engineers are too timid, perhaps because we're calibrated for working with materials where mistakes are more costly and harder to put right.

Rationality Quotes Thread November 2015

And one of the big issues leading to the financial crisis was that a lot of these ratings were wrong and a lot of AAA bonds defaulted.

Rationality Quotes Thread November 2015

AIG was the borrower (and separately Fannie and Freddie), banks were the lenders, it is absolutely useful to think about the situation in those terms. It highlights the conflict between our political intuition that insurance should be protected and financial speculation should not - some people thought AIG was doing one, some people thought the other. Likewise some people thought Freddie and Fannie were widows-and-orphans investments that the government should guarantee and some people thought they were private financial traders. Clarifying these things could have averted the crisis, it's absolutely a useful model.

November 2015 Media Thread

Just for the record I really liked her arc. I think I saw part of myself in her? Would have to rewatch to be able to be any more specific.

November 2015 Media Thread

Until the Sea Shall Free Them. Inherently partisan, and I have no real measure of its accuracy, but a compelling narrative that goes some way toward exploring the systemic ways things go wrong, while still staying very firmly rooted in its single example.

November 2015 Media Thread

Do you know how much more there is to go? We're still waiting for the prequel movie, right?

ShardPhoenix (+2, 6y): There's Owari, Zoku-Owari, Kizu (3 movies), then the author has also announced some new books to be written. So who knows...
November 2015 Media Thread

Light, by M John Harrison (based on the first 22%). I'm finding it genuinely hard to read - a bit like The Quantum Thief or The January Dancer, but more so than either of them. I can't yet say it's good per se - in particular the three narrative strands show very little sign of converging at this stage - but it's a striking, provocative experience.

Open thread, Oct. 19 - Oct. 25, 2015

No. There are any number of predictable systems in our quantum universe, and no reason to believe that an agent need be anything other than e.g. a computer program. In any case "noise" is the wrong way to think about QM; quantum behaviour is precisely predictable, it's just the subjective Born probabilities that apply.

Open thread, Nov. 02 - Nov. 08, 2015

The problem is the site looks cheap. If I'm showing off how rich I am, I want something that looks elegant and refined.

Lumifer (+1, 6y): People who value elegance and refinement are NOT the target demographic for that website X-)
Open thread, Oct. 19 - Oct. 25, 2015

I don't understand. Computers are able to provide reliable boolean logic even though they're made of quantum mechanics. And any "uncertainty" introduced by QM has nothing to do with distance. You seem very confused.

SodaPopinski (0, 6y): My question is simply: Do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision in which two agents have to model each other in order to engage in acausal trade?
Rationality Quotes Thread October 2015

I think it's legitimate to criticise a company for pretending to sell utilons when it isn't. Yes, this company may well be a better use of your money than Taylor Swift tickets. But Taylor Swift isn't marketed as an investment.

ike (-2, 6y): They're selling hedons, which factor into people's utility functions. I'd also point out that that doesn't seem so objectionable. If they're attracting people who wouldn't be investing otherwise, that's a gain. Also, do you have examples of their marketing that you think are inaccurate?
Rationality Quotes Thread October 2015

I think there's an analogy with "purchase fuzzies and utilons separately" here that Levine misses. If you want to be trendy and have a bunch of investment return in the future, it's probably more efficient to buy those two things from separate sources than to try and get both with a single product.

ike (+2, 6y): That's true, but he's talking from the company's side. If the target market is those that wouldn't invest at all, then the company could be providing real value overall. I wouldn't use such a company, of course; but the target demo is not "people who think logically about investments unless they get fuzzies". His argument is: 1. Let people spend their extra cash however they want. 2. This company seems likely to be a net utility plus for society. The fact that its users are still irrational seems irrelevant then, and it reminds me of the whole "Copenhagen ethics" post [http://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/] (to make the analogy explicit, the company is being blamed for the fact that its users aren't perfect, even though they're better off than without the company).
Open thread, Oct. 19 - Oct. 25, 2015

You seem a very enthusiastic participant here, despite a lot of downmodding. I admire that - on here. In real life my fear would be that that translated into clinginess - wanting to come to all my parties, wanting to talk forever, and the like. (And perhaps that it reflects being socially unpopular, and that there might be a reason for that). So I'd lean slightly to avoid.

[anonymous] (0, 6y): Haha, thanks for that analysis. How unexpected and insightful. Your premise is mostly correct, but your conclusion ain't. I'm extremely clingy with a few people who I have crushes on and idealise at a given time (2 at the moment). It's generally very short-lived, ~1 month, and always women, haha. On the other hand, I'm quite popular with friends and acquaintances whom I haven't tried to fall in love with or done some cruel social experiment on, and get invited to lots of parties but rarely accept (goal-oriented, ain't got time for that!). On the other hand, my instinctive drive to respond to this may be telling of some degree of insecurity about my social status...
Open thread, Oct. 19 - Oct. 25, 2015

The whole point of acausal trading is that it doesn't require any causal link. I don't think there's any rule that says it's inherently hard to model people a long way away.

Imagine being an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.

SodaPopinski (0, 6y): Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics may prohibit modeling accurate enough to make acausal trading actually work. My intuition is based on the fact that in general faster-than-light transmission of information is prohibited. For example, even though entangled particles update on each other's state when they are outside of each other's light cones, it is known that it is not possible to transmit information faster than light using this fact. Now, does mutually enhancing each other's utility count as information? I don't think so. But my instinct is that acausal trade protocols will not be possible due to the level of modelling required and the noise introduced by quantum mechanics.
Open thread, Oct. 19 - Oct. 25, 2015

This sounds like an XY problem - what are you trying to achieve by reducing the number of apps?

[anonymous] (0, 6y): XY? What does that refer to? Female chromosomes? I'm trying to reduce decision fatigue and streamline my time management. I've been spending lots of time looking at apps lately.
Rationality Quotes Thread September 2015

I'm not convinced on the international diversification example, particularly if the best argument is "some hard-to-measure risks". Most of the time the things you want to buy are in your own country, so any diversification is taking on a large foreign exchange risk.

October 2015 Media Thread

Maybe be more specific/detailed?

rayalez (0, 6y): It's hard to be more specific. I just love comedy very much and it is the best I've ever seen (besides Community). It's on the level of Louis CK, and, in my personal opinion, RaM compared to other comedies is what Breaking Bad is compared to other dramas. There's no point in explaining it too deeply. Most of the episodes are officially available for free here [http://www.adultswim.com/videos/rick-and-morty/]. Watch the first 3, and then you'll either like it or not.
Summoning the Least Powerful Genie

Not quite - rather the everyday usage of "real" refers to the model with the currently-best predictive ability. http://lesswrong.com/lw/on/reductionism/ - we would all say "the aeroplane wings are real".

Lumifer (0, 6y): Errr... no? I don't think this is true. I'm guessing that you want to point out that we don't have direct access to the territory and that maps are all we have, but that's not very relevant to the original issue of replacing "I find it convenient to think of that code as wanting something" with "this code wants" and insisting that the code's desires are real. Anthropomorphization is not the way to reality.
Summoning the Least Powerful Genie

I've known plenty of cases where people's programs were more agentive than they expected. And we don't have a good track record on predicting which parts of what people do are hard for computers - we thought chess would be harder than computer vision, but the opposite turned out to be true.

TheAncientGeek (0, 6y): I haven't: have you any specific examples?
Lumifer (0, 6y): "Doing something other than what the programmer expects" != "agentive". An optimizer picking a solution that you did not consider is not being agentive.
Summoning the Least Powerful Genie

Is there a difference between "x is y" and "assuming that x is y generates more accurate predictions than the alternatives"? What else would "is" mean?

Lumifer (0, 6y): Are you saying the model with the currently-best predictive ability is reality??
Summoning the Least Powerful Genie

I'm a professional software engineer, feel free to get technical.

TheAncientGeek (+2, 6y): Have you ever heard of someone designing a nonagentive programme that unexpectedly turned out to be agentive? Because to me that sounds like going into the workshop to build a skateboard and coming out with an F1 car.
Summoning the Least Powerful Genie

Why are you so confident your program is a nonagent? Do you have some formula for nonagent-ness? Do you have a program that you can feed some source code to and it will output whether that source code forms an agent or not?

TheAncientGeek (0, 6y): It's all standard software engineering.
Summoning the Least Powerful Genie

Does an amoeba want anything? Does a fly? A dog? A human?

You're right, of course, that we have better models for a calculator than as an agent. But that's only because we understand calculators and they have a very limited range of behaviour. As a program gets more complex and creative it becomes more predictive to think of it as wanting things (or rather, the alternative models become less predictive).

Lumifer (0, 6y): Notice the difference (emphasis mine): vs
Summoning the Least Powerful Genie

A program designed to answer a question necessarily wants to answer that question. A superintelligent program trying to answer that particular question runs the risk of acting as a paperclip maximizer.

Suppose you build a superintelligent program that is designed to make precise predictions, by being more creative and better at predictions than any human would. Why are you confident that one of the creative things this program does to make itself better at predictions isn't turning the matter of the Earth into computronium as step 1?

TheAncientGeek (0, 6y): What does that mean? That it's necessarily satisfying a utility function? It isn't, as Lumifer's calculator shows. I can be confident that nonagents won't do agentive things.
Lumifer (+2, 6y): I don't think my calculator wants anything.
Probabilities Small Enough To Ignore: An attack on Pascal's Mugging

we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings

How confident are you that this is false?

Open thread, Sep. 14 - Sep. 20, 2015

It's about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focussed work is exhausting.

If you're looking for bias, this is a community where people who are less productive probably prefer to think of themselves as intelligent but akratic. Also, for any students here, you've asked at the end of a long holiday.

Open thread, Sep. 14 - Sep. 20, 2015

I'd rather people actually said "Do you want to come back to my room for sex?" rather than "Do you want to come back to my room for coffee?" where coffee is a euphemism for sex, because some people will take coffee at face value. That can lead either to uncomfortable situations, including fear of assault, or to missed opportunities, because some people are bad at reading between the lines.

I'd rather that too, and I've had it go wrong in both directions. But the whole point of much of this site is that outcomes are more importan... (read more)

skeptical_lurker (0, 6y): I'm not sure it's always creepy, not if you've already kissed them. Depends on circumstances. Inviting someone in for coffee and then trying to fuck them can be pretty creepy too. But I agree that I can't change society, and so I might as well conform to the rules.
How To Win The AI Box Experiment (Sometimes)

Ah, sorry to get your hopes up, it's a degenerate approach: http://pastebin.com/Jee2P6BD

pinkgothic (+3, 6y): Thanks for the link! I had a chuckle - that's an interesting brand of cruelty, even if it only potentially works out of character. I think it highlights that it might potentially be easier to win the AI box experiment on a technicality, the proverbial letter of the law rather than the spirit of it.
[anonymous] (+2, 6y): It also hasn't won. (Unless someone more secretive than me had had the same idea.)
How To Win The AI Box Experiment (Sometimes)

Thank you for publishing. Before this I think the best public argument from the AI side was Khoth's, which was... not very convincing, although it apparently won once.

I still don't believe the result. But I'll accept (unlike with nonpublic iterations) that it seems to be a real one, and that I am confused.

pinkgothic (+1, 6y): Do you have a link to Khoth's argument? I hadn't found any publicised winning scenarios back when I looked, so I'd be really interested in reading about it!
Why Don't Rationalists Win?

I want to talk about the group (well, cluster of people) that calls itself "rationalists". What should I call it if not that?

lahwran (+1, 6y): CFAR community, or LW community, depending on which kind of person you mean.
Why Don't Rationalists Win?

In many stories I've seen, lottery winners lost the money quickly through bad investments and/or developed major life issues (divorce, drug addiction).

WalterL (+4, 6y): I think there's an element of rubbernecking there. The general feeling of the mob is that lottery = tax on stupidity. We are smart to not play the lottery. The story of a winner challenges the general feeling; the mob feels dumb for not buying the winning ticket. An unmet need exists for a story to make the mob happy again. The general form of the story is that lottery money is evil money. Lottery winners, far from being better than you, dear reader, are actually worse! They get divorced, they squander the money! Lawsuits!! No one wants to read about the guy who retires and pays off his credit cards. No story there. But there are a lot of lotteries, so there will be an idiot somewhere you can use to reassure your viewers that they are double smart for not being rich.
Why Don't Rationalists Win?

I went to an LW meetup once or twice. With one exception the people there seemed less competent and fun than my university friends, work colleagues, or extended family, though possibly more competent than my non-university friends.

lahwran (+3, 6y): That was also true for me until I moved to the Bay. I suspect it simply doesn't move the needle much, and it's just a question of who it attracts.
drethelin (+3, 6y): I have the opposite experience! Most people at LW meetups I've been to have tended to be successful programmers or people with, or working on, math PhDs. Generally more socially awkward, but that's not a great proxy for "competence" in this kind of crowd.
Stupid Questions September 2015

It absolutely could be. But we've seen no evidence that distinguishes such a scenario from the big bang theory, and so we prefer the latter by Occam's razor.

Stupid Questions September 2015

I'd say it's not so much following rules as being productive. The value of capitalism is that embezzlement, bribery and the like are less often the most personally profitable course than they are under other systems.

Open thread 7th september - 13th september

Or maybe today's men just have less interest in staying and fighting. I mean what you say is plausible but it's a long way from proving "they can't possibly be refugees because the majority are men".

Open thread 7th september - 13th september

75% of the refugees are men. So either they feel that the places they're leaving are safe for women and children, or their main motivation isn't escaping danger.

Or the danger is severe enough that they're fleeing alone, more effectively than women and children do?

tut (+7, 6y): Isn't that exactly the point? For what was traditionally known as refugees, staying was more dangerous than fleeing, so young men were more likely to stay and fight while women, small children and old men fled. Whereas when you are moving to opportunities, staying is safe but unpleasant and moving is dangerous but might be very good. So mostly men move alone and then they send for their families once they have secured a place in the new country. Like Europeans moving to America or Australia, and like modern day refugees.
September 2015 Media Thread

I didn't like Ancillary Justice so much FWIW - I didn't find the culture so compelling, and the lead's morality was jarring to me (she seemed less like someone who was seeing the flaws in the culture she was raised in and more like someone who had always instinctively had a western liberal morality that they'd been suppressing to fit in).

Do you have a view on The January Dancer? I loved that - modern space opera, with some interesting cultures, but also a compelling plot on the sci-fi side.

CellBioGuy (0, 6y): Which lead do you speak of? The officer or the ship? I can't say I've read that one! Great, the list expands.
September 2015 Media Thread

Another tranche of shows watched with my group, though they don't really end up as recommendations:

Blood Blockade Battlefront: Started with some fun action, and a very cool-looking setting, but decayed rapidly - the plot arc it tried to set up towards the end was just dull. Avoid

Knights of Sidonia (season 2): Shifts much more towards the harem antics than the serious sci-fi; also some massive power inflation which could easily have been thematic but... isn't. I greatly enjoyed it, but only recommend it to people who enjoy light comedy/romance.

Fate/Stay Nigh... (read more)

Rationality Quotes Thread June 2015

They didn't check your name against your email. You needed a university email but you could, and people did, use a fake name with it, even an obviously fake one.
