All of lexande's Comments + Replies

In particular it seems very plausible that I would respond by actively seeking out a predictable dark room if I were confronted with wildly out-of-distribution visual inputs, even if I'd never displayed anything like a preference for predictability of my visual inputs up until then.

Carl Feynman (8mo):
When I had a stroke, and was confronted with wildly out-of-distribution visual inputs, one of the first things they did was to put me in a dark predictable room. It was a huge relief, and apparently standard in these kinds of cases. I’m better now.

It seems like a major issue here is that people often have limited introspective access to what their "true values" are. And it's not enough to know some of your true values; in the example you give the fact that you missed one or two causes problems even if most of what you're doing is pretty closely related to other things you truly value. (And "just introspect harder" increases the risk of getting answers that are the results of confabulation and confirmation bias rather than true values, which can cause other problems.)

Here's an attempt to formalize the "is partying hard worth so much" aspect of your example:

It's common (with some empirical support) to approximate utility as proportional to log(consumption). Suppose Alice has $5M of savings and expected-future-income that she intends to consume at a rate of $100k/year over the next 50 years, and that her zero utility point is at $100/year of consumption (since it's hard to survive at all on less than that). Then she's getting log₁₀(100000/100) = 3 units of utility per year, or 150 over the 50 years.
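The arithmetic above can be checked with a quick sketch (assuming base-10 logarithms, as the example's numbers imply):

```python
import math

# Log-utility arithmetic from the example above: $100k/year of consumption,
# zero-utility point at $100/year, over 50 years.
consumption = 100_000
zero_point = 100
years = 50

utility_per_year = math.log10(consumption / zero_point)
total_utility = utility_per_year * years
print(utility_per_year, total_utility)
```

This prints 3 units per year and 150 units total, matching the figures in the comment.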

Now she finds out that t... (read more)

Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).

I agree it’s rare to have a global impact with a million dollars. But if you’re 50% confident the world will be destroyed tomorrow, that implies you have some sort of specific knowledge about the mechanism of destruction. The reason it’s hard to spend a million dollars to have a big impact is often because of a lack of such specific information. But if you are adding the stipulation that there’s nothing Alice can do to affect the probability of doom, then I agree that your math checks out.

I think it generally makes sense to try to smooth personal consumption, but that for most people I know this still implies a high savings rate at their first high-paying job.

  • As you note, many of them would like to eventually shift to a lower-paying job, reduce work hours, or retire early.
  • Even if this isn't their current plan, burnout is a major risk in many high-paying career paths and might oblige them to do so, and so there's a significant probability of worlds where the value of having saved up money during their first high-paying job is large.
  • If they'r... (read more)

Yeah that's essentially the example I mentioned that seems weirder to me, but I'm not sure, and at any rate it seems much further from the sorts of decisions I actually expect humanity to have to make than the need to avoid Malthusian futures.

I'm happy to accept the sadistic conclusion as normally stated, and in general I find "what would I prefer if I were behind the Rawlsian Veil and going to be assigned at random to one of the lives ever actually lived" an extremely compelling intuition pump. (Though there are other edge cases that I feel weirder about, e.g. is a universe where everyone has very negative utility really improved by adding lots of new people of only somewhat negative utility?)

As a practical matter though I'm most concerned that total utilitarianism could (not just theoreticall... (read more)

Another question. Imagine a universe with either only 5 or 10 people. If they're all being tortured equally badly at a level of -100 utility, are you sure you're indifferent as to the number of people existing? Isn't less better here?

I think
- Humans are bad at informal reasoning about small probabilities since they don't have much experience to calibrate on, and will tend to overestimate the ones brought to their attention, so informal estimates of the probability of very unlikely events should usually be adjusted even lower.
- Humans are bad at reasoning about large utilities, due to lack of experience as well as issues with population ethics and the mathematical issues with unbounded utility, so estimates of large utilities of outcomes should usually be adjusted lower.
- Throwing away mos... (read more)

I think that I'd easily accept a year of torture in order to produce ten planets worth of thriving civilizations. (Or, if I lack the resolve to follow through on a sacrifice like that, I still think I'd have the resolve to take a pill that causes me to have this resolve.)

I'd do this to save ten planets' worth of thriving civilizations, but doing it to produce ten planets' worth of thriving civilizations seems unreasonable to me. Nobody is harmed by preventing their birth, and I have very little confidence either way as to whether their existence will wind up increasing the average utility of all lives ever eventually lived.

I used to favour average utilitarianism too, until I learned about the sadistic conclusion. That was sufficient to overcome any aversion I had to the repugnant conclusion.

There's some case for it but I'd generally say no. Usually when voting you are coordinating with a group of people with similar decision algorithms who you have some ability to communicate with, and the chance of your whole coordinated group changing the outcome is fairly large, and your own contribution to it pretty legible. This is perhaps analogous to being one of many people working on AI safety if you believe that the chance that some organization solves AI safety is fairly high (it's unlikely that your own contributions will make the difference but y... (read more)

Jalex Stark (2y):
One thing I like about the "dignity as log-odds" framework is that it implicitly centers coordination.

This is Pascal's Mugging.

Previously comparisons between the case for AI xrisk mitigation and Pascal's Mugging were rightly dismissed on the grounds that the probability of AI xrisk is not actually that small at all. But if the probability of averting the xrisk is as small as discussed here then the comparison with Pascal's Mugging is entirely appropriate.

It's not Pascal's mugging in the senses described in the first posts about the problem:

[...] I had originally intended the scenario of Pascal's Mugging to point up what seemed like a basic problem with combining conventional epistemology with conventional decision theory:  Conventional epistemology says to penalize hypotheses by an exponential factor of computational complexity.  This seems pretty strict in everyday life:  "What? for a mere 20 bits I am to be called a million times less probable?"  But for stranger hypotheses about thin

... (read more)
Is voting a Pascal's Mugging?

The cost of Covid is not just unlikely chronic effects, nor vanishingly-unlikely-with-three-shots severe/fatal effects, but also making you feel sick and obliging you to quarantine for ~five days (and probably send some uncomfortable emails to people you saw very recently). With the understandable abandonment of NPIs and need to get on with life, the chance that you will catch Covid in a given major wave if not recently boosted seems pretty high, perhaps 50%? (There were 30M confirmed US cases during the Omicron wave, and at least for most of the pandemic ... (read more)

Which is why insurance companies should be encouraged to start underwriting the cost of quarantines. With a discount for people who get vaccinated.

- Is there any reason to think research that could lead to malaria vaccines is funding-constrained? There doesn't seem to be any shortage of in-mice studies, and in light of Eroom's Law the returns on marginal biomedical research investment seem low.
- Malaria is preventable and curable with existing drugs, so vaccines for it only make sense if their cost (including required research) works out lower than preventing it in other ways, which means some strategies that made sense for something like Covid won't make sense here.
- That's not how international wat... (read more)

It’s plausible that the Covid-19 pandemic could end up net massively saving lives, and a lot of Effective Altruists (and anyone looking to actually help people) have some updating to do. It’s also worth saying that 409k people died of malaria in 2020 around the world, despite a lot of mitigation efforts, so can we please please please do some challenge trials and ramp up production in advance and otherwise give this the urgency it deserves?

What update is this supposed to cause for Effective Altruists? We already knew that policy around all sorts of global ... (read more)

I'm saying you should consider funding more basic research like mRNA vaccines and less bednets. Or setting up medical cruise ships for challenge trials in international waters. Or focusing on epistemics or even policy. Also, if the pandemic wasn't obviously net bad that raises a lot of questions...
Yes, I'm conflating "BLM movement" and "individual Americans who want to help BLM achieve its goals" because isn't it the same thing.

No? I want to help BLM achieve its goals, but "launch a nationwide discussion" and "come to a consensus policy" are not actions I can personally take. If I post policy proposals on Facebook it seems unlikely to me that many people will read or be influenced by them; it also seems unlikely that they would be better than many other policy ideas already out there. If you actually... (read more)

Wow, I actually hadn't expected that at all. Maybe many years ago this turn of events would have seemed natural to me. People care about each other and stand up for each other when someone gets hurt, right? Well, wrong. At least in Russia, most people don't care much about victims of police violence, as I've found. And in the USA it seems to only be about black people. So while I can see why Democrats are supporting their ingroup, I don't get the increase in Republican support. Could people be lying about their views because they're afraid of repercussions for expressing wrong ones? Seems like a big stretch. My people believed in nonviolent protest, and lost. While I'd broken away from the doctrine and cheered for people who fought back against cops, I've always thought that pointless violence against innocents would make people hate me. (Or do they just hate the cops even more? I didn't notice that.) Will people like my politics more if I go loot some shops? Or is it something else they did right? I walk away from you guys totally confused about how it all really works.

The question was not what the "BLM movement" should do, but what individual Americans should do; your steps do not seem actionable for individuals. Your steps 1 and 2 also partly beg the question.

Additionally, assuming the support of all Democratic politicians is highly dubious; a number of cities that have been marked by highly visible abusive police behavior in recent weeks are already controlled by Democratic mayors and city councils, who in many cases have nonetheless refused to hold the police accountable. And support of 50% of the popula... (read more)

Sorry, I've realized they have a list of demands already. Yes, I'm conflating "BLM movement" and "individual Americans who want to help BLM achieve its goals" because isn't it the same thing. Ok, from what you've told me, it sounds like getting Republican support is the easiest way to achieve change. With that in mind, actionable points (for a generic BLM supporter, not just for lesswrongers; I think you probably aren't bullying anyone already):
  • propose your own policy ideas, e.g. like Eliezer did on Facebook
  • stop bullying everyone who disagrees with you, so you can learn what they think and find solutions that both sides support
  • defend shops from looters so people have more sympathy for your side

1) People are probably less likely to throw out stale bread if it's impossible to obtain fresh bread?

2) If the price of e.g. fish is less regulated but generally higher than that of bread, banning fresh bread would lead to a larger rise in the price of fish as more rich people switch to it, which would perhaps lead to fishermen working longer hours and catching more fish, helping make up the overall calorie shortfall from the poor harvest without increasing costs for poor people who could never afford fish in the first place. Whereas letting the price... (read more)

Since apparently some confirmed cases never develop symptoms (this study of Diamond Princess passengers estimates 18%), it seems the answer to your second question is "never"?

Sorry, I worded that wrong. Edited the OP.

The world population is not infinite. If somebody moves to San Francisco that means lower demand and lower rents wherever they came from (and conversely many other US cities now have housing crises caused by exiles from San Francisco). The desirable cities should be allowed to expand until there is more than enough room for everybody (yes, everybody) who wants to live in them to live in them, at which point landlords will no longer have the leverage to keep rents high.

Next time I see somebody say "shoot for the moon so if you miss you'll land among the stars" I'm going to link them here.

You seem to be saying that you prefer general words that encompass many concepts rather than specific and more precise words.

I can believe that you meant something more specific and precise than "worrying sometimes makes things worse" when you said "secondary stressors", but your post failed to get any more precise distinction across, and if people used the term as jargon they wouldn't be using it for anything more precise than "worrying making things worse". (Less sure about the motivation vs "tactile ambition" example since I don't know of any decent framework for thinking about motivation.)

Yeah, Lesswrong sometimes feels a bit like a forum for a fad diet that has a compelling story for why it might work and seemed to have greatly helped a few people anecdotally, so the forum filled up with people excited about it, but it doesn't seem to actually work for most of them. Yet they keep coming back because their friends are on the forum now and because they don't want to admit failure.

FDIC doesn't insure safe deposit boxes. It does insure your checking account balance, but your bank still has to figure out somewhere with a nonnegative interest rate to put your money (since the FDIC insurance triggers only after the bank itself is wiped out). Or find a way to charge you enough fees to make your effective interest rate negative.

Jay Molstad (5y):
Sure, but they're a bank. Hopefully they have a competitive advantage in finding profitable places to lend money; that's supposedly their whole job. I don't, so I'm probably better off leaving it to them (as long as I have sufficient insurance in case they're bad at their job, which historically they often are).

Yeah, ignoring the option to declare bankruptcy or foreclose, effectively bounding your downside, seems like a major gap in this analysis. Especially as many jurisdictions usually allow people to keep significant assets (primary residence, 401ks) in bankruptcy. (Though on the other hand since 2005 US bankruptcy law obliges many filers to accept "repayment plans" for some fraction of what they owe, so it's not quite "discharging your debt for free".) That said I guess the most common debt for people reading this post is probably nondischargeable student debt; it makes sense if it's mainly talking about that.

Bank lockboxes have fees, which typically work out to more negative interest than the most negative actually-observed government-debt interest rates. (Indeed the operating & insurance costs of bank lockboxes at scale are basically a lower bound on how low government-debt interest rates can go in the market; this article from the European interest rate lows in 2016 suggests insurance costs of 0.5-1%.)

Jay Molstad (5y):
For small amounts of money, FDIC insured bank accounts are suitably secure. Which is what I and most people actually use. If the FDIC fails, we're probably beyond a financial fix. Time to go loot a Hot Topic and start calling myself Doctor Humongous.

Bitcoin is (currently) pretty much useless as a medium of exchange. It remains of some practical use as a store of value resilient to certain legal risks (e.g. as the answer to the question Eliezer asked in this Facebook post), and in general with a risk profile uncorrelated with other assets. Its strength over other cryptocurrencies for this use case is based primarily on being the most established Schelling point. It's also possible (though not looking particularly likely) that future software changes will eventually make it useful as a medium of exchange again.

I'm one of the 15%. Given declining marginal utility of money, high-risk-high-financial-reward bets have never appealed to me; the financial EV would have to be ridiculously high for the EV in utilons to be positive. I considered getting some BTC as a curiosity in 2011 but decided it was too much hassle. However discussions in the aftermath of the 2016 election led me to conclude that holding a small amount of cryptocurrency could decrease overall risk by mitigating certain legal risks (e.g. money you can memorise might be good to have if you'... (read more)

I agree that this is the appropriate strategy to use when adding an investment to your portfolio, but note that if applied to Bitcoin it did not yield the sort of enormous gains that motivated this post. So if you think the Bitcoin example should lead us to update away from outside-view-motivated beliefs about our ability to spot market inefficiencies/investment opportunities, you should probably also endorse updating away from outside-view-motivated portfolio strategies like picking an allocation and rebalancing.

I just ran some numbers on this. Suppose you ... (read more)
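The claim that fixed-allocation rebalancing forgoes most of a moonshot's gains can be illustrated with a quick sketch. The numbers here (a $100k portfolio with a 1% allocation, rebalanced at each tenfold price increase) are hypothetical, not taken from the comment:

```python
# Hedged sketch with made-up numbers: compare buy-and-hold of a 1% position
# against rebalancing back to a 1% target at each tenfold price increase.
# Rebalancing repeatedly sells the winner, so it captures only a sliver of
# a 10,000x run-up.
target = 0.01
other, units, price = 99_000.0, 1_000.0, 1.0   # $100k portfolio, 1% in the asset

for price in (10.0, 100.0, 1_000.0, 10_000.0):
    portfolio = other + units * price
    new_units = portfolio * target / price   # sell down to the 1% target
    other += (units - new_units) * price     # proceeds go into the rest
    units = new_units

hold_portfolio = 99_000.0 + 1_000.0 * 10_000.0
print(round(portfolio), round(hold_portfolio))
```

Under these assumptions the rebalanced portfolio ends around $141k while buy-and-hold ends around $10.1M, which is the gap the comment is pointing at.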

Paul Crowley (6y):
A plausible strategy would be to buy say 100 bitcoins for $1 each, then sell 10 at $10, 10 at $100, and so on. With this strategy you would have made $111,100 and hold 60 bitcoins.
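The tranche-selling strategy above can be checked with a quick sketch (taking "and so on" to mean selling through the $10,000 price level, which leaves 60 coins):

```python
# Tranche-selling sketch: buy 100 coins at $1, then sell 10 coins each time
# the price rises by another factor of ten, up through $10,000.
coins = 100
proceeds = 0
for price in (10, 100, 1_000, 10_000):
    proceeds += 10 * price   # sell a 10-coin tranche at this price level
    coins -= 10

print(proceeds, coins)   # 111100 60
```

The four tranches bring in $100 + $1,000 + $10,000 + $100,000 = $111,100, with 60 coins still held.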

Pat and Maude's arguments seem somewhat more reasonable if they're essentially saying "if you're so smart, why aren't you high-status?" Since nearly everyone (including many people who explicitly claim not to) places a high value on status at least within some social sphere, and status is instrumentally useful for so many goals even if you don't value it terminally, a human can be assumed to already be trying as hard as they can to increase their status, and thus it's a decent predictive proxy for their general ab... (read more)

In my experience "pop" is connotationally very different from the Boston rationalists' "backthumb"; "backthumb" contains a value judgement that the nascent conversation branch would be a poor use of time even if there is much that could be said about it, while "pop" is primarily used to return to a previous topic after a conversation branch has exhausted itself naturally.

I notice that I am confused why people are so extremely disinclined to keep gratitude journals (the effect of which does apparently replicate) even when they report doing it makes them feel better. (Of course I don't keep one either, the idea seems aversive and I don't know why.)

The social reality of how hard you can reasonably be expected to try/the "standard amount" of trying is actually really important, because it gates the tremendous value of social diversification.

After Hurricane Sandy, when lower Manhattan was without power but I still had power in upper Manhattan, I let a couple of friends sleep in my double bed while I slept on my own couch. In principle they could have applied more dakka to ensure their apartment would be livable in natural disasters, but this would be very expensive and the ability to fall ba... (read more)

That doesn't explain why subjects who thought a good heart would mean a lower post-exercise pain threshold took their hands out sooner.

Looking at the actual data from the article (since Yvain neglected to actually state the results of the second case): subjects told that a good heart was correlated with higher pain threshold after exercise showed an 11.84 second increase in mean immersion time, while subjects told that a good heart was correlated with a decrease in pain threshold showed a 7.63 second decrease in mean immersion time.