All of AnthonyC's Comments + Replies

I'd just sort of assumed the similar sound to zazen was intentional and that if I'd known Japanese it might make more sense.

I mean, yes, because the proposal is about optimizing our entire future light cone for an outcome we don't know how to formally specify.

Sometimes I see people use the low-info heuristic as a “baseline” and then apply some sort of “fudge factor” for the illegible information that isn’t incorporated into the baseline—something like “the baseline probability of this startup succeeding is 10%, but the founders seem really determined so I’ll guesstimate that gives them a 50% higher probability of success.” In principle I could imagine this working reasonably well, but in practice most people who do this aren’t willing to apply as large of a fudge factor as appropriate.

 

The last company I w... (read more)

I like this post and agree that acausal coordination is not necessarily weird fringe behavior. But thinking about it explicitly in the context of making a decision is. In normal circumstances, we have plenty of non-acausal ways of discussing what's going on, as you discuss. The explicit consideration becomes important only outside the contexts most people act in.

 

That said, I disagree with the taxes example in particular, on the grounds that that's not how government finances work in a world of fiat currency controlled by said gover... (read more)

3 · Adam Jermyn · 3mo
I like the distinction between implementing the results of acausal decision theories and explicitly performing the reasoning involved. That seems useful to have. The taxes example I think is more complicated: at some scale I do think that governments have some responsiveness to their tax receipts (e.g. if there were a surprise doubling of tax receipts governments might well spend more). It's not a 1:1 relation, but there's definitely a connection.

In the ancestral environment, population densities were very low. My understanding is that almost everyone in your band would be at least somewhat related, or have an ambiguous degree of relatedness, and would be someone you'd rely on again and again. How often do we think interactions with true non-relatives actually happened? 

 

I'm not sure there's anything that needs to be explained here except "evolution didn't stumble upon a low-cost low-risk reliable way for humans to defect against non-relatives for personal advantage as much as a hypothetical intelligently designed organism could have." Is there?

2 · chaosmage · 3mo
Well of course there are no true non-relatives; even the sabertooth and antelopes are distant cousins. The question is how much you're willing to give up for how distant a cousin. Here I think the mechanism I describe changes the calculus. I don't think we know enough about the lifestyles of cultures/tribes in the ancestral environment, except we can be pretty sure they were extremely diverse. And all cultures we've ever found have some kind of incest taboo that promotes mating between members of different groups.

Very little of the value I got out of my university degree came from the exams or the textbooks. All of that I could have done on my own. Much of the value of the lectures could have been replicated by lecture videos. The fancy name on my resume is nice for the first few years (especially graduating in the middle of the 2009 recession) and then stops mattering. 

But the smaller classes, the ones with actual back and forth with actually interested professors? The offhand comments that illustrate how experts actually approach thinking about their fields?... (read more)

1 · sillybilly · 1mo
The single biggest selling point of my undergrad institution was the unparalleled access to faculty and the resources available to do research internships with them. Ironically, I didn't take advantage of any of that, at all, and made my way through my BS as if I were at OP's hypothetical institution. FWIW, I still ended up at my graduate school of choice, so maybe the research opportunities weren't so valuable after all.

Then if it can compute infinite sets as large as the reals, it can handle any set of cardinality beth-1, but not beth-2 or larger. But because the cardinality of the reals is itself formally undecidable by finite logic systems (or by infinite logic systems of size less than aleph-omega), I think this doesn't give us much specificity about the limits of what that means, or where it falls on the kinds of arithmetical hierarchy schemas finite logic systems enable us to define.
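For reference, the standard definitions behind the beth/aleph talk here (my gloss, not part of the original comment):

```latex
\beth_0 = \aleph_0, \qquad \beth_{n+1} = 2^{\beth_n}, \qquad
|\mathbb{R}| = 2^{\aleph_0} = \beth_1
% The Continuum Hypothesis, 2^{\aleph_0} = \aleph_1, is the claim whose
% independence from ZFC (Goedel 1940, Cohen 1963) the comment is pointing at.
```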

For my own sanity this is about where I stop trying to delve too deep for understanding,... (read more)

As I understand it, that point about "somewhat arbitrary choices in how finite logic should be extended to infinitary" would also include, for every one of the infinitely many undecidable-by-a-non-hypercomputer propositions, a free choice of whether to include that proposition or its negation as an axiom. Well, almost. Each freely chosen axiom has infinitely many consequences that are then no longer free choices. But it's still (countably) infinitely many choices. But if you have countably infinitely many binary choices, that's like choosing a random real numb... (read more)
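The "countably many binary choices pick out a real number" step can be made precise with the usual binary-expansion map; a sketch (notation mine, not the comment's):

```latex
f : \{0,1\}^{\mathbb{N}} \to [0,1], \qquad
f(b_1, b_2, b_3, \ldots) = \sum_{i=1}^{\infty} b_i \, 2^{-i}
% f is surjective, and injective except on the countable set of dyadic
% rationals, so the space of axiom-choice sequences has the cardinality
% of the continuum.
```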

4 · JBlack · 3mo
Yes, that is one example of many arbitrary choices in infinite systems. In practice that means that you give up the ability to communicate to anyone else which system you're working with, unless you also have a channel with which to communicate an infinite amount of information to them.

However, the somewhat arbitrary choices I was talking about with respect to infinitary logics were about what rules of inference you use to derive new infinite sentences. Even in finitary logic there are choices such as whether to accept proofs that use the Law of Excluded Middle, as well as more esoteric principles such as modal logic, relevance logics, and paraconsistency, but when you have sentences with infinitely many clauses (or even more complex structures) then we need rules that aren't determined by what happens with every finite number of clauses. Some of these might be very counterintuitive when building from an experience with only finitely many terms, but we can't say they're wrong.

Computing uncountably infinite "stuff" is not well defined as stated. So all I can say about whether it can "solve undecidable problems" is "Yes, some of them." Which ones depends on what level of hypercomputer you've made, and how high up the arithmetical hierarchy it can take you.

There is a generalized halting problem: no oracle can solve its own halting problem. 
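In standard notation this is the Turing jump: relativizing the usual diagonal argument shows that for any oracle A (my gloss of the textbook result),

```latex
A' = \{\, e : \Phi_e^{A}(e) \text{ halts} \,\}, \qquad A <_T A'
% No machine with oracle A decides A'. Iterating the jump from the empty
% oracle gives \emptyset^{(n)}, which matches the \Sigma_n levels of the
% arithmetical hierarchy mentioned above (Post's theorem).
```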

Since you mentioned countability, I'll say I do not know whether any particular type of hypercomputer would be capable of assigning a specific cardinality (aleph-n for some n) to the reals. ... (read more)

1 · Noosphere89 · 3mo
Specifically, it can compute all of the real number line from 0 to 1 in finite time. There are more real numbers than natural numbers; the reals are uncountably infinite.

I can see how that might help me eat less, but unless you chose the seven very carefully to be potentially nutritionally complete, sustaining that seems like a path to the kinds of deficiencies that made the agricultural revolution cost humans half a foot of height for most of the last ten millennia.

Yes, this sounds completely right. One unusually good doctor I had told me, "In the right patient, any drug can have any effect." It took me another four years to solve that particular problem, ten years in total, and I'm still concerned that when I see my new PCP (my previous one retired), he might try to change the meds that have been working for 5 years.

Most doctors are too cautious, for whatever (often justified) reasons, to just try things. Most really don't know how to respect what patients know about themselves or to interact with an actually-intelligent p... (read more)

It's been years since I've talked to anyone working on this technology, but IIRC one of the issues was that in principle you could prevent the lag from leading to bad data that kills you if the pump could also provide glucagon, but there was no way to make glucagon shelf-stable enough to have in a pump. Apparently that changed as of 2019/2020, or is in the process of changing, so maybe someone will make a pump with both.

3 · Kenoubi · 4mo
I mean, just lag, yes, but there's also plain old incorrect readings. But yes, it would be cool to have a system that incorporated glucagon. Though diabetics' bodies still produce glucagon AFAIK, so it'd really be better to just have something that senses glucose and releases insulin the same way a working pancreas would.

Thanks! I've only just started reading and there's really good stuff here.

My own take: in order for the zeitgeist to be optimistic on progress, it has to seem possible for things to get better. And for things to get better, it has to be possible for them to be good. But in most forums, I find, it's almost impossible to call anything good without being torn to shreds from multiple sides. We've raised the bar beyond what mere mortals can achieve, and retroactively damn the past's achievements by applying standards that even now are very far from universal. I... (read more)

Well, at some level, most of them. But which may be limiting your own energy is going to vary from person to person. Vitamin D is a common one, I take it too. For me, taking fish oil, zinc, and a B complex (with food, they make me nauseous on an empty stomach, and no mega-doses) are also helpful. So is proper hydration. For me, that means at least 100 oz of fluids a day, and I find it helpful if it has a splash of citrus or other juice in it - the little bit of sugar seems to matter, ditto for it being helpful for me to eat high water content fruits and ve... (read more)

On muscle knots - whatever they are - it isn't just a difference in the experience of those who have them, but also those massaging them. For me they've always been obvious. When giving a massage, there are relaxed muscles, tense muscles, and knots, and these are three very different feelings regardless of whether I'm massaging with fingers, palms, knuckles, elbows, or otherwise. It's very clear that a knot-feeling-recipient and I almost always agree on the locations of knots, or their absence (the exception seems to be knots that are "deeper" under an al... (read more)

Is there anything about the way schizophrenia is (or used to be) diagnosed that would make it harder for the congenitally blind to get diagnosed? I ask because I know someone, completely deaf from birth (and who only learned sign language as an adult, not sure if that makes a relevant difference in terms of language processing), who for a long time couldn't get treatment for (and never got a formal diagnosis of) schizophrenia on account of a lack of auditory hallucinations or hearing voices.

3 · Steven Byrnes · 5mo
Thanks! Yeah, I'm not an expert, but if God / Omega told me with absolute certainty that there were somewhat fewer documented congenitally-blind schizophrenics than expected, I would definitely start brainstorming explanations like "Maybe schizophrenia presents a bit differently in congenitally-blind people, making it hard to diagnose?" or "Maybe blind schizophrenics are less likely to wind up seeing a psychiatrist for some reason?" or things like that. I don't think those kinds of things would amount to an orders-of-magnitude reduction in documented congenitally-blind schizophrenics. But if we're trying to explain a 50% reduction or whatever, sure, seems possible.

The thing is, I don't think anybody is claiming that we're trying to explain a 50% reduction (?). These people [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3615184/] seem to be saying that there are orders of magnitude fewer congenitally-blind schizophrenics than expected. Whereas my tentative read of the evidence is: "There could be anything from 10× fewer to 3× more congenitally-blind schizophrenics than expected from chance.", i.e. there might not be any reduction to explain in the first place.

Out of curiosity, how many serious accidents have there been at that intersection at that time? Not that I'd expect that to change the reply you got.

I think stories like this are unfortunately common in many places. Odd-seeming divisions of power and responsibility with no repercussions for failing to act create bizarre-seeming planning decisions. I have a family friend who has spent a decade repeatedly trying to have the city remove or prune a tree in front of their house. It has regularly dropped large branches on their driveway (recently crushing the ro... (read more)

5ChristianKl7mo
It sounds to me like "reporting it" is not the way to actually get change. A better way might be to contact local politicians (on the city level) or maybe to ask a lawyer to write a threatening letter to the authorities. Part of being a democracy is that you can contact your local politicians with problems if the government works badly. Local politicians get a lot less citizen engagement than those at a higher level and are thus more willing to write the necessary emails to put some pressure on the bureaucrats in question.

Oh for sure (although this isn't just a 'first time we discuss it' response, it happens multiple times with people we're fairly close with, people who we've talked to about it throughout over a year and a half of planning). Most of the people who don't have that reaction know at least one person who has already done it, so they have a reference point. Many who do have no reference point other than Christmas Vacation or that Robin Williams RVing movie. 

And for a lot of people it would be terrible! No arguments there, there are definitely tradeoffs. I f... (read more)

I've always loved to cook, but for a long time mostly neglected ingredient quality, and gradually gained weight. Cleaning up my diet was a gradual shift over the course of years, with a few big bursts of effort focused on improving specific aspects (most recently in January 2020, I eliminated most refined grains, refined sugar, and processed oils). I do have defined exceptions: I don't try to apply the same standards to food I don't make myself at home, for example. Now two years later I'm traveling full time and using a different set of grocery stores eve... (read more)

you can change the plan again!

 

This is one point that, while obvious, seems hard for a lot of people to notice in practice. As an example, my wife and I recently sold our house and are RVing full time. We've noticed (and are told this is common) that many times we had conversations that went something like this: 

"Oh, what's your destination?" 

"We don't have one, but for the next few months we're heading vaguely [direction]." 

"Oh, so you're going to [place]?" 

"No, it's not a trip/vacation, it's just life, and a few weeks after we ge... (read more)

6 · Said Achmiz · 7mo
While I completely agree with your broad point (i.e., your last paragraph), I think that part of the reason for the reactions you’re describing is that, to most people, living in an RV and being on the road as a potentially long-term lifestyle (as opposed to just a way of traveling to some location) naturally seems like it would be so terrible, so unlike anything that they’d ever want to do, much less choose to do of their own free will, that they do not easily generate “these people are just doing this as their baseline lifestyle, not as a trip or anything” as a hypothesis, much less the first hypothesis. (I can tell you that I am very much on your side on the subject of “each incremental planning decision is a consideration of a set of potential options”, etc., and have had arguments with people in my life about it, and yet I would very likely find myself in the role of your example interlocutor, in such a conversation as you describe!)
2 · Valentine · 7mo
This is a beautiful example. Thank you. It makes me think of this Irish song [https://www.youtube.com/watch?v=nUXh1SFsXW4].

Assuming the relevant area of science already exists, yes. Recurse as needed, and there is some level of goal for which generic rationality is a highly valuable skillset. Where that level is depends on personal and societal context.

3 · TAG · 7mo
That's quite different from saying rationality is a one size fits all solution.

real deontologists 

I think you're right in practice, but the last formal moral philosophy class I took was Michael Sandel's intro course, Justice, and it definitely left me with the impression that deontologists lean towards simple rules. I do wonder, with the approach you outline here, if there's a highest-level conflict-resolving rule somewhere in the set of rules, or if it's an infinite regress. I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.

It doesn't actually take much time or effort to think to yourself

... (read more)
4 · Adam Zerner · 7mo
Yeah I think so too. I further suspect that a lot of the ethical theories end up looking consequentialist when you dig deep enough. Which makes me wonder if they actually disagree on important, real-world moral dilemmas. If so, I wish that common intro-to-ethics types of discussions would talk about it more.

I suspect we just don't see eye-to-eye on this crux of how costly this sort of deliberation is. But I wonder if your feelings change at all if you try thinking of it as more of a spectrum (maybe you already are, I'm not sure). Ie, at least IMO, there is a spectrum of how much effort you expend on this conscious deliberation, so it isn't really a question of doing it vs not doing it, it's more a question of how much effort is worthwhile.

Unless you think that in practice, such conversations would be contentious and drag on (in cultures I've been a part of this happens more often than not). In that scenario I think it'd be best to have simple rules and no/very little deliberation.

One thing that has long surprised me about the strict Kantian rule-following point of view is the seeming certainty that the rule needs to be a short sentence, on the length scale of "Thou shalt not kill." (And yes, I see it as the same sort of error that many people make who think there's a simple utility function we could safely give an AGI.) My POV makes more of a distinction along the lines of axiology/morality/law, where if you want a fundamental principle in ethics, one that should never be violated, it's going to be axiological and also way too complic... (read more)

2 · Adam Zerner · 7mo
I think this is a misconception actually. In an initial draft for this post, I submitted it for feedback and the reviewer, who studied moral philosophy in college, mentioned that real deontologists have a) more sensible rules than that and b) have rules for when to follow which rules. So eg. "Thou shalt not kill" might be a rule, but so would "Thou shalt save an innocent person", and since those rules can conflict, there'd be another rule to determine which wins out.

To make sure I am understanding you correctly, are you saying that each class is choosing to simplify things, trading off accuracy for speed? I suppose there is a tradeoff there, but I don't think it falls on the side of simplification. It doesn't actually take much time or effort to think to yourself or to bring up in conversation something like "What would the rule consequentialist rules/guidelines say? How much weight do they deserve here?"
8 · Dagon · 7mo
Relatedly, it's about the durability and completeness of the ruleset. I'm a consequentialist, and I get compatibility by relabeling many rules as "heuristics" - this is not a deontologist's conception, but it works way better for me.

Better hardware reduces the need for software to be efficient to be dangerous. I suspect on balance that yes, this makes development of said hardware more dangerous, and that not working on it can buy us some time. But the human brain runs on about 20 watts of sugar and isn't anywhere near optimized for general intelligence, so we shouldn't strictly need better hardware to make AGI, and IDK how much time it buys.

Also, better hardware makes more kinds of solutions feasible, and if aligned AGI requires more computational capacity than unaligned AI, or if bet... (read more)

I would offer that any set of goals given to this AGI would include the safety-concerns of humans.  (Is this controversial?)  

If anyone figures out how to give an AGI this goal, that would mean they know how to express the entire complex set of everything humans value, and express it with great precision in the form of mathematics/code without the use of any natural language words at all. No one on Earth knows how to do this for even a single human value. No one knows how to program an AI with a goal anywhere near that complex even if we did... (read more)

3 · Eugene D · 8mo
Strictly speaking about superhuman AGI: I believe you summarize the relative difficulty / impossibility of this task :) I can't say I agree that the goal is void of human values, though (I'm talking about safety in particular--not sure if that makes a difference?)--it seems impractical right from the start? I also think these considerations seem manageable, though, when considering the narrow AI that we are producing as of today. But where's the appetite to continue on the ANI road? I can't really believe we wouldn't want more of the same, in different fields of endeavor...

The fact that the statement is controversial is, I think, the reason. What makes a world-state or possible future valuable is a matter of human judgment, and not every human believes this. 

EY's short story Three Worlds Collide explores what can happen when beings with different conceptions of what is valuable, have to interact. Even when they understand each other's reasoning, it doesn't change what they themselves value. Might be a useful read, and hopefully a fun one.

1 · Kerrigan · 1mo
I'll ask the same follow-up question to similar answers: Suppose everyone agreed that the proposed outcome above is what we wanted. Would this scenario then be difficult to achieve?

I think the observation that it just isn't obvious that ems will come before de novo AI is sufficient reason to worry about the problem in the case where they don't. Possibly while focusing more capabilities development towards creating ems (whatever that would look like)?

Also, would ems actually be powerful and capable enough to reliably stop a world-destroying non-em AGI, or an em about to make some world-destroying mistake because of its human-derived flaws? Or would we need to arm them with additional tools that fall under the umbrella of AGI safety anyway?

Scott Alexander's short story, The Demiurge's Older Brother, explores a similar idea from the POV of simulation and acausal trade. This would be great for our prospects of survival if it's true-in-general. Alignment would at least partially solve itself! And maybe it could be true! But we don't know that. I personally estimate the odds of that as being quite low (why should I assume all possible minds would think that way?) at best. So, it makes sense to devote our efforts to how to deal with the possible worlds where that isn't true.

In different ways from different vantage points, I've always seen saving for retirement as a way of hedging my bets, and I don't think the likelihood of doom changes that for me.

Why do I expect I'll want or have to retire? Well, when I get old I'll get to a point where I can't do useful work any more... unless humans solve aging (in which case I'll have more wealth and still be able to work, which is still a good position), or unless we get wiped out (in which case the things I could have spent the money on may or may not counterfactually matter to me, d... (read more)
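One way to see the hedge is as a small expected-value table. A minimal sketch in Python, with purely hypothetical probabilities and payoffs (none of these numbers are from the comment):

```python
# Hypothetical scenario weights and payoffs, for illustration only.
scenarios = {
    # name: (probability, utility_if_saving, utility_if_not_saving)
    "normal retirement": (0.5, 1.0, -1.0),   # savings matter a lot
    "aging solved":      (0.2, 0.5,  0.0),   # wealth still useful, can keep working
    "doom":              (0.3, -0.2, 0.0),   # forgone consumption, money never spent
}

def expected_utility(saving: bool) -> float:
    """Expected utility of the save / don't-save policy across scenarios."""
    return sum(p * (u_save if saving else u_skip)
               for p, u_save, u_skip in scenarios.values())

print(expected_utility(True))   # ~0.54
print(expected_utility(False))  # -0.5
```

With numbers like these, saving wins in expectation unless the doom probability is pushed very close to 1, which is the bet-hedging point.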

Someone else already commented on how human intelligence gave us a decisive strategic advantage over our natural predators and many environmental threats. I think this cartoon is my mental shorthand for that transition. The timescale is on the order of 10k-100k years, given human intelligence starting from the ancestral environment.

Empires and nations, in turn, conquered the world by taking it away from city-states and similarly smaller entities in ~1k-10k years. The continued existence of Singapore and the Sentinel Islanders doesn't change the fact that ... (read more)

I think one part of the reason for confidence is that any AI weak enough to be safe without being aligned, is weak enough that it can't do much, and in particular it can't do things that a committed group of humans couldn't do without it. In other words, if you can name such an act, then you don't need the AI to make the pivotal moves. And if you know how, as a human or group of humans, to take an action that reliably stops future-not-yet-existing AGI from destroying the world, and without the action itself destroying the world, then in a sense haven't you solved alignment already?

Second, the line of argument runs like this: Most possible futures (a supermajority) are bad for humans. A system that does not explicitly share human values has arbitrary values. If such a system is highly capable, it will steer the future into an arbitrary state. As established, most arbitrary states are bad for humans. Therefore, with high probability, a highly capable system that is not aligned (i.e., does not explicitly share human values) will be bad for humans.

I'm not sure if I've ever seen this stated explicitly, but this is essentially a thermodynamic argument.... (read more)
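One way to cash the analogy out (my formalization, not anything stated in the comment):

```latex
P(\text{good}) \approx \frac{|\Omega_{\text{good}}|}{|\Omega|}
= e^{\,S_{\text{good}} - S}, \qquad
S = \ln |\Omega|, \quad S_{\text{good}} = \ln |\Omega_{\text{good}}|
% If \Omega is the space of reachable futures and \Omega_good the tiny
% subset compatible with human values, an optimizer steering toward a
% value-agnostic target lands in a good state with exponentially small
% probability, just as a random microstate almost surely sits in the
% maximum-entropy macrostate.
```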

For one, I don't think organizations of humans, in general, do have more computational power than the individual humans making them up. I mean, at some level, yes, they obviously do in an additive sense, but that power consists of human nodes, each not devoting their full power to the organization because they're not just drones under centralized control, and with only low bandwidth and noisy connections between the nodes. The organization might have a simple officially stated goal written on paper and spoken by the humans involved, but the actual incentiv... (read more)

This depends on how far outside that human's current capabilities, and that human's society's state of knowledge, that thing is. For playing basketball in the modern world, sure, it makes no sense to study physics and calculus, it's far better to find a coach and train the skills you need. But if you want to become immortal and happen to live in ancient China, then studying and practicing "that thing" looks like eating specially-prepared concoctions containing mercury and thereby getting yourself killed, whereas studying generic rationality leads to the wh... (read more)

1 · TAG · 7mo
If you want to be good at something specific that doesn't exist yet, you need to study the relevant area of science, which is still more specific than rationality.

In some sense I would think it's almost tautologically true that faster capabilities research shortens the timeline in which alignment and strategy researchers do their own work. 

I think there's a really great core to this post, but I don't know what to do with it, or how to regard it.

For one, this problem seems to be largely inescapable, though of course we can work to reduce its perpetuation. Where, exactly, can I go to live on unstolen land? History is very long, and I don't know when the first moment was that our ancestors could be said to have any sort of right to the land they lived on (especially where said ancestors' descendants still exist and live there), which the other predators and megafauna they displaced did not have... (read more)

I agree. Why the ICC was only investigating Africa matters. One part is that the ICC doesn't have enough perceived legitimacy or authority for stronger countries to submit to investigations; they rely on their own institutions instead, and the ICC doesn't have the power to override that. I highly doubt anyone in 2002 honestly expected that, say, the US would support it investigating US military actions in the Middle East (and providing access to classified records for such an investigation), and letting them investigate and punish Bush and Rumsfeld for war... (read more)

Does what I want include wanting other people to get what they want? Often, yes. Not always. I want my family and friends and pets to get more of what they specifically want, and a large subset of strangers to mostly get more of what they want. But there are people who want to hurt others, and people who want to hurt themselves, and people who want things that reduce the world's capacity to support people getting what they want, and I don't want those people to get what they want. This will create conflict between agents, often value-destroying conflict, and zero or negative sum competitions, which I nevertheless will sometimes participate in and try to win.

Rome itself is a possible analogous example, in that the failure of distribution networks caused critical food shortages. That, in turn, was a problem because the economic model of the empire depended on continuous expansion to sustain itself, and the heart of the empire was incapable of supporting itself with locally available resources. Rome went from a city of ~1 million to tens of thousands, and didn't reach that point again until the 1800s. Extend the analogy to a world with no more frontier to expand into (except the technological frontier) and... (read more)

Several years ago I had a conversation with someone that helped them predict other people's behavior much better thereafter: most people are not consequentialists, and are not trying to be, they mostly do what is customary for them in each situation for the relevant part of their culture or social group. 

Your discussion in the post seems premised on the idea that people are trying to reason about consequences in specific cases at all, and I don't think that's usually true. Yes, very few rules are truly absolute, which is why most people balk at Kant's... (read more)

Your framing of the illusion of absolute rules as "polite social fictions" is quite brilliant, because I think that's what Scott Alexander probably wanted to convey. It comes to mind that such social fictions may be required for people to trust in institutions, and strong institutions are generally credited as a core factor for social progress. Take for example the police - it is an extremely useful social fiction that "the police is your friend and helper", as they say in German, even though they often aren't, particularly not to marginalized social group... (read more)

Similarly, you can get a master's degree in 2 years of courses, but a PhD requires an extended apprenticeship. The former is to teach you a body of knowledge, the latter is to teach you how to do research to extend that knowledge.

 

I'd also add that in his first lesson on metarationality, the one that opens with the Bongard problems, Chapman explicitly acknowledges that he is restricting his idea of a system to "a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow," and that meta... (read more)

1.2 is (probably?) an illusion. A universe of finite total energy and finite size should be subject to the quantum mechanical version of the Poincaré Recurrence Theorem. After some number of cycles it would return arbitrarily close to its previous Big Bang starting state. Might require vast amounts of time, but still.
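For reference, the quantum recurrence theorem (Bocchieri and Loinger, 1957) says that for any system with a discrete energy spectrum,

```latex
|\psi(t)\rangle = \sum_n c_n \, e^{-i E_n t/\hbar} |n\rangle
\quad\Longrightarrow\quad
\forall \varepsilon > 0 \ \text{there are arbitrarily large } t \ \text{with} \
\bigl\| \, |\psi(t)\rangle - |\psi(0)\rangle \, \bigr\| < \varepsilon
% A universe of finite size and energy plausibly has such a discrete
% spectrum, which is the premise the comment leans on; the recurrence
% times involved are stupendously long.
```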

This leaves me with 1.1, a guarantee that only this single universe will ever exist, or a coin toss between 2.1 and 2.2, which guarantee a wide variety of different universes ever existing, whether I'm in them or not. I choose 2. 

No... (read more)

A week ago I wrote "The Russian Armed Forces is among the three most capable militaries in the world". Since then, I have been astonished by the incompetence of the Russian Armed Forces.

 

Has this changed your ranking of world military capabilities?

8 · lsusr · 1y
It has lowered my confidence enough that I would not write the sentence again if I were doing so today. Military capabilities are multidimensional. There is still an axis on which the Russian Armed Forces rank #3, but there are other axes where they don't.

But as they do, users will favour the products that give them the best experience 

 

This is one point I find difficult to believe, or at least difficult to find likely. Most people, who are not unusually savvy, already give much more credence to ads, and much less credence to the degree to which they are actually affected by ads, than they should. Why should that reverse as ads get even better at manipulating us? Why should I expect people to start demonstrating the level of long-term thinking and short-term impulse control and willingness to l... (read more)

2 · Viliam · 1y
Possibly relevant: Siren worlds and the perils of over-optimised search [https://www.lesswrong.com/posts/nFv2buafNc9jSaxAH/siren-worlds-and-the-perils-of-over-optimised-search]
3 · Joe Kwon · 1y
I'm also not sold on this specific part, and I'm really curious about what things support the idea. One reason I don't think it's good to rely on this as the default expectation, though, is that I'm skeptical about humans' abilities to even know what the "best experience" is in the first place. I wrote a short rambly post touching on, in some part, my worries about online addiction: https://www.lesswrong.com/posts/rZLKcPzpJvoxxFewL/converging-toward-a-million-worlds

Basically, I buy into the idea that there are two distinct value systems in humans: one subconscious system where the learning is mostly from evolutionary pressures, and one conscious/executive system that cares more about "higher-order values" which I unfortunately can't really explicate. Examples of the former: craving sweets, addiction to online games with well-engineered artificial fulfillment. Example of the latter: wanting to work hard, even when it's physically demanding or mentally stressful, to make some type of positive impact for broader society. And I think today's modern ML systems are asymmetrically exploiting the subconscious value system at the expense of the conscious/executive value system.

Even knowing all this, I really struggle to overcome instances of akrasia, controlling my diet, not drowning myself in entertainment consumption, etc. I feel like there should be some kind of attempt to level the playing field, so to speak, with which value system is being allowed to thrive. At the very least, transparency and knowledge about this phenomenon for people who are interacting with powerful recommender (or just general) ML systems, and in the optimal case, allowing complete agency and control over what value system you want to prioritize, and to what extent.

But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a

... (read more)

I typically dream in color, but have occasionally dreamed in black and white. I have also occasionally dreamed in cartoon-ish environments, or 90s/2000s computer graphics-like animation. These were very distinct experiences when they happened, there was never any doubt in my mind on waking that the dream I'd just had was different than usual for me.

 

I'm curious for anyone who has lucid dreams: is this a change you can make mid-dream?

I do have a growing sympathy with the idea that having a case of COVID means you need to stay in the leper colony for a while. BUT I'm not sure where one draws the line on that.

I for one would like to have at least a semi-quantitative answer to how much risk we're (socially, legally) permitted to expose each other to as part of normal life, instead of an inconsistent, ad hoc set of rules and expectations. 

For example, you can drive, but only licensed, and not when drunk: sensible. 

By comparison, you hav... (read more)

Whenever someone asks a poll question about how much people would be willing to pay for something, I wonder how much of the answer is liquidity constrained. People who live paycheck to paycheck (a majority of Americans) who literally wouldn't be able to pay a week's income for any new expense might just not engage with the counterfactual in quite the way that it was intended.

There's also an asymmetry between gains and losses, partly due to prospect theory, and partly due to decreasing marginal utility. I bet a lot of people would answer differently if they were asked what they would choose if given the choice between receiving the money vs. going back to the way things were before.
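For reference, the Tversky and Kahneman (1992) value function, which captures both effects at once:

```latex
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0 \\
-\lambda \, (-x)^{\beta} & x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
% Losses loom roughly 2.25x larger than equal gains, and the exponents
% below 1 encode diminishing sensitivity, so "receive the money" and
% "go back to the way things were before" get weighted very differently.
```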

"Man whose roommates wear t-shirts in winter, 
would better calm down with the goddamn heater."

 

I agree with the idea behind the advice, but for me it would still be bad advice.

I was the kid who wore shorts in winter in middle school. I just feel warm (which makes me sleepy) at temperatures others find too cold. Then in college I had a roommate from Hawaii, and later married a woman with a circulatory issue, so I've always kept on running the heater and wearing t-shirts. (And upgrading the insulation when feasible).

Honestly there are winter days I... (read more)

I've been married 6 years (together for 9), and my wife is not a rationalist, and wouldn't be interested. OTOH, she is by far the most intuitively metarational person I've ever met, as well as being highly conscientious, context-aware, and excellent at predicting other people's thoughts and actions. We each use different toolboxes that have turned out to often reach the same conclusions, and otherwise complement one another's insights well. Early on we had lots of deep discussions about lots of topics, as one does, and just like any relationship, built up ... (read more)
