strangepoop's Posts


strangepoop's Comments

Is Rationalist Self-Improvement Real?

While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").

Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":

  • if modeling and solving akrasia is, like diet, a hard problem on which even "experts" barely have an edge, and, importantly, the things that do work seem to be very individual-specific, making it quite hard to stand on the shoulders of giants
  • if a large percentage of the people who've found and read through the Sequences etc. have done so only because they had very important deadlines to procrastinate on

...then on average you'd see akrasia over-represented in rationalists. Add to this the fact that akrasia itself makes manually aiming your rationality skills at what you want harder. That can leave it stable even under very persistent efforts.

Sayan's Braindump

I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.

An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, i.e. has benefited someone in an exchange not considered a true exchange of value, so that purists can refuse to accept those. Purist communities, if large enough, would allow such non-contaminated tokens to remain stable.
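A minimal sketch of how such a Contaminated flag could propagate. The `Token` class, `transfer` method, and channel names are all invented for illustration; a real token system would need tamper-proof history, but the purist's check itself is simple:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Token:
    # Channels (exchange venues) this token has passed through, oldest first.
    history: List[str] = field(default_factory=list)

    def transfer(self, channel: str) -> "Token":
        # Each transfer appends the channel to the token's provenance record.
        return Token(history=self.history + [channel])

def is_contaminated(token: Token, illegal_channels: Set[str]) -> bool:
    # A token is Contaminated if any hop in its history was "illegal".
    return any(ch in illegal_channels for ch in token.history)

clean = Token().transfer("local-coop")
tainted = clean.transfer("grey-market")
print(is_contaminated(clean, {"grey-market"}))    # False
print(is_contaminated(tainted, {"grey-market"}))  # True
```

Note that contamination here is monotone: once a token touches an illegal channel, nothing can clean it, which is what lets purist refusal be a stable policy.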

Maybe a better question to ask is "are our preference orderings only partial, such that we would benefit from many isolated markets?", because if so, you wouldn't have to worry about enforcing anything: many different currencies would automatically come into existence and be stable.

Of course, more generally, you wouldn't quite have complete isolation, but different valuations of goods in different currencies, without "true" fungibility. I think it is quite possible that our preference orderings are in fact partial, and that the current one-currency valuation of everything might be improved.
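A toy illustration of the partial-order point (my own example, not from the comment): score bundles on two incommensurable axes, and let one bundle dominate another only when it is at least as good on every axis and strictly better on one. Dominance then gives only a partial order, so some bundles are simply incomparable, which is exactly the situation a single price scale papers over:

```python
def dominates(a, b):
    """a dominates b iff a is >= on every axis and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

art  = (3, 1)   # (aesthetic value, caloric value) -- invented axes
food = (1, 3)
feast = (4, 4)

print(dominates(art, food), dominates(food, art))  # False False: incomparable
print(dominates(feast, art))                       # True: comparable pairs still exist
```

Forcing a single currency amounts to choosing some total extension of this partial order, and any such choice throws away the information that `art` and `food` were never ranked against each other in the first place.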

strangepoop's Shortform

The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside, they just look like how the world is/would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".

Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? That's not at all clear if you're using simple, territory-free, un-asymmetric rules like "lower". Like any other attempt at truth-finding, it is not magic. It requires thermodynamic work.

The thing is, the payoff is rather amazing. You can just get down to work. As soon as you're free of a constant stream of abuse from beliefs previously housed in your head, you can Choose without Suffering.

The problem is, I'm not sure how to strategically go about doing this, other than using my full brain with Constant Vigilance.

Coda: A large portion of the LW project (or at least, more than a few offshoots) is about noticing you have beliefs that respond to incentives other than pure epistemic ones, and trying not to reload when shooting your foot off with those. So unsurprisingly, there's a failure mode here: when you publicly declare really low expectations (eg "everyone's an asshole"), it works to challenge people, urges them to prove you wrong. It's a cool trick to win games of Chicken but as usual, it works by handicapping you. So make sure you at least understand the costs and the contexts it works in.

Bíos brakhús

I think a counterexample to "you should not devote cognition to achieving things that have already happened" is being angry at someone who has revealed they've betrayed you, which might acause them to not have betrayed you.

strangepoop's Shortform

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(this is also why metarationality has always felt like it only provides comments where Eliezer would've just given you the code)

The part that doesn't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as "all models are false, some models are useful", but I'm not sure that's the right perspective.

If physics is many-worlds, does ethics matter?

I want to ask this because I think I missed it the first few times I read Living in Many Worlds: Are you similarly unsatisfied with our response to suffering that's already happened, like how Eliezer asks, about the twelfth century? It's boldface "just as real" too. Do you feel the same "deflation" and "incongruity"?

I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.

But that's not so at all: once you've redone your ontology, where the naive idea of time isn't necessarily a fundamental thing and thinking in terms of causal links comes a lot closer to how reality is arranged, it's not a stretch at all. If anything, it follows that you must try and think and feel correctly about the generalized past after being given this information.

Of course, you might modus tollens here.

Go Do Something

Soares also did a good job of impressing this in Dive In:

In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.

The idea doesn't have to be good, and it doesn't have to be feasible, it just needs to be the best incredibly concrete plan that you can come up with at the moment. Don't worry, it will change rapidly when you start slamming it into reality. The important thing is to come up with a concrete plan, and then start executing it as hard as you can — while retaining a reflective state of mind updating in the face of evidence.

The concept of evidence as humanity currently uses it is a bit of a crutch.

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.

Being able to describe what you're doing, ie usefully compress existing strategies-in-use, is probably going to be helpful regardless of level of intelligence because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.

The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence

To further elaborate on point 4: your example of the string "1" being a conscious agent because you can "unpack" it into an agent really feels like it shouldn't count: you're just throwing away the "1" and replaying a separate recording of something that was conscious. This sounds about as much of a non sequitur as "I am next to this pen, so this pen is conscious".

We could, however, make it more interesting by making the computation depend "crucially" on the input. But what counts?

Suppose I have a program that turns noise into a conscious agent (much like generative models can turn a noise vector into a face, say). If we now seed this with a waterfall, is the waterfall now a part of the computation, enough to be granted some sentience/moral patienthood? I think the usual answer is "all the non-trivial work is being done by the program, not the random seed", as Scott Aaronson seems to say here. (He also makes the interesting claim of "has to participate fully in the arrow of time to be conscious", which would disqualify caching and replaying.)

But this can be made a little more confusing, because it's hard to tell from the outside which bit is non-trivial: suppose I save and encrypt the consciousness-generating program. The result looks like random noise from the outside, and will pass all randomness tests. Now another program, holding the stored key, decrypts and runs it. From the outside, you might disregard the random-seed-looking thingy and instead try to analyze the decryption program, thinking that's where the magic is.
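A toy version of that scenario (all specifics invented): XOR the "program" with a one-time pad. The ciphertext alone is statistically indistinguishable from uniform noise, yet key plus ciphertext deterministically reconstruct the program, so an outside observer has no local way to tell which input carries the "crucial" structure:

```python
import os

program = b"def agent(): ..."   # stand-in for the consciousness-generating program
key = os.urandom(len(program))  # one-time pad, sampled uniformly at random

# Encrypt: byte-wise XOR. The ciphertext passes any randomness test,
# since XOR with a uniform pad yields a uniform distribution.
ciphertext = bytes(p ^ k for p, k in zip(program, key))

# Decrypt: XOR with the same key recovers the program exactly.
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
print(decrypted == program)  # True
```

Symmetrically, the key and the ciphertext are interchangeable here: either one alone looks like noise, and either one can be framed as "the seed" while the other plays "the program", which is what makes the Seeding/Decrypting boundary slippery.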

I'd love to hear about ideas to pin down the difference between Seeding and Decrypting in general, for arbitrary interpretations. It seems within reach, and like a good first step, since the two lie on roughly opposite ends of a spectrum of "cruciality" when the system breaks down into two or more modules.

The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence

Responses to your four final notes:

1. This is, as has been remarked in another comment, pretty much Dust theory. See also Moravec's concise take on the topic, referenced in the Dust theory FAQ. Searching for it on LW may also turn up previous discussions.

2. "that was already there"? What do you mean by this? Would you prefer to use the term 'magical reality fluid' instead of "exists"/"extant"/"real"/"there" etc, to mark your confusion about this? If you instead feel like you aren't confused about these terms, please provide (a link to) a solution. You can find the problem statement in The Anthropic Trilemma.

3. Eliezer deals with this using average utilitarianism, depending on whether or not you agree with rescuability (see below).

4. GAZP vs GLUT talks about the difference between a cellphone transmitting information of consciousness vs the actual conscious brain on the other end, and generalizes it to arbitrary "interpretations". That is, there are parts of the computation that are merely "interpreting", informing you about consciousness and others that are "actually" instantiating. It may not be clear what exactly the crucial difference is yet, but I think it might be possible to rescue the difference, even if you can construct continuums to mess with the notion. This is of course deeply tied to 2.


It may seem that my takeaway from your post is mostly negative; this is not the case. I appreciate this post: it was very well organized despite tackling some very hairy issues, which made it easier to respond to. I do feel like LW could solve this somewhat satisfactorily; perhaps some people already have, and either don't bother pointing it out to the rest of us or are lost in the noise?
