Carmex's Shortform

by Carmex · 2nd Oct 2021 · 43 comments

How do I start acausally cooperating with my future self? The spiritualists seem to call this "ascending to 4D energy". How about with my counterfactual selves? The equivalent of "ascending to 5D energy" in spiritualist speak. I need practical and specific instructions.

If you don't know what you expect future/counterfactual versions of you to want, it will be hard to cooperate, so I recommend regularly spending time reflecting on what they might want, especially in relation to things you have done recently. Reflect on the actions you have taken recently (consider both the most trivial and the most seemingly important), and ask yourself how future and counterfactual versions of you would react to finding out that (past) you had done them. If you don't get a gut feeling that what you did was bad, test it by constructing and simulating a specific counterfactual version of yourself that would react in a maximally horrified way, then reflect on what factors made that version of you horrified and on how likely those or similar factors are to arise.

You could spend ~7-10 mins each day doing this reflection, or ~30 mins each week, to develop a habit of thinking in this way. I'd recommend starting with the daily version, so you really get used to it, before maybe moving to the weekly version; but you can start with the weekly version if that's more convenient, and get good results from that, too.

Also remember that the way other humans will treat counterfactual versions of you depends on their predictions of what you will do in this branch of reality. So try to act in such a way that, if the people interacting with counterfactual_you predicted or learned that you act that way, they would be maximally willing to do what counterfactual_you wants them to do.

Should citizens be allowed to trade away (part of) their citizenship? For ex, we lower the minimum threshold required to be a citizen to 0.9 of a citizenship; that frees up 0.1 of your citizenship to be traded away as you like while you still retain all the rights and privileges currently afforded to you. What will 0.1 of a citizenship trade for on the open market? Should we treat a noncitizen who buys 9 * 0.1 of a citizenship as a full-fledged citizen? What should we offer to those who buy more than 0.9 of a citizenship? For ex, should firearm privileges require 2.5 citizenships to be locked away in an escrow somewhere?
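To make the mechanics concrete, here is a toy sketch of a fractional-citizenship ledger in Python. The 0.9 threshold and the 2.5-citizenship firearm escrow are just the numbers floated above, the integer "thousandths of a citizenship" unit is my own choice to avoid floating-point dust, and none of this is meant as a worked-out policy:

```python
from dataclasses import dataclass

# Work in thousandths of a citizenship so fractions stay exact.
FULL_CITIZEN_THRESHOLD = 900    # 0.9 of a citizenship, as floated above
FIREARM_ESCROW = 2500           # 2.5 citizenships, as floated above

@dataclass
class Holder:
    name: str
    units: int = 1000            # 1.0 citizenship by default

    def is_full_citizen(self) -> bool:
        return self.units >= FULL_CITIZEN_THRESHOLD

    def tradable(self) -> int:
        """How many units can be sold without losing full-citizen status."""
        return max(0, self.units - FULL_CITIZEN_THRESHOLD)

    def may_hold_firearms(self) -> bool:
        # reading "2.5 citizenships in escrow" as "must hold at least 2.5"
        return self.units >= FIREARM_ESCROW

def transfer(seller: Holder, buyer: Holder, units: int) -> None:
    """Sell citizenship units, refusing sales that would break the seller's full status."""
    if units > seller.tradable():
        raise ValueError("sale would drop the seller below full citizenship")
    seller.units -= units
    buyer.units += units

# A noncitizen buying 9 * 0.1 of a citizenship from nine different citizens:
buyer = Holder("newcomer", units=0)
for i in range(9):
    transfer(Holder(f"citizen_{i}"), buyer, 100)
print(buyer.is_full_citizen())   # True under these toy rules (0.9 >= 0.9)
```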

I like the idea, but it requires a definition of which citizenship rights/privileges get downscaled with less than a full 1.0 citizenship (and upscaled with increasing shares).  Can I buy myself up to 10,000 citizenships?  Or retain some rights that are important to me (freedom to travel and permanent residence/work ability) with 0.01 citizenships?

I strongly suspect that disaggregating the "rights" into tradeable licenses is a more workable mechanism than fractional citizenship.  And, of course, once it's no longer considered a "right" acquired with birth and/or naturalization, it'll stop being granted that way and become only rentable from the authority for a fee.

Interesting thought. I assume people are automatically granted 1.0 (or 0.9?) citizenships on birth in this country, but what happens when someone dies? Can you will your extra citizenship(s) to your heirs? What about your base 0.9? Can you sell below 0.9 (and no longer be a full citizen)? Are votes tied to owned citizenships?

Reminds me a bit of the premise of In Time.

Thanks for the interest!

Yes, you would be able to will your citizenship(s) away upon your death. Would it be an issue to treat it like stock holdings?

Birthright citizenship wouldn't be easy or smart to overturn. Even though the maximum supply of citizenship will go up with every birth, it will still follow a relative emission curve, since the ratio of the deceased to the living goes up over the generations. The market price of a citizenship should still go up exponentially over time. You would also be able to affect birthrates by changing how long an adolescent has to wait before they are granted fungible access to their citizenship. If birthrates are too high, then just extend the lockup period from 18 years to 21 years.

As for selling below the arbitrary-but-reasonable threshold of 0.9 of a citizenship, that would be possible if you first leave the country and provide proof of citizenship elsewhere. So if you want to retire in a country with a lower cost of living, then you would easily be able to do so. You would stop being a citizen though until you bought back in above the threshold. The 0.9 threshold can also be changed, or follow a decay curve as well. Maybe 1% lower every year? Every 10 years?

The system in In Time has a class conflict though, whereas fungible citizenship avoids class conflict. Or at least, keeps the class conflict international rather than domestic. I can imagine the post-change US Military moving into a country and seizing it just because enough of that country's residents have bought US citizenship fractional shares and so there'd be native support for the conquest.

Should rational behavior maximize the amount of rationality there is in the world? So for ex, not exercising is not rational since you can't be as rational if you're unhealthy.

This surprisingly seems like a plausible reference to the concept of rationality, even though it pattern-matches inflationary use of the word; see Rationality: Appreciating Cognitive Algorithms. If exercising does improve cognition and health, that should help, for example, with the ability to be agentic, although the effect is too general to say that it's specifically about that. Promotion of personal rationality in the world and, say, development of better coordination tech admit some sort of collectivist version of rationality, for example making society more agentic. (The latter is not necessarily a good thing; a more rational society or organization might more reliably fail at being aligned with human values.)

It does seem contradictory to use your agency to decrease the amount of agency there is in the world. So given a set of options, the correct option is the one that increases how many options (you) have.

given a set of options, the correct option is the one that increases how many options (you) have

It's useful for a wide variety of goals (see instrumental convergence), but not for every goal. Thus it's usually the case, but it's not contradictory that in unusual circumstances it's not the case.

It does seem contradictory to use your agency to decrease the amount of agency

This phrasing usually indicates agreement, but it isn't actually agreement with my comment, since the comment you replied to doesn't express this point.

Rationality is not about "should". It's not a value system.

In the instrumental reading of "should", rational behavior should promote use of good cognitive algorithms. It's a bit inflationary to label any behavior that is not directly a habit of cognition "rational", but if anything is to be labeled that way, it's the things that lead to more systematic use of rational habits of cognition. This is in contrast to beliefs and actions merely generated as a result of using rational habits of cognition, calling those "rational" is obscenely inflationary.

I'm not sure why an instrumental reading of "should" would result in "should" not being about creating obligations. In my experience, most of the time when people use the word "should" and then say they aren't speaking about obligations, they aren't really clear about what they are saying.

In the case of the OP I expect that he thinks about whether there's an obligation to exercise.

Most concepts can be thought of as purposes, inducing normativity over things in the vicinity of the concept, pointing them in the direction of becoming more central examples of it. So a sphere shouldn't have bumps on it, and a guillotine should be sharp. There is usually equivocation with the shouldness of human values because many concepts are selected for being benign, including concepts for useful designs like cars and chairs, but the sense that emphasizes the purpose of a particular concept is more specific. This way rationality the concept is a property of ingredients of cognition, while rationality the purpose advises how ingredients of cognition should change to become more rational. This is the sense of being instrumental I meant, instrumental to fitting a concept better.

The idea of concepts as purposes is relevant to non-agentic behavior, where the emphasis is on coexistence of multiple purposes, not one preference, and for continued operation of specific agent designs, including rational human cognition, where parts of the design should keep to their purpose and resist corruption from consequentialist considerations, like with beliefs chosen by appeal to consequences or breaking of moral principles for the greater good.

I'll take up the "obligatory" should. Rational behavior is consistent with behaving as if you had an obligation to behave rationally. If you weren't obligated to behave rationally, then behaving rationally could mean anything.

Rationality tells you how to maximize, not what to maximize. Health (and hence exercise) is instrumental to a wide range of goals.

Then how can something be rational if said thing causes you to not be able to be rational anymore?

If a goal is best served by destruction of rationality, that is the course of action rational cognition would advise as effective for achieving the goal.

Things in general shouldn't admit classification into rational and irrational, just as things in general shouldn't admit classification into apples and nonapples. Is temperature an apple or a nonapple? Rationality is a property of cognitive designs/faculties/habits, distinguishing those that do well in their role within the overall process. Other uses should refer back to that.

If a goal is best served by the destruction of rationality, then why isn't the very first step to achieving said goal to stop behaving rationally yourself?

"You must behave rationally to destroy rationality" seems absurd to me.

why isn't the very first step to achieving said goal to stop behaving rationally yourself?

Perhaps it is! (But also, see the rest of my comments on how using "rational" to label behavior is inflationary.)

The idea of raising the rationality waterline was prominent in the early days. It co-existed awkwardly with the self-help version of rationality.

I guess I didn't mean it in an "advocacy" sort of way. Sorry for the confusion.

If you flip a coin to decide whether to vote for A if heads or B if tails, and then A won with 60% of the votes, what are the odds that the coin flip landed heads?

50%
60%

You have an enemy who always votes against you. What are the odds that your enemy voted for the winner?

50%
40%

That's not likely to be something you can calculate, and certainly not from the given information. At the very least, you'd want to know the ratio between P(A wins with 60% of the vote | you vote A) and P(A wins with 60% of the vote | you vote B).

For large numbers of voters who are unaffected by your decision, these are likely to be very close to each other, and so the posterior odds are very close to 50% that the coin flip landed heads.
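Written out as odds (a sketch of the Bayes step, assuming heads meant voting for A):

$$\frac{P(\text{heads} \mid \text{A wins with 60\%})}{P(\text{tails} \mid \text{A wins with 60\%})} = \frac{P(\text{A wins with 60\%} \mid \text{you vote A})}{P(\text{A wins with 60\%} \mid \text{you vote B})} \times \frac{P(\text{heads})}{P(\text{tails})}$$

With a fair coin the last factor is 1, so the posterior odds are just that likelihood ratio, which is very close to 1 when your single vote barely moves the outcome.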

For smaller numbers (e.g. a board meeting) and/or where your decision may influence other people it's much more complicated. The fact that in the follow-up question you have an enemy who votes against you implies that the vote is not a secret ballot and your vote does influence at least some other people. This means that the posterior distribution needs to be taken over all sorts of social dynamics and situations.

Even so, the posterior probability of heads isn't likely to be much different from 50% except in very unusual circumstances.

I wrote a long response to a related comment chain here: https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past?commentId=jRo2cGuXBbkz54E4o

My short answer to this question is the same as Dagon's: if we're assuming a negligible probability that the election was close enough for your vote to be decisive, 50% in both cases. 

I tried to explain the conflicting intuitions in that other comment. It turned out to be one of those interesting questions that feels less obvious after thinking about it for a couple of minutes than at first glance, but I think I resolved the apparent contradictions pretty clearly in the end.

I did read through your response and felt motivated enough to code up three python scripts:
https://pastebin.com/KdgchLRt
Results:
https://imgur.com/a/pEkjBWx

TLDR: Even with 1000 voters, your own vote doesn't seem to affect the results, yet the winning bias is still there. You can try commenting out the part where your own vote is added to the total to see that the conclusion doesn't change.
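For readers who don't want to open the pastebin, here is a minimal sketch of this kind of simulation (not the actual scripts, and it conditions simply on A winning rather than on winning with exactly 60%): every other voter votes A or B with probability 1/2, your vote is set by a fair coin, and the count_my_vote flag reproduces the "comment out your own vote" experiment.

```python
import random

def trial(n_voters=1000, count_my_vote=True):
    """One simulated election. Other voters vote A (+1) or B (-1) at random;
    my own vote is decided by a fair coin (heads = A)."""
    others = sum(random.choice((1, -1)) for _ in range(n_voters))
    heads = random.random() < 0.5
    my_vote = 1 if heads else -1
    total = others + (my_vote if count_my_vote else 0)
    return heads, total

def p_heads_given_a_wins(n_trials=100_000, **kwargs):
    """Estimate P(coin landed heads | A won) by keeping only the trials A won."""
    heads_count = wins = 0
    for _ in range(n_trials):
        heads, total = trial(**kwargs)
        if total > 0:          # condition on A winning
            wins += 1
            heads_count += heads
    return heads_count / wins

if __name__ == "__main__":
    print("my vote counted:    ", p_heads_given_a_wins(count_my_vote=True))
    print("my vote not counted:", p_heads_given_a_wins(count_my_vote=False))
```

With these numbers the bias only shows up through the rare elections your vote decides or ties, so the conditioned estimate sits slightly above 50% when your vote is counted and at 50% when it isn't.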

(Sorry if I'm misreading anything; my excuse is that I'm operating on 3 hours' sleep and am not very familiar with Python syntax.)

I ran your 'regular run' version, modified to keep a count of 1-vote victories, and the results were as I would have predicted: https://imgur.com/Y17ecLq

I'm a bit confused by the 'random voter sample' version -- which scenario is that illustrating, and what's the deal with the 'myvote = random.randrange(-voters, voters)' and ' if votes*myvote > votes*votes:' lines?

So I ran the regular run again, but with your vote not being counted, and it looks like the winning bias does disappear: https://imgur.com/a/mYA0Q0Z 

The run you have a question about is meant to draw your vote from the sample of winning-side votes. You first draw a positive or negative number whose magnitude runs from 0 up to the number of voters. Then, if the squared vote total is larger than the "square" of your vote's "number", that means your vote is drawn from the larger, winning side, and vice versa. So here's an example: https://imgur.com/a/CuqOlDS

The more extreme the "votes", the more likely "myvote" will fall into the winning side.

And the third script is conditional on the positive side winning with 60% of the votes. https://imgur.com/a/KE7l2ak 

I don't think there's enough information to answer beyond the basic obvious expectation.  50% is my prior for coin flips, and unless you specify a VERY small number of voters and a known distribution of their votes, the coin flip is lost in the noise, so there's no evidence in the result.  Assuming "always" is a mathematical certainty rather than my opponent's intent (which could be misleading in the results), it must be 1 − coinflip.

Are fantasy worlds possible? I don't mean are they physically possible, but whether there exists some "fantasy world" economic/technological steady state that doesn't just devolve back into what our world is today: one where staple crops are dirt cheap, the map of the world is complete, advancement can't be stopped, etc. Basically, what are the environmental conditions necessary to stifle development and maintain scarcity of modern comforts? I think this is a Hard Problem. In fact, my intuition is that fantasy worlds don't just not exist, they don't exist even in theory. If they were to exist, then they would take active effort to maintain; they would be artificial. A hidden team of "maintainers" would have to be assigned by God to each fantasy world to actively monitor and take ruthless action against any potential Henry Ford who might rise up. In that sense, we truly do live in a Godless world.

Just take out coal/oil and a stable technological level seems possible. Also, I'm not sure those stable fantasy worlds really exist in literature; most examples I can think of have (sometimes magical) technological growth or decline.

Tolkien's Middle-earth is very young - a few thousand years old. This means no coal, no oil, and no possibility of an industrial revolution. Technology would still slowly progress toward an 18th-century level, but I can see it happening slowly enough to make the state of technology we see in LOTR acceptable. On the other hand, magical technology is declining, because the elves' magical power is slowly declining (both in quality and quantity) as they leave Middle-earth.

Sanderson's Roshar has its civilisation wiped regularly by an all-out war between good and evil, regularly resetting technology to a bronze-age level. Then (for some reason) no war happens for 3,000 years, and during this time we see steady progress in magical technology (although Sanderson's inability to write in anything other than close third person means an unlikely amount of technological progress just happens wherever the heroes are).
Also, I'm pretty sure the environmental conditions on Roshar don't allow oil and coal to form, so once again no industrial revolution is possible.

Fantasy worlds almost universally do have gods that actively work to maintain desired states of the world, so "it would take active maintenance to achieve" isn't any sort of theoretical evidence against the possibility of their existence.

Even that aside, it's pretty easy to think of barriers that mean you won't end up with a modern, industrialized society no matter how many Henry Fords you have. Even more so when you can invoke arbitrary things like "magic".

While it does seem fair game for people to actively maintain the fantasy world's steady state (since our world's lack of a steady state relies on people too), I view the two as different forms of participation. While the latter, our world, is acutely participatory, the fantasy world would need to be chronically participatory. The maintainers don't just invent a thing and get to be done; they need to keep inventing new things, forever, as the impending method by which the fantasy world will start Developing changes. The changers have a much easier job, and they only need to succeed once, whenever: just get onto one of the infinitely many possible development tracks and the maintainers lose. Not even that world's god/archmage could hold on forever.

As for what those barriers are, I'd like to hear a specific example. Magic-based barriers included. I can't think of any barrier that will work forever. The best one I came up with is a kind of technology trap: society forgets how to make computers (amnesia magic), but there are still millions of overproduced but good computers available to buy, so no company puts research and development funding into figuring out how to make them from scratch, and you end up in a weird development trap.

If the rules of the world preferentially destroy cultures that develop beyond the "standard fantasy technology level" (whatever that is) then I expect that over time, cultures will very strongly disfavour development beyond that level. I'm pretty sure that this will be a stable equilibrium.

If the rules are sufficiently object-level (such as in a computer game), then technological progress based on exploiting finer grained underlying rules becomes impossible. You can't work out how to crossbreed better crops if crops never crossbreed in the first place, and likewise for other things.

If intelligence itself past some point is a serious survival risk, then it will be selected against. You may get an equilibrium where the knowledge discovered between generations is (on long-term average) equal to knowledge lost.

... and so on.

Imagine magic was super useful, but every use had a 1 in 100 billion chance of wiping out 99% of civilization. Then, by the prisoner's dilemma, people keep using magic, and civilization keeps being wiped out and having to start from scratch.
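To get a sense of scale (with completely made-up usage numbers): if a billion people each used magic 100 times a day, the expected number of civilization-wipes per year would be roughly

$$10^{9} \times 100 \times 365 \times 10^{-11} \approx 365,$$

i.e. about one reset per day at that usage level; lower usage just makes the resets rarer, not avoidable.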

So magic could make the fantasy world possible. Invoking the decentralization of the arcane to stop civilizational development. But wouldn't your scenario get harder and harder to achieve every time? Arcane knowledge would become rarer and rarer with each civilization reset, as mages drop out. You'd be left with a few mages who refuse to share any knowledge, and may even go out of their way to hunt down and stifle independent mages, and thus those civilization-wide accidents would go down in frequency until secular technology (which flies under the archmages' radar) can achieve the familiar growth curve. The archmages would end up losing the "fantasy world" in the end.

But as civilization develops people might rediscover magic, beginning the whole cycle anew.

I don't see any discussion in the Cryptocurrency space about how Proof of Stake allows for 51% of the stake to eventually accumulate into 99% of the stake. Being chosen to receive a coin makes it more likely that you'll be chosen again. This way, the relative distribution of coin ownership will become more and more extreme as time passes.

Proof of Burn-Stake seems to avoid this "issue". Because being chosen to have your coin burned makes it less likely that you'll be chosen again. This way, the relative distribution of coin ownership won't change.

But there's no method to smooth out the relative distribution of coin ownership (so that all coin holders eventually have the same amount). This one-way nature of distribution evolution is interesting and I'd like to read more about it.
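For anyone who wants to poke at the first claim, here's a toy simulation of the compounding dynamic, under the simplifying assumptions that stake is counted in whole coins, every holder stakes everything, and each block mints one coin for a single staker chosen with probability proportional to current stake (real proof-of-stake reward schemes differ):

```python
import random

def simulate_stake_drift(initial=(51, 49), blocks=100_000, seed=None):
    """Toy proportional-reward staking: each block, one newly minted coin goes
    to a holder chosen with probability proportional to their current stake."""
    rng = random.Random(seed)
    stakes = list(initial)
    for _ in range(blocks):
        r = rng.uniform(0, sum(stakes))
        acc = 0.0
        for i, s in enumerate(stakes):   # pick the holder whose interval contains r
            acc += s
            if r <= acc:
                stakes[i] += 1
                break
    total = sum(stakes)
    return [s / total for s in stakes]

if __name__ == "__main__":
    for run in range(5):
        shares = simulate_stake_drift(seed=run)
        print(f"run {run}: 51%-holder's final share = {shares[0]:.3f}")
```

Under these toy assumptions the process is a Pólya urn: the big holder's share drifts to a random limit rather than marching deterministically toward 99%, and how much concentration you actually get depends on the reward size relative to existing stake and on who stakes at all. The sketch is only meant as something to experiment with, not as a model of any specific chain.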

The way to fight this is to create a fork that invalidates all of the attacker's holdings whenever someone attempts such an attack.

There's quite a bit of discussion of this in discussions of various proof-of-stake algorithms and their strengths and weaknesses (or there used to be).

If the simulation theory is correct, then can we expect computational resources to be evenly distributed?

I think no. I also think it's important to learn how to allocate the computational resources you do have, in an efficient manner. For ex, access the "metadata" of something before you access that thing. As in, live your life as if you're accessing files. Start with the file name, then the file header, and only then the file data itself.
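Taking the file analogy literally, here's a minimal sketch of "metadata before data"; the 64-byte header and the size cutoff are arbitrary illustrations, not a claim about how any real system budgets its reads:

```python
import os

def metadata_first_read(path):
    """Look at cheap 'metadata' before paying for the full contents:
    name, then size/mtime, then a small header, then (maybe) the rest."""
    name = os.path.basename(path)        # 1. the file name
    info = os.stat(path)                 # 2. size and modification time
    with open(path, "rb") as f:
        header = f.read(64)              # 3. peek at the first 64 bytes
        if info.st_size > 10_000_000:    # arbitrary cutoff: too big to bother?
            return name, info.st_size, header, None
        rest = f.read()                  # 4. only now read the rest
    return name, info.st_size, header, rest
```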

Concrete example: look at a banana's spots before you break it open, smell it before you bite into it, etc. The prediction is that you will end up accomplishing more for the same amount of computational resources. As in, the banana will taste better.

I think we might start our lives with some baseline of computational resources, but there's an exponential decay curve; an adult has less access to computational resources than a child does. That's why things seem to taste better as a child than they do as an adult. But that's not a certainty. If you play the game, then you can learn to accomplish "more" with the fewer computational resources you have at your disposal today than you ever could even as a computation-laden child.

TLDR: escalate your qualia responsibly.