This is a special post for quick takes by Slider. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


Spoilers for Man in the High Castle ahead.

LW has thought about Petrov somewhat, and I found myself torn a bit balancing various virtues in a fictional situation that seems to be a limit case.

Kido, as the highest-ranking officer of the Kempeitai (military and secret police), gets the job of figuring out who tried to shoot the emperor. He is expected to succeed in this or forfeit his life. He discovers that a sniper working for a superpower with nukes essentially confesses to the act. He thinks that relaying this information up the chain would push his country into a war it would lose (his country doesn't have nukes). Instead he suppresses the evidence and prepares to fail at his task. At the eleventh hour a "safe" suspect presents itself and the plot moves on.

Does Kido here perform a heroic act like Petrov? He is tasked with a truth-uncovering mission and fabricates a falsehood to bolster stability. I know that "the first victim of war is truth", but here the fabrication seems to be made in the name of peace. Petrov knew the system wasn't reliable and that other people might not have the skill or context to evaluate the data better. Here it seems that reporting the facts upstream would not place the decision makers ...

6Measure3y
Even if your superiors know what really happened, they'll want you to fabricate a plausible lie for the public story. They'll also want you to carry the risk so that if the truth does eventually come out they can deny knowing about the coverup.
2Slider3y
When the agent willingly chooses death, I don't think there is any significant risk left to take on. There is the side of responsibility, of bearing shame, which can transcend death. I guess I found an aspect of it I didn't previously realise: when you think a situation will only resolve with an evil act, and you could punt the decision to another party, it can seem like a favour to make the act happen via the party that carries the stain most gracefully. The setting seems so morally grey that being complicit in the coverup wouldn't be that large a blip on the radar. Later on, when the generals disagree with the emperor's confidants, they pretty much do a coup by excluding the capital people from decision making. Part of the danger from Ripper in Strangelove is that he can just act as if he received an order without actually receiving one. When Kido acts without consultation, how does he know that he is not operating with a faulty "bodily fluids" motive? What is the difference between a coup and exercising implicit autonomy, if any?
4Dagon3y
The fundamental question of how to help a person or group of people DESPITE their irrationality and inability to optimize for themselves is ... unsolved at best. On at least two dimensions:
1. What do you optimize for? They don't have coherent goals, and you wouldn't have access to them anyway. Do you try to maximize their end-of-life expressions of satisfaction? Short-term pain relief? Simple longevity, with no quality metric? It's hard enough to answer this for oneself, let alone others.
2. If their goals (or happiness/satisfaction metrics) conflict with yours, or with what you'd want them to want, how much of your satisfaction do you sacrifice for theirs?
And even if you have good answers for those, you have to decide how much you trust them not to harm you, either accidentally because they're stupid (or constrained by context that you haven't modeled), or intentionally because they care less about you than you about them. If you KNOW that they strongly prefer the truth, and you're doing them harm to lie, but they're idiots who'll blow up the world, does this justify taking away their agency?
4Measure3y
I'm happy with a confident "yes" to that last question.
7Dagon3y
Me too, but I recognize that I'm much less happy with people applying the reasoning to take away my self-direction and choice.  I'm uncomfortable with the elitism that says "I'm better at it, so I follow different rules", but I don't know any better answer.
5Slider3y
If we change "blow up the world" to "kill a fly", at what point does the confidence start to waver? If we change "will blow up" to "maybe blow up" to "might blow up", when does it start to waver? Another very edge case comes from Star Control II. The Ur-Quan are of the opinion that having a random sentient species loose in the universe carries an unacceptable risk that it turns out homicidal, or makes a torture world and kills all other life. The two internal factions disagree on whether dominating all other species is enough (The Path of Now and Forever) or whether specicide until only Ur-Quan life remains is called for (The Eternal Doctrine). Because of their species' history and special makeup they have reason to believe they are in an enhanced position to understand xenolife risks. Ruminating on the Ur-Quan I came to a position that, yes, allowing other species to live (free) does pose an extremely-bad-outcome risk, but this is small compared to the (expected) richness-addition of life. What the Ur-Quan are doing is excessive, but if "will they blow up the world?" would auto-warrant an infinitely confident yes for outlaw status, then their argument would carry through: the only way to make sure is to nuke/enslave (most of) the world. I guess on a more human scale: having bats around means they might occasionally serve as jumping-off points for pretty nasty viruses. The mere possibility of this is not enough to jump to the conclusion that bats should be made extinct. And for human positions in organizations, the fact that a position is filled by a human, and is thus fallible, doesn't mean they shouldn't be allowed to exercise any of their powers.
2Slider3y
A state works through its ministers/agents. As the investigator correctly assigned to the case it is not like you are working against the system. I guess part of the evaluation is that living in a world with a superpower trying to incite war means the world has a background chance of blowing up anyway. And knowing that they are trying to incite war by assassination could be used for longer-term peacekeeping (counterspy resource shifts, etc.). Exposing emotionally charged circumstances risks immediate, less-than-deliberate action, but clouding the decision apparatus with falsehoods makes contact with reality weaker, which has its own error rates.

I got annoyed about titles that are of the form "You are underestimating X", "X is underappreciated", "X is not so bad", "We should rethink X".

The title writer doesn't really know what I think and why. This pushes towards making groups homogeneous in beliefs and towards people dealing with each other more in stereotypes.

But it was minor enough, and purely negative, that I didn't actually comment on the specific title.

6Slider1y
"We must do X" I am annoyed because we don't neccearily share incentive or possiblity structures. Parent post worries seem to have bloomed to signifcant proportions in post about reaction to news

Internet point giving is a pretty recent phenomenon (as is all of social media). I think there are important social differences between approving in person and giving internet upvotes. In person you are way more connected, you see the effects of your approval, and you can convey more subtle messages in the same go.

Giving voting too central a role in our websites might be analogous to having installed a reinforcement learning AI as president/world leader without solving alignment. Voting might be institutionalized demagoguery that we are unlikely to catch in critiques.

Having a bad utility f...

4Viliam5y
Voting with points is what you do when...
* you need to somehow separate the better stuff from the worse stuff, even if the method is imperfect, because there will be tons of extremely bad stuff (e.g. spam, or crazy people obsessed with the topic); and
* you don't want to appoint a moderator, because you don't have the money to pay someone to do it as a job, and you suspect that the volunteers would be motivated by the opportunity to abuse the power;
* you need to have a nice version to show to a completely passive person (who doesn't even have an account), so that individual friend lists and block lists are not sufficient -- you have to arrive at "one true rating" for stuff.
Without voting, you would have to give up on having an official page available to users without accounts, or you would have to establish the official moderators and either pay them or accept that people who want to abuse that kind of power have the strongest incentive to volunteer. (The former is kinda like e-mail, and the latter is kinda like Reddit.)
1Gurkenglas5y
Use Elon Musk's brain-computer interface to map the posts each user has seen to a personal space of possible reactions. (Or just let them invent their own tag system in place of this, but it's gonna be inconvenient.) Across users, glue together these functions by finding a way to translate between any two users' spaces on the intersection of the sets of posts they've seen. (I feel like a sheaf theorist should repeat this to me in their words.) Each user can now explore posts relevant to their current thoughts. (Or to the tags they select.) The admin uses this same mechanism to provide new users with a default sorting.
1Pattern5y
Create a tag dictionary.
1Gurkenglas5y
I mean that it's gonna be inconvenient to consciously write down all the tags that apply, as opposed to the BCI giving a cloud of 2000 relevant tags/Discord reactions. It also feels like giving names would reduce the usefulness of this from telepathy to language.
2Slider5y
The problem of specifying tags might not have received that much attention. I had an experience with Overwatch, in that it is filled with all kinds of nasty behaviours. There were about 6 categories for reporting bad behaviour. Then you also had a free-form text box to provide additional details. Now, free-form text you can't automatically process that easily. The reporting felt important, so it pushed me to fight some of the inconvenience of spending paperwork time on others' behavioural crimes. However, the effort of coming up with stuff to fill in the text box was kind of off-putting. I ended up figuring out a framing of "what is really objectionable about my experience?". This made it so that in later bad experiences I could identify similar bad experiences with less thoughtwork. Thus it became routine to write things like "Diagnosing others with mental issues (moron)" and "Diagnosing others with mental issues (autism)". (And it leads to thinking about whether "Diagnosing others with mental issues (idiot)" is a good thing to complain about (it isn't; it's not diagnosing).) Abstracting them down to the objectionable part made me lose sight of the details, which made them more directly comparable with each other. So the reports became really tag-like, and thinking about what the tags were made me think about where the line of objectionability lies. If someone loses and is angry about it, that is not objectionable. If someone bashes others and calls them bad names, the motivation can be understandable but it is objectionable. The system might have been designed more so that it relies on just adding up counts in the 6 fixed categories. And I have seen a streamer file a report where the text box was just "fuck you", which speaks to players knowing the text boxes matter very little and that reporting is often done to express or play out anger. The lack of detail lost on the altar of automation has a significant cost. You could improve automation so that it can handle more complex signals. Or you could evaluate t
1Slider5y
How about when the incentives of the populace are as misaligned as the would-be moderators'?
2Viliam5y
That's what always happens, I guess. The thing is that all solutions are bad, but leaving the problem (of spam etc.) unaddressed is even worse than the usual solutions. Sometimes small websites avoid this, when they are unknown enough that they don't attract any spammer or any crazy person, and unimportant enough that people who don't like the content simply leave. But if they get more popular, it's only a question of when. Imagine that your user base is: 50% Greens, 30% Blues, 10% crazy people, and 10% spammers. If you leave the site unmoderated, crazy people and spammers will make it unpleasant for everyone else. If you have a voting system, Greens will eliminate the Blues. If you have moderators, you must choose carefully, because a majority of Greens or Blues among the moderators will eliminate the other side; and of course having the same number of Green and Blue moderators would be unfair, because then the Blues are overrepresented compared to the user base. (Also, this would incentivize the 0.01% Purples to demand equal representation among moderators, too. And if you grant it, then either Greens or Blues, by making a coalition with Purples, can eliminate the other side.) You can't win.
1Slider5y
Sometimes the solutions are bad enough that it's not worth having the problem in the first place. If there is no way to input user-generated content then you can't be spammed. However, websites are kind of expected to have these sorts of functions even when their core mission doesn't revolve around them. There is also the issue that some of the costs of the solutions are private costs borne by the website, but having a social slant and pressures on the content has a downside that is more borne by the public, in the form of a possible erosion of discussion or culture quality. And if every individual website is incentivized to be open to mobs instead of closed to them, that empowers mobs and can create cross-site movements. At some point cross-site culture will be stronger than site-specific culture, where even if you try to establish a particular website to be for certain types of users / needs, they will be swamped by a bigger existing community that will forcefully install its norms. Whether Greens eliminate Blues in the example voting system depends on the voting mechanics. But I guess it is a general feature that some content will be hidden/downplayed. The argument's mechanics are plausible if it is majority first-past-the-post. I think there are power-balancing mechanisms that get a lot closer to proportionality. The mechanic also requires that the sides are interested in destroying content associated with the other parties. You could have a system where there is only a finite amount of influence that is shared between promotion and suppression. If all players suppressed all content generated by others they could not promote their own stuff, but if everybody promoted their own stuff they would use less of the resource on attacks; the point would be to make it dominant to promote your own stuff rather than attack others. Then, on balance, losing factions are not erased even if they have "unfairly low" visibility. The thing would be that spam would be suppressed unilaterally. Even if you don't make the emphasis
3Viliam5y
Yep. If I ever have a meaningful web page, there will be no user comments, because it seems like there is no good solution. I am afraid that online even this wouldn't work. First, people can make multiple accounts. (The infamous guy on LW 1.0 made several hundred of them.) Second, I feel that participating in online debates already selects for the worse parts of humanity, simply because some people have better things to do and some don't. I prefer the archipelago model of the internet. Rationalist websites for rationalists, homeopathic websites for homeopaths; rather than having all of them in the same place fighting each other. But that goes against the incentives of the big websites, who want to be for everyone, because that allows them to display advertising to everyone. On the other hand, creating "reality bubbles" (because, let's admit it honestly, this is what the archipelago model means) also has its own problems.
1Slider5y
One of the issues is that you will struggle to be meaningful if more attractive webpages manage to be attractive because they allow for self-expression or because so many other users are using or viewing them. Part of the problem can be that if you read a newspaper you get nicely editorialised content, but if you get your news on reddit you can have fun fights in the comments, so people will pass on the "boring" newspaper because it can't fulfill their expectation of engagement.

Magic colors and errors

Reading a Writers guild policy doc, there was a principle of "the vase is already broken". The whole document is a lot about how you make a red organization, and most of the principles are anti-white.

The principle makes sense but I found it to be foreign to my culture. Things are made to be replaced. And if something is done wrong today, we will try to do right the next day.

In contrast, the blue way is much more familiar to me: accept only true things, set up things for perpetuity. In the contrast I noticed that the blue thing is focused...

2habryka5y
Interesting. Do you have a link to the document that sparked this thought?
3Slider5y
It was linked in a lesswrong norm thread. Couldn't relocate it easily as I don't remember which thread it was on.
1Slider5y
More on green errors: I think they do exist. There is a difference between an invasive species and a predator. Green probably allows for predators more easily than white or black, which would call them murderers. But being disruptive to the harmony is an actual violation that green registers. Imagine you have a snake problem in your house's yard. You could get angry and kill every snake you see (haphazard, random and laborious; the red way to address it). You could poison your yard (but then your flowers might die or your food supply gets fouled; the black way). For wheel-completeness' sake: a wall (the white way) or a scarecrow (the blue way). Or you could introduce a predator species that eats snakes (the green way). Even if the effect is to diminish a component, you address it by constructing more components (adding species). And likely when the problem is "solved" the predator and prey are in balance, and in a way the snakes' existence functions as a foundation of the food chain for the predator. The hard thing about green is that, as the anti-color of the agent color black, it doesn't engage in problem solving. Nature by itself is a defenceless victim. People who care about nature and are naturalistic are a bit of a different thing. In making a choice of what "harmony" you are defending you are probably injecting somewhat of an agentic, subjective choice.

I followed up the hint from https://www.lesswrong.com/posts/hAijPYdsbLibBBb9w that there existed an interesting story. I am up to the parts that were written on Jan 31, 2022.

I have been pretty entertained by the somewhat advanced musing on the DnD alignments that it exhibits. However it seems that Chaotic is really getting the short end of the stick. It might be somewhat natural in that the protagonist is Lawful and the main setting is Lawful, so of course they would have a very strawman picture of Chaotic (since they are not persuaded). The Good vs Evil conflic...

2Slider2y
Caught up to the current generation of the story; it seems the characters have also caught on that gods do a different kind of Chaotic than the less-than-lawful condition they look down upon (at times called primordial chaos). An example of this "lawful chaoticness" (which is probably a contradiction in terms) was that if you do the optimal thing each time, you don't ever gather evidence of how things could have been different. The antidote for the shortcoming of "lawful lawfulness" was that in unimportant matters you do the thing suboptimally on purpose, to search the exploration space instead of being in exploit mode all the time. To my mind this is an example and an instance of what the "proper" Chaoticness is about.

So it is a mark of good cognitive processing to be able to entertain a thought without accepting it.

As tends to happen when you take things to extremes, things get tricky.

At some point just imagining a scenario at all spends more brainpower than the credibility of the scenario or line of thought would warrant. Trying to be "exploratory" with infinities and quantum mechanics leads to some wacky configurations. Because thinking complex thoughts uses a lot of subparts, it is harder to sandbox such things.

I remembered an almost decades-old argument about ethic...

Basilisk makes you confront your worst fear. Compassion of winged feenix prevents you from becoming zero-k. The kronology has still yet to come. Timeless friends will serve you through the darkest dungeon. Reach out to walk hand in hand through the fire.

Total Annihilation is the name of a game.

You can read it to mean powdering to physical dust everywhere.

You can also read it to mean collapse of persons, a state where there is not a single self around, that humanity has been wiped out.

Supreme Commander is also the name of a game.

In it, a single king-like chess piece runs an almost planetary army. With thousands of units going around, the small bit of biological circuitry is wrapped in isolation inside an Armored Command Unit. But the ACU does not really do much for the army.

Dr. Brackman is also a single point of f...

Suppose you are the operator of a Chinese Room and don't know Chinese.

You could idle and keep being led by the book.

You could also pay attention to what you are doing.

If you do that enough you might at times know what the book will tell you to do.

If you confidently know what the book will tell you to do in this particular situation, you might as well go do it without checking that it does.

The more situations you have where you don't use the book, the less daunting staring at and figuring out the rest of it becomes. (The further we explore, the more connected everyt...

2Slider1y
The feeling when somebody else fills in the details. Weirdly, de-bookment is both the game-over and the win scenario: the lunatic running away without possibility of correction, and the sane superhuman successor.

Quantilization as a super-low-resolution discretization in the direction of (something like) geometric rationality.

level 1: take the top 20% of options by utility and pick one uniformly at random

level 2: take the top 40% of options by utility, give the top half of those (the original top 20%) double weight, and pick a random one

level 3: take the top 60% of options, give the top 40% double weight, give the top 20% triple weight

level n+1: make smaller buckets to cover more of the range, and have higher-scoring buckets get bigger weights

beyond discrete: the probability of picking an option is proportional to its utility rank ("infinitesimally small buckets")
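A minimal sketch of the bucketed sampling described above (my own illustration; the function name and parameters are made up). Level n takes the top n·20% of options, splits them into n equal buckets, and weights the best bucket n, the next n−1, and so on:

```python
import random

def bucketed_quantilize(options, utility, level, top_fraction=0.2):
    """Level-n bucketed sampling: top n*20% of options, best bucket weighted n, worst weighted 1."""
    ranked = sorted(options, key=utility, reverse=True)
    pool = ranked[:max(1, round(len(ranked) * top_fraction * level))]
    bucket_size = max(1, len(pool) // level)
    weights = []
    for rank in range(len(pool)):
        bucket = min(rank // bucket_size, level - 1)   # 0 = best bucket
        weights.append(level - bucket)                  # best bucket gets weight `level`
    return random.choices(pool, weights=weights, k=1)[0]

# Toy example: options are the integers 0..99 and utility is the identity function.
print(bucketed_quantilize(range(100), lambda x: x, level=3))
```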

Epistemic status: crazy corner
 

Carry-over of the overtly wild rant-off that came out of writing

https://www.lesswrong.com/posts/r282ErRKMFzxpKYMm/can-we-in-principle-know-the-measure-of-counterfactual?commentId=qX53p2gsKGSnHfEFm


Immediately after observation, an electron is in a definitely known place. A while after, it is described by a complex-valued field that is quite spread out spatially.

Combining many spread-out complex fields can add up to much narrower fields.

In QM sometimes superpositions evolve into non-superpositions in a perfectly predictable manner.

Having o...

2Slider1y
Only figuratively, I have been smoking some brockwood and have reached another level of speculativeness. Say you have an agent which has a quite effective Cartesian wall, but its epistemics for connecting with the outside world are so jumbled that it has no chance to ever get a clue what is going on in its non-imagination. Because it is so jumbled (and therefore has a high chance to), it contains all kinds of weird circuits. Say there is a banana in the environment which has a radioactive atom in it that sends a non-classical photon out. One goes to the agent's wall and another goes off in the other direction. If the wall is composed only of classical computing, the information content of the photon is essentially kept intact. Then a weird circuit turns the photon information into a representation in the agent. Now the agent has a chance to know about the world by an EPR scenario that bypasses the Cartesian wall completely. Any argument that strongly relies on embodiment might have this angle to take on it. If a boxing setup relies on causal isolation, this argument makes the incoming direction also a thing to worry about. If the question posed to an oracle is in superposition, it might contain information that its designers did not mean to put there. A new thing to be scared about: non-designed quantum computation. Although it might be that the "classical dimension of time" and the "quantum dimension of time" are quite orthogonal and can't cross over at too many places. The thing that gets assumed to be smooth and non-important by global phase symmetry might be the way this orthogonal time ticks (and be a clock cycle for hardware (typoed hardwhere, accidental embodiment argument) "there"). (Wick rotation is a thing and is a mathematical move; this argument is supposed to be separate and not use that at all.) Achron has two-time-dimensional computing, so it is not an unimaginable route.
2Slider1y
My Face When other minds tick the same way; just great.
2Slider1y
What is evidence? If you do have epistemic entanglement via physical entanglement then you might have chances to build extra-classical perception.
2Slider1y
Reading up on Hardy's paradox: it has the same kind of weirdness as the bomb tester going on. The weirdness culminates in the event only happening if an annihilation happens. Yet in the "outcome" where the rare event happens there is nowhere an annihilation to be found. In the crazy-hat POV this is evidence of an event happening in a "parallel" timeline. They are not exactly parallel, as they are not causally isolated and information signals can jump the gap. You now know that there is a photon in the sister timeline. I am starting to read passages like that as "under any single-timeline theory this does not happen". I am a bit at a loss where I could check what "local" means in that context and whether unintuitive circumventions exist. In the spatial sense it might be that while particles need to be spatially on top of each other, they can take different paths to get there. Therefore if you take only one particle or its timeline to be real, there are no causations happening through that which can account for what is happening. And this can not be amended by "being really accurate" about what the "true" timeline is (on small scales; on long scales you implicitly take the formation of the false realities into account (but that goes into global rather than local territory)). A blockage or disturbance happening in the false reality leaves very little clue that it is happening. So the first real hint is whether the spatial overlap goes one way or the other, which can happen way later in time than the blockage. So if you are only allowed to condition on what the (temporally local) real state is, and are not allowed to condition on the false reality, prediction accuracy necessarily suffers.
2Slider1y
Reading https://www.lesswrong.com/posts/XYDsYSbBjqgPAgcoQ/why-the-focus-on-expected-utility-maximisers?commentId=a5tn6B8iKdta6zGFu This refers to epistemological reasons why keeping track of stuff that didn't happen is needed. The crazy-hat reason in the parent is an ontological (in the metaphysical sense) reason why keeping track of the stuff that didn't happen is needed. We are already (even with the sane hat) tracking that in the ontological "vocabulary of data structures" sense. "Shut up and calculate" means that that will never develop any meaning. How big is the danger that this commits the errors focused on in "Don't Look Up"?
2Slider1y
Wave-particle duality: if you are dealing with a single world it is a particle; if you are actually dealing with a bundle of worlds it is a wave. It still feels like a single scenario to you. You can smoothly care about fewer or more worlds, where fewer is more particle-like and more is more wave-like. The physics of course happens in the full multiverse. Might be obvious or not, but now the concept seems like one unified thing to me rather than two disconnected modes with a connection that feels arbitrary or mysterious.
2Slider1y
When writing the parent comment I had not yet watched The Devil's Hour. Now I have. The degree to which having watched similar stuff helps is quite astonishing (almost a requirement). The shoestring compared to the electron coil was an interesting analogy to make. h sets the bar for how thick the finger is. Explanations and models of what h is do not seem to occur that often. That burnt face also gets a special mention from me for a high degree of subtlety in its implications.
2Slider1y
Not doing this as a direct reply dialog, as this does not clear the bar of being more guiding than misguiding on the preponderance. By writing it down, somebody with buy-in can come mitigate the would-have-been misguidings. Building up things in my mind as I read other content. The corresponding idea would be to map each spacetime point to a point in U(1): ψ′(x_1, ..., x_n) = ∏_{i=1}^{n} g(x_i)^{q_i} ψ(x_1, ..., x_n). This stuff is designed for multiple electrons in spatially different places. The extension would be to say that you have a superposition of where a single electron could be, and each of the "possibility slices" gets to act out the role of a separate electron. So for discrete cases (such as slit right and slit left) not being able to do a continuum is not that big of a problem. Mathematically it could be educational how to differentiate two electrons being simultaneously present vs one electron being in a superposition of both of the locations. My brain can't intuit that thing and my hand is too shaky to have a reliable symbolic answer for it. In quantum computing the complex phases do not seem especially photonic. Photons are bends in the U(1) that correspond to electromagnetism. The "convergence forces" being relied upon would probably take a similar form because they are based in the same things. This kind of "possibleton" would have the aspect of travelling from one possibility to another. Wait, does that mean that an electron in superposition in two locations would electromagnetically interact with the other position (I do not think this is how it goes in vanilla theory)? How does "self-energy" factor into this?
2Slider1y
There is such a theory of "dark photons", but they add an additional U(1) to do the dark things in. It was motivated by explaining gravitational effects whose sources could not be seen.

Since I did not keep it in a drawer as much as I thought let me make a note here to have a time stamp.

Instead of going

(units sold * unit price) - production costs => entrepreneur compensation

go

(production costs + entrepreneur compensation) / units sold => unit price

you get a system where it is impossible to misprice items.
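A minimal sketch of the second assignment (my own illustration; the function name and the numbers are made up): the entrepreneur's compensation is fixed up front and the unit price falls out of however many units actually sell.

```python
def unit_price(production_costs: float, compensation: float, units_sold: int) -> float:
    """Price per unit so that revenue exactly covers production costs plus the fixed compensation."""
    return (production_costs + compensation) / units_sold

# With hypothetical numbers: 4_000 in production costs and 6_000 in compensation.
print(unit_price(4_000, 6_000, 100))    # 100.0 per unit
print(unit_price(4_000, 6_000, 1_000))  # 10.0 per unit: more buyers, lower price, same compensation
```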
 

Combined with other stuff, you also get not having to lie or be tactical about how much you are willing to pay for a product, and a self-organising system with no profit motive.

I am interested in this direction, but because I do not think the proof passes the muster it would need to, I am not pushy about it.

2Slider1y
Not commenting on the referenced content, as here we are at: epistemic status: crazy corner. Something that is very resonant with the approach but has a big point of deviation: https://www.lesswrong.com/posts/rCZ9fruWriD6uGNLp/who-are-some-prominent-reasonable-people-who-are-confident I have an ideological beef with that "first come first served" bit. At 20 links the cap is exactly filled. If you receive 40 links, each linker should get 15$. Even as is, it is nearly impossible to tell whether you are the 22nd linker, because the gap between when the 20th link is received and when the cap is announced fulfilled might be something like 2 days. With my style there would be a rollingly updated display of what the current bounty per link is; after 80 links, new linkers could expect to only ever get up to 7.5$. A real reservation is that if I go fetching a link with 30$ in my mind but eventually only get 15$, I might feel cheated. But the promise isn't super solid anyway, as a "sorry, you were just out of time" can still net you 0$. At some point you would "close the market": submissions stop being accepted and the money is doled out. The advantage of the old style is that each submission can get instantly rewarded instead of being delayed. One could also imagine a kind of reverse operation where rather than fetching information we are broadcasting it. Then a paper having 40 readers would cost 15$ for each new reader to access. This kind of market we do not need to close for the producer; after the 20th reader the author has got their 600$ compensation and the rest is just new readers re-sharing the burden among the old readers. If you are wondering "who tf would go fetching a link for an uncertain reward", then you should be of the opinion that these read licences should sell like hotcakes. "This author's last paper cost 400$ distributed among 4 million readers, which gives an access price of 0.0001$. Since you were the 1 millionth reader you have 0.0003$ of free credits to use. Would you be interested t
2Dagon1y
Those equations (assuming  => means =) are equivalent.  And it's usually difficult to set the price to vary with units sold (not least because you don't know the units sold until it's too late).
2Slider1y
Entrepreneur compensation is not a function of units sold. I mean assignment with => (use the left side to set the value of the right side). An assurance contract to sell stuff in a way that the customer does not walk away with the product until other customers have made similar purchases. The part about not being deceptive or tactical about willingness to pay comes from paying people back after the fact if we overcharged them. Buying early is not supposed to matter, just how many customers we have. This is a more significant departure the more production costs we have that do not scale with the number of units produced. The old style floats entrepreneur compensation and keeps the money exchanged per unit constant. This indeed makes transactions practical to execute and mall shelf prices predictable. Here we choose to keep entrepreneur compensation constant and float the price (with customer volume being the driver).
2Dagon1y
Why would you think entrepreneur compensation (often called more simply "profit") is not a function of units sold? All of these variables are related to each other in the equation, and each of them is a function of the others, depending on which you model as controllable and which as dependent.
2Slider1y
True profit starts only above the compensation point at which the entrepreneur would stop doing the activity. In this mode of selling we set the compensation to be constant by contract. Seller wants 10 000 and has 100 willing customers: seller gets 10 000 and customers pay 100 each. Seller wants 10 000 and has 1000 willing customers: seller gets 10 000 and customers pay 10 each. Thus it is impossible to make a profit or a loss. The only uncertainty is whether the sale goes through or ends up pending indefinitely because not enough customers are found to cover the amount. What is usually the risk of capital turns into customers taking the risk of naming bigger prices in the hope that other customers will also buy the same product and help lower the price ("retroactively"). Correspondingly, success is not enrichment of the business runner but support for previous customers. As a side bonus you get "autocompetition". You don't need a rival firm or product to drive down the price as the product becomes more successful. (The price drops to 10, new people can afford to instabuy it, dropping the price further and allowing even lower instabuy prices, even in a monopoly.) The orthodox approach has the leniency of competition emerge from people racing to be most modest in their extraction. But this still includes a step and an actor that tries to maximise extraction. But one can maximize for impact directly while keeping the boundary condition that people do not work for free. Sure, the nice property does not come for free: a big-scale product can not really get going with instabuys, and preorders become more mandatory.
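A minimal sketch of that mechanic (my own illustration; the function and customer names are made up): every customer ends up having paid the same share of the fixed total, so each new buyer triggers a small refund to everyone who bought earlier.

```python
def settle(total_ask: float, paid_so_far: dict[str, float], new_customer: str) -> dict[str, float]:
    """Recompute each customer's share of a fixed total after one more customer joins.

    Every customer ends up paying total_ask / n, so earlier customers are owed a
    refund (positive value) whenever n grows; the new customer owes their share.
    """
    customers = list(paid_so_far) + [new_customer]
    share = total_ask / len(customers)
    return {c: paid_so_far.get(c, 0.0) - share for c in customers}

# Seller wants 10 000 (costs + compensation). With 100 customers each has paid 100;
# when customer #101 joins, the share drops to ~99.01 and everyone gets ~0.99 back.
paid = {f"customer{i}": 100.0 for i in range(100)}
balances = settle(10_000.0, paid, "customer100")
print(round(balances["customer0"], 2), round(balances["customer100"], 2))  # 0.99 -99.01
```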
2Dagon1y
I'm not following.  I'd assumed you were using "entrepreneur" to mean owner/operator to simplify the world by removing the distinction between wages and profit.  Instead, you're making some point about price theory and elasticity that I haven't seen your underlying initial/average cost model for, nor any information about competition, all of which tend to be binding in such discussions.
2Slider1y
This is a bit where glimpses can be seen. With the usual stuff you would get: assume comparable products, and producer A can make it happen for 10 000 while producer B can make it happen for 20 000. If there are 100 willing customers, A would cost 100 and B would cost 200. However if there are 100 A patrons and 200 B patrons, then the cost of A would be 100 and the cost of B would be 100. In this kind of situation, if new people are undecided, A patrons want them to buy A and B patrons want them to buy B. Producers A and B don't really care. Any old-style constant-price offer will have some patron count after which this dilution-pool deal is better. Say that A projects that about 100 people could want the product and starts collecting promises from whoever wants the product for 100. Say that seller C, who uses old-style pricing, has an outstanding offer for 25. If the patron pool for A ever hits 400, the spot price for A is going to be 25. In the case that the A patron pool is 800, then C is likely to reprice at 12.5. However even if C keeps up with the spot price, A patrons get money every time a new A patron joins (this is structured so that you can not draw more than you initially put in; it can not enter "ponzi mode"). So "12.5 + promise of maybe later income" is somewhat better than 12.5. And because we kickstart this with assurance contracts, initial customers can name the currently best traditional price as their willingness to pay. So while people might not promise to pay 100 for a thing that is available for 25, entering into an assurance contract of paying 25 on the condition that 400 other people pay makes you never regret the assurance contract triggering. If you can pull out of the assurance contract then you can even indulge in impatience. Say that you have given 25 and there are only 350 other such entries. If you lose hope in the arrangement you can ask for your 25 back, and then there are 349 entries in the patron pool (no backsies once we hit 400 and the product changes hands). Alter
2Dagon1y
I have no clue what this model means - what parts are fixed and what are variable, and what does "want" mean (it seems to be different than "willing to transact one marginal unit @ a specific price")? WTF is a patron and why are we introducing "maybe later income"? Sorry to have bothered you - I'm bowing out.
2Slider1y
I am not bothered. Cool to have interaction, even if it just reveals that the inferential distance / misstepping is large. A patron is a customer. Because they have a more vested interest in how the product they bought is doing, it might make sense to use a word that reminds of that. We pay customers back retroactively the difference they would have saved if they had shopped later, so that they do not have a reason to lie about their willingness to pay or to race to shop last. All customers at all times have lost an equal amount to have access to the product, and this trends downwards as time / the customer base goes on.
"wants" means "declares by own volition that the fair compensation for the project is"
"wants" means "[subject] prefers an outcome in a choice another agent is doing"
"wants" means "is ready to spend an above average amount of resources to acquire"
"wants" means "commits to a conditional transaction"
"wants" means "is willing to compromise by consenting to receive less than previous arrangements would entitle them to"

 media spoilers 
Series The Peripheral Episode 7 Doodad

The mom character makes an argument that another character is evil for letting survival limit their options for solving their dilemma. Listening to this while reflecting on LW memes, this sounds awfully like "A good person would let themselves be shut down in this situation", i.e. you are evil for not being corrigible.

An interesting point is that both characters are intensely invested in the outcome of the situation, with similar kinds of downsides.

A stab in the dark at how my intuitions seem to react to people being pessimistic about playing the St. Petersburg game. To "cash out" the EV of such games one needs to play an infinite number of games. Maybe there is a principle that if a game is offered one can play it arbitrarily many times, but this only applies to finite amounts. Thus it might be okay to turn down a game for having a property like "infinite risk neutrality" if one knows one will only ever be able to do a finite number of tries.

2Dagon2y
Assuming you're referring to https://en.wikipedia.org/wiki/St._Petersburg_paradox .  Current humans don't have infinite time, and certainly not infinite time to play such a game, nor any counterparty with infinite ability to pay.  Intuitions that include "don't get paid at all after some finite number of heads" are easy to understand and justify.  Especially since you don't need to get anywhere near infinity to understand the issue. What would you consider pessimistic?   This is "worth" $1 per maximum flip you can actually collect on, less the hassle and worry of playing.   I'd expect that to be pretty low for most situations.  In order for it to be fair at $20, you've got to believe the casino won't cheat or weasel out of over $1M potential loss.  Paying $44 to play would  imply that you get a year's gross planetary product. 
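A quick sketch of the capped value Dagon is pointing at (my own toy model, assuming the standard payout of 2^k dollars for a first head on flip k, and that a run hitting the cap still pays the capped amount):

```python
def capped_st_petersburg_ev(max_flips: int) -> float:
    """EV of the St. Petersburg game when the bank can only cover payouts up to 2**max_flips."""
    # Each flip k <= max_flips contributes (1/2**k) * 2**k = 1 dollar of expected value.
    ev = sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))
    # A run of max_flips tails (probability 2**-max_flips) still pays the capped amount.
    tail = (0.5 ** max_flips) * (2 ** max_flips)
    return ev + tail

print(capped_st_petersburg_ev(20))  # 21.0 -- a bank good for ~$1M makes the game worth about $21
```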
2Slider2y
Those intuitions point to belief in the robustness of the payout. My thinking is more pointed towards repeatability. Like, "you get a one-time offer" means you can't repeat it, but by default "X offers to play game Y" means that if you wish to play 3 games of Y you may, and if you wish to play 1000 then you may. But "I can do this all day long" is actually different from being able to do it infinite times. Even an action or game that takes 3 seconds can be done only so many times over a 24-hour period. Attitudes of "I can do this all day long" and "I can do this all year long" are different, and an attitude of "I can do this for all of eternity" is rarely actually exhibited. I guess there are 4-5 categories, and we can even assume all of them are expected-utility positive. There are mild chances, like a game of dice where you win on 4/6 or 1/6 of outcomes. Then there are steep chances, like a lottery with 1-in-a-million payouts. Then there are arbitrary-chance games like St. Petersburg where the payout happens in a vanishingly small portion of cases. And then there are infinitesimal win chances with an infinite payout (I guess Pascal's muggings go here). It seems like the first 2 are okay to recommend as rational actions if the EV is positive, and the recommendation to play seems counterintuitive for the latter 2. And I am suspecting that there is a game between the lottery and St. Petersburg that is still finite but which I would try to argue is okay to turn down. Something like locating the correct grain of sand in the galaxy giving you ownership of it. Even if you could enter the contest with the mere act of pointing out a grain of sand, a recommendation to spend a couple of decades sifting through sand seems extreme. It might lead to a principle that says that butterflies should not buy "house loses" lottery tickets. In Magic: the Gathering, in some formats there is no theoretical upper limit on how many cards your deck may contain. However some events and rulings that wish to i
4Dagon2y
My point was that, for significant potential payouts, unstated factors will dominate the decision.  These impact one's intuition, and lead to a false diagnosis of "irrational".   It's pretty darned rational to avoid things that sound like scams, unless you have the energy and knowledge to know the difference.  If someone's offering me $lots out of the blue, it's probably a trick.   In addition, there are declining marginal utility and second-order effects (like how it impacts your future reputation and self image) that are very hard to model, and get included in our instincts - rational but illegible.
2Slider2y
If something is evolutionarily selected but is implemented as a reflex, I have an icky feeling calling such calculations "rational". I guess for real situations a certain amount of "fighting the actual" is pretty much always relevant. Just because you have recognised and formulated the problem one way doesn't mean you have construed it correctly.

Beyond Two Souls, Talos Principle spoilers ahead.

Played Beyond Two Souls. At the "mission" phase of the story there is a twist where the protagonist loses faith that the organisation they are a part of is working towards good goals. They set off, at quite great risk, to distance themselves from their position in it. Combining this with narratives of alignment, this felt like a reversal of the usual situation. Usually the superhuman powerhouse turning against its masters is the catastrophe scenario. However, here it seemed very clearly established that "going rogue" was the e...

Let me muse about infinite amounts so that I am not waiting to spill them on a random infinite-adjacent thread.

Someone offers you a choice of 10 fish now or 1 fish a day. Which option lets you eat more fish?

Questions of this kind seem to have these properties:

  • Even a ludicrous amount of one-off fish will lose to a steady income of fish
  • Bigger fish streams are bigger. Having 3 fish per day is better than 1 fish per day

Now one could make the case that there is a scheme where each day you grab the new fish and give them a number. For any finite fish per day this ...

3Taleuntum2y
You can absolutely count your fish that way with the help of hyperreals! (The "growing promise" stream would be ω²/2 + ω/2 though.) I think https://en.wikipedia.org/wiki/Hyperreal_number#The_ultrapower_construction is a good introduction. https://math.stackexchange.com/questions/2649573/how-are-infinite-sums-in-nonstandard-analysis-defined and https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3459243 address the handling of infinite divergent series with hyperreals and https://arxiv.org/pdf/1106.1524v1.pdf talks about uniform probability over N (among other things).
2Slider2y
Why ω²/2 + ω/2 and not any other? What kind of stream would correspond to ω²?
1Taleuntum2y
You can just pretend that ω is finite and plug it into the formula for the partial sum: 1 + 2 + ... + n = n²/2 + n/2, so 1 + 2 + ... + ω = ω²/2 + ω/2. If they were to give the ith odd number of fish on the ith day (1, 3, 5, 7, 9, ...), then you would have ω² fish, because 1 + 3 + ... + (2n−1) = n². The two links I posted about the handling of infinite divergent series go into greater detail (e.g. the question of the starting index).
2Slider2y
The links are very on point for my interests, thanks for those. Some of it is rather dense math, but alas that is the case when the topic is math. At one point there is a construction where, in addition to having a series of real numbers define a hyperreal, (r1, r2, r3, ...) = h1, we define a series of hyperreals, (h1, h2, h3, ...) = d1, in order to get a "second tier hyperreal". So I do wonder whether "fish gotten per day" is adequate to distinguish between the scenarios. That is, there might be a difference between "each day I get promised an infinite amount of fish" and "each day I get 1 more fish". That is, on day n I have been promised ω·n fish, and taking the total as a sum up to some bound α, I am not sure whether α = ω, and whether terms like ω² and ω·α refer to the same thing, or whether mixing "first-level" and "second-level" hyperreals gets you a different thing than mixing just "level 1"s.
3Measure2y
If you just graph fish vs. time, then the one-time gift is a constant function, the steady income is linear, and the "growing promise" stream-of-streams is a quadratic. The fact that a steady income will eventually surpass any one-time gift is because any positive-slope linear function will eventually exceed any constant-value function. Likewise, any 2nd-order polynomial with a positive x^2 term will eventually exceed any linear function. You could keep going with higher order polynomials if you want. A similar analogy would be a race where even a large head start will eventually be surpassed by a slightly faster car.
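A tiny sketch of that comparison (my own illustration; the 1000-fish gift is a made-up number): cumulative fish under a one-time gift, a steady income, and the growing promise.

```python
def cumulative(days: int, gift: int = 1000) -> tuple[int, int, int]:
    """Total fish after `days` under the three schemes Measure describes."""
    steady = days                         # 1 fish per day: linear total
    growing = days * (days + 1) // 2      # a new 1-fish/day stream added each day: i fish arrive on day i
    return gift, steady, growing

# The steady income passes a 1000-fish gift on day 1001; the growing promise passes it
# on day 45 (45*46/2 = 1035) and passes the steady income from day 2 onward.
for d in (1, 2, 45, 1001):
    print(d, cumulative(d))
```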
2Slider2y
I guess I know that kind of perspective exists, but I don't want to refer to the "up to a time" thing but to the total. For example, taking the even and odd integers: that would seem like taking a stream of fish and putting them into alternate buckets every other day. I would like to be able to say something like "even and odd integers are of equal amount and each is half of all integers". But I am unsure whether that is accurate and where the foundations would crumble. If I do it in a very naive way in the graph-vs-time picture, then I could conclude that the bucket I start with has a strictly dominating amount of fish (because its graph stays above or matches the other graph). And it feels odd to say that there are more even numbers than odd ones! Although I guess it wouldn't be odd that 0+{1,2,3,4,5...| x is even} is more than {1,2,3,4,5...| x is even}. Or is it because the two-bucket arrangement is not the natural choice for odds vs evens? I guess if I received two fish a day and put them in different buckets things could change? Oh, I guess I notice that if I receive two fish a day that seems more than if I receive 2 fish every other day. That seems to map to 2*ω > 2*(ω/2) = ω.
2Pattern2y
Every other day, the number is the same. On the days in between, there's a difference. Even taking that notion of 'dominating' into account, the difference is finite, and small: 1. If you had a choice between a) getting 1/3 of a fish on day one, then 1/9 of a fish on day two and 1/27th of a fish on day three... (1/3^n on day n), and b) getting 1 fish on day one, then 1/2 a fish on day two, and 1/4 a fish on day three... (1/2^(n-1) on day n), then b is better than a... by more than the difference between 0+{1,2,3,4,5...| x is even} and {1,2,3,4,5...| x is even}. But it also seems like a bigger deal in those cases, a and b, because getting what comes out to a finite amount of fish over an infinite period of time isn't as good as getting an infinite amount. Flip a coin. If it comes up heads put the first fish in the red bucket, and the second fish in the green bucket. If it comes up tails do the reverse. (With the integers as a whole, where do you start counting from? The answer to that determines which measure "seems" greater, on odd time/count steps.) You can color the positive integers, 1, 2, 3, 4,... using this pattern: red, green, red, green,... You are obviously partitioning them into two collections (sets, because there's no repetition). Are there more odd numbers than even? Well, if you count how many odd numbers (which are positive integers) are less than or equal to n, and call this function o, then o(n) > e(n) for odd n. But for even n, they're equal. So in the limit as n increases without bound: f(n) = o(n)/(o(n)+e(n)) = 1/2.* But if you take steps of 2, starting at an even point, like 0, then the two sets are always of equal size. This seems to formalize the intuition well. Using this stream approach, one can articulate that a randomly chosen integer is as likely to be even as it is to be odd**, and that this doesn't change over time (as one iterates further through the integers), but that as you iterate out the integers, the frequency with which you see prime num
1JBlack2y
Strictly speaking, there is no such thing as uniform distribution across the integers, or any other set that can be matched up 1:1 with integers. There are things that aren't probability distributions that do have some similar properties, such as "asymptotic density", but they fail to have the properties we require of probabilities. The "stream" idea seems to be another way to describe asymptotic density. For example we might like to say that there are as many natural numbers of even length in decimal notation as those of odd length, but the stream idea doesn't support that. The fraction of even-length numbers oscillates between 11% and 89% infinitely often, and there are other properties that behave even more weirdly. One paradox that illustrates why there is no such thing as a uniform distribution on natural numbers: Suppose we both pick natural numbers uniformly at random, independently of one another. You reveal your number. What is the probability that my number is greater than yours? No matter what number you chose, there are infinitely many larger numbers and only finitely many numbers that are not larger. Since my number was chosen uniformly at random, my number is larger with probability 1.
2Pattern2y
What?
2Slider2y
1 number of length 0 (or maybe 0 counts as length 1), 9 numbers of length 1, 90 numbers of length 2, 900 numbers of length 3, 9000 numbers of length 4, and in general 9*10^(n-1) numbers of length n. For each n, the count of numbers of the previous length is 10 times less and the count of the next length is 10 times more. If you take a running fraction of odd-length numbers to all numbers seen, it starts to go down when an even length is reached and starts to go up when an odd length is reached.
2Pattern2y
(Ignoring that most people don't think of 0 as being of length 0.) Jump by two orders of magnitude every time and it stays stable. Starting with nothing: 1 of even length, 9 of odd length; then 90 of even length, 900 of odd length. 10% versus 90%. Start after an even jump: 91 of even length, 9 of odd length; then 9,091 of even length, 909 of odd length. (Starts at 91% even, but drops after a double jump. I don't know what the limit on this is.) By comparison, resolving the proportion of even numbers versus odd numbers is much easier, because it's a simple pattern which oscillates at the same rate, instead of changing (in base 10). Well, if a different color is used every time, then the coloring aspect is solved. If you ask about addition though, then things get weird.
1JBlack2y
Right, that was a math typo. It really oscillates between 9% and 91%. For example, 909090 of the numbers below a million have even length, i.e. 91%. As you increase the bound toward ten million, this fraction decreases until it hits a minimum of 9%, and then starts increasing again until you reach a hundred million, and so on.
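A quick check of those numbers (my own sketch): counting how many numbers below 10^k have an even number of decimal digits shows the fraction swinging between roughly 9% and 91%.

```python
def even_length_count(bound: int) -> int:
    """How many of 1..bound-1 have an even number of decimal digits."""
    count, length = 0, 1
    while 10 ** (length - 1) < bound:
        lo, hi = 10 ** (length - 1), min(bound, 10 ** length)  # numbers with exactly `length` digits
        if length % 2 == 0:
            count += hi - lo
        length += 1
    return count

for k in range(2, 9):
    bound = 10 ** k
    print(f"below 10^{k}: {even_length_count(bound) / (bound - 1):.3f}")
# Below 10^6 the fraction is 0.909 (909090 numbers); below 10^7 it is 0.091, oscillating as described.
```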
2Pattern2y
Well, the old solution to what is the limit of: +1, -1, +1, -1, etc. was: (index starts at one, pattern is (-1)^(n+1)) Consider the cases: a) +1, odd index b) -1, even index Average them. 0. If that was applied directly, it'd be: (9+91)/2% = 50%. You could argue that it should be broken down differently, because there's different proportions here though. You could also declare the answer undefined, and say infinity is about growth, it doesn't have a value, for x % 2 (or odd or even number as the case may be), and averages are ridiculous. (And once you have a breakdown of cases, and probability what more is there?)
2Slider2y
That is one of the puzzles, in that 0+0+0+0+0... converges and has a value, but +1-1+1-1+1-1..., which seems to be like (1-1)+(1-1)+(1-1)+(1-1)..., diverges (and the series with and without the parentheses are not equivalent). The stream idea gives it a bit more wiggle room. Getting 1, 0, 1, 0, 1, ... fish seems equivalent to getting 1/2 a fish a day, but 1, 1, 1, 1, 1, ... seems twice the fish of 1, 0, 1, 0, 1, 0, 1, 0, ... So for cases where the other methods say "can't say anything", there is maybe hope to capture more cases with this kind of approach. Too bad it's not super formal and I can't even pinpoint where the pain points for formalization would be.
2Slider2y
That paradox is good in that it cuts to the matter very cleanly. To my mind, "numbers larger than my number" and "uniform integer" don't need to be the same. There are n smaller numbers and ω−n bigger numbers. (ω−n)/ω is going to be near 1 (infinitesimally so) but not quite up to 1. Maybe crucially, (ω−2n)/ω is smaller than (ω−n)/ω, i.e. if I hit a high number my chances are better. I get that the standard approach somehow gets in the way of this, and I would like to know which axiom I have a bone to pick with. There is (from my perspective) a problem that events of 0 probability can happen and events with probability 1 can fail to happen. The associated verbal language is "almost surely" and "almost never". Showing me that a thing will almost surely happen doesn't guarantee it. To my mind this is because some zeroes are rounded to the nearest real and some aren't.
1JBlack2y
There are two bones you might pick. One is that probabilities are real numbers. The other is that probabilities are countably additive.
2Dagon2y
For positive values, there will be some time t where the quadratic overtakes the linear which overtakes the constant.  The best advice for dealing with infinity, though, is "don't".  In some cases, you can deal with the approach to infinity as something else increases, but most of the time you don't even need to do that.   Also, there is some point in time at which fish don't help you.  You only need to calculate to that point.  And there is likely some sense in which earlier fish are more useful than later fish, and you should discount far-future fish to the point they don't affect your decisions.

Sakshi bhav ("witnessing attitude") and "scout mindset" are probably nearly synonymous.

Socrates demonstrated that in the transjective relationship between individual and state both ends should be able to face obliteration but that making the other do that is cruel. Even if wrong.

please be less cruel to Mizushino

[This comment is no longer endorsed by its author]
2Slider1y
That is still probably what Socrates meant. I was feeling abnormally low, and have now calibrated to feel more in proper proportion.

Ponder Stibbons was written before Harry Potter? I don't think so.

Some concepts are starting to click into meaningful word variants.

I have previously had the feeling that Slack is a green concept within the Magic: the Gathering color pie. So if there is the green concept, how does the cycle express itself in the other colors?

On a scale from most solid to lesser ones:

Slack - Green - The starting point of the comparison


Excellence - Black - The concept of slack is useful as a remedy or critique. However, seen as a positive force in itself, it is the ruthlessness of Molochian mazes. It is what makes the fangs of the tiger sharp. It is what d...

1Slider3y
The black resource is rather "resolve". I thought it could be "power" (in the sense of being able to make things happen (rather than a bundle of energy, which would be a red concept)). "Resolve" has the aspect that it fuels sacrifice. You are willing to let go of things. One of the sayings is that one has to make peace with the fact that one will/might die when entering a battlefield to battle. An attitude of "it is a good day to die" is a lot about choosing the sacrifice. It also has the aspect of pushing up to a goal state. You don't stop until the goal state is hit, even if it is a bit ambitious and costly. Also, coming up short on resolve can be made sense of. Maybe what you are doing might feel immoral. Maybe you feel that you are losing yourself in pursuit of the goal. From a black perspective this could be called weakness of character. But calling it resolve is neutral, in that maybe sometimes the opposite is good. If you are wrong you should back down.