Steve Whetstone

Sorted by New

# Wiki Contributions

Against Tulip Subsidies

Sellers of education should raise their prices a lot for rich, less capable students and lower tuition a lot for smart, poor students. More poor smart students will get into college, the college will make more tuition off the rich students, and everybody wins. What you want to do is subsidize poor smart students while raising prices on richer, less capable students; then the school makes more money. See how that works? For example:

Tuition price is [\$50,000/year], or discount tuition price is [\$3,000/year + 1/10 of your family's yearly income].

If your family income is \$0/yr, then you, the student, would pay \$3,000/year tuition.
If your family income is \$20,000/yr, you would pay \$5,000/year tuition.
If your family income is \$100,000/yr, you would pay \$13,000/year tuition.
If your family income is \$400,000/yr, you would pay \$43,000/year tuition.
If your family income is \$500,000/yr, you would pay \$53,000/year tuition.
If your family income is \$1,000,000/yr, you would pay \$103,000/year tuition.
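A minimal sketch of this price schedule in Python. The \$50,000 flat rate and the \$3,000 + 1/10-of-income discount formula are from the examples above; the `min()` in `effective_tuition` is my own assumption that a student would pick whichever published price is lower:

```python
def discount_tuition(family_income):
    """Discounted tuition: $3,000 flat fee plus 1/10 of family income."""
    return 3000 + family_income / 10

# Reproduce the figures listed above.
for income, expected in [(0, 3_000), (20_000, 5_000), (100_000, 13_000),
                         (400_000, 43_000), (500_000, 53_000),
                         (1_000_000, 103_000)]:
    assert discount_tuition(income) == expected

def effective_tuition(family_income):
    # Assumption (mine, not stated above): a student would pay the
    # cheaper of the two published prices, so the $50,000 flat rate
    # caps what anyone actually pays.
    return min(50_000, discount_tuition(family_income))
```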

So what this price system does is make more profit for the school and lower prices for poor students at the same time. BUT wait, what do we do about the overproduction of tulips, or degrees? Well, now that we have a price system that's fair, we can just let the school raise prices.

What if the tuition price were \$200,000/year, or a discount tuition price of \$300/year + 1/5 of your family's income? For rich people, 1/5 of the family income is higher than the 1/10 in the previous example, AND the fixed part of the price has decreased from \$3,000 to \$300 per year. So under this second price system, the poor are actually encouraged to become smarter at even lower cost, while the rich are charged even higher prices for the same education. The result is that eventually prices stabilize and there is no oversupply problem? Hmm.
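To compare the two discount schedules numerically (a sketch; the break-even point below is just arithmetic on the two formulas, not something stated in the original):

```python
def scheme_1(income):
    """First example: $3,000 + 1/10 of family income."""
    return 3000 + income / 10

def scheme_2(income):
    """Second example: $300 + 1/5 of family income."""
    return 300 + income / 5

# Break-even: 300 + y/5 == 3000 + y/10  =>  y/10 == 2700  =>  y == 27,000.
# Below $27,000/yr of family income the second schedule is cheaper;
# above it, the second schedule charges more -- poorer students pay
# less while richer students pay more, as the paragraph above argues.
assert scheme_1(27_000) == scheme_2(27_000) == 5_700
assert scheme_2(10_000) < scheme_1(10_000)    # poor pay less
assert scheme_2(500_000) > scheme_1(500_000)  # rich pay more
```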

Maybe not. I'm not sure my idea would work 100%, but thanks for writing the article and sharing it with me so I could read and think about what you wrote too.

[Bay Area LW] San Francisco Meetup: Special gathering for Maia and Roger!

I can help. I live in SF near Civic Center, but I don't know how to scout a venue or what that involves. I sometimes go to meetups for my interests at the Code for San Francisco meetup, but I don't go out much.

It occurs to me as a "Notion" that . . .

To formulate fuzzy logic in a boolean top-domain environment, I think you would need a probabilistic wave-form type of explanation, or else you could just treat fuzzy logic as a conditional multiplier on any boolean truth value. Encapsulating a boolean or strict logic system inside fuzzy logic is trivial and evolving: you could start by attaching a percentage, based on some complex criteria, to any logical tautology or contradiction. By default, the truth axis of a fuzzy-logic decision or logic tree is going to be known for some classes of logic systems. When used for making real-world decisions, in the context of taking action via a decision or vote, the "relevance" value of a fuzzy-logic decision branch would be 0% for a contradiction and 100% for a tautology. So in the real world, we don't consider contradictions when we humans use fuzzy logic to decide on a course of action: the default truth and relevance value of any contradiction in fuzzy logic is zero until voted otherwise or adjusted through some mechanism.
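One way to make the "conditional multiplier" idea concrete (this is my own toy encoding, not a standard fuzzy-logic library): attach a relevance weight in [0, 1] to each boolean branch, with contradictions pinned to 0 and tautologies to 1.

```python
def weighted_branch(truth: bool, relevance: float) -> float:
    """Fuzzy score for a boolean branch: truth value scaled by relevance.

    relevance is a multiplier in [0, 1]; a contradiction gets 0
    (never considered in a decision), a tautology gets 1 (fully
    considered). Intermediate values scale the branch's weight.
    """
    assert 0.0 <= relevance <= 1.0
    return float(truth) * relevance

# A contradiction contributes nothing to a decision...
assert weighted_branch(True, 0.0) == 0.0
# ...while a tautology passes its truth value through unchanged.
assert weighted_branch(True, 1.0) == 1.0
# Partially relevant branches are scaled down accordingly.
assert weighted_branch(True, 0.4) == 0.4
```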

Or maybe this doesn't make sense? Sorry if this post is a little confused; I hadn't thought about these ideas until just now, for this discussion. Thanks for your time, and please let me know if it wasn't worth yours or if it bothered you. I don't want to be a bother, so just ask and I'll go away if you prefer. Thx.

If all true statements are defined as non-contradictory, then you can ask more meaningful fuzzy-logic questions about the relevance of several tautologies for applying to a specific real-world phenomenon. To do this you need a survey or poll of the environment, and a survey or poll for determining how much the tautologies matter. For example:

Consider the following boolean true/false claims we hold to be true, and consider their relevance for locating humans statistically. Our first rule, or fuzzy-logic heuristic, is to take the first tautology that seems relevant and apply it to see if it matches results.

For example, consider these specific logic systems:

1) The complete theory of gravity as discovered by Newton determines that, statistically, humans have mass and a density approximately equal to water. Gravity combined with density predicts that humans should be located in a region of space centered on the gravitational center of the planet, evenly distributed in a sphere with all air above every human and all solid matter below. Evidence of humans living underground, or flying in airplanes above the air, or with air separating humans from the center of gravity, is a violation of this theory of gravity, density, and random-distribution mathematics.

2) The incomplete theory of plate tectonics and geography determines that in some places there will be air closer to the center of gravity than in others. The idealized spherical distribution of humans has bumps and valleys caused by plate tectonics, which asserts that some humans on mountaintops will be at local equilibrium above other air molecules in valleys. Humans at one latitude and longitude can be above the air at some other latitude and longitude.

3) The incomplete theory of human behavior says that humans can move and defy uniform-distribution rules about their statistical, probabilistic location relative to the center of gravity. People go into rocket ships and can even be found above the air, which completely contradicts the theory of gravity considered as the only relevant tautology.

4) The theory of geometry and angular momentum, combined with gravity, proves conclusively that humans must be located exclusively in a squished-sphere (oblate) distribution, with their distance from the center of gravity determined solely by their density relative to the rest of the material in the planetary space under consideration.

Conclusion: not all of these verifiably true boolean statements are equally valuable or equally relevant. Some of them can be discarded, or are more usably incomplete than others. The utility value of any logic system is determined by the use case, and boolean logic components can be added to or removed from consideration (and from time-consuming calculations) based on their predictive ability for that particular use case.

To determine, in this example, the most relevant and important logic systems, I would like anyone who reads this to rank-order the choices above from most relevant and useful to least. The distribution of your rank-order voting will determine the comparative utility value of the multiple non-contradictory tautology logic systems. You may also add one (1) new option to this relevance poll, and others can rank your additional logic framework. When we have enough votes, we start evolving and deleting logic systems from our poll until we have a high level of agreement or a stable equilibrium in the voting distribution.
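One simple way to turn the rank-order votes into comparative utility values is a Borda-style count (my suggestion; the original doesn't specify an aggregation rule, and the ballots below are hypothetical):

```python
from collections import defaultdict

def borda_scores(ballots):
    """Aggregate rank-order ballots into comparative utility scores.

    Each ballot lists options from most to least relevant; an option
    ranked first among n options gets n-1 points, and the last gets 0.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, option in enumerate(ballot):
            scores[option] += n - 1 - rank
    return dict(scores)

# Three hypothetical voters ranking the four logic systems above.
ballots = [
    ["2", "3", "1", "4"],
    ["3", "2", "4", "1"],
    ["2", "3", "1", "4"],
]
scores = borda_scores(ballots)
# With these ballots, system 2 scores highest (8) and system 4 lowest (1),
# giving a comparative utility ordering rather than a binary true/false.
assert scores == {"1": 2, "2": 8, "3": 7, "4": 1}
```

Options with persistently low scores would be the ones "evolved and deleted" from the poll.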

Boolean logic tells us what is possible. Fuzzy logic tells us what is relevant and usable.

Moderation List (warnings and bans)

Ok, thanks for reinstating my original account. Will that reactivate my discussion topic "discussion of society scale benefits. . . " https://www.lesswrong.com/posts/CLMh2Ne7D2H9EaXzy/discussion-re-implementation-of-society-scale-benefits-that ? I see that it did not. Perhaps you decided to renege on your plan to lift the ban?

Moderation List (warnings and bans)

Frontpage commenting guidelines:

Get curious. If I disagree with someone, what might they be thinking; what are the moving parts of their beliefs? What model do I think they are running? Ask yourself - what about this topic do I not understand? What evidence could I get, or what evidence do I already have?

Weak arguments against the universal prior being malign

If you assume the prior has a computational cost-vs-benefit criterion for communicating, gathering, or sharing data, then doesn't that strongly limit the types of data, and the data specifics, that the prior would be interested in? As one commenter pointed out, it may be less expensive to simulate some things in the prior channel than to create an actual new channel (simulation). We can categorize the information a prior could most efficiently transmit and receive across a specific channel into profitable and non-profitable types. Non-profitable information is less expensive to discover or produce without opening a channel. Profitable information for the channel may be limited to very specific kinds of first-principles information.

To use an analogy: humans don't build large-scale simulations to learn something simple they could learn with a smaller, less resource-demanding simulation. I think it has to do with the motives of the prior. If it's just seeking self-advancement, then it would follow the human principle of running any experiment or simulation using the least information and resources necessary. If the prior doesn't seek self-advancement, then it probably never reaches the stage where it can create any simulation at all. So priors are expected to be self-interested, but maybe not interested in us as a method for advancing their interests. You could maybe expect that we are not living in a simulation, because if we were, it would be a very inefficient simulation, or else a simulation with the goal of deriving an extremely complex answer to a very difficult question that can't be discovered in any less expensive, faster, or cheaper way. If the universe contains meaningless data, meaningless information, and irrelevant matter, then we are probably not living in a simulation. If everything we do, and every molecule and every decision, matters and has meaning, then we are probably living in a simulation? Planck's constant seems to be related to the information density of the universe, and would be a good way to roughly calculate its information complexity.

In theory you could test this, and determine whether we are living in a simulation, or else crash a simulation, by imposing overly burdensome calculations that require excessive and infeasible detail. For example, you might measure a star in the sky with extreme precision and collect data that requires 1 trillion years of effort to verify. As soon as you make that measurement, perhaps you have imposed a cost of 1 trillion years of effort for the simulation to maintain consistency, while it only took you 1 year to measure the star to the precision needed. The result is an exponentially increasing inefficiency in the simulation that eventually causes a crash, an end, or some other intervention?