All of Psy-Kosh's Comments + Replies

The Quantum Arena

Ah, never mind then. I was thinking of something like: let b(x,k) = 1/sqrt(2k) when |x| < k, and 0 otherwise;

then define integral B(x)f(x) dx as the limit as k -> 0+ of integral b(x,k)f(x) dx.

I was thinking that then integral (B(x))^2 f(x) dx would be like integral delta(x)f(x) dx.

Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and that wouldn't actually work. (Yeah, I can see now my reasoning wasn't actually valid there. Whoops.)
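(Spelling out the two limits makes the failure explicit; this assumes only that f is continuous at 0:)

```latex
\lim_{k\to 0^+}\int b(x,k)\,f(x)\,dx
  = \lim_{k\to 0^+}\frac{1}{\sqrt{2k}}\int_{-k}^{k} f(x)\,dx
  = \lim_{k\to 0^+}\sqrt{2k}\,f(0) = 0,
\qquad\text{while}\qquad
\lim_{k\to 0^+}\int b(x,k)^2\,f(x)\,dx
  = \lim_{k\to 0^+}\frac{1}{2k}\int_{-k}^{k} f(x)\,dx = f(0).
```

So b(.,k)^2 does converge to delta, but b(.,k) itself converges to the zero distribution: there is no single limit object B that is both zero and a "square root of delta".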

Ah well. Thank you for correcting me, then. :)

Omission vs commission and conservation of expected moral evidence

I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

2Stuart_Armstrong7yThat's interesting - basically here we're trying to educate an AI into human values, but human values are going to swiftly be changed to something different (and bad from our perspective). I think there's no magical solution - either we build a FAI properly (which is very very hard), and it would stop the third party, or we have an AI that we value load and try and prevent our values from changing while it's happening. The omission/commission thing applies to value loading AIs, not to traditional FAI. But I admit it's not the best analogy.
The Quantum Arena

Aaaaarggghh! (sorry, that was just because I realized I was being stupid... specifically that I'd been thinking of the deltas as orthonormal because the integral of a delta = 1.)

Though... it occurs to me that one could construct something that acted like a "square root of a delta", which would then make an orthonormal basis (though still not part of the Hilbert space).

(EDIT: hrm... maybe not)

Anyways, thank you.

5[anonymous]7yI'm not sure what you're trying to construct, but note that one can only multiply distributions under rather restrictive conditions. There are some even more abstract classes of distributions which permit an associative multiplication (Colombeau algebras, generalized Gevrey classes of ultradistributions, and so on) but they're neither terribly common nor fun to work with.
The Quantum Arena

Meant to reply to this a bit back, this is probably a stupid question, but...

The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are equal to zero almost everywhere, and are therefore considered to be equivalent to the zero function.

What about the semi-intuitive notion of having the Dirac delta distributions as a basis? ie, a basis delta(X - R) parameterized by the vector R? How does that fit into all this?

7Dacyn7yGood question! The Dirac delta distributions are a basis in a certain sense, but not in the sense that I was talking about in my previous comment (which is the sense in which mathematicians and physicists say that "the Hilbert space of quantum mechanics has a countable basis"). I realize now that I should have been more clear about what kind of basis I was talking about, which is an orthonormal basis [http://en.wikipedia.org/wiki/Orthonormal_basis] - each element of the basis is a unit vector, and the lines spanned by distinct basis elements meet at right angles. Implicit in this formulation is the assumption that elements of the basis will be elements of Hilbert space. This is why the Dirac delta distributions are not a basis in this sense - they are not elements of Hilbert space; in fact they are not even functions but are rather generalized functions [http://en.wikipedia.org/wiki/Distribution_(mathematics)]. Physicists also like to say that they are "non-normalizable" in the sense that "no scalar multiple of a delta function is a unit vector" - illustrating the failure of the criterion of orthonormality in a more direct way.

The sense in which the Dirac delta distributions are a basis is that any element f of Hilbert space can be written as an integral combination of them: f = \int f(x)\,\delta_x\,dx. (Both sides of this equation are considered in the distributional sense, so what this formula really means is that for any function g, \int fg = \int f(x)\left(\int g\delta_x\right)dx, which is a tautology since \int g\delta_x = g(x).) This is of course a very different statement from the notion of orthonormal basis discussed above.

So what are some differences between these two notions of bases?

1. Orthonormal bases have the advantage that any two orthonormal bases have the same cardinality, allowing dimension to be defined consistently. By contrast, if one applies a Fourier transform to Hilbert space on [0,1], one gets Hilbert space on the integers; but the former has an uncountable basis of Dirac deltas...
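A quick numerical illustration of the non-normalizability point, approximating delta by box functions of shrinking width eps (a toy sketch; the integral stays fixed while the L^2 norm diverges):

```python
import math

# Approximate delta by a box of width eps and height 1/eps: the integral
# stays 1 for every eps, but the L^2 norm 1/sqrt(eps) blows up, so no
# rescaling of the limit can be a unit vector in Hilbert space.
for eps in (1e-1, 1e-3, 1e-6):
    integral = eps * (1.0 / eps)
    l2_norm = math.sqrt(eps * (1.0 / eps) ** 2)
    print(f"eps={eps:g}  integral={integral:g}  L2 norm={l2_norm:g}")
```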
Proper value learning through indifference

Ah, alright.

Actually, come to think of it, even specifying the desired behavior would be tricky. Like, if the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other form of mixed hypothesis re possible future transitions, what rules should an ideal moral-learning reasoner follow today?

I'm not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I'm not sure what the Right Way for a "conserves expected moral evidence" agent is. There're some special cases that seem to be well specified, but I'm not sure how I'd want it to behave in the general case.
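Here's a minimal sketch of the "mix over normalized versions" option, in toy form (my own framing; it assumes a known finite outcome set, which is exactly what an unbounded utility function denies you):

```python
def normalized(u, outcomes):
    """Affine-rescale a utility function to [0, 1] over a finite outcome set.

    This is where unboundedness bites: with no finite sup/inf there is
    nothing to rescale by, and the mixture below becomes undefined.
    """
    vals = [u(o) for o in outcomes]
    lo, hi = min(vals), max(vals)
    return lambda o: (u(o) - lo) / (hi - lo)

def mixed_utility(v, w, p, outcomes):
    """One candidate rule: act today on the p-weighted mix of normalized v and w."""
    vn, wn = normalized(v, outcomes), normalized(w, outcomes)
    return lambda o: p * vn(o) + (1 - p) * wn(o)
```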

Open Thread April 8 - April 14 2014

Not sure. They don't actually tell you that.

Proper value learning through indifference

Really interesting, but I'm a bit confused about something. Unless I misunderstand, you're claiming this has the property of conservation of moral evidence... But near as I can tell, it doesn't.

Conservation of moral evidence would imply that if it expected that tomorrow it would transition from v to w, then right now it would be acting on w rather than v (except for being indifferent as to whether or not it actually transitions to w). But what you have here will, if I understood you correctly, act on v until the moment it transitions to w, even though it knew in advance it was going to transition to w.

3Stuart_Armstrong8yIndeed! An ideal moral reasoner could not predict the changes to their moral system. I couldn't guarantee that, but instead I got a weaker condition: an agent that didn't care about the changes to their moral system.
Open Thread April 8 - April 14 2014

Yeah, found that out during the final interview. Sadly, found out several days ago they rejected me, so it's sort of moot now.

0komponisto8yYikes. Any idea why?
Absence of Evidence Is Evidence of Absence

Alternately, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.
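In toy numbers (an illustration of mine, not from the thread): when two hypotheses predict the absence equally well, the likelihoods cancel and the whole update rides on the complexity-penalized priors.

```python
# Both hypotheses assign probability 1 to "no evidence observed",
# but the contrived one pays a complexity penalty in its prior.
prior = {"plain absence": 0.99, "present but perfectly hidden": 0.01}
likelihood = {"plain absence": 1.0, "present but perfectly hidden": 1.0}

joint = {h: prior[h] * likelihood[h] for h in prior}
z = sum(joint.values())
posterior = {h: p / z for h, p in joint.items()}
print(posterior)  # identical to the prior: equal likelihoods teach you nothing
```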

Open Thread April 8 - April 14 2014

Hey there, I'm in the middle of the application process. (They're having me do the prep work as part of the application.) Anyways...

B) If you don't mind too much: stay at App Academy. It isn't comfortable but you'll greatly benefit from being around other people learning web development all the time and it will keep you from slacking off.

I'm confused about that. App Academy has housing/dorms? I didn't see anything about that. Or did I misunderstand what you meant?

0Jack8yHey. You might have had this question answered already but just in case: they don't have housing or dorms. But they do have room and allow you to put up a cot or inflatable mattress and sleep there for the duration.

Cool! (Though it does seem that a license would be useful for longer trips, so you'd at least have the option of renting a vehicle if needed.)

And interesting point re social environment.

1Gunnar_Zarncke8yHaving the license would be useful, but you have to balance the cost against the benefit. I figured that a) I (or my parents) could put the money to better use, and b) the investment wouldn't pay off. Note that if you are in a relationship or larger family it is usually sufficient that one person has a driver's license (though then better one that includes lorries and/or trailers, which has additional costs).

I'm just going to say I particularly liked the idea of the house cable transport system.

For me it was the least plausible part. I think if the major obstacle to living where you want is the hassle of carting all your stuff around, the most efficient answer surely isn't living in a shipping crate with special content-bracing furniture.

Makes more sense to me to just not bother with "owning" a lot of matter. If every kind of material object you need is available anywhere, all you need to bring with you when you move house is your information (books, music, family pictures, decor configuration for your living space). There's no particul... (read more)

Yeah, that was my very first thought re the tunnels. Excavation is expensive. (and maintenance costs would be rather higher as well.)

OTOH, we don't even need a full solution (including the legal solution) to self-driving cars to improve things. The obvious answer to "but I might need to go on a 200-mile trip" is "rent a long-distance car as needed, and otherwise own a commuter car."

That involves far fewer coordination problems, because it's something one can pretty much do right now. Next time one goes to purchase/lease/whatever a ... (read more)

5Gunnar_Zarncke8yThis I realized as a 15-year-old. I balanced the costs of a driver's license (time and money), the total cost of car(s), and the time spent driving against the costs of public transportation (including occasional larger transports) and freedom of mobility. Note that public transportation saves a lot of time, and time was important to me. So I decided against a car. And I have not regretted it. Since I'm free-lancing I'm using cabs more often. But driving a car myself? What a horrible waste of precious time. Disclaimer: Public transportation is quite good where I live and allows me to work on a laptop during the commute. From my point of view, in an ideal system significant commuting and relocation shouldn't be necessary at all. Besides, leaving a place often means leaving a social environment, which has to be balanced - unless you see independent singles as more highly motivated, or virtual relationships as sufficient.
Meetup : Southeast Michigan

Running a bit late, but still coming, just about to head out.

Meetup : Southeast Michigan

Cool! In that case, as of now at least, I'm still planning on showing up.

Meetup : Southeast Michigan

Well, I could bring a few extra chairs if wanted. (Although are we even still on for tomorrow given how the roads are? (Admittedly, Sunday will probably be worse...))

0Scott Alexander8yWe're still on for whoever can make it. I was out driving today and roads seemed a little better.
Meetup : Southeast Michigan

Well, as I said, anything else needed? (more chairs? other stuff?)

Meetup : Southeast Michigan

As of now, I'm planning on coming.

Anything I should be bringing? (ie, extra chairs, whatever?)

0Scott Alexander8yI think we're good, as long as some people don't mind sitting on the floor.
2arundelo8yLike last time, I'll bring around six chairs & stools and some "my name is" stickers.
Open Thread, November 1 - 7, 2013

Hrm... The whole exist-vs-non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem to be any difference between 1 and N of them as far as that goes.

Open Thread, November 1 - 7, 2013

That seems to be seriously GAZP violating. Trying to figure out how to put my thoughts on this into words but... There doesn't seem to be anywhere that the data is stored that could "notice" the difference. The actual program that is being the person doesn't contain a "realness counter". There's nowhere in the data that could "notice" the fact that there's, well, more of the person. (Whatever it even means for there to be "more of a person")

Personally, I'm inclined in the opposite direction, that even N separate copi... (read more)

0Nick_Tarleton8yWhy not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)
Open Thread: November 2009

I don't think I was implying that physicists are anti-MWI, merely that they don't, as a whole, consider it to be slam-dunk already settled.

Completeness, incompleteness, and what it all means: first versus second order logic

I've been thinking... How is it that we can even meaningfully think about full-semantics second-order logic if physics is computable?

What I mean is... what are we actually doing when we think we're talking about, or thinking about, full semantics? That is, if no explicit rule-following computable thingy can encode rules/etc that pin down full semantics uniquely, what are our brains doing when we think we mean something when we mean "every" subset?

I'm worried that it might be one of those things that feels/seems meaningful, but isn't. That our brains cannot explicitly "pin... (read more)

The Apologist and the Revolutionary

Odd indeed, but if it works for you, that's good. (How long does the effect last?)

How I Lost 100 Pounds Using TDT

Thermodynamics is not any more useful than quantum mechanics in understanding obesity. It is moralizing disguised as an invocation of natural law.

Mm... I guess this would be a case where I agree with the connotations of what you're saying, but not with the explicitly stated form, which I'd say goes a bit too far. It's probably more fair to say "energy-in - energy-spent - energy-out-without-being-spent = net delta energy" is part of the story, simply not the whole story.

It doesn't illustrate the ways in which, say, one might become unwell/fa... (read more)

Meetup : Southeast Michigan

Oh, incidentally, just commenting that's a good date, it's the anniversary of a certain One Small Step. :)

Start Under the Streetlight, then Push into the Shadows

Science itself would be a major "flashlight", I guess?

Can we dodge the mindkiller?

Alternative vote is Instant Runoff Voting, right? If so, then it's bad, for it fails the monotonicity criterion. That means that raising one's vote for a particular candidate doesn't necessarily do the obvious thing.

Personally, I favor Approval Voting, since it seems to be the simplest possible change to our voting system that would still produce large gains.

(Also, would be nice if we (the US, that is) could switch to algorithmic redistricting and completely get rid of the whole gerrymandering nonsense.)
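To make the monotonicity failure concrete, here's a toy illustration (the ballot profile is my own construction, not from the thread): raising A on four ballots, and changing nothing else, flips A from winner to loser under IRV.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: a ballot counts for its top surviving candidate;
    repeatedly eliminate whoever has the fewest first-place votes."""
    alive = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(next(c for c in ballot if c in alive) for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots) or len(alive) == 1:
            return leader
        alive.remove(min(tally, key=tally.get))

before = ([("A", "B", "C")] * 8 + [("B", "A", "C")] * 7 +
          [("C", "A", "B")] * 4 + [("C", "B", "A")] * 6)
# Four C>A>B voters raise A to the top (A>C>B); nothing else changes.
after = ([("A", "B", "C")] * 8 + [("B", "A", "C")] * 7 +
         [("A", "C", "B")] * 4 + [("C", "B", "A")] * 6)

print(irv_winner(before))  # A wins the original election...
print(irv_winner(after))   # ...but B wins after A was *raised* on 4 ballots
```

In the first election B is eliminated first and B's transfers elect A; raising A knocks C out first instead, and C's transfers elect B.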

Rationality Quotes June 2013

Hrm... But "self-interest" is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc... Seems like it wouldn't be that hard a step, given the evolutionary pressures there have been toward cooperation and such, for it to be implemented via actually caring about the other person's well-being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.

0pinyaka9yWell, the trivializing gambit here would be to just say that "caring about another person" just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation, and so your desire to help is triggered ultimately by the desire to remove this source of distress. I'm not sure how concern for another's well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (ie how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves so that we could mount a better offense or defense. The simplest mechanism would be to use a facial expression or posture to cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you're looking for information, not to literally feel the same thing at the same intensity - when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression). It's worth noting (for me) that this doesn't diminish the importance of empathy, and it doesn't mean that I don't really care about others. I think that caring for others is ultimately rooted in self-centeredness but, like depth perception, is probably a pre-installed circuit in our brains (a type I system) that we can't really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you're trying to do something specific with your caring circuits (or trying to figure out how to emulate them).
Rationality Quotes June 2013

Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?

0pinyaka9yI meant that everyone I've discussed the subject with believes that self-interest exists as a motivating force, so maybe "additional" would have been a better descriptor than "new."
Causal Universes

Re your checking method to construct/simulate an acausal universe: it won't work, near as I can tell.

Specifically, the very act of verifying a string to be a life (or life + time travel or whatever) history requires actually computing the CA rules, doesn't it? So in the act of verification, if nothing else, all the computing needed to make a string that contains minds actually contain the minds would have to occur, near as I can make out.
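In code form (a sketch of mine, assuming plain Conway's Life on a toroidal grid): the only way to check a claimed history is to run the very update rule the construction was hoping to sidestep.

```python
import numpy as np

def life_step(grid):
    """One tick of Conway's Life on a toroidal grid of 0s and 1s."""
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

def verify_history(frames):
    """'Verify' a candidate history, which just *is* simulating it:
    every frame must be checked by computing the CA rule on its predecessor."""
    return all(np.array_equal(life_step(a), b)
               for a, b in zip(frames, frames[1:]))
```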

Sorting Pebbles Into Correct Heaps

He wasn't endorsing that position. He was saying "pebblesorters should not do so, but they pebblesorter::should do so."

ie, "should" and "pebblesorter::should" are two different concepts. "should" appeals to that which is moral, "pebblesorter::should" appeals to that which is prime. The pebblesorters should not have killed him, but they pebblesorter::should have killed them.

Think of it this way: imagine the murdermax function that scores states/histories of reality based on how many people were murdered. The... (read more)

He wasn't endorsing that position. He was saying "pebblesorters should not do so, but they pebblesorter::should do so."

You didn't understand what TheOtherDave said. He was talking about the same usage you are talking about and commenting that it is in contrast to Eliezer's past usage (and past advocacy of usage in conversations about how he uses should-related words.)

A Series of Increasingly Perverse and Destructive Games

Looking down the thread, I think one or two others may have beaten me to it too. But yes, it seems at least that Omega would be handing the programmers a really nice toy and (conditional on the programmers having the skill to wield it), well...

Yes, there is that catch, hrm... Could put something into the code that makes the inhabitants occasionally work on the problem, thus really deeply intertwining the two things.

0private_messaging9yThis is what's rather unsatisfactory with the notion of subjective experience as 'computation' - optimizations that do not affect the output may be unsafe from the inside perspective. Even if the beings inside the simulator sometimes work on the problem, the hyper-compiler might optimize too much out. Essentially, you end up with 'zombie' hypercomputers that don't have anyone inside, and 'non-zombie' hypercomputers inside of which beings really live.
A Series of Increasingly Perverse and Destructive Games

Game3 has an entirely separate strategy available to it: Don't worry initially about trying to win... instead code a nice simulator/etc for all the inhabitants of the simulation, one that can grow without bound and allows them to improve (and control the simulation from inside).

You might not "win", but a version of the three players will go on to found a nice large civilization. :) (Take that, Omega.)

(In the background, have it also running a thread computing increasingly large numbers and some way to randomly decide which of some set of numbers to ou... (read more)

0loup-vaillant9yArgh, you beat me to it! But frankly, how's that not obvious? Omega is giving us unbounded computational power, and we wouldn't use it? Now there may be a catch. Nothing says the hyper-computer actually computes the programs, even those that do return a value. It could for instance detect the separation between your nice simulated advanced civilization and the background program, and not compute the simulation at all. You could counteract that strategy, but then the Hyper-computer may be smarter than that.
The Level Above Mine

You know, I want to say you're completely and utterly wrong. I want to say that it's safe to at least release The Actual Explanation of Consciousness if and when you should solve such a thing.

But, sadly, I know you're absolutely right re the existence of trolls which would make a point of using that to create suffering. Not just to get a reaction, but some would do it specifically to have a world they could torment beings.

My model is not that all those trolls are identical (in that I've seen some who will explicitly, unambiguously draw the line and recognize that egging on suicidal people is something that One Does Not Do, but I've also seen that all too many gleefully do exactly that).

0MugaSofer9yIt's worth noting that private torture chambers seem different to trolling, but a troll can still set up a torture chamber - they just care about people's reaction to it, not the torture itself.
Lifeism in the midst of death

I'm sorry. *offers a hug* Not sure what else to say.

For what it's worth, in response to this, I just sent $20 to each of SENS and SIAI.

1TobyBartels9yThanks!
The Evil AI Overlord List

I was imagining that a potential blackmailer would self-modify into/be an Always-Blackmail-bot specifically to make sure there would be no incentive for potential victims to be "never-give-in-to-blackmail" bots.

But that leads to a stupid equilibrium of plenty of blackmailers and no participating victims. Everyone loses.

Yes, I agree that no blackmail seems to be the Right Equilibrium, but it's not obvious to me exactly how to get there without the same reasoning that leads to becoming a never-give-in-bot also leading potential blackmailers to becoming al... (read more)

0tim9yMy argument is more or less as follows:

1. The act of agent A blackmailing agent B costs agent A more than not blackmailing agent B (at the very least, A could use the time spent saying "if you don't do X then I will do Y" on something else).
2. If A is an always-blackmail-bot, then A will continue to incur the costs of futilely blackmailing B (given that B does not give in to blackmail).
3. If the costs of blackmailing B (and/or following through with the threat) are not negative, then A should blackmail B (and/or follow through with the threat) regardless of B's position on blackmail. And by extension, agent B has no incentive to switch from his or her never-give-in strategy.
4. If A inspects B and determines that B will never give in to blackmail, then A will not waste resources blackmailing B.
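Toy payoffs for this argument (illustrative numbers of mine, not tim's): against a committed never-give-in victim, every blackmailing line nets the blackmailer less than doing nothing, so the best response in point 4 falls out immediately.

```python
THREAT_COST = 1    # cost to A of issuing the threat (point 1)
FOLLOW_COST = 5    # cost to A of carrying the threat out
GAIN_IF_PAID = 10  # what A gets if B gives in

def blackmailer_payoff(blackmail, victim_gives_in, follow_through):
    if not blackmail:
        return 0
    if victim_gives_in:
        return GAIN_IF_PAID - THREAT_COST
    return -THREAT_COST - (FOLLOW_COST if follow_through else 0)

# Against a never-give-in bot, A's best blackmailing line still loses,
# so A's best response is not to blackmail at all:
assert max(blackmailer_payoff(True, False, f) for f in (True, False)) < 0
```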
The Evil AI Overlord List

I was thinking along the lines of the blackmailer using the same reasoning: deciding that, whether or not the potential victim would be a blackmail-ignorer, the blackmailer would still blackmail regardless.

ie, the Blackmailer, by reasoning similar to the potential Victim's, decides that they should make sure the victim has nothing to gain by choosing to ignore, by making sure that they themselves (the Blackmailer) precommit to ignoring whether or not the victim would ignore it. ie, in this sense the blackmailer is also taking a "do nothing" thing in the se... (read more)

3Eliezer Yudkowsky9yOnly if you expect that the blackmail victim has not "already" decided that if the blackmailer does that, they will still ignore the blackmail regardless. Wise agents ignore order-0 blackmail, ignore order-1 blackmail in which the blackmailer decides to ignore their ignorance of order-0 blackmail, ignore order-omega blackmail in which the blackmailer decides to ignore all order-N refusals to be blackmailed, etcetera for all ordinals. If there is some ordinal of blackmail you do not ignore, you can be blackmailed, and how does that help? Only if those blackmailers have wrongly anticipated that their victims will be stupid enough to conform.
The Evil AI Overlord List

Wouldn't the blackmailer reason along the lines of "If I let my choice of whether to blackmail be predicated on whether or not the victim would take my blackmailing into account, wouldn't that just give them motive to predict and self-modify to not allow themselves to be influenced by that?" Then, by the corresponding reasoning, the potential blackmail victims might reason "I have nothing to gain by ignoring it."

I'm a bit confused on this matter.

0tim9ySince following through with a threat is (almost?) always costly to the blackmailer, victims do gain something by ignoring it. They force the blackmailer to put up or shut up so to speak. On the other hand, victims do have something to lose by not ignoring blackmail. They allow their actions to be manipulated at little to no cost by the blackmailer. That is, if you have a "never-give-into-blackmail-bot" then there is a "no-blackmail" equilibrium. The addition of blackmail does nothing but potentially impose costs on the blackmailer. If following through with threat was a net gain for the blackmailer then they should just do that regardless.
3Eliezer Yudkowsky9yWell, sure, if the blackmail victim were silly enough to reason "I have nothing to gain by ignoring it" if the blackmailer went through anyway, then the blackmailer would indeed decide to ignore their decision to ignore it and go through anyway. But that's only if the blackmail victim is that silly. In a problem like this, the "do nothing" side has the advantage; there's nothing the other side can do to make them be responsive and blackmailable. That's why I expect TDT to resolve to a blackmail-free equilibrium.
What does the world look like, the day before FAI efforts succeed?

The idea is not "take an arbitrary superhuman AI and then verify it's destined to be well behaved" but rather "develop a mathematical framework that allows you from the ground up to design a specific AI that will remain (provably) well behaved, even though you can't, for arbitrary AIs, determine whether or not they'll be well behaved."

Causal Reference

How, precisely, does one formalize the concept of "the bucket of pebbles represents the number of sheep, but it is doing so inaccurately"? ie, that it's a model of the number of sheep rather than of something else, but a bad/inaccurate model?

I've fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.

The best I can come up with is something like "I have correct models in my head for the bucket, pebbles, sheep, etc, individually except that I also have some causal paths linking them that don't match the links that exist in reality."

3fubarobfusco9ySee this thread [http://lesswrong.com/lw/c4h/rationality_quotes_may_2012/6hsb?context=1#6hsb] for a discussion. A less buck-passing model is: "This bucket represents the sheep ... plus an error term resulting from this here specific error process." For instance, if I systematically count two sheep exiting together as one sheep, then the bucket represents the number of sheep minus the number of sheep-pairs erroneously detected as one sheep. It's not enough to say the sheep-detector is buggy; to have an accurate model of what it does (and thus, what its representations mean) you need to know what the bug is.
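That error-process framing is easy to make concrete (a toy sketch of mine, using the pairs-counted-as-one example):

```python
import random

# Each entry is the size of a group passing the gate together; the buggy
# counter drops one pebble per *group*, merging pairs into a single count.
groups = [random.choice([1, 1, 1, 2]) for _ in range(20)]
true_sheep = sum(groups)
pebbles = len(groups)

# The bucket represents "sheep minus pairs merged by this specific bug",
# ie an accurate model of an inaccurate counter:
assert pebbles == true_sheep - sum(g - 1 for g in groups)
```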
[Link] Offense 101

But you can argue for anything. You might refuse to do so but the possibility is always there.

Presumably one would want to define "strong argument" in such a way that strong arguments tend to be more available for true things than for false things.

The Fabric of Real Things

Koan 4: How well do mathematical truths fit into this rule of defining what sort of things can be meaningful?

-3jslocum9yMathematics is a mental construct created to reliably manipulate abstract concepts. You can describe mathematical statements as elements of the mental models of intelligent beings. A mathematical statement can be considered "true" if, when intelligent beings use the statement in their reasoning, their predictive power increases. Thus, "'4+4=8' is true" implies statements like "jslocum's model of arithmetic predicts that '4+4=8', which causes him to correctly predict that if he adds four carrots to his basket of four potatoes, he'll have eight vegetables in his basket". I'm not sure that "use the statement in their reasoning" and "their predictive power increases" are well-formed concepts, though, so this might need some refining.
-1Slackson9yMost mathematics is isomorphic to typographical or computational rules. I'm pretty sure these can be encoded into a causal diagram which connects with the real world.
-3Bruno_Coelho9yI see mathematics as mostly being about future physics laws that will be discovered. Math without empirical confirmation is more difficult to link, but normally it is a matter of time before an application is found.
1dankane9yThis seems especially difficult noting that although we can claim that things are caused by certain mathematical truths, it doesn't really make sense to include them in our Bayesian net unless we could say, for example, how anything else would be different if 2+2=3.
Random LW-parodying Statement Generator

"what is true is already so. the statement that "a/an upload of Pinkie Pie will kill you because you are made of the utility function of the Society for Rare Diseases in Cute Puppies that it could use for something else." doesn't make it worse" is obviously false? Have a lot of caring!

hrm...

You make a compelling argument that a/an babyeater is the art of winning at infanticide.

I guess that one works.

Irrationality Game II

2% is way way way WAY too high for something like that. You shouldn't be afraid to assign a probability much closer to 0.

Open Thread: November 2009

Fair enough. (Well, technically both should move at least a little bit, of course, but I know what you mean.)

It would cause me to update in the direction of believing that more physicists probably see MWI as slam-dunk.

Hee hee. :)

Less Wrong Product & Service Recommendations

The Touch 3G is limited to stuff like Wikipedia and Amazon. (I have a Touch, and I like it, btw.) More general net access on the Kindle Touch is only via wifi.

Backward Reasoning Over Decision Trees

But... even given them not being that clever, you'd think they'd know that the ability to arbitrarily slice and dice a bill would be too much. (I know I may be displaying hindsight bias, but... they're politicians! They have to have had experience with, say, people taking their (or colleagues') words out of context and making it sound like something else, or they themselves doing it to an opponent, right?)

ie, the ability to slice and dice some communication into something entirely different would be something you'd think they'd already have personal experience with. At least, that's what I'd imagine. Though, still, Hanlon's Razor and all that.

Many many years ago, I was contacted by a coworker for an estimate of what it would take to modify our tax-calculation code to support a new law recently passed by the Texas legislature, which specified that the first $25 of Internet charges for Texas-residential subscribers was free of state taxes.

I asked "Well, that depends. What does 'first' mean? If they mean that on every bill we should reduce the taxable basis by $25, we can do that easily. If they mean that on every bill we should chronologically sort all the charges and then exclude those char... (read more)
