ACrackedPot's Shortform

by ACrackedPot, 23rd Apr 2021

I really, really dislike waste.

But the thing is, I basically hate the way everybody else hates waste, because I get the impression that they don't actually hate waste, they hate something else.

People who talk about limited resources don't actually hate waste - they hate the expenditure of limited resources.

People who talk about waste disposal don't actually hate waste - they hate landfills, or trash on the side of the road, or any number of other things that aren't actually waste.

People who talk about opportunity costs ('wasteful spending') don't hate the waste, they hate how choices were made, or who made the choices.

Mind, wasting limited resources is bad.  Waste disposal is itself devoting additional resources - say, the land for the landfill - to waste.  And opportunity costs are indeed the heart of the issue with waste.

At this point, the whole concept of finishing the food on your plate because kids in Africa don't have enough to eat is the kind of old-fashioned where jokes about it being old-fashioned are themselves becoming old-fashioned, but the basic sentiment there really cuts to the heart of what I mean by "waste", and what makes it a problem.

Waste is something that isn't used.  It is value that is destroyed.

The plastic wrapping your meat that you throw away isn't waste.  It had a purpose to serve, and it fulfilled it.  Calling that waste is just a value judgment on the purpose it was put to.  The plastic is garbage, and the conflation of waste and garbage has diminished an important concept.

Food you purchase, that is never eaten and thrown away?  That is waste.  Waste, in this sense, is the opposite of exploitation.  To waste is to fail to exploit.  However, we now use the word waste to mean simply that we don't approve of the way something is used, and this use of the word to express disapproval has basically destroyed - not the original use of the word, but the root meaning that gives the word its weight when used to express disapproval.  Think of the term "wasteful spending" - you already know the phrase just means spending that the speaker disapproves of; the word "wasteful" has lost all other significance.

Mind, I'm not arguing that "waste" literally only means a specific thing.  I'm arguing that an important concept has been eroded by use by people who were deliberately trying to establish a link with that concept.

Which is frustrating, because it has eroded a class of criticisms that I think society desperately needs.  These have been supplanted by criticisms rooted in things like environmentalism, even when environmentalism isn't actually a great fit for them - it's just the framing for this class of criticism for which there is a common conceptual referent, a common symbolic language.

And this actually undermines environmentalism; think about corporate "green" policies, and how often they're actually cost-cutting measures.  Cutting waste, once upon a time, had the same kind of public appeal; now if somebody talks about cutting waste, I'm wondering what they're trying to take away from me.  We've lost a symbol in our language, and the replacement isn't actually a very good fit.

now if somebody talks about cutting waste, I'm wondering what they're trying to take away from me.

This probably applies to applause lights in general. Individuals sometimes do things for idealistic reasons, but corporations are led by people selected for their ability to grab power and resources. Therefore all their actions should be suspected of being an attempt to gain more power and/or resources. A "green policy" might mean less toilet paper in the company bathrooms, but it never means fewer business trips for the management.

This concept is not fully formed.  It is necessary that it is not fully formed, because once I have finished forming it, it won't be something I can communicate any longer; it will become, to borrow a turn of phrase from SMBC, rotten with specificity.

I have noticed a shortcoming in my model of reality.  It isn't a problem with the accuracy of the model, but rather that an important feature of the model is missing.  It particularly has to do with people, and the shortcoming is this: I have no conceptual texture, no conceptual hook, for attaching nebulous information to people.

To explain what I need a hook for: a professional acquaintance has reciprocated trust.  There is a professional relationship there, but also a human interaction; the trust involved means we can proceed professionally without negotiating contractual terms beforehand.  However, it would undermine the trust in a very fundamental way to actually treat this as the meaning of the trust.  That is to say, modeling the relationship as transactional would undermine the basis of the relationship (but for the purposes of describing things, I'm going to do that anyway, basically because it's easier to explain that way; no fair description of a relationship of any kind fits into a small number of words).

I have a pretty good model of this person, as a person.  They have an (adult) child who has developed a chronic condition; part of basic social interaction is that, having received this information, I need to ask about said child the next time we interact.  This is something that is troubling this person; my responsibility, to phrase it in a misleading way, is to acknowledge them, to make what they have said into something that has been heard, and in a way that lets them know that they have been heard.

So, returning to the shortcoming: I have no conceptual texture to attach this to.  I have never built any kind of cognitive architecture that serves this kind of purpose; my interactions with other humans are focused on understanding them, which has basically served me socially thus far.  But I have no conceptual hook to attach things like "Ask after this person's child".  My model is now updated to include the pain of that situation; there is nothing in the model that is designed to prompt me to ask.  I have heard; now I need to let this person know that they have been heard, and I reach for a tool I suddenly realize has never existed.  I knew this particular tool was necessary, but have never needed it before.

It's maybe tempting to build general-purpose mental architecture to deal with this problem, but as I examine it, it looks like maybe this is a problem that actually needs to be resolved on a more individual basis, because as I mentally survey the situation, a large part of the problem in the first place is the overuse of general-purpose mental architecture.  I should have noticed this missing tool before.

I am not looking for solutions.  Mostly it is just interesting to notice;  usually, with these sorts of things, I've already solved the problem before I've had a chance to really notice, observe, and interact with the problem, much less notice the pieces of my mind which actually do the solving.  Which itself is an interesting thing to notice; how routine the construction of this kind of conceptual architecture has gotten, that the need for novel mental architecture actually makes me stop for a moment, and pay attention to what is going on.

It can sometimes be hard to notice the things you mentally automate; the point of automating things is to stop noticing them, after all.

That's a fascinating observation! When I introspect the same process (in my case, it might be "ask how this person's diabetic cat is doing"), I find that nothing in the model itself is shaped like a specific reminder to ask about the cat. The way I end up asking is that when there's a lull in the conversation, I scan the model for recent and important things that I'd expect the person might want to talk about, and that scan brings up the cat. My own generalizations, in turn, likely leave gaps which yours would cover, just as the opposite seems to be happening here.

Observe.  (If you don't want to watch or can't: it's a video showing the compression wave that forms in traffic when a car brakes.)

I first saw that video a few years ago.  I remembered it a few weeks ago when driving in traffic, and realizing that a particular traffic condition was caused by an event that had happened some time in the past, that had left an impression, a memory, in the patterns of traffic.  The event, no longer present, was still recorded.  The wave form in the traffic patterns was a record of an event - traffic can operate as a storage device.

Considering traffic as a memory storage device, it appears traffic reaches its capacity when it is no longer possible for traffic to move at all.  In practice I think traffic would approach this limit asymptotically, as each additional event stored in its memory reduces the speed at which traffic moves, such that it takes additional time to store each event in proportion to the number of events already stored.  That is, the memory storage of traffic is not infinite.

Notably, however, the memory storage of traffic can, at least in principle, be read. Traffic itself is an imperfect medium of storage - it depends on a constant flow of cars, or else the memory is erased, and also cars don't behave uniformly, such that events aren't stored perfectly.  However, it isn't perfectly lossy, either; I can know from an arbitrary slow-down that some event happened in traffic in the past.

You could probably program a really, really terrible computer on traffic.
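
To make the "terrible computer" idea a bit more concrete, here's a minimal sketch of the effect - a toy car-following rule I'm making up for illustration, not a validated traffic model.  One car brakes briefly at the start; the resulting slow patch tends to persist and drift backward through the ring of cars well after the original braking is over, which is the "stored event" described above.

```python
N_CARS = 30
ROAD_LEN = 300.0          # metres, on a circular road
CRUISE = 15.0             # target cruising speed, m/s
SAFE_GAP = 8.0            # start braking when the gap ahead drops below this
DT = 0.5                  # timestep, seconds

positions = [i * ROAD_LEN / N_CARS for i in range(N_CARS)]
speeds = [CRUISE] * N_CARS

def step(t):
    """Advance every car one timestep using crude follow-the-leader rules."""
    global positions, speeds
    new_speeds = []
    for i in range(N_CARS):
        gap = (positions[(i + 1) % N_CARS] - positions[i]) % ROAD_LEN
        v = speeds[i]
        if i == 0 and t < 5:
            v = 2.0                                 # the original event: car 0 brakes briefly
        elif gap < SAFE_GAP:
            v = max(0.0, min(v, (gap - 2.0) / DT))  # too close: brake, never overrun the gap
        else:
            v = min(CRUISE, v + 1.0 * DT)           # clear road: slowly accelerate back to cruise
        new_speeds.append(v)
    speeds = new_speeds
    positions = [(p + v * DT) % ROAD_LEN for p, v in zip(positions, speeds)]

for t in range(240):
    step(t)
    if t % 40 == 0:
        slow = sum(1 for v in speeds if v < CRUISE / 2)
        print(f"t={t * DT:5.1f}s  cars at under half cruising speed: {slow}")
```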

But the more interesting thing, to me, is the idea that a wave in a moving medium can store information; this seems like the sort of "Aha" moment that, were I working on a problem that this could apply to, would solve that problem for me.  I'm not working on such a problem, but maybe this will find its way to somebody who is.

Edit: It occurs to me this may look like a trivial observation; of course waves can store information, that's what radio is.  But that's not what I mean; radio is information transmission, not information storage.  It's the persistence of the information that surprised me, and which I think might be useful, not the mere act of encoding information onto a wave.

Data storage and transmission are the same thing.  Both are communication to the future (though sometimes to the very, very near future).  Over long enough distances, radio and wires can be information storage.  Like all storage media, they aren't permanent, and need to be refreshed periodically.  For waves, this period is short - microseconds to hours.  For more traditional storage (clay tablets, or engraved gold discs sent into space, for instance), it could be decades to millennia.

Traffic is quite lossy as an information medium - effects remain for hours, but there are MANY possible causes of the same effects, and hard-to-predict decay and reinforcement rates, so it only carries a small amount of information: something happened in the recent past.  Generally, this is a good thing - most of us prefer that we're not part of that somewhat costly information storage, and we pay traffic engineers and car designers a great deal of money to minimize information retention in our roads.
 

You have (re)invented delay-line memory!

Acoustic memory in mercury tubes was indeed used by most first-generation electronic computers (1948-60ish); I love the aesthetic, but admit they were terrible even compared to electromagnetic delay lines. An even better (British) aesthetic would be Turing's suggestion of using gin as the acoustic medium...
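
For anyone who hasn't run into the concept: here's a minimal sketch of how a delay-line memory works, as a toy Python model rather than a description of any real machine.  Bits circulate through a fixed-length delay, and whatever emerges at the far end is re-shaped and fed back in; the data persists only as long as that recirculation keeps running - stop refreshing and it's gone, much like traffic coming to a stop erasing its record.

```python
from collections import deque

class DelayLine:
    def __init__(self, length: int):
        # The "acoustic" delay: a fixed-length pipeline of bits in flight.
        self.line = deque([0] * length, maxlen=length)

    def tick(self, write_bit=None) -> int:
        """Advance one bit-time: the oldest bit emerges at the far end, and either
        it (refreshed) or a newly written bit goes back in at the near end."""
        out = self.line.popleft()
        self.line.append(out if write_bit is None else write_bit)
        return out

mem = DelayLine(length=8)
word = [1, 0, 1, 1, 0, 0, 1, 0]

for bit in word:                 # write: inject one bit per tick
    mem.tick(write_bit=bit)

for _ in range(8 * 100):         # 100 full recirculations; the word persists only via refresh
    mem.tick()

readback = [mem.tick() for _ in range(8)]
print(readback == word)          # True -- but only because we kept ticking
```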

It took me a little while to understand what criticism Aella raised over Eliezer's defense of the concept of truth.

So to try to summarize what I am now reasonably certain the criticism was:

Eliezer argues that "truth", as a concept, reflects the idea that our expectations of reality can match our experiences of reality.

Aella's criticism is that "of reality" adds nothing to the previous sentence, and Eliezer is sneaking reality into his concept of truth; that is, Eliezer's argument can be reframed "Our expectation of our experiences can match our experiences".

The difficulty I had in understanding Aella's argument is that she framed it as a criticism of the usefulness of truth itself.  That is, I think she finds the kind of "truth" we are left with, after subtracting a reality (an external world) that adds nothing to it, to be kind of useless (either that, or she expects readers to).

Whereas I think it's basically the same thing.  Just as subtracting "of reality" removes nothing from the argument, I think adding it doesn't actually add anything to the argument, because I think "reality", or "external world", are themselves just pointers at the fact that our experiences can be expected, something already implicit in the idea of having expectations in the first place.

Reality is just the pattern of experiences that we experience.  Truth is a pattern which correlates in some respect with some subset of the pattern of experiences that we experience.

I have read that criticism, and...

When I expect it to rain and then it doesn’t and I feel surprised, what is happening? In my subjective experience, this moment, I am imagining a prior version of myself that had a belief about the world (it will rain!), and I am holding a different belief than what I imagine my previous self had (It isn’t raining!). I am holding a contrast between those two, and I am experiencing the sensation of surprise. This is all surprise is, deep down. Every interpretation of reality can be described in terms of a consistent explanation of the feeling of our mental framework right at this moment.

...it feels like some map-and-territory confusion. It's like if I insisted that the only things that exist are words. And you could be like: "dude, just look at this rock! it is real!", and I would say: "but 'dude', 'just', 'look', 'at', 'this', 'rock', 'it', 'is', and 'real' are just words, aren't they?" And so on, whatever argument you give me, I will ignore it and merely point out that it consists of words, therefore it ultimately proves me right. -- Is this a deep insight, or am I just deliberately obtuse? To me it seems like the latter.

By this logic, it's not even true that two plus two equals four. We only have a sensation of two plus two being four. But isn't it interesting that these "sensations" together form a coherent mathematics? Nope, we only have a sensation of these sensations forming a coherent mathematics. Yeah, but the reason I have the sensation of math being coherent is because the math actually is coherent, or isn't it? Nah, you just have a sensation of the reason of math's coherency being the math's actual coherency. But that's because... Nope, just a sensation of becauseness...

To make it sound deeper: the moon allegedly does not exist, because your finger that points at it is merely a finger.

EDIT: A comment below the criticism points out that the argument against reality can be also used as an argument against existence of other people (ultimately, only your sensations of other people exist), therefore this line of thought logically ends at solipsism.

EDIT: It's actually quite an interesting blogger! The article on reality didn't impress me, but many others did. For example, Internet communities: Otters vs. Possums is a way more charitable interpretation of the "geeks and sociopaths" dynamics in communities.

Her writing is pretty good, yeah.

The rest of the blog made me pause on the article for a lot longer than I usually would have, to try to figure out what the heck she was even arguing.  There really is a thing there, which is why when I figured it out I came here and posted it.  Apparently it translates no better than her own framing of it, which I find interesting.

Talking about words is an apt metaphor, but somewhat misleading in the specifics.  Abstractly, I think Aella is saying that, in the map-territory dichotomy, the "territory" part of the dichotomy doesn't actually add anything; we never experience the territory, it's a strictly theoretical concept, and any correspondence we claim to have between maps and territory is actually a correspondence of maps and maps.

When you look at the world, you have a map; you are seeing a representation of the world, not the world itself.  When you hear the world, you have a map.  All of your senses provide maps of the world.  Your interpretation of those senses is a map-of-a-map.  Your model of those interpretations is a map-of-a-map-of-a-map.  It's maps all the way down, and there is no territory to be found anywhere.  The "territory" is taken axiomatically - there is a territory, which maps can match better or worse, but it is never actually observed.  In this sense, there is no external world, because there is no reality.

I think the criticism here is of a conceptualization of the universe in which there's a platonic ideal of the universe - reality - which we interact with, and with regard to which we can make little facsimiles - theories, or statements, or maps - which can be better or worse reproductions of the ideal (more or less true).

So strictly speaking, this it's-all-maps explanation is also misleading.  It's territory all the way down, too; your sight isn't a map of reality, it is part of reality.  There are no maps; everything is territory.  There is no external reality because there is not actually a point at which we go from "things that aren't real" to "things that are real", and on a deeper level, there's not a point at which we go from the inside to the outside.

 

Is an old map of a city, which is no longer accurate, true?

The "maps all the way down" does not explain why there is (an illusion of) a reality that all these maps are about. If there is no underlying reality, why aren't the maps completely arbitrary?

The criticism Aella is making is substantively different than "reality isn't real".

So, imagine you're god.  All of reality takes place in your mind; reality is literally just a thought you had.  How does Eliezer's concept of "truth" work in that case?

Suppose you're mentally ill.  How much should you trust something that claims to be a mind?  Is it possible for imaginary things to surprise you?  What does truth mean, if your interface to the "external world"/"reality" isn't reliable?

Suppose you're lucid dreaming.  Does the notion of "truth" stop existing?

 

(But also, even if there is no underlying reality, the maps still aren't going to be completely arbitrary, because a mind has a shape of its own.)

So, imagine you're god.  All of reality takes place in your mind; reality is literally just a thought you had.  How does Eliezer's concept of "truth" work in that case?

Then the god's mind would be the reality; god's psychological traits would be the new "laws of physics", kind of.

I admit I have a problem imagining "thoughts" without also imagining a mind. The mechanism that implements the mind would be the underlying reality.

We can suppose that the god is just observing what happens when a particular mathematical equation runs; that is, the universe can, in a certain sense, be entirely independent of the god's thoughts and psychological traits.

Independence might be close enough to "external" for the "external world" concept to apply; so we can evaluate reality as independent from, even for argument's sake external to, the god's mind, even though it exists within it.

So we can have truth which is analogous to Eliezer's truth.

Now, the question is - does the "external world" and "independence" actually add anything?

Well, suppose that the god can and does alter things; observes how the equation is running, and tweaks the data.

Does "truth" only exist with respect to the parts of this world that the god hasn't changed?  Are the only "true" parts of this reality the parts that are purely the results of the original equation?  If the god makes one adjustment ever, is truth forever contaminated?

Okay, let's define the external world to be the equation itself.  The god can choose which equation to run, and can adjust the parameters; where exactly in this process does truth itself lie?  Maybe in the mathematics used to run the equation?  But mathematics is arbitrary; the god can alter the mathematics.

Well, up to a point, the point Aella points at as "consistency."  So there's that piece; the truth has to at least be consistent.  And I think I appreciate the "truth" of the universe that isn't altered; there's consistency there, too.

Which leaves the other part, experience.

Suppose, for a moment, we are insane (independently, just imagine being insane); the external reality we observe is illusory.  Does that diminish the value of what we consider to be the truth in anticipating our experiences?  If this is all a grand illusion - well, it's quite a consistent illusion, and I know what will happen when I engage in the experience I refer to when I say I drop an apple.  I call the illusion 'reality', and it exists, regardless of whether or not it satisfies the aesthetic ideal I have for what "existence" should actually mean.

Which is to say - it doesn't matter if I am living in reality, or in a god's mathematical equation, or in a fantasy.  The existence or nonexistence of an external reality has no bearing on whether or not I expect an apple to hit the ground when I let go of it; the existence or nonexistence of an external reality has no bearing on whether the apple will do so.  Whether the apple exists in the real world, or as a concept in my mind, it has a particular set of consistent behaviors, which I experience in a particular way.

Whereas I take the view that truth in the sense of instrumentalism (prediction of experience) and truth in the sense of realism (correspondence to the territory) are different and both valid. Having recognised the difference, you don't have to eliminate one, or identify it with the other.

"Goal-oriented behavior" is actually pretty complicated, and is not, in fact, a natural byproduct of general AI.  I think the kinds of tasks we currently employ computers to do are hiding a lot of this complexity.

Specifically, what we think of as artificial intelligence is distinct from motivational intelligence, which is distinct from goal-oriented behavior.  Creating an AI that can successfully play any video game is an entirely different technology stack from creating an AI that "wants" to play video games, which in turn is an entirely different technology stack from creating an AI that translates a "desire" to play video games into a sequence of behaviors which can actually do so.

The AI alignment issue is noticing that good motivation is hard to get right; I think this needs to be "motivation is going to be hard to do at all, good or bad"; possibly harder than intelligence itself.  Part of the problem with AI writing right now is that the writing is, basically, unintentional.  You can get a lot further with unintentional writing, but intentional writing is far beyond anything that exists right now.  I think a lot of fears come about because of a belief that motivation can arise spontaneously, or that intentionality can arise out of the programming itself; that we might write our desires into machines such that machines will know desire.

What would it take for GPT-3 to want to run itself?  I don't think we have a handle on that question at all.

Goal-oriented behaviors, meanwhile, correspond to an interaction between motivation and intelligence that itself is immensely more complicated than either independently.

---

I think part of the issue here is that, if you ask why a computer does something, the answer is "Because it was programmed to."  So, to make a program do something, you just program it to do it.  Except this is moving the motivation, and intentionality, to the programmer; or, alternatively, to the person pressing the button causing the AI to act.

The AI in a computer game does what it does, because it is a program that is running, that causes things to happen.  If it's a first person shooter, the AI is trying to kill the player.  The AI has no notion of killing the player, however; it doesn't know what it is trying to do, it is just a series of instructions, which are, if you think about it, a set of heuristics that the programmer developed to kill the player.
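
To make that concrete, here's a hedged illustration - invented names and thresholds, not any real game's code - of what a shooter enemy's entire "AI" can amountt to in miniature.  Nothing in it is shaped like a desire to kill the player; the "trying" lives entirely in the programmer who chose the rules.

```python
import math

def enemy_action(enemy_pos, player_pos, enemy_health, has_line_of_sight):
    """Pick an action from fixed, hand-written rules. All names and thresholds
    here are invented for illustration."""
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    distance = math.hypot(dx, dy)

    if enemy_health < 20:
        return "retreat_to_cover"        # heuristic: low on health, back off
    if has_line_of_sight and distance < 30:
        return "shoot_at_player"         # heuristic: close and visible, fire
    if has_line_of_sight:
        return "advance_toward_player"   # heuristic: visible but far, close the distance
    return "patrol"                      # heuristic: nothing seen, wander the route

print(enemy_action((0, 0), (10, 5), enemy_health=80, has_line_of_sight=True))
# -> shoot_at_player
```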

This doesn't change if it's a neural network.  AlphaGo is not, in fact, trying to win a game of Go; it is the humans who trained it who have any motivation, AlphaGo itself is just a set of really good heuristics.  No matter how good you make those heuristics, AlphaGo will never start trying to win a game, because the idea of the heuristics in question trying to win a game is a category error.

I think, when people make the mental leap from "AI we have now" to "general AI", they're underspecifying what it is they are actually thinking about.

AI that can solve a specific, well-defined problem.

AI that can solve a well-defined problem. <- This is general AI; a set of universal problem-solving heuristics satisfies this criterion.

AI that can solve a poorly-defined problem. <- This, I think, is what people are afraid of, for fear that somebody will give it a problem, ask it to solve it, and it will tile the universe in paperclips.

If all marginal economic growth comes from eliminating unnecessary expenses - increasing efficiency - then a company moving from a high-tax locality to a low-tax locality is, in fact, economic growth.

Is it a central example of economic growth,  or am I just engaging in a rhetorical exercise?

Well, assume a diverse ecosystem of localities with different taxes and different baskets of goods and services.  A company moving from a high-tax locality to a low-tax locality - assuming that we do in fact get something for paying taxes - is effectively moving from a high-cost plan that covers a wide range of bundled goods and services to a low-cost plan that covers a smaller range.  That is, insofar as high taxes pay for anything at all, a company moving to a low-tax locality is reducing its consumption of those goods and services.  So, assuming that marginal economic growth arises from reducing consumption, and assuming that taxes are purchasing something, it is fair to describe this as an average - that is, central - example of economic growth at the margins.

Whether or not taxes actually buy anything is, of course, another question entirely.  Another question is whether such economic growth in this case is coming at the expense of values we'd rather not sacrifice.

Government spending is included in GDP, so GDP will go up some as the company is able to buy and sell more stuff, but down some as the government is less able to buy and sell stuff.
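
For reference, the standard expenditure identity behind that GDP accounting (textbook notation, nothing specific to this thread) is $Y = C + I + G + (X - M)$, where $Y$ is GDP, $C$ is household consumption, $I$ is investment, $G$ is government spending, and $X - M$ is net exports. The company's extra spending shows up in $C$ or $I$, while the foregone taxes shrink $G$; which effect dominates is exactly what's being disputed here.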

I think that line of argument proves too much; anytime anybody consumes less of a good, the seller has less ability to buy and sell things, while the buyer has more ability to buy and sell things; the government isn't a special case here.  More, the reversal of this argument is just the broken window fallacy with a reversal of the status quo.

Here's what I understood you to be saying in the OP: that paying less taxes is economic growth because if you pay less taxes, you can produce more for less money. I'm saying that isn't necessarily true because you're not accounting for the reduction in economic activity that comes from the government being less able to buy and distribute things. It may well be true that moving to a low-tax locality does cause economic growth, but it won't always, so I wouldn't say that it's a central example of economic growth.

I don't get what you mean by the analogy to consuming less of a good. Are you trying to say that my response is wrong because it implies that consuming fewer goods doesn't always increase economic growth, because consuming fewer goods is like paying less taxes? Well, I don't think that those are all that similar (the benefits you get from living in a locale are mostly funded by how much tax other people pay as well as non-government perks, you could totally move to a lower-tax jurisdiction and get more goods and services), but also it's totally correct that consuming fewer goods doesn't always increase economic growth.

More, the reversal of this argument is just the broken window fallacy with a reversal of the status quo.

I don't know what you mean by this.

Take two societies.  They are exactly identical except in one respect: One has figured out how to manufacture light bulbs using 20% less glass.

Which society is richer?
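
To spell out the comparison with made-up numbers (purely illustrative, not from the original discussion):

```python
# Both societies have identical inputs; one needs 20% less glass per bulb.
glass_available = 1000.0    # arbitrary units, the same in both societies
glass_per_bulb_a = 1.0      # society A: baseline recipe
glass_per_bulb_b = 0.8      # society B: 20% less glass per bulb

bulbs_a = glass_available / glass_per_bulb_a   # 1000 bulbs
bulbs_b = glass_available / glass_per_bulb_b   # 1250 bulbs

# From the same resources, B gets 25% more bulbs -- or the same bulbs plus
# leftover glass for something else. Nothing extra was consumed to get there;
# the gain comes purely from eliminating waste.
print(bulbs_a, bulbs_b)
```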

I don't know what you mean by this.

https://en.wikipedia.org/wiki/Parable_of_the_broken_window for a basic breakdown.  When I say your argument is a reversal of the broken window fallacy, I'm saying your argument amounts to the idea that, in a society in which people routinely break windows, and this is a major source of economic activity, people shouldn't stop breaking windows, on account of all the economic activity it generates.

OK: I think I missed that you're implying that the cases where companies in fact move to low-tax jurisdictions count as growth, rather than all cases. It makes sense that if you model choice of how much taxes to pay as a choice of how much of some manufacturing input to buy, then companies only do that if it increases efficiency, and my argument above doesn't make sense taken totally straightforwardly.

I still think you can be wrong for a related reason. Suppose the government spends taxes on things that increase economic growth that no private company would spend money on (e.g. foundational scientific research). Suppose also that that's all it does with the money: it doesn't e.g. build useless things, or destroy productive capabilities in other countries. Then moving to a lower tax jurisdiction will make your company more efficient, but will mean that less of the pro-growth stuff governments do happens. This makes the effect on growth neutral. Is this a good model of government? Well, depends on the government, but they really do do some things which I imagine increase growth.

My main objection is that thinking of government as providing services to the people who pay for them is a bad model - in other words, it's a bad idea to think of taxes as paying for a manufacturing input. When you move out of a state, the government probably spends less on the people still there, and when you move into a new state, you mainly benefit from other people's taxes, not your own. It's as if, when you stopped buying glass from a glass company, they made everyone else's glass worse: then it's less obvious that, if your lightbulb company buys less glass, society will get richer.