Suppose we have a medical sensor measuring some physiological parameter. The parameter has a constant true value θ, and the sensor takes measurements X_1, ..., X_n over a short period of time. Each measurement has IID error (so the measurements are conditionally independent given θ). In the end, the measurements are averaged together, and there's a little bit of extra error as the device is started/stopped, resulting in the final estimate Y - the only part displayed to the end user. We can represent all this with a causal DAG:

Note that, conceptually, there are two main sources of error in the final estimate Y:

- IID measurement noise in the X_i's
- Noise in Y from the starting/stopping procedure

… so the Y node is not fully deterministic. The joint distribution for the whole system is given by P[θ, X_1, ..., X_n, Y] = P[θ] (∏_i P[X_i|θ]) P[Y|X_1, ..., X_n].
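To make this concrete, here's a tiny simulation of the concrete model. The Gaussian noise terms, the sample size, and all the constants are illustrative assumptions, not anything specified above:

```python
import random

# Toy simulation of the concrete model. Gaussian noise and the constants
# below are illustrative assumptions.
random.seed(0)

def run_sensor(n=100, theta=37.0, meas_sd=0.5, startstop_sd=0.1):
    # theta: the constant true value of the physiological parameter
    # X: n measurements with IID error, conditionally independent given theta
    X = [random.gauss(theta, meas_sd) for _ in range(n)]
    # Y: the average of the measurements, plus a little start/stop error
    Y = sum(X) / n + random.gauss(0.0, startstop_sd)
    return X, Y

X, Y = run_sensor()
print(Y)  # lands near the true value 37.0
```
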

Since all the measurements are to be averaged together anyway, it would be nice if we could just glom them all together and treat them as a single abstract measurement, like this:

Formally, we can do this in two steps:

- Replace the X_1, ..., X_n nodes with a single node M = (X_1, ..., X_n), i.e. a list containing all the measurements. This doesn’t change the substance of the model at all, it just changes what we’re calling a “node”.
- Replace the M node with the average of the measurements, M = (X_1 + ... + X_n)/n. We no longer worry about the individual measurements at all, and just directly compute the distributions P[M|θ] and P[Y|M].

The second step is the interesting one, since it changes the substance of the model.

__Main question__: under the abstract model, what counterfactual queries remain valid (i.e. match the corresponding concrete queries), and how do they correspond to counterfactuals on the concrete model? What about probabilistic queries, like P[θ|Y]?

The concrete model supports three basic counterfactual queries:

- Set the value of θ
- Set the values of the measurements X_1, ..., X_n
- Set the value of Y

… as well as counterfactuals built by combining multiple basic counterfactuals and possibly adding additional computation. In the abstract model:

- Setting abstract θ works exactly the same and corresponds directly to the concrete-model counterfactual.
- Although the abstract Y node has different inputs and computation than the concrete Y node, the procedure for setting abstract Y is exactly the same: cut all the incoming arrows and set the value.
- Setting M corresponds to setting all of the concrete X_i at once, and there may be degeneracy: a single counterfactual setting of M may correspond to many possible counterfactual settings of the whole set of measurements X_1, ..., X_n.

… so counterfactuals on θ and Y have a straightforward correspondence, whereas the correspondence between counterfactuals on M and the X_i is more complicated and potentially underdetermined. But the important point is that any allowable counterfactual setting of M will correspond to *at least one* possible counterfactual setting of X_1, ..., X_n - so any counterfactual queries on the abstract model are workable.

(Definitional note: I’m using “correspond” somewhat informally; I generally mean that there’s a mapping from abstract nodes to concrete node sets such that queries on the abstract model produce the same answers as queries on the concrete model by replacing each node according to the map.)

Probabilistic queries, like P[θ|M], run into a more severe issue: in general P[θ|M] ≠ P[θ|X_1, ..., X_n]. In the abstract model, node M retained all information relevant to Y, but not necessarily all information relevant to θ. So there’s not a clean correspondence between probabilistic queries in the two models. Also, of course, the abstract model has no notion at all of the individual measurements X_i, so it certainly can’t handle queries like P[X_1|Y].

Now, in our medical device example, the individual measurements are not directly observed by the end user - they just see Y - so none of this is really a problem. The query P[θ|X_1, ..., X_n] will never need to be run anyway. That said, a small adjustment to the abstract model *does* allow us to handle that query.

Let’s modify our abstract model from the previous section so that P[θ|M] = P[θ|X], where X = (X_1, ..., X_n). Rather than just keeping the information relevant to Y, our M node will also need to keep information relevant to θ. (The next three paragraphs briefly explain how to do this, but can be skipped if you're not interested in the details.)

By the __minimal map theorems__, all the information in X which is relevant to θ is contained in the distribution P[θ|X]. So we could just declare that node M is the tuple (mean(X), P[θ|X]), where the second item is the full distribution of θ given X (expressed as a function). But notation gets confusing when we carry around distributions as random variables in their own right, so instead we’ll simplify things a bit by assuming the measurements follow a maximum entropy distribution - just remember that this simplification is a convenience, not a necessity.

We still need to keep all the information in X which is relevant to θ, which means we need to keep all the information needed to compute P[θ|X]. From the DAG structure, we know that P[θ|X] = P[θ] ∏_i P[X_i|θ] / Z, where Z is a normalizer. P[θ] is part of the model, so the only information we need from X to compute P[θ|X] is the product ∏_i P[X_i|θ]. If we assume the measurements follow a maxentropic distribution (for simplicity), then P[X_i|θ] = (1/Z) exp(λ(θ)·f(X_i)), for some vector λ(θ) and vector-valued function f (both specified by the model). Thus, all we need to keep around to compute P[θ|X] is ∑_i f(X_i) - the __sufficient statistic__.

Main point: the M node consists of the pair (mean(X), ∑_i f(X_i)). If we want to simplify even further, we can just declare that f is the identity function, and then node M is just the average (equivalently the sum ∑_i X_i), assuming the number of measurements is fixed.
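As a sanity check on the sufficient-statistic argument, here's a sketch assuming Gaussian measurement noise with known variance (a maximum-entropy distribution whose f is the identity): two measurement lists with the same sum yield exactly the same posterior over θ, so keeping only the sufficient statistic loses nothing relevant to θ.

```python
import math

# Posterior over a grid of theta values. Gaussian noise with known sd is an
# assumption for illustration; its sufficient statistic f(X) = X is the
# identity, so P[theta|X] depends on the data only through sum(X).

def posterior(measurements, thetas, prior, sd=0.5):
    # P[theta|X] is proportional to P[theta] * prod_i P[X_i|theta]
    logs = []
    for t, p in zip(thetas, prior):
        loglik = sum(-(x - t) ** 2 / (2 * sd ** 2) for x in measurements)
        logs.append(math.log(p) + loglik)
    top = max(logs)
    weights = [math.exp(l - top) for l in logs]
    z = sum(weights)  # the normalizer
    return [w / z for w in weights]

thetas = [36.0, 37.0, 38.0]
prior = [1 / 3, 1 / 3, 1 / 3]
# Two different measurement lists with the same sufficient statistic sum(X):
a = posterior([36.8, 37.2, 37.0], thetas, prior)
b = posterior([36.5, 37.5, 37.0], thetas, prior)
print(max(abs(p - q) for p, q in zip(a, b)) < 1e-9)  # identical posteriors
```
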

What does this buy us?

First and foremost, our abstract model now supports all probabilistic queries: P[Y|θ], P[θ|Y], P[θ|M], P[M|Y], etc, will all return the same values as the corresponding queries on the concrete model (with M corresponding to X). The same counterfactuals remain valid with the same correspondences as before, and the counterfactually-modified abstract models will also support the additional probabilistic queries.

We can even add in one extra feature:

Huh? What’s going on here?

Remember, M contains all of the information from X which is relevant to θ or Y. That means X is conditionally independent of both θ and Y, given M (this is a __standard result__ in information theory). So we can add X into the DAG as a child of M, resulting in the overall distribution P[θ] P[M|θ] P[X|M] P[Y|M].

Since X is just a child node dangling off the side, any probabilistic queries not involving any X_i will just automatically ignore it. Any probabilistic queries which do involve an X_i will incorporate relevant information from θ and Y via M.

What about counterfactuals?

Counterfactual settings of θ, Y, and M still work just like before, and we can generally run probabilistic queries involving the X_i on the counterfactually-modified DAGs. Cutting the M → Y arrow still corresponds to cutting all the X_i → Y arrows in the concrete model. The addition of X to the model even lets us calculate which X are compatible with a particular counterfactual setting of M, although I don’t (yet) know of any useful interpretation to attribute to the distribution P[X|M] in that case.

We still can’t directly translate counterfactuals from the concrete model to the abstract model - e.g. a counterfactual setting of a single X_i in the concrete model does not easily correspond to anything in the abstract model. We also can’t directly run counterfactuals on X in the abstract model; we have to run them on M instead. But if a counterfactual modification is made elsewhere in the DAG, the probabilistic queries of X within the counterfactual model will work.

That brings us to the most important property of this abstraction, and the real reason I call it “natural”: what if this is all just a sub-component of a larger model?

Here’s the beauty of it: everything still works. All probabilistic queries are still supported, all of the new counterfactuals are supported. And all we had to account for was the *local* effects of our abstraction - i.e. M had to contain all the information relevant to θ and Y. (In general, an abstracted node needs to keep information relevant to its Markov blanket.) Any information relevant to anything else in the DAG is mediated by θ and/or Y, so all of our transformations from earlier still maintain invariance of the relevant queries, and we’re good.

By contrast, our original abstraction - in which we kept the information relevant to Y but didn’t worry about θ - would mess up any queries involving the information contained in X relevant to θ. That includes P[θ|M], P[θ|M, Y], etc. To compute those correctly, we would have had to fall back on the concrete model, and wouldn’t be able to leverage the abstract model at all. But in the natural abstraction, where M contains all information relevant to θ or Y, we can just compute all those queries directly in the abstract model - while still gaining the efficiency benefits of abstraction when possible.


A new episode of Global Optimum has been released! Global Optimum is a podcast aimed at making altruists more effective. This episode is about social status, hypocrisy, signaling, hormones, and politics.

This episode features:

- Why does men’s testosterone go down when they fall in love?

- Does “power posing” have any psychological effects?

- What is “humblebragging” and why does it pervade social media?

- Is our preference for democracy really a preference for high status?

- What is self-esteem?

- How to increase self-esteem (the answer is disappointing)

- How to act high status (the answer is not disappointing)

The podcast is available on all podcast apps.


In a recent post (and papers), Anders Huitfeldt and co-authors have discussed ways of achieving external validity in the presence of “effect heterogeneity.” These results are not immediately inferable using a standard (non-parametric) selection diagram, which has led them to conclude that selection diagrams may not be helpful for "thinking more closely about effect heterogeneity" and, thus, might be "throwing the baby out with the bathwater."

Taking a closer look at the analysis of Anders and co-authors, and using their very same examples, we came to quite different conclusions. In those cases, transportability is not immediately inferable in a fully nonparametric structural model for a simple reason: it relies on *functional constraints* on the structural equation of the outcome. Once these constraints are properly incorporated in the analysis, all results flow naturally from the structural model, and selection diagrams prove to be indispensable for thinking about heterogeneity, for extrapolating results across populations, and for protecting analysts from unwarranted generalizations. See details in the note we post here for discussion.


*(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)*

One of the whole points of this project is to create a clear description of various forms of "success", and to be able to make claims about what is the highest form of success one can hope for given the problem that is being faced. The ultimate point of this is to have a useful frame for justifying the use of different **methods**. Now I'll introduce the gist of our formalization of methods so that we can get back to the good stuff.

In its most general form, a method is just a function from info states to hypotheses.

Often I might use notation like M_H(e) to highlight "this method is responding to a yes or no question about H, given evidence e." This is mostly useful to be able to talk about certain relations between the question being asked and your answers.

We are going to look at methods that respond with an articulation of an answer: M(e) = A such that A ⊆ H or A ⊆ ¬H. There's an interesting reason this matters, and part of it has to do with the Gettier Problem.

The Gettier problem has to do with believing the right thing for the wrong reasons. Consider this example.

There are three possible worlds, boxes are possible info states, and the two hypotheses H and ¬H are outlined. We haven't given a formal notion of what it means to be Occam / "to act in accord with simplicity". But pretend we have. An Occam method would say:

H in e1, ¬H in e2, and H in e3. In a bit, we're going to make a big deal about the criterion of **progressive learning** (never drop the truth once you have it). The method I just outlined drops the truth in this problem. Suppose w1 is the true world. In e1 it says H, which is true, but then we drop the truth and say ¬H in e2, only to return to it later. Can you see why this is sorta a Gettier problem? In e1 we proclaim "H!" but we do it for inductive reasons. It's the simplest hypothesis right now. So it is true that H, but we don't have a super sure reason for saying it. That means that when we get more info, e2, we drop the truth, because our previous "bad reason" for saying H has been disconfirmed.

Our way around these sorts of Gettier problems is to not restrict the method to only giving a yes or no answer. That's what an articulation is. A method that gives an articulation would look like M(e) = A with A ⊆ H. This makes a lot of sense. The reason you're saying H when you're in e1 is different from the reason you say it in e3. Letting methods give articulations instead of a flat yea or nay lets you not lose that information.

**Convergence to an articulation**

M converges to an articulation of H in w iff there is an info state e with w ∈ e such that, for every info state e′ ⊆ e with w ∈ e′, M(e′) is an articulation of H

Plain English: A method converges to a hypothesis in a given world iff that world has some information state such that no matter what further info you get, your method will stick to its guns and give an articulation of that hypothesis

**Convergence to a true articulation**

M converges to a true articulation of H in w iff M converges to an articulation of H in w and each of those articulations contains w

Plain English: Same as converging to an articulation of H, with the added stipulation that your articulations must include the world you are in

**Verification in the Limit**

M verifies H in the limit in W iff M converges to a true articulation of H if w ∈ H, and does not converge to a **true articulation** of H if w ∉ H

**Strong Verification in the Limit**

M strongly verifies H in the limit in W iff M converges to a true articulation of H if w ∈ H, and does not converge to an articulation of H if w ∉ H

Difference between strong and normal verification:

Consider someone pondering whether the true form of the law is a polynomial, and what degree it is. This question can be verified in the limit, but it can't be strongly verified. Forever shifting the polynomial degree that you think the law is counts as converging to an articulation of "The true law is polynomial". To strongly verify, at some point your method would have to say "It's not polynomial!" But if you had to keep interspersing those in between your other guesses, you don't get to converge at all.

Here's a picture.

Basically a retraction is any time you get more information and say something that isn't strictly a refinement of your earlier hypothesis. Retractions are really important because they are going to be a key measure of success, one which we connect to various topological properties of a question.

Some brief philosophical motivation for caring about retractions: At first glance, minimizing retractions sounds like being closed minded, and that sounds like a bad quality to have. Luckily, retractions aren't the only thing we're paying attention to when we talk about success. Often we'll talk about converging to the truth while also minimizing retractions. The closed-minded curmudgeon who sticks to their guns forever doesn't even converge to the truth in most scenarios, and is thus not appealing to us. One way to think about minimizing retractions is as "getting to the truth with minimum fuss". It's like missile pursuit.

It's totally expected that for most scientific problems, you're going to have to dodge and weave. But the more of a circuitous path you take in pursuing the truth, the less it feels like it's even right to call what you are doing "pursuit". Converging to the truth while minimizing retractions is like pursuing a target with minimal waste.

A retraction chain is a sequence of info states e_1, e_2, ..., e_n such that each consecutive pair is a retraction. We'd call this a retraction sequence of length n.
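The definition can be sketched in a few lines of code. Representing conjectures as sets of worlds and treating "not a subset of the previous conjecture" as a retraction is my gloss here, not notation from the course:

```python
# Count retractions in a sequence of conjectures, where each conjecture is a
# set of possible worlds. A retraction happens whenever the new conjecture is
# not a refinement (subset) of the previous one.

def count_retractions(conjectures):
    retractions = 0
    for prev, cur in zip(conjectures, conjectures[1:]):
        if not cur <= prev:  # not strictly a refinement of the earlier answer
            retractions += 1
    return retractions

worlds = {"w1", "w2", "w3"}
# Refine, then back out, then refine again: exactly one retraction.
seq = [worlds, {"w1", "w2"}, {"w2", "w3"}, {"w3"}]
print(count_retractions(seq))  # 1
```
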

Now to the most important definition.

M n-verifies H in W iff M verifies H in the limit in W and the longest possible retraction chain for M in W is of length n

This concept is about to become very important. A sneak peek at the rest: we have some notion of different types of success you could achieve on a problem. You can verify, refute, or decide a question with 0 to n retractions. Next we're going to hop back to topology and construct a topological notion of complexity, one that allows us to make claims like

H is n-topologically complex iff there exists a method M such that M n-verifies H

If we could do that, then we'd have a way to talk about scientific problems in terms of their complexity, and have a strong way that cashes out. For a given problem, you might be able to prove upper or lower bounds on the topological complexity, and thus be able to re-calibrate expectations about what sort of success you can expect from your methods. You might be able to show that a given method achieves the best possible success, given the topological complexity of the problem. That would be pretty dope. Let's get to it.

(note: So far, for every definition of verification we have given, you can create an analogous definition for refutability and decidability)


*(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)*

Time to introduce some new topological terms. We're going to create some good intuitions around the concepts of *interior*, *exterior*, *boundary*, *closure*, and *frontier*. These are all operators in the sense that if you have a set A, then int(A) is me using the int operator to create a new set that we call "the interior of A". Ext, Bndr, Cl, and Frnt are the shorthand I will use for these operators.

Before talking about these operators in a topological sense, I want to talk about them in a metric space sense. A metric space is just some mathematical space where you have a way to specify the *distance* between any two points, according to a __specific definition__ of distance. In the real line, the distance between any two numbers can just be the absolute value of their difference. In n-dimensional euclidean space, distance is given by the n-dimensional version of the Pythagorean Theorem. I want to start talking about interiors and boundaries and such from a metric point of view in order to contrast the way it's different from the topological view. I found that when I was trying to wrap my head around these concepts, I was implicitly assuming a metric space world view, because literally every math space I'd interacted with up to that point was a metric space.

Let's start with this picture:

The squiggly loop is our set A. In a metric space, a point x is in the interior of A if you can "draw a circle" around it, such that the circle only contains other points that are in A (formally, you talk about "balls" instead of circles. An ε-ball around x is the set of all points y s.t. d(x, y) < ε). You can clearly see that I can draw a circle around x, where the circle only contains points in A, so x is in the interior of A. A *boundary* point like y is a point where no matter how small a circle you draw around it, the circle will contain some points in A, and some points not in A.

Likewise, the *exterior* of A consists of all points that you can draw a circle around such that the circle only contains points *not* in A. Here are definitions of our two other operators:

Now, here's where we shift from the *metric* perspective to the *topological* perspective. Let's think back to trying to decide if a point x is in the interior of A. Re-frame this task as us trying to take a "measurement" around x. "Can I make a measurement that would include x and not include anything from outside A?" You can see how this is a more general question. We were asking the same question in the metric space context, it's just that our "measurements" were circles of arbitrarily small radii. A general metric space abstracts the idea of measuring with a circle to measuring with a ball of arbitrarily small radius. Topology abstracts one step further and says, "we don't even care about distance, we just want to see if you can make some abstract measurement on the space that would show x to be surrounded by A."

So what are the "measurements" on a topological space? Its open sets! Remember, a topological space is a set accompanied by a set of things called "open sets" which are subsets of the original set, subject to various axioms. For us, using possible world semantics and the verifiability-topology, we can think of the open sets (which are all the verifiable propositions) as "measurements" you could take. How does this translate for our topological operators? x is in the *topological interior* of A if there exists an open set (verifiable proposition) that includes x, and all other members of that open set are members of A. In math, x ∈ int(A) iff there is an open set O with x ∈ O ⊆ A.
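Here's that definition computed directly on a toy finite topology; the four worlds and the list of open sets are made-up assumptions for illustration:

```python
# Topological interior straight from the definition: x is in int(A) iff some
# open set (verifiable proposition) contains x and sits entirely inside A.

W = frozenset({1, 2, 3, 4})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), W]  # a tiny topology

def interior(A):
    return {x for x in W if any(x in O and O <= A for O in opens)}

A = {1, 2, 3}
print(interior(A))  # world 3 has no measurement separating it from 4
```
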

For the rest of this sequence, I'm often going to talk in terms of measurements instead of talking about open sets in the topology. Just know that if you ever get confused, all my statements about measurements should cash out as statements about open sets, which cash out as statements about the information basis.

At this point, you could re-examine the definitions of the topological operators, swapping out notions of drawing circles with the existence of measurements. Or.... you could check out this one WEIRD PICTURE that will make you A GENIUS at topology!

(Note: I used an upside down "P" because I was going for a mirror symmetry aesthetic, but it didn't work. Just consider the upside down "P" to be ¬P or W ∖ P (they're the same in possible world semantics, remember?). I already made all the images and don't want to change them)

For motivating this picture, consider the open sets of our topology to correspond to rectangles that don't get small enough to cleanly fit into the squiggly boundary and only cover P or ¬P.

If you want you can just stare at this picture until you become enlightened. You can also keep reading as I walk through examples.

If Bndr(P) is empty, then your problem is decidable. The only possible worlds are ones where you can cleanly measure whether P or ¬P is true.

If P = int(P), i.e. P is open, then P is **verifiable**: if P is true, it is true in a way that lets us get a clean measurement showing it's true. If it's not true, maybe we get a clean measurement of ¬P, maybe we don't. Note that a problem that is decidable is also verifiable.

If P = Cl(P), i.e. P is closed, then P is **refutable**: if P is false, you can get a clean measurement showing it's false. If it's true, maybe you get a clean measurement of P, maybe you don't.

These pictures help a lot with being able to see what problem statements are duals of each other, and also for translating problem statements into topological statements. See if you can match these problem statements to the corresponding topological ones:

- P is "strictly" verifiable (verifiable and not decidable)
- poses the problem of metaphysics for P
- P is "strictly" refutable (refutable and not decidable)
- poses the problem of induction for P
- ¬P is "strictly" verifiable
- "You're fucked"
- poses the problem of metaphysics for ¬P

There's lots of other fun exercises you can do to milk intuition from this image. Feel free to play around with it as much as you want. It will be helpful when thinking about and translating between topological ideas in the future.



Of course, I'm not expecting you to support the idea in the answers, but simply mentioning its conclusion:)


*(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)*

Now that we've got the basic formalism, it's time to go from our intuitive notion of verifiable to our formal notion of verifiable.

This basically lines up. Something is verifiable if no matter how it's true, there exists some information that would tell you it's true.

Now, a series of leading questions with a fun surprise at the end:

**Question 1**: Can you verify the contradiction?

Yes! There is no world in which contradiction is true, and with verifiability we only care about what we can do in worlds where the proposition is true.

**Question 2**: Can you verify the tautology?

Yes! This follows from the first axiom of our info basis. For every world, you get at least one info state. An info state is a subset of W, so no matter what info state we have, it's a subset of W.

__Question 3__: Can you verify the union of arbitrarily many verifiable propositions?

Yes! Takes a little more thought: if the union is true in some world, then at least one of its disjuncts is true there, and the info state that verifies that disjunct also verifies the union.

**Question 4**: Can you verify the intersection of any finite number of verifiable propositions?

Okay, get ready for this, here's *The Sixth Sense* "He was dead the whole time!" plot twist...

(*clipped from wikipedia*)

... crazy right?

If we take X to be W, and τ to be "the set of all verifiable propositions in W", then what we just proved matches up exactly with this definition. So we get to claim that **verifiable propositions** in W are **open sets**, and that they form a **topology** on our set of possible worlds.

What does this mean? From a practical stance, it means that we can now use all of topology, a rich and developed branch of math, to think and talk about verifiability. For any theorem that proves something about open sets, we can make the same conclusion about verifiable propositions. Before we get into any of that, let's expand our mapping from verifiability concepts to topological concepts.
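The plot twist can also be sanity-checked mechanically on a tiny made-up info basis: enumerate all propositions, keep the verifiable ones, and confirm they satisfy the topology axioms. The three worlds and the basis below are illustrative assumptions:

```python
from itertools import combinations

# Check on a finite example that the verifiable propositions contain the
# contradiction and the tautology, and are closed under union and
# intersection -- the textbook axioms for a topology.

W = frozenset({"w1", "w2", "w3"})
basis = [frozenset({"w1", "w2"}), frozenset({"w2", "w3"}), frozenset({"w2"}), W]

def verifiable(P):
    # P is verifiable iff in every world where P is true, some reachable
    # info state confirms P (i.e. is contained in P)
    return all(any(w in e and e <= P for e in basis) for w in P)

props = [frozenset(c) for r in range(len(W) + 1)
         for c in combinations(sorted(W), r)]
opens = [P for P in props if verifiable(P)]

assert frozenset() in opens and W in opens          # contradiction & tautology
for A, B in combinations(opens, 2):
    assert verifiable(A | B) and verifiable(A & B)  # closed under union/meet
print(len(opens))  # 5
```
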

Okay, so we know that open is verifiable, let's see if we can connect an intuitive notion to closed. Let's look at the Halting Problem. In CS, you'd say the halting problem is semi-decidable. If it's going to halt, you'll see it happen, but if it doesn't you'll never know, maybe it loops forever. Semi-decidable lines up exactly with our notion of verifiability. So what's the negation of the halting problem?

Now we've got a situation where if it's true (the program doesn't halt), we can't be guaranteed to know. But if it's false (the program does halt) we will find out.

Oh shit, this is refutability! To be open is to be verifiable, and to be closed is to be refutable. You can check for yourself: the negation of any refutable proposition is verifiable, and vice versa. Just like how the complement of any open set is closed, and vice versa.


A large amount of philosophy has been people trying to demonstrate how you can never really know anything [*citation needed*]. Various forms of skepticism take the stance, "No method of inquiry can get you certainty, and so no method is justified."

Despite that, it seems like (at least in terms of watching how people act in the world) everyone gets that you need some level of pragmatism. "Well I've gotta do something, and this seems like the best idea, so I'm going to do it instead of doing nothing." No one is so skeptical of knowledge that they have stayed immobile until they starved to death [*citation needed*].

What Kelly aims to do is create a rigorous formalism for the sort of pragmatic attitude that people take all the time. It's very much inspired by how computer scientists do things. If someone proves that a problem can't be done any faster than quadratic time, and you figure out a quadratic time algorithm, you're happy. You don't refuse to use any algorithm that doesn't run in constant time.

In a sentence, this course was about a really cool formalism for talking about "how hard is a given scientific problem" and how that affects "the best possible performance you can get given the hardness of the problem".

Over the ages people have postulated what qualities scientific hypotheses should have. The logical positivists asserted that only verifiable propositions should be the domain of science (if it's true, you can do some test to demonstrate it's true). Popper wanted hypotheses to be falsifiable (if it's false, you can do some test to demonstrate it's false). Verification and falsifiability have an important connection to two other notions that philosophers of science often talk about, **the problem of induction** and **the problem of metaphysics**.

You face the problem of induction if it's the case that even if your hypothesis is true, you'll never know for sure (will the sun rise tomorrow? You can never rule out that it just won't at some point).

You face the problem of metaphysics if it's the case that even if your hypothesis is false, you'll never get definitive evidence that it's false ("there's a teacup somewhere in the infinite expanse of space!")

Turns out almost all questions that have been the subject of actual science are neither verifiable nor falsifiable, and you're going up against induction or metaphysics with most questions as well. Uh oh. Looks like those normative ideas on what science should be rule out most of what science is. Oops.

In light of our new tack on things, this is sorta like if the only complexity classes people had were linear and quadratic, and thought that the only problems that should be in the domain of computer science should be ones that are solvable in linear or quadratic time. Fix: make a richer complexity hierarchy in which to locate problems, see what the complexity says about possible performance.


Let W be the set of all possible worlds. The nature of your inquiry is going to shape what sorts of worlds are in W.

Now we consider some true or false proposition H concerning W. H could be "There are more than 15 people in North America". The key idea in possible world semantics is that every proposition H is represented by the set of all worlds where H is true.

**Def:** Proposition H is true in w iff w ∈ H

We'll still refer to propositions by their English sentence description, but when we start doing math with them you should think of them as sets of worlds. Here are some consequences of defining logical propositions as such:

- ¬H = W ∖ H
- H ∧ J = H ∩ J
- H ∨ J = H ∪ J
- H entails J iff H ⊆ J

Convince yourself that the above are true. If you're wondering, "Hmmm, the logic of set containment and classical logic seem eerily similar" you're right, and might want to ponder if that has anything to do with how we decided what the rules of classical logic should be in the first place.
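One way to convince yourself is to just run the set operations; the worlds and the example propositions below are arbitrary stand-ins:

```python
# Possible-world semantics in miniature: a proposition is the set of worlds
# where it's true, and the classical connectives become set operations.

W = {"w1", "w2", "w3", "w4"}
H = {"w1", "w2"}   # e.g. "there are more than 15 people in North America"
J = {"w2", "w3"}   # some other proposition

def NOT(P): return W - P
def AND(P, Q): return P & Q
def OR(P, Q): return P | Q
def ENTAILS(P, Q): return P <= Q   # entailment is just set containment

print(AND(H, J))                             # {'w2'}
print(NOT(OR(H, J)) == AND(NOT(H), NOT(J)))  # De Morgan holds: True
print(ENTAILS(AND(H, J), H))                 # True
```
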

Now that we have our world, the next thing we want is our **information basis** I. I is made up of *information states*, and each information state is a proposition. This means that they follow all the same union and intersection rules that other propositions do, and that an info state is a set of possible worlds. However, your info basis can't just be any old set of propositions. We are trying to capture the set of propositions that you *could* know about the world. Upfront, I want to acknowledge that this "could" might cause some confusion. How do we know what you could or couldn't know? Isn't that what we're trying to figure out? For now, we will resolve that with the following distinction. When we're talking about possible worlds and information states, they are not defined by some requirement that a particular person could access them. Later when we talk about *methods*, then we'll be talking about, "What conclusions could someone with XYZ method reach?"

The information basis, just like the possible worlds, will be shaped by how we construct the setup to our inquiry. Normally, what the information basis looks like will be a direct result of what "measurement" tools are being used to do inquiry. Separate from what the basis looks like in any given construction, below are some basic axioms that we always have our info basis abide by.

- For every world w, there is some info state e ∈ I with w ∈ e
- For any two info states e_1, e_2 ∈ I and any world w ∈ e_1 ∩ e_2, there is an info state e_3 ∈ I with w ∈ e_3 ⊆ e_1 ∩ e_2
- I is countable

The first axiom just states, "No matter what world you're in, there's *something* you can know, even if it's just the tautology". This is mostly a bookkeeping axiom and doesn't have profound philosophical consequences.

The third axiom has some intuitive appeal (it's hard to imagine finite beings interacting with uncountable entities) but again is mostly a bookkeeping axiom to make some proofs slicker down the road.

The second axiom is the interesting one. In plain English it says "For any two info states you could witness, there is another info state that includes the information of both." This can be thought of as "additivity of evidence". If there are two propositions A and B, two possible things you could know, then it is "possible" to know both of them. There won't be a weird branching of your experiments where if you see A you'll never be able to see B.
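Here's the second axiom checked mechanically; the two toy bases below are made-up examples, not from the text:

```python
# "Additivity of evidence": whenever two info states share a world, some info
# state in the basis must refine both.

def additive(basis):
    for e1 in basis:
        for e2 in basis:
            for w in e1 & e2:
                if not any(w in e3 and e3 <= e1 & e2 for e3 in basis):
                    return False
    return True

good = [frozenset({"w1", "w2", "w3"}),
        frozenset({"w2", "w3", "w4"}),
        frozenset({"w2", "w3"})]   # the refinement the axiom demands
bad = [frozenset({"w1", "w2"}), frozenset({"w2", "w3"})]  # no joint refinement

print(additive(good))  # True
print(additive(bad))   # False
```
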

Okay, that's a bulk of the setup to the relevant syntax, and it probably wasn't very insightful or meaningful to you. Let's hop into some examples and see what it looks like to model problems in this framework.

I'm going to model a simple inductive problem. Let's say every day you wake up and check whether or not aliens have made contact with earth. Every day you put a "0" up on the wall if they haven't, and a "1" if they have.

In this setup, a possible world is any given infinite sequence of 1's and 0's. Something like 000101101...

This makes the set of all possible worlds the set of all possible infinite binary strings.

(notation explanation: B^A is common notation for "all functions from A into B". So 2^ℕ is "all functions from the naturals to 2" (in many set theory constructions, 2 is defined to be the set {0, 1}). A function from the naturals to the set {0, 1} defines an infinite binary string)

Onto our info states. Since we are observing this infinite sequence day by day, we can only ever have seen a finite amount of it. So we probably want an info state to be something like a finite string s ∈ 2^{<ℕ}, where 2^{<ℕ} is "all finite binary strings". But remember, an info state has to be a proposition, and a proposition is a set of possible worlds. No world is represented by a finite binary string. So we do the following: for each finite string s, let the info state [s] be the set of all infinite binary strings that begin with s.

And boom, we've got our info basis.
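Since each info state [s] is an infinite set, a program has to represent it by its finite prefix s; with that encoding (a sketch, with helper names of my own choosing), membership and refinement reduce to prefix checks:

```python
# Info states for the day-by-day setup, represented by finite prefixes.
# [s] = all infinite sequences extending s; we stand it in by the string s.

def contains(s, w_prefix):
    # Is the world whose observed history starts with w_prefix inside [s]?
    # (Only answerable from a history at least as long as s.)
    return w_prefix.startswith(s)

def refines(s, t):
    # [s] is a subset of [t] exactly when the finite string s extends t
    return s.startswith(t)

print(contains("010", "0100110"))  # True: this world extends "010"
print(refines("0101", "010"))      # True: more days seen, stronger info state
print(refines("010", "0101"))      # False: a shorter history is weaker
```
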

Here's what this looks like as a picture:

The circles represent information states. There is an information state that confirms "Bread fails to nourish at time t = 3", and there are information states like "either bread nourishes or it fails to nourish at t > n", but there is no information state that uniquely picks out the world where bread nourishes, which is why this is an induction scenario.

There exists some function on the real numbers, and we are trying to figure out what sort of function it is. We get to investigate the function by getting arbitrarily small rectangle measurements of it.

The motivation for the rectangular measurement is to account for measurement error. Imagine there is some natural law, and investigating the function is us setting one variable and seeing how another variable changes. There's some small uncertainty; we never actually check the function at a point and get info like "f(15.4) = -37". You put in an approximate input and get an approximate answer. You can refine the approximation as much as you want and get the error smaller and smaller, but there is never zero error.


Applying economic models to physiology seems really obvious. For instance:

- Surely the body uses price signals to match production to consumption of various metabolites. Insulin as a price signal for glucose is one example.
- Presumably such price signals coordinate between spatially-separated organs with specialized roles in various physiological "supply chains". That should lead to general equilibrium models, and questions of convexity and stability.
- Can we back out an implied discount rate for the body's long-term energy stores?

Yet when I run a google search for the obvious phrase "econophysiology", I get back five results, most of which appear to be misspellings. (I feel like I ought to write something right now just to call dibs on the name.)

Does anyone know of sources on this sort of thing? Is there a name for it?
