# All of Zetetic's Comments + Replies

I see your point here, although I will say that decision science is ideally a major component in the skill set for any person in a management position. That being said, what's being proposed in the article here seems to be distinct from what you're driving at.

Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g. battling sunk-cost fallacy or NIH. More relevant to that problem set would be a strong knowledge of kno...

2shminux10y
Yes. A CEO is by nature an optimist, with a "can do" approach. A CRO would report to the board directly to balance this optimism. This way the board, the CEO and the company will not be blindsided by the results of poor decisions, or anything else short of black swans. Currently, lip service is paid to this approach in SEC filings, under possible risks and such. Of course, this is a rather idealistic point of view. In most public companies the board members do not share in the troubles of their company, only in its benefits, so it would be easy for them to marginalize the role of the CRO and restrict it to checking for legislative compliance only. No one likes hearing about potential problems. Besides, if the CRO brings up an issue before the board, assigns a high probability to it, but no action is taken and the risk comes to pass, the board members might be found responsible. They would never want that.

Just thought I'd point out, actuaries can also do enterprise risk management. Also, a lot of organizations do have a Chief Risk Officer.

2shminux10y
From Wikipedia: Unfortunately, this description is missing the point. Main existential risks come from the inside, like over-optimistic projections, sunk cost-based decisions, NIH syndrome behavior, rotting corporate culture, etc.

I think it's fair to say that most of us here would prefer not to have most Reddit or Facebook users included on this site, the whole "well-kept garden" thing. I like to think LW continues to maintain a pretty high standard when it comes to keeping the sanity waterline high.

This is part of why I tend to think that for the most part, these works aren't (or if they are, they shouldn't be) aimed at de-converting the faithful (who have already built up a strong meme-plex to fall back on), but rather for interception and prevention for young potential converts and people who are on the fence. Particularly college kids who have left home and are questioning their belief structure.

The side effect is that something that is marketed well towards this group (imo, this is the case with "The God Delusion") comes across as sh...

1David_Gerard11y
I think in practice, it has to be a movement and it has to, in its various parts, work all the angles at once. Which is pretty much the present state of things - there's plenty of work to go around.

I've met both sorts, people turned off by "The God Delusion" who really would have benefited from something like "Greatest Show on Earth", and people who really seemed to come around because of it (both irl and in a wide range of fora). The unfortunate side-effect of successful conversion, in my experience, has been that people who are successfully converted by rhetoric frequently begin to spam similar rhetoric, ineptly, resulting mostly in increased polarization among their friends and family.

It seems pretty hard to control for enou...

Amongst the sophisticated theists I know (Church of England types who have often actually read large chunks of the Bible and don't dispute that something called "evolution" happened), they will detail their objections to The God Delusion at length ... without, it turns out, having actually read it. This appears to be the religious meme defending itself. I point them at the bootleg PDF and suggest they actually read it, then complain ... at which point they usually never mention it ever again.

First, I do have a couple of nitpicks:

Why evolve a disposition to punish? That makes no sense.

That depends. See here for instance.

Does it make sense to punish somebody for having the wrong genes?

This depends on what you mean by "punish". If by "punish" you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense....

I'm not sure if it's elementary, but I do have a couple of questions first. You say:

what each of us values to themselves may be relevant to morality

This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fa...

I initially wrote up a bit of a rant, but I just want to ask a question for clarification:

Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?

I'm worried that you don't because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)

-1AlonzoFyfe11y
Evolutionary Biology might be good at telling us what we value. However, as G.E. Moore pointed out, ethics is about what we SHOULD value.

What evolutionary ethics will teach us is that our minds/brains are malleable. Our values are not fixed. And the question of what we SHOULD value makes sense because our brains are malleable. Our desires - just like our beliefs - are not fixed. They are learned. So, the question arises, "Given that we can mold desires into different forms, what SHOULD we mold them into?"

Besides, evolutionary ethics is incoherent. "I have evolved a disposition to harm people like you; therefore, you deserve to be harmed." How does a person deserve punishment just because somebody else evolved a disposition to punish him? Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?

Why evolve a disposition to punish? That makes no sense. What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes? What, according to evolutionary ethics, is the role of moral argument? Does genetics actually explain such things as the end of slavery, and a woman's right to vote? Those are very fast genetic changes.

The reason that the Euthyphro argument works against evolutionary ethics is that - regardless of what evolution can teach us about what we do value - it teaches us that our values are not fixed. Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer. Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires - resulting in an institution where the question of the difference between right
0[anonymous]11y
IMO, what each of us values to themselves may be relevant to morality. What we intuitively value for others is not. I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response. Thanks

As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", taking the specification of R (program length) we could do a formal verification proof that R will not find any proofs, as R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long is on the order of:

n + Length(proof in T that R won't print with the starting true statement that "T can't prove a falsehood in f(n) symbols")

This would vary some with the length of R and with the choice of T.

Typically you make a "sink" post with these sorts of polls.

ETA: BTW, I went for the paper. I tend to skim blogs and then skip to the comments. I think the comments make the information content on blogs much more powerful, however.

You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.

Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods

4wedrifid11y
I commit to donating $20k to the organisation if they adopt this name! Or $20k worth of labor, whatever they prefer. Actually, make that $70k.

Bob's definition contains my definition

Well here's what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob's complete definition, then it isn't any more transparent to me. In this case, we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that ma...

I'm not sure I completely understand this, so instead of trying to think about this directly I'm going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:

Agent A generates a hypothesis about an agent, B, which is analogous to Bob. B will generate a copy of A in any universe that agent B occupies iff agent A isn't there already and A would do the same. Agent B lowers the daily expected utility for agent A by X. Agent A learns that it has the option to make agent B, should A have pre-committed to B's deal?...
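On this formalization, the decision reduces to an expected-utility comparison across the worlds in which each agent exists. Here's a toy sketch of that comparison; all parameter names and numbers are my own illustrative assumptions, not anything from the thread:

```python
# Toy expected-utility model of the precommitment deal (illustrative only).
# p_self: prior measure of worlds where agent A exists anyway.
# p_bob:  prior measure of worlds where only B exists and would create A.
# u_existence: utility A assigns to being created in a B-only world.
# x: daily utility cost B imposes on A, over `days` days.

def should_precommit(p_self, p_bob, u_existence, x, days):
    gain = p_bob * u_existence   # A gets created in worlds it otherwise wouldn't exist in
    loss = p_self * x * days     # A pays B's daily cost in worlds where A exists anyway
    return gain > loss

# If the B-only worlds are a thousand times less probable, even a large
# creation benefit is swamped by the ongoing cost:
print(should_precommit(p_self=0.999, p_bob=0.001, u_existence=100, x=1, days=365))
```

This is just the structure of the trade-off, of course; the hard part is assigning the priors in the first place.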

0Viliam_Bur11y
It certainly needs to be refined, because if I live in a thousand universes and Bob in one, I would be decreasing my utility in a thousand universes in exchange for additional utility in one. I can't make an exact calculation, but it seems obvious to me that my existence has much greater prior probability than Bob's, because Bob's definition contains my definition -- I only care about those Bobs who analyze my algorithm, and create me if I create them. I would guess, though I cannot prove it formally, that compared to my existence, his existence is epsilon, therefore I should ignore him. (If this helps you, imagine a hypothetical Anti-Bob that will create you if you don't create Bob; or he will create you and torture you for eternity if you create Bob. If we treat Bob seriously, we should treat Anti-Bob seriously too. Although, honestly, this Anti-Bob is even less probable than Bob.)

SPARC for undergrads is in planning, if we can raise the funding.

Awesome, glad to hear it!

See here.

Alright, I think I'll sign up for that.

Anything for undergrads? It might be feasible to do a camp at the undergraduate level. Long term, doing an REU style program might be worth considering. NSF grants are available to non-profits and it may be worth at least looking into how SIAI might get a program funded. This would likely require some research, someone who is knowledgeable about grant writing and possibly some academic contacts. Other than that I'm not sure.

In addition, it might be beneficial to identify skill sets that are likely to be useful for SI research for the benefit of those who might be interested. What skills/specialized knowledge could SI use more of?

4lukeprog11y
SPARC for undergrads is in planning, if we can raise the funding. See here [http://lesswrong.com/lw/aus/please_advise_the_singularity_institute_with_your/] .

My bigger worry is more along the lines of "What if I am useless to the society in which I find myself and have no means to make myself useful?" Not a problem in a society that will retrofit you with the appropriate augmentations/upload you etc., and I tend to think that is more likely than not, but what if, say, the Alcor trust gets us through a half-century-long freeze and we are revived, but things have moved more slowly than one might hope, yet fast enough to make any skill sets I have obsolete? Well, if the expected utility of living is su...

If I like and want to hug everyone at a gathering except one person, and that one person asks for a hug after I've hugged all the other people and deliberately not hugged them, that's gonna be awkward no matter what norms we have unless I have a reason like "you have sprouted venomous spines".

Out of curiosity, are there any particular behaviors you have encountered at a gathering (or worry you may encounter) that you find off-putting enough to make the hug an issue?

5Alicorn11y
I prefer to hug only people I like, and I don't like literally everyone. Hugging people I merely don't like that much is not so much "an issue" as it is "a thing I do not think should be subject to social pressure" - who I'm going to touch and how should be solely about the intersection of my preferences and the other person's. It's not about a specific behavior (i.e. I'm not particularly afraid someone's going to take a hug and turn it into unexpected rear-grabbing or anything like that).

I'm 100% for this. If there were such a site I would probably permanently relocate there.

essentially erasing the distinction of map and territory

This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

Firstly, even if you tak...

0[anonymous]11y
It is true that a "shiny" new ontological perspective changes little. Practical intelligences are still Bayesians, for information-theoretic reasons. What my rather odd idea looks at is specifically what one might call the laws of physics and the mystery of the first cause. And if one might know the Math behind the Universe, the only thing that one might get is a complete theory of QM.

Disagreement is perfectly fine by me. I don't agree with the entirety of the sequences either. It's disagreement without looking at the arguments first that bothers me.

Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:

Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before th...

3Bugmaster11y
I agree with pretty much everything you said (except for the sl4 stuff, because I haven't been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions -- outreach as well as technical. I'm not proposing that we set the barrier to entry at zero; I merely think that the guideline, "you must have read and understood all of the Sequences before posting anything" sets the barrier too high. I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it's possible that I'm biased in this regard, since I myself do not agree with everything Eliezer says in those posts.

But having an AI that circumvents its own utility function would be evidence towards poor utility function design.

By circumvent, do you mean something like "wireheading", i.e. some specious satisfaction of the utility function that involves behavior that is both unexpected and undesirable, or do you also include modifications to the utility function? The former meaning would make your statement a tautology, and the latter would make it highly non-trivial.

0[anonymous]11y
I mean it in the tautological sense. I try to refrain from stating highly non-trivial things without extensive explanations.

I'm going to assert that it has something to do with who started the blog.

I think he's talking about "free market optimism" - the notion that deregulation, lowered taxes, less governmental oversight, a removal of welfare programs etc. lead to optimal market growth and eventually to general prosperity. Most conservative groups in America definitely proselytize this idea, I'm not sure about elsewhere.

The sample list of subjects is even broader than all the subjects mentioned somewhere on this page.

In that case I'm a bit unclear about the sort of research I'd be expected to do were I in that position. Most of those subjects are very wide open problems. Is there an expectation that some sort of original insights be made, above and beyond organizing a clear overview of the relevant areas?

2lukeprog11y
No original insights required. You will not be asked to do tasks you can't do, or research subjects you can't research.

I think it might help if you elaborate on the process some: How are hours tracked? Is it done by the honor system or do you have some software? Will I need to work at any specific times of the day, or do I just need to be available for at least 20 hours? Is there a sample list of subjects?

Either way, I'll probably send in an application and go from there. I currently tutor calculus online for approximately the same pay, but this seems somewhat more interesting.

4lukeprog11y
Hours currently tracked on the honor system; but it's all pretty visible work, so if 2 hours are logged but I don't see any changes to the Google doc where the researcher is tracking their research efforts, I'll have questions. Work can be done during any hours of the day. Almost all correspondence is by email. The sample list of subjects is even broader than all the subjects mentioned somewhere on this page [http://lukeprog.com/SaveTheWorld.html].

I posted this article to the decision theory group a moment ago. It seems highly relevant to thinking concretely about logical uncertainty in the context of decision theory, and provides what looks to be a reasonable metric for evaluating the value of computationally useful information.

ETA: plus there is an interesting tie-in to cognitive heuristics/biases.

0Sniffnoy11y
Link nitpick: When linking to arXiv, please link to the abstract, not directly to the PDF.

The original article and the usual use of "Ugh Field" (in the link at the top of the post) are summarized as:

Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have, we call it an "Ugh Field". The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.

I agree that LW has Ugh Fields, but I can't see how AI risks is one. There may be fear associated with AI risks here but that is specifically because it ...

0Dmytry11y
Ahh, okay. Rather narrow definition. I was thinking more along thoughts associated with fear. Scare people with concept they don't very well understand, offer hope, and over time as they think about one and get scared or think of the other and get comfortable, they develop conditioned associations of the form A=bad, B=good, that can not be removed with logical arguments any more than you can argue a conditioned blink reflex out of someone.

Not to mention a massive underestimation of intermediate positions, e.g. the doubting faithful, agnostics, people with consciously chosen, reasonable epistemology etc. This sets that number to 0. I've met plenty of more liberal theists that didn't assert 100% certainty.

That makes sense. It still seems to be more of a rhetorical tool to illustrate that there is a spectrum of subjective belief. People tend to lump important distinctions like these together: "all atheists think they know for certain there isn't a god" or "all theists are foaming at the mouth and have absolute conviction", so for a popular book it's probably a good idea to come up with a scale like this, to encourage people to refine their categorization process. I kind of doubt that he meant it to be used as a tool for inferring Bayesian confidence (in particular, I doubt 6.9 out of 7 is meant to be fungible with P(god exists) = .01428).
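For what it's worth, the over-literal reading that produces a number like .01428 treats the 1-7 scale as a linear probability axis. A quick sketch of that (dubious) mapping, which is my own reconstruction of the arithmetic rather than anything Dawkins endorses:

```python
def naive_linear_probability(scale_position, scale_max=7.0):
    """Naively map a position on Dawkins' 1-7 belief scale to P(god exists),
    treating the scale as a linear probability axis (a dubious assumption)."""
    return (scale_max - scale_position) / scale_max

# 6.9 out of 7 maps to roughly 0.0143 under this reading.
print(round(naive_linear_probability(6.9), 5))
```

The point being that the scale orders degrees of belief, but nothing in the book licenses reading exact probabilities off it like this.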

There has, however, to be a mechanism for it to work better for correct positions than for incorrect ones. That is absolutely the key.

The whole point of studying formal epistemology and debiasing (major topics on this site) is to build the skill of picking out which ideas are more likely to be correct given the evidence. This should always be worked on in the background, and you should only be applying these tips in the context of a sound and consistent epistemology. So really, this problem should fall on the user of these tips - it's their responsibility ...

I'm speaking of people arguing. Not that there's all that much wrong with it - after all, the folks who deny global warming have to be convinced somehow, and they are immune to simple reasonable argument WRT the scientific consensus. No, they want to second-guess science, even though they never studied anything relevant outside the climate-related discussion.

I'm a tad confused. Earlier you were against people using the information they don't fully understand yet happens to be true, but here you seem to be suggesting that this isn't so bad and ...

0Dmytry11y
Well, I said it was irritating to see, especially if it doesn't work to convince anyone. If it works, well, the utility of e.g. changing the attitudes can exceed the dis-utility of it being annoying. It's interesting how, if one is to try to apply utilitarian reasoning, it is immediately interpreted as 'inconsistent'. May be why we are so bad at it - others' opinions matter. There has, however, to be a mechanism for it to work better for correct positions than for incorrect ones. That is absolutely the key.

Given that he's pretty disposed to throwing out rhetorical statements, I'd say that's a reasonable hypothesis. I'd be surprised if there was more behind it than simply recognizing that his subjective belief in any religion was 'very, very low', and just picking a number that seemed to fit.

Just look at the 'tips' for productive arguments. Is there a tip number 1: drop your position ASAP if you are wrong? Hell frigging no (not that it would work either, though, that's not how arguing ever works).

I've done my best to make this a habit, and it really isn't that hard to do, especially over the internet. Once you 'bite the bullet' the first time it seems to get easier to do in the future. I've even been able to concede points of contention in real life (when appropriate). Is it automatic? No, you have to keep it in the back of your mind, ju...

-2Dmytry11y
One should either cite the prevailing scientific opinion (e.g. on global warming), or present a novel scientific argument (where you cite the data you use). Other stuff really is nonsense. You can't usefully second-guess science.

Citing studies that support your opinion is cherry-picking, and is bad. Consider a drug trial: there were 2000 cases where the drug did better than placebo, and 500 cases where it did worse. If each trial was a study, the Wikipedia page would likely link to 20 links showing that it did better than placebo, including the meta-study, and 20 that it did worse. If it was edited to have 40 links that it did better, it'll have 40 links that it did worse. How silly is the debate where people just cite the cases they pick? Pointlessly silly.

On top of that, people (outside LessWrong, mostly) really don't understand how to process scientific studies. If there is a calculation that CO2 causes warming, then if the calculation is not incorrect, and some very basic physics is not incorrect, CO2 does cause warming. There's no 'countering' this study. The effect won't go anywhere, whatever you do. The only thing one could do is argue that CO2 somehow also causes cooling - an entirely new mechanism.

E.g. if snow were black rather than white, and ground were white rather than dark, one could argue that warming removes the snow, leading to a decrease in absorption and decreasing the impact of the warming. Alas, snow is white and ground is dark, so warming does cause further warming via this mechanism, and the only thing you can do is come up with some other mechanism that does the opposite. And so on. (You could disprove those by e.g. finding that snow, really, is dark, and ground, really, is white, or by finding that CO2 doesn't really absorb IR, but that's it.)

People don't understand the difference between calculating predictions and just free-form hypothesising that may well be wrong and needs to be tested with experiment, etc. etc. (i choose global

Nevermind

[This comment is no longer endorsed by its author]

I found myself wondering if there are any results about the length of the shortest proof in which a proof system can reach a contradiction, and found the following papers:

Paper 1 talks about partial consistency. We have statements of the following form:

$Con_{\mathrm{ZF}}(n)$ is a statement that there is no ZF-proof of a contradiction with length $\le n$.

The paper claims that this is provable in ZF for each n. The paper then shows that the proof length of these partial consistency statements is polynomial in n. The author goes on to derive analogous results pertain...
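In symbols, my reading of the paper's claim (the notation here is mine, not the paper's):

```latex
% Con_ZF(n): "there is no ZF-proof of a contradiction of length <= n."
% For every n this is a theorem of ZF, witnessed by a proof whose
% length is bounded by a polynomial in n:
\forall n \in \mathbb{N}:\quad
  \mathrm{ZF} \vdash \mathrm{Con}_{\mathrm{ZF}}(n),
  \qquad
  \bigl|\text{shortest ZF-proof of } \mathrm{Con}_{\mathrm{ZF}}(n)\bigr| \le p(n)
  \ \text{ for some fixed polynomial } p.
```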

0cousin_it11y
Thanks to you and Nesov for the references. I didn't know this stuff and it looks like it might come in handy.
There's stuff like that in Ch. 2 of Lindstrom's Aspects of Incompleteness, apparently elementary within the area.

This seems to be conflating rationality-centered material with FAI/optimal decision theory material and has lumped them all under the heading "utility maximization". These individual parts are fundamentally distinct, and aim at different things.

Rationality centered material does include some thought about utility, Fermi calculations and heuristics, but focuses on debiasing, recognizing cognitive heuristics that can get in the way (such as rationalization, cached thoughts) and the like. I've managed to apply them a bit in my day to day thought....

This is not so simple to assert. You have to think of the intensity of their belief in the words of Allah. Their fundamental worldview is so different from ours that there may be nothing humane left when we try to combine them.

CAVEAT: I'm using CEV as I understand it, not necessarily as it was intended as I'm not sure the notion is sufficiently precise for me to be able to accurately parse all of the intended meaning. Bearing that in mind:

If CEV produces a plan or AI to be implemented, I would expect it to be sufficiently powerful that it would entail c...

There is little in common between Eliezer, me and Al Qaeda terrorists, and most of it is in the so-called reptilian brain. We may end up with a set of goals and desires that are nothing more than "Eat Survive Reproduce," which would qualify as a major loss in the scheme of things.

I think you may possibly be committing the fundamental attribution error. It's my understanding that Al Qaeda terrorists are often people who were in a set of circumstances that made them highly susceptible to propaganda - often illiterate, living in poverty and with few, i...

-2diegocaleiro11y
6Rhwawn11y
No, that is completely wrong: the correlations are quite the opposite way, terrorists tend to be better educated and wealthier. Bin Laden is the most extreme possible example of that - he was a multimillionaire son of a billionaire!

Well the agent definition contains a series of conditionals. You have as the last three lines: if "cooperating is provably better than defecting", then cooperate; else, if "defecting is provably better than cooperating" then defect; else defect. Intuitively, assuming the agent's utility function is consistent, only one antecedent clause will evaluate to true. In the case that the first one does, the agent will output C. Otherwise, it will move through to the next part of the conditional and if that evaluates to true the agent will ou...
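The conditional structure described above can be sketched as straight-line code, with `provable` standing in as a stub for the actual bounded proof search (which is, of course, the hard part, and not something this sketch attempts):

```python
# Sketch of the agent's decision procedure as described above.
# `provable` is a caller-supplied stub for proof search in the formal
# theory; the real agent searches for proofs rather than evaluating
# these predicates directly.
def decide(provable):
    if provable("cooperating is better than defecting"):
        return "C"
    if provable("defecting is better than cooperating"):
        return "D"
    return "D"  # default action when neither comparison is provable

# With a consistent utility function at most one antecedent fires;
# if the first does, the later clauses are never reached.
print(decide(lambda s: s == "cooperating is better than defecting"))
```

This makes the ordering issue in the conjecture concrete: because the cooperation clause is checked first, proving the agent defects requires showing the first antecedent does not obtain, not merely that the second one does.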

2Nisan11y
Yep, this sounds about right. I think one does have to prove "#a > #b if a > b".

That would be very impressive, but I don't see that in any of the stuff on his semiotics on Wikipedia.

A caveat: I'm not at all sure how much I'm projecting on Peirce as far as this point goes. I personally think that his writings clarified my views on the scientific method (at the time I originally read them, which was a good while back) and I was concurrently thinking about machine learning - so I might just be having a case of cached apophenia.

However; if you want a condensed version of his semiotic look over this. You might actually need to read s...

0endoself11y
Well I saw some interesting ideas about science in Piaget, which is at least as tenuous. Okay, I read most of this, but not in too much detail. I'm guessing that the things like are not the crucial parts. I'm seeing the idea that one has a partially correct theory explaining one's observations and that it is continuously refined. Is that the main idea or am I missing something? It's valid, but I don't know how it compares to other ideas at the time. Also, the emphasis seems to be on refining one's ideas by continuing to contemplate the same evidence, which isn't very empirical, but I could be misunderstanding. That's interesting. There are a lot of people using three-valued logic today as if it is a huge insight that we can have a system that classifies statements as known to be true, known to be false, and unknown, or with three other, slightly different, categories, but in Peirce's day it was an important insight (well, there were similar ideas before, but they weren't formalized).
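As an aside, the three-valued scheme mentioned here - assuming the strong Kleene tables are the relevant formalization, which is my guess rather than anything stated in the comment - looks like this:

```python
# Strong Kleene three-valued logic: True, False, and None for "unknown".
def and3(a, b):
    if a is False or b is False:
        return False   # one False settles a conjunction outright
    if a is None or b is None:
        return None    # otherwise any unknown leaves it unknown
    return True

def not3(a):
    return None if a is None else (not a)

print(and3(True, None))   # unknown: the result hinges on the unknown operand
print(and3(False, None))  # False: settled regardless of the unknown operand
```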

I think there's a problem with your thinking on this - people can spot patterns of good and bad reasoning. Depending on the argument, they may or may not notice a flaw in the reasoning for a wide variety of reasons. Someone who is pretty smart probably notices the most common fallacies naturally - they could probably spot at least a few while watching the news or listening to talk shows.

People who study philosophy are going to have been exposed to many more diverse examples of poor reasoning, and will have had practice identifying weak points and exploit...

Well I can relay my impressions on Peirce and why people seem to be interested in him (and why I am):

I think that the respect for Peirce comes largely from his "Illustrations of the Logic of Science" series for Popular Science Monthly. Particularly "The Fixation of Belief" and "How to Make Our Ideas Clear".

When it comes to Tychism, it's kind of silly to take it in a vacuum, especially given that the notion of statistics being fundamental to science was new, and Newtonian determinism was the de facto philosophical stance of his d...

0endoself11y
I think his Tychism might have been justified. Statistics could make predictions assuming Tychism but it wasn't obvious that the same results were predicted by determinism. That's Bayesian evidence in favour of Tychism. His more useful statistical work is also impressive. That's definitely a lot of math and logic. That would be very impressive, but I don't see that in any of the stuff on his semiotics on Wikipedia. The passage you linked to seems to just be saying that with sufficient study it is possible to understand things. I don't see anything that anticipates information theory or knowledge as statistical modelling. Oh, he seems to have disobeyed endoself's first law of philosophy: "Have as little to do with Hegel as possible."

Yes, you're right. Looking at the agent function, the relevant rule seems to be defined for the sole purpose of allowing the agent to cooperate in the event that cooperation is provably better than defecting. Taking this out of context, it allows the agent to choose one of the actions it can take if it is provably better than the other. It seems like the simple fix is just to add this:

$\exists a,b\;\operatorname{Prv}\Bigl((\psi(\ulcorner\chi\urcorner,\underline{i})=D\to\pi_{\underline{i}}\chi()=\underline{a})\wedge(\psi(\ulcorner\chi\urcorner,\underline{i})=C\to\pi_{\underline{i}}\chi()=\underline{b})\Bigr)\wedge a>b$

2Nisan11y
So you modify the agent so that line 3 says "cooperating is provably better than defecting" and line 4 says "defecting is provably better than cooperating". But line 3 comes before line 4, so in proving Conjecture 4 you'd still have to show that the condition in line 3 does not obtain. Or you could prove that line 3 and line 4 can't both obtain; I haven't figured out exactly how to do this yet.
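A toy sketch may make the ordering issue above concrete. Here "proof search" is faked with a lookup set of statements the agent's theory can prove, so this illustrates only why the order of lines 3 and 4 matters, not real provability logic; all names are made up for the example:

```python
# Toy proof-based agent that checks its conditions in order, as in the
# modified agent discussed above.

def agent(provable):
    """provable: the set of statements the agent's theory can prove."""
    if "C_better_than_D" in provable:   # line 3: cooperating provably better
        return "C"
    if "D_better_than_C" in provable:   # line 4: defecting provably better
        return "D"
    return "D"                          # default when neither is provable
```

Because line 3 is checked first, an agent whose theory somehow proved both statements (e.g. an inconsistent theory) would cooperate; proving Conjecture 4 therefore requires showing either that line 3's condition does not obtain, or that lines 3 and 4 cannot both obtain.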

The utilitarian case is interesting because both Mill and Bentham seemed to espouse a multidimensional utility vector rather than a uni-dimensional metric. There is an interesting paper I've been considering summarizing that takes a look at this position in the context of neuroeconomics and the neuroscience of desire.

Of interest from the paper: They argue that "pleasure" (liking), though it comes from diverse sources, is evaluated/consolidated at the neurological level as a single sort of thing (allowing a uni-dimensional representation as is ...

1endoself11y
A lot of people really admire Peirce, and I have trouble understanding why. It's very possible that hindsight bias makes me underestimate his ability, though. His Tychism seems confused and, AFAICT, he justifies it with a mind projection fallacy: "I can't predict this so it's fundamentally random". Also, he classified everything into groups of three, so many of his good ideas were obscured by that weird numerological framework. Thirdly (heh), his theory of abduction seems to combine the generation and the assessment of hypotheses. Those processes are often done together and, indeed, it is usually most efficient to do them at the same time, but combining the two into a single process does not produce any deep insight into either. Anyway, most of his other work seems good; I'm just probably having trouble understanding it in its historical context. His work on logic seems about on par with Frege, whom I admire greatly for logic alone, so that's rather impressive, but Frege isn't praised the way Peirce is. You seem to be very familiar with Peirce though, so hopefully you can explain this to me.

I only looked at this for a bit so I could be totally mistaken, but I'll look at it closely soon, it's a nice write up!

My thoughts:

Are you sure a change of variables/values in your proof of Proposition 3 doesn't yield Conjecture 4? At first glance it looks like you could just change the variables and flip the indices for the projections (use pi_1 instead of pi_0) and in the functions A[U,i]. If you look at the U() defined for Conjecture 4, it's exactly the one in Proposition 3 with the indices i flipped and C and D flipped, so it's surprising to me if this doesn't work or if there isn't some other minor transformation of the first proof that yields a proof of Conjecture 4.

2AlexMennen11y
The agent function is not defined symmetrically with respect to C and D, so flipping the roles of C and D in the universe without also flipping their roles in the agent function does not guarantee that the outcome would change as desired.
2Nisan11y
One can try to take the proof of Proposition 3 and switch C and D around, but one quickly runs into difficulty: The second line of that proof contains $\exists a,b\;\operatorname{Prv}\bigl((A(\ulcorner U\urcorner,0)=C\to\pi_0 U()=\underline{a})\wedge(A(\ulcorner U\urcorner,0)=D\to\pi_0 U()=\underline{b})\bigr)\wedge a>b$, i.e. "cooperating provably results in a better payoff than defecting". This is enough to convince the agent to cooperate. But if you switch C and D, you end up with a proposition that means "cooperating provably results in a worse payoff than defecting". This is not enough to convince the agent to defect. The agent will defect if there is no proof that cooperating results in a better payoff than defecting. So at least one would have to say a bit more here in the proof.

Ha! Yeah, it seems that his name is pretty ubiquitous in mathematical logic, and he wrote or contributed to quite a number of publications. I had a professor for a sequence in mathematical logic whose thesis adviser was Barwise. The professor obtained his doctorate from UW Madison when it still had Barwise, Kleene, and Keisler, so he would tell stories about classes he had with one or another of them.

Barwise seems to have had quite a few interesting/powerful ideas. I've been wanting to read Vicious Circles for a while now, though I haven'...

Is there anything in particular you do consider a reasonably tight lower bound for a man-made extinction event? If so, would you be willing to explain your reasoning?

3CarlShulman11y
Mega-scale asteroid impacts (dinosaur-killer size) come close. Uncertainty there would be about whether we could survive climatic disruptions better than the dinosaurs did (noting that fish, crocodiles, mammals, birds, etc, survived) using technology.