
The best advice I ever heard for Imposter Syndrome was:

"It's okay: by definition, nobody is qualified to do something if it is truly cutting edge."

Thank you for this article.

Thank you for clarifying; I misunderstood your post.

Yes, you're right. "Essentially" arbitrary problems would be fair game.

There is a hierarchy of questions one can ask, though. That is, whatever oracle you introduce, you can now ask more complex questions, which require a more complex oracle to answer (basically, you can ask questions about the first oracle itself, which require another, more complex oracle to answer).
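For concreteness, this is just the Turing jump from computability theory: relativizing the halting problem to any oracle $A$ produces a strictly harder problem $A'$, so the hierarchy never closes:

$$\emptyset <_T \emptyset' <_T \emptyset'' <_T \cdots$$

where $A'$ is the halting problem for machines equipped with oracle $A$, and $A <_T A'$ holds for every $A$.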

When I saw you use the word "computer", I thought you meant a literal computer that we could in principle build.

If in World A, the majority were Alices ... not doing the job they loved (imagine a teacher who thinks education is important, but emotionally dislikes students), unreciprocally giving away some arbitrary % of their earnings, etc...

Is that actually better than World B? A world where the majority are Bobs: successful at their chosen craft, giving away some amount of their earnings, but keeping the majority, at a level they are comfortable with.

I'm surprised Bob didn't make the obvious rebuttals:

  1. Alice, why aren't you giving away 51% of your earnings? What methodology are you applying that says 50% is the best amount to give? Couldn't any arbitrary person X, giving 51% and working harder than you, make the same arguments to you that you are making to me?

  2. I've calculated that investing my earnings (either back into the economy via purchasing goods, or via stocks) will lead to the most good, as I'm actually incentivized to prioritize the growth of my investments (and I commit to donating it all after my death), whereas I cannot ensure the same for any organization I donate to.

Where Alice's argument is very strong, I would say, is in arguing that being Generative is a virtue, closely tied to Generosity.

The implication (which Bob argues could be harmful/counterproductive) is that if being Generative is a Virtue (or a precursor to the virtue of Generosity), then Sloth is a Sin, or at least a prerequisite/enabler for the Sin of anti-Generosity - basically, Greed.

Of course, telling someone that they are being too slothful may or may not be counterproductive - which is effectively Bob and Alice's discussion.

Perhaps Bob is arguing from the "do no harm" principle that doctors operate under, and so he would avoid harming people by calling them slothful. Essentially, he wouldn't pull the lever in this application of the trolley problem.

Whereas Alice is arguing from the principle that inaction is also an action, and so she views the non-action of not pulling the lever as worse than explicitly pulling it.

Or, she believes the expected net harm favors risking harm by calling Bob slothful.

There is a compromise position: establishing some kind of sufficient condition for determining how a given person would respond to the points made above. Funnily enough, people are having that very discussion about where to even post this article.

So, props for a job well done.

My immediate thoughts (apologies if they are incoherent): The predictability of belief updating could be due in part to what qualifies as "updating". In the examples given, belief updating seemed to happen when new information was presented. However, I'm not sure that models how we actually think.

What if "belief updating" is compounded at some interval, and in the absence of new information, old beliefs, when "updated", don't actually tend to change? Every moment you believe something, even in the absence of new information, would qualify as a moment of updating one's beliefs.

So, suppose you believe that a coin has a 50% chance of being weighted. Perhaps it matters whether one waits an hour before flipping the coin versus flipping it immediately. After all, it's not unreasonable that if you have believed something for a long time and have never been proven wrong, you assign it higher certainty than something you just started believing one minute ago. I understand that in the coin example this is an unreasonable conclusion, but I believe it holds in cases where we might expect counter-examples to have made themselves known to us over time (if they existed).

This could explain Polarization through a very natural lens: it's just a consequence of the fact that being presented with new information is rare, yet we still "update" our beliefs even when not presented with new information, and these updated beliefs hold more weight than if we had adopted them for the first time.

So, it could be that when we update a belief, we add in some weight factor, dependent on how many times we have updated the same belief in the past. So it's not that current probability equals the current expectation of your future probability.

But rather,

Current Probability + Past Held Probabilities (which are equivalent to the current one, or at least non-exclusive) = current expectation of future probability

The number of Past Held Probabilities to be considered could depend on the frequency at which one updates one's positions. As this would vary, it would explain why Polarization varies. It would even predict that people who hold a belief and think about it frequently (which we might use as an indicator of the belief updating more frequently), but don't encounter any new information or question their position, have a higher risk of Polarization. That seems self-evident, which I hope is an indicator that the proposed mathematics is reasonable.
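As a toy sketch of this weight factor (the class, the rehearse/update names, and the specific 1/(n+1) blending rule are all my own assumptions, not anything from the post):

```python
class Belief:
    """Toy model: rehearsing a belief without new evidence still counts
    as an "update" and entrenches it (the hypothetical weight factor)."""

    def __init__(self, p):
        self.p = p   # current probability
        self.n = 1   # how many times this belief has been "updated"

    def rehearse(self):
        # No new information: probability unchanged, entrenchment grows.
        self.n += 1

    def update(self, evidence_p):
        # New evidence is blended in with weight 1/(n+1), so the more
        # often the belief has been rehearsed, the less evidence moves it.
        w = 1 / (self.n + 1)
        self.p = (1 - w) * self.p + w * evidence_p
        self.n += 1

fresh = Belief(0.5)
entrenched = Belief(0.5)
for _ in range(100):      # a hundred evidence-free rehearsals
    entrenched.rehearse()

fresh.update(0.9)         # the same evidence...
entrenched.update(0.9)
print(fresh.p)            # 0.7   -- moves substantially
print(entrenched.p)       # ~0.504 -- barely moves: "polarization"
```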

You would be right in most cases. But there is still the issue of independent statements: "ZF is consistent" cannot be shown to be true or false (if ZF is consistent), via the Incompleteness Theorems.

So, some statements may not be shown to halt or not halt - which is the famous halting problem.

Any algorithm would be unable to tell you if the statement halts or doesn't halt. So, not all statements can be decided as true/false or halt/not-halt.
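The standard diagonal argument fits in a few lines; `halts` below is a hypothetical decision procedure, and the point is precisely that no such function can exist:

```python
def halts(program, argument):
    # Hypothetical: returns True iff program(argument) halts.
    # The construction below shows this function cannot exist.
    ...

def diagonal(program):
    # Do the opposite of whatever halts() predicts for a program
    # run on its own source.
    if halts(program, program):
        while True:   # halts() said "halts", so loop forever
            pass
    else:
        return        # halts() said "loops", so halt immediately

# Contradiction: consider diagonal(diagonal).
# If halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
# if it is False, diagonal(diagonal) halts. Either way, halts was wrong.
```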

As far as I know, all of standard Mathematics is done within ZF + some degree of Choice. So it makes sense to restrict discussion to ZF (with C or without).

My comment was a minor nitpick on the phrasing "in set theory, this is a solved problem". For me, "solved" implies that an apparent paradox has been shown, under additional scrutiny, not to be a paradox. For example, the study of convergent series (in particular the geometric series) solves Zeno's Paradox of Motion.
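For instance, the geometric series behind the dichotomy version of the paradox converges:

$$\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,$$

so the infinitely many sub-journeys add up to a finite distance, and at constant speed, a finite time.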

In Set Theory, Restricted Comprehension just forbids us from asking the question "Consider this Set, with Y property." That's quite a bit different from solving a paradox, in my book. Although it does remove the paradoxical object from our discourse, it's really more that Axiomatic Set Theory avoids the paradox rather than solving it.
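To spell out the contrast: unrestricted comprehension lets you form $\{x : \varphi(x)\}$ for any property $\varphi$, which is exactly what Russell exploits with $\varphi(x) \equiv x \notin x$. The Separation schema of ZF only licenses

$$\{x \in A : \varphi(x)\}$$

for a set $A$ you already have, so the Russell collection simply cannot be written down.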

I want to emphasize that this is a minor nitpick. It actually (I believe) serves to strengthen your overall point that RDT is an unsolved problem. I'm just adding that, as far as I can tell, this component of RDT (self-reference) isn't really adequately addressed in Standard Logic. If we allow self-reference, we don't always produce paradoxes - x = x is hardly, in any way, self-evidently paradoxical. But sometimes we do, such as in Russell's famous case.

The fact that we don't have a good rule system (in standard logic, to my knowledge) for predicting when self-reference produces a paradox indicates it's still something of an open problem. This may be radical, but I'm basically claiming that Restricted Comprehension isn't a particularly amazing solution to the self-reference problem; it's something of a throwing-the-baby-out-with-the-bathwater kind of solution. Although, to its merit, the fact that ZF hasn't produced any contradictions in all these years of study is an incredible feat.

Your point about having to sacrifice something to solve Russell's question is well taken. I think it may be correct: the removal of something may be the best kind of solution possible. In that sense, Restricted Comprehension may have "solved" the problem, as it may be the only kind of solution we can hope for.

Adele Lopez's answer was excellent, and I haven't had a chance to digest the referenced thesis, but it does seem to follow your proposed principle: to answer Russell's question we need to omit things.

Saying that Set Theory "solved the problem" by introducing Restricted Comprehension is maybe a stretch.

Restricted Comprehension prevents the question from even being asked. So, it "solves it" by removing the object from the domain of discourse.

The Incompleteness Theorems are Meta-Theorems: theorems about formal theories, rather than theorems within them.

I'm not sure Set Theory has really solved the self-reference problem in any real sense, besides avoiding it (which may be the best solution possible).

The closest might be the Recursion Theorems, which allow functions to "build themselves" by referencing earlier versions of themselves. But that isn't proper self-reference.
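A minimal concrete example of that "building themselves" flavor - ordinary definition by recursion, my own illustration rather than anything from the thread:

```python
def f(n):
    # Definition by recursion: f is "built" from its earlier values
    # f(n - 1), ..., f(0), never from f as a completed whole.
    # This is the sense in which it isn't proper self-reference.
    if n == 0:
        return 1
    return n * f(n - 1)

print(f(5))  # 120
```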

My issue with the practicality of any kind of real-world computation of self-reference is that I believe it would require infinite time/energy/space: each time you "update" your current self, you change, and so would need to update again, etc. You could approximate an RDT decision tree, but not apply it exactly. The exception being "fixed points": decisions which stay constant when we apply the RDT decision algorithm.
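A rough sketch of what approximating toward such a fixed point could look like; the update rule here is entirely made up for illustration:

```python
def approximate_fixed_point(update, decision, max_rounds=1000, tol=1e-9):
    """Iterate a self-referential decision update until it stops
    changing (a fixed point) or the budget runs out. In general there
    is no convergence guarantee, which is exactly the practical
    problem with computing genuine self-reference."""
    for _ in range(max_rounds):
        new_decision = update(decision)
        if abs(new_decision - decision) < tol:
            return new_decision          # fixed point reached
        decision = new_decision
    return decision                      # best finite-time approximation

# Toy update rule: the decision partly depends on (a model of) itself.
result = approximate_fixed_point(lambda d: 0.5 * d + 0.25, decision=0.0)
print(result)  # ~0.5, the unique fixed point of d = 0.5*d + 0.25
```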

I imagine you are referring to a Turing Machine halting or not halting.

There are statements in Set Theory which Turing Machines cannot interpret at all (formally, they have a particular complexity), and which require the introduction of "Oracles" to assist in interpreting them. These are called Oracle Turing Machines. They come up frequently in Descriptive Set Theory.

What do you mean by "believe in the Law of Excluded Middle"?

Do you need to believe it applies to all conceivable statements?

Usually, when one is working within a framework assuming the Law of Excluded Middle, it's only asserted for their Domain of Discourse.

Whether it's true outside that domain is irrelevant.

The Law of Excluded Middle is obviously false in the framework of quantum bits, where 1 = true and 0 = false. So, I doubt anyone believes it applies universally, under all interpretations.
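That is, a qubit can sit in a genuine superposition,

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

which is neither the "false" state $|0\rangle$ nor the "true" state $|1\rangle$ until it is measured.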