Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can't look ahead, which is fine when it's possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. In the case of infosec, it's as all-or-nothing as it gets. A single mistake exposes the whole system to attack by adversaries. In this case, the attack could destroy the mind of the person using their neural connection.
Consider it from this perspective: a single delet...
I'm not sure what "statistically immoral" means nor have I ever heard the term, which makes me doubt it's common speech (googling it does not bring up any uses of the phrase).
I think we're using the term "historical circumstances" differently; I simply mean what's happened in the past. Isn't the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in oth...
Didn't you just show that "machines are biased because they learn from history and history is biased" is indeed the case? The base rates differ because of historical circumstances.
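To make the fairness problem concrete: there's a standard impossibility result (the one at the heart of the COMPAS debate) showing that if two groups have different base rates, a classifier with the same precision and recall for both groups must have different false positive rates. A minimal sketch, with made-up numbers:

```python
# With unequal base rates, equal precision (PPV) and equal recall (TPR)
# force unequal false positive rates (FPR). From PPV = TP/(TP+FP),
# TPR = TP/P, and FPR = FP/N, it follows that
#   FPR = TPR * (base_rate / (1 - base_rate)) * (1 - PPV) / PPV

def implied_fpr(base_rate, ppv=0.7, tpr=0.6):
    """FPR forced by a given base rate at fixed PPV and TPR (illustrative numbers)."""
    return tpr * (base_rate / (1 - base_rate)) * (1 - ppv) / ppv

print(implied_fpr(0.3))  # ~0.11
print(implied_fpr(0.6))  # ~0.39 -- same classifier quality, much higher FPR
```

So as long as the historical records yield different base rates, no amount of tuning makes all the usual fairness metrics agree at once.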
Going along with this, our world doesn't appear to be the result of each individual making "random" choices in this way. If every good decision was accompanied by an alternate world with the corresponding bad decision, you'd expect to see people do very unexpected things all the time. e.g., this model predicts that each time I stop at a red light, there is some alternate me that just blows right through it. Why aren't there way more car crashes if this is how it works?
For #1, I'm not sure I agree that not everyone in the room knows. I've seen introductions like this at conferences dedicated entirely to proteins, where it was assumed, rightly or not, that everyone knows the basics. It's more that not everyone will have the information cached as readily as the specialists. So I agree that sometimes it is more accurate to say "As I'm sure most of you know," but many times you really are confident that everyone knows, just not necessarily on the tip of their tongue. It serves as a reminder, not actu...
Echoing the other replies so far, I can think of other practical explanations for saying "everybody knows..." that don't fall into your classification.
1) Everybody knows that presenting a fact X to someone who finds X obvious can sometimes give them the impression that you think they're stupid/uninformed/out-of-touch. For instance, the sentence you just read. For another instance, the first few slides of a scientific talk often present basic facts of the field, e.g. "Proteins comprise one or more chains of amino acids, of which the...
Simplified examples from my own experience of participating in or witnessing this kind of disagreement:
Poverty reduction: Alice said "extreme poverty is rapidly falling" and Bob replied "$2/day is not enough to live on!" They talked past each other for a while before realizing that these statements are not in conflict; the conflict concerns the significance of making enough money to no longer be considered in "extreme poverty." The resolution came from recognizing that extreme poverty reduction is important, but that e...
One notable aspect in my experience with this is that exhaustion is not exclusively a function of the decision's complexity. I can experience exhaustion when deciding what to eat for dinner, for instance, even though I've made similar decisions literally thousands of times before, the answer is always obvious (cook stuff I have at home or order from a restaurant I like - what else is there?), and the stakes are low ("had I given it more thought, I would have realized I was more in the mood for soup than a sandwich" is not exactly a harro...
Thanks for the spot check! I had heard this number (~4 hours per day) as well and I now have much less confidence in it. That most of the cited studies focus on memorization / rote learning seriously limits their generality.
Anecdotally, I have observed soft limits on the amount of "good work" I can do per day. In particular, I can do good work for several hours in a day but - somewhat mysteriously - I find it more difficult to do even a couple hours of good work the next day. I say "mysteriously" because sometimes the lethargy manifest...
While a true Bayesian's estimate already includes the probability distributions of future experiments, in practice I don't think it's easy for us humans to do that. For instance, I know based on past experience that a documentary on X will not incorporate as much nuance and depth as an academic book on X. I *should* immediately reduce the strength of any update to my beliefs on X upon watching a documentary given that I know this, but it's hard to do in practice until I actually read the book that provides the nuance.
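The "already includes the probability distributions of future experiments" part is just conservation of expected evidence: today's credence must equal the probability-weighted average of the credences you might hold after the experiment. A toy check, with made-up numbers:

```python
# Conservation of expected evidence: the prior equals the expected posterior.
prior = 0.3            # P(H), made-up
p_e_given_h = 0.9      # P(evidence | H), made-up
p_e_given_not_h = 0.2  # P(evidence | not H), made-up

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
post_if_e = prior * p_e_given_h / p_e                  # Bayes, evidence observed
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # Bayes, evidence absent

print(p_e * post_if_e + (1 - p_e) * post_if_not_e)  # 0.3 -- exactly the prior
```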
In a context like t...
Other than both being pictographic, I'm not sure emoticons and reactions are that related. Emoticons are either objects (neither here nor there for our purposes) or facial/bodily expressions. Reactions are emotional or high-level responses to information.
You can't really express the thumbs-up reaction with a facial expression emoticon. You can use a smiley face or something similar, but thumbs-up means approval, not happiness. If someone says "I'll be five minutes late - start without me" I don't want to express happiness at t...
(Just an attempt at an answer)
Both an explanation and a prediction seek to minimize the loss of information, but the information in question differs between the two.
For an explanation, the goal is to make it as human understandable as possible, which is to say, minimize the loss of information resulting from an expert human predicting relevant phenomena.
For a prediction, the goal is to make it as machine understandable as possible, which is to say, minimize the loss of information resulting from a machine predicting relevant phenomena.
The reason there is...
I think what he's saying is that the existence of noise in computing hardware means that any computation done on this hardware must be (essentially) invariant to this noise, which leads the methods away from the precise, all-or-nothing logic of discrete math and into the fuzzier, smoother logic of probability distributions and the real line. This makes me think of analog computing, which is often done in environments with high noise and can indeed produce computations that are mostly invariant to it.
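A toy sketch of the kind of noise-invariance I have in mind (signal averaging recovering a discrete answer from noisy continuous values; not a model of how real analog hardware works):

```python
import random

def noisy_read(bit, sigma=1.0):
    """One noisy 'analog' reading of a bit encoded as +1 / -1."""
    return (1.0 if bit else -1.0) + random.gauss(0.0, sigma)

def decode(bit, n_reads=25, sigma=1.0):
    """Average many noisy reads and threshold; the noise mostly cancels."""
    mean = sum(noisy_read(bit, sigma) for _ in range(n_reads)) / n_reads
    return mean > 0.0

trials = 10_000
accuracy = sum(decode(True) for _ in range(trials)) / trials
print(accuracy)  # ~1.0, even though every individual read is very noisy
```

The decoded bit is all-or-nothing, but the computation that produces it lives in the smooth world of averages over distributions.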
But, of course, analog computing is a niche field dw...
The pre/post conflation reminds me of Terence Tao's discussion of math pre/post proofs (https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/), which I've found to be a helpful guide in my journeys through math. I'm not surprised the distinction occurs more widely than in just math, but this post has encouraged me to keep the concept on hand in contexts outside of math.
I also enjoyed the discussion about how various religions are all getting at the same concepts through different lenses/frameworks. A...
I'm a little confused about how the burden of proof ended up as it is in this discussion. I think most people intuitively understand that blackmail is a bad thing. Demanding that they articulate a rigorous, general argument for why seems like a much higher bar than we apply to most other things.
Consider murder. Murder should be illegal, obviously (I hope?! Not sure there is much to discuss if we disagree on that). But it's not trivial to construct a rigorous, general argument for why. Any demonstrated harm can be countered with another hypothe...
Something I didn't notice in the comments is how to handle the common situation that Bob is a one-hit wonder. Being a one-hit wonder is pretty difficult; most people are zero-hit wonders. Being a two-hit wonder is even more difficult, and very few people ever create many independent brilliant ideas / works / projects / etc.
Keeping that in mind, it seems like a bad idea to set a precedent of handing out epistemic tenure. Most people are not an ever-flowing font of brilliance, and so the case that their one hit is indicative of many more is much less li...
I would call this a good visual representation of technical debt. I like to think of it as chaining lots of independently reasonable low-order approximations until their joint behavior becomes unreasonable.
It's basically fine to let this abstraction be a little leaky, and it's basically reasonable to let that edge case be handled clumsily, and it's basically acceptable to assume the user won't ever give this pathological input, etc., until the number of "basically reasonable" assumptions N becomes large enough that 0.99^N ends...
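To put rough numbers on how fast that compounds (assuming, unrealistically, that the N assumptions hold or fail independently):

```python
# Probability that all N "99% safe" assumptions hold at once.
for n in (1, 10, 50, 100, 300):
    print(n, round(0.99 ** n, 3))
# 1 0.99 | 10 0.904 | 50 0.605 | 100 0.366 | 300 0.049
```

By a few hundred accumulated assumptions, the "basically fine" system is almost certainly broken somewhere.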
Alternative hypothesis: the internet encourages people who otherwise wouldn't contribute to the general discourse to contribute to it. In the past, contributing meant writing some kind of article, or at least a letter to the editor, which 1) requires a basic level of literacy and intellectual capacity, and 2) provides a filter, removing the voices of those who can't write something publishers consider worthy of publication (with higher-influence publications having, in general, stricter filters).
Anecdote in point: I have yet to see an internet comme...
I really like this framework! I've noticed that if someone makes a comment that assumes everyone in the group has CI, but I'm not sure if everyone does, I get a sense of awkwardness and feel the need to model two conversations: the one happening assuming everyone has CI, and the one happening assuming at least one person doesn't. This has the unfortunate side effect of consuming most of my thought-bandwidth, which makes me boring and quiet even if I would have otherwise been engaged and talkative.