
Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can't look ahead, which is fine when it's possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. In the case of infosec, it's as all-or-nothing as it gets. A single mistake exposes the whole system to attack by adversaries. In this case, the attack could destroy the mind of the person using their neural connection.

Consider it from this perspective: a single deleterious mutation in the part of the genome encoding the security system opens the person up to someone else poisoning their mind in serious and sudden ways. Consider literal toxins, including the wide variety of organochlorides and other chemicals that bind acetylcholinesterase and cause seizures (this is how many pesticides work). But also consider memetic attacks that cause the person to act against their own interests. Yes, language also permits these attacks, but far less efficiently than being able to directly update someone's beliefs, memories, heuristics, and thoughts, which becomes entirely possible once there is a direct, physical connection to someone's brain from outside their skull. Eyes are bad enough, from this perspective!

A secure system would not only have to be secure for the individual it evolved in, but also be robust to the variety of mutations it will encounter in that individual's descendants. And the stage in between wherein some individuals have secure neural communication while others can have their minds ravaged by adversaries (or unwitting friends) would prevent any widespread adoption of the genes involved.

Over millions upon millions of years, it's possible that evolution could devise an ingenious system that gets around all of this, but my guess is that direct neural communication would only noticeably help language-bearing humans, who have existed for only ~100K years. Simpler organisms can just exchange chemicals or other simple signals. I don't think 100K years is nearly enough time to evolve a robust-to-mutations security system for a process that can directly update the contents of someone's mind.

I'm not sure what "statistically immoral" means nor have I ever heard the term, which makes me doubt it's common speech (googling it does not bring up any uses of the phrase).

I think we're using the term "historical circumstances" differently; I simply mean what's happened in the past. Isn't the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in other settings, but in the case where the base rate is a function of history, is it not accurate to say that the cause of the conundrum is historical circumstances? An alternative history with equal, or essentially equal, rates of convictions would not suffer from this problem, right?

I think what people mean when they say things like "machines are biased because they learn from history and history is biased" is precisely this scenario: historically, conviction rates are not equal between racial groups and so any algorithm that learns to predict convictions based on historical data will inevitably suffer from the same inequality (or suffer from some other issue by trying to fix this one, as your analysis has shown).
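
To make the arithmetic behind this concrete, here is a minimal sketch (my own illustration, not part of the original analysis) based on Chouldechova's identity, which determines a classifier's false positive rate once the base rate, positive predictive value (PPV), and false negative rate (FNR) are fixed. All numbers below are made up for illustration:

```python
def implied_fpr(prevalence, ppv, fnr):
    # Chouldechova (2017): FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
    # so fixing PPV and FNR across groups forces the false positive
    # rate to track each group's base rate p.
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Equal classifier quality (same PPV and FNR) for both groups, but
# different historical base rates (illustrative values, not real data):
for group, prevalence in [("A", 0.3), ("B", 0.5)]:
    print(group, round(implied_fpr(prevalence, ppv=0.7, fnr=0.2), 3))
# A 0.147
# B 0.343
```

So an equally calibrated predictor cannot give both groups equal false positive rates when their base rates differ; equalizing the error rates instead would break calibration, which is exactly the trade-off described above.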

Didn't you just show that "machines are biased because they learn from history and history is biased" is indeed the case? The base rates differ because of historical circumstances.

For #1, I'm not sure I agree that not everyone in the room knows. I've seen introductions like this at conferences dedicated entirely to proteins, where it was assumed, rightly or not, that everyone knows the basics. It's more that not everyone will have the information cached as readily as the specialists. So I agree that sometimes it is more accurate to say "As I'm sure most of you know," but many times you really are confident that everyone knows, just not necessarily at the tip of their tongue. It serves as a reminder, not actually new knowledge.

I suppose you could argue: since everyone is constantly forgetting little things here and there, even specialists forget some basics some of the time and so, at any given time, when a sufficiently large number of people is considered, it is very likely that at least one person cannot recall some basic fact X. Thus, any phrase like "everybody knows X" is almost certainly false in a big enough room.
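
As a quick sanity check of this argument (the failure probability below is an assumed value, purely for illustration): if each person independently draws a blank on fact X with probability p at any given moment, the chance that at least one person in a room of n people does so is 1 - (1 - p)^n, which approaches certainty surprisingly fast:

```python
# Assumed chance that any one person momentarily blanks on a basic fact;
# 0.02 is an illustrative guess, not an empirical estimate.
p = 0.02
for n in (10, 50, 200):
    print(n, round(1 - (1 - p) ** n, 3))
# 10 0.183
# 50 0.636
# 200 0.982  -> in a big enough room, a literal "everybody knows X" is almost surely false
```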

With this definition of knowledge, I would agree with you that the phrase should be "as most of you know" or something similarly qualified. But I find this definition of knowledge sort of awkward and unintuitive. There is always some amount of prompting, some kind of cue, some latency required to access my knowledge. I think "remembers after 30 seconds of context" still counts as knowledge, for most practical purposes, especially for things outside my wheelhouse. Perhaps the most accurate phrase would be something like "As everyone has learned but not necessarily kept fresh in their minds..."

For #2, I should have clarified: this was an abbreviated reference to a situation in an apartment complex I lived in, where management regularly reminded everybody that bears would wreak havoc if trash were left out, and people regularly left trash out, to the delight of the bears. So I think in that scenario, everybody involved really did know; they just didn't care enough.

Echoing the other replies so far, I can think of other practical explanations for saying "everybody knows..." that don't fall into your classification.

1) Everybody knows that presenting a fact X to someone who finds X obvious can sometimes give them the impression that you think they're stupid/uninformed/out-of-touch. For instance, the sentence you just read. For another instance, the first few slides of a scientific talk often present basic facts of the field, e.g. "Proteins comprise one or more chains of amino acids, of which there are 20 natural types." Everybody who's a professional biologist/biochemist/bioinformatician/etc. knows this [1]. If you present this information as being even a little bit novel, you look ridiculous. So a common thing to do is to preface such basic statements of fact with "As is well known / As everybody knows / As I'm sure you know / etc." [2]

No bad faith at all! Just a clarification that your statements are meant to help newcomers or outsiders who may not remember such facts as readily as people who work with them every day.

2) I find myself saying "but everybody knows..." to myself or the person I'm talking to when trying to understand puzzling behavior of others. For example, "everybody knows that if trash bags are left outside the dumpster, bears will come and tear everything up, so why do people keep leaving them there?" In this context, the "everybody knows" clause isn't meant as a literal truth but as a seemingly reasonable hypothesis in tension with concrete evidence to the contrary. If everybody has been told, repeatedly, that trash is to be put in the dumpster and not next to it, why do they act like they don't know this? Obviously there is no real mystery here: people do know, they just don't care enough to put in the effort.

But especially in more complex situations, it often helps to lay out a bunch of reasonable hypotheses and then think about why they might not hold. "Everybody knows ..." is a very common type of reasonable hypothesis and so discussion of this sort will often involve good faith uses of the phrase. Put another way: not all statements that look like facts are meant as facts and in particular, many statements are made expressly for the purpose of tearing them down as an exercise in reasoning (essentially, thinking out loud). But if you're not aware of this dynamic, and it's done too implicitly, it might seem like people are speaking in bad faith.

I guess what I'm trying to say in general is: "this statement of fact is too obviously false to be a mistake" has two possible implications: one, as you say, is that the statement was made in bad faith. The other, though, is that it's not a statement of fact. It's a statement intended to do something more so than to say something.

[1] Of course, even such basic facts aren't strictly true. There are more than 20 natural amino acids if you include all known species, but, as everybody knows, everybody excludes selenocysteine and pyrrolysine from the canonical list.

[2] The alternative is to exclude these first few slides altogether, but this often makes for a too-abrupt start and the non-specialists are more likely to get lost partway through without those initial reminders of what's what.

Answer by sirjackholland · Jul 01, 2019

Simplified examples from my own experience of participating in or witnessing this kind of disagreement:

Poverty reduction: Alice says "extreme poverty is rapidly falling" and Bob replies "$2/day is not enough to live on!" Alice and Bob talked past each other for a while until realizing that these statements are not in conflict; the conflict concerns the significance of making enough money to no longer be considered in "extreme poverty." The resolution came from recognizing that extreme poverty reduction is important, but that even 0% extreme poverty does not imply that we have solved starvation, homelessness, etc. That is, Alice thought Bob was denying how fast and impressively extreme poverty is being reduced, which he was not, and Bob thought Alice believed approaching 0% extreme poverty was sufficient, when she in fact did not.

Medical progress: Alice says "we don't understand depression" and Bob replies "yes we do, look at all the anti-depression medications out there." Alice and Bob talked past each other for a while, with the discussion getting increasingly angry, until it was realized that Alice's position was "you don't fully understand a problem until you can reliably fix it" and Bob's position was "you partially understand a problem when you can sometimes fix it". These are entirely compatible positions and Alice and Bob didn't actually disagree on the facts at all!

Free markets: Alice says "free markets are an essential part of our economy" and Bob replies "no they're not because there are very few free markets in our economy and none of the important industries can be considered to exist within one." The resolution to this one is sort of embarrassing because it's so simple and yet took so long to arrive at: Alice's implicit definition of a free market was "a market free from government interference" while Bob's implicit definition was "a market with symmetric information and minimal barriers to entry." Again, while it sounds blindingly obvious why Alice and Bob were talking past each other when phrased like this, it took at least half an hour of discussion among ~6 people to come to this realization.

Folk beliefs vs science: Alice says "the average modern-day Westerner does not have a more scientific understanding of the world than the average modern-day non-Westerner who harbors 'traditional'/'folk'/'pseudoscientific' beliefs" and Bob replies "how can you argue that germ theory is no more scientific than the theory that you're sick because a demon has inhabited you?" After much confusing back and forth, it turns out Alice is using the term 'scientific' to denote the practices associated with science while Bob is using the term to denote the knowledge associated with science. The average person inculcated in Western society indeed has more practical knowledge about how diseases work and spread than the average person inculcated in their local, traditional beliefs, but both people are almost entirely ignorant of why they believe what they believe and could not reproduce the knowledge if needed; e.g., the average person does not know the biological differences between a virus and a bacterium even though they are aware that antibiotics work on bacteria but not viruses. Once the distinction was made between "science as a process" and "science as the fruits of that process," Alice and Bob realized they actually agreed.

I think the above are somewhat "trivial" or "basic" examples in that the resolution came down to clearly defining terms: once Alice and Bob understood what each was claiming, the disagreement dissolved. Some less trivial ones for which the resolution was not just the result of clarifying nebulous/ambiguous terms:

AI rights: Alice says "An AGI should be given the same rights as any human" and Bob replies "computer programs are not sentient." After much deliberation, it turns out Alice's ethics are based on reducing suffering, where the particular identity and context surrounding the suffering don't really matter, while Bob's are based on protecting human-like life, with the moral value of entities rapidly decreasing as an inverse function of human-like-ness. Digging deeper, for Alice, any complex system might be sentient, and the possibility of a sentient being suffering is particularly concerning when that being is traditionally not considered to have any moral value worth protecting. For Bob, sentience can't possibly exist outside of a biological organism, so efforts toward ensuring that computer programs aren't deleted while running are a distraction that seems orthogonal to ethics. So while the ultimate question of "should we give rights to sentient programs?" was not resolved, a great amount of confusion was reduced when Alice and Bob realized they disagree about a matter of fact - can digital computers create sentience? - and not so much about how to ethically address suffering once the matter of who is suffering has been agreed on. (Actually, it isn't so much a "matter of fact," since further discussion revealed substantial metaphysical disagreements between Alice and Bob, but at least the source of the disagreements was discovered.)

Government regulation: Alice says "the rise of the internet makes it insane to not abolish the FDA" and Bob replies "A lack of drug regulation would result in countless deaths." Alice and Bob angrily, vociferously disagree with each other, unfortunately ending the discussion with a screaming match. Later discussion reveals that Alice believes drug companies can and will regulate themselves in the absence of the FDA and that 1) for decades now, essentially no major corporation has deliberately hurt their customers to make more profit and that 2) the constant communication enabled by the internet will educate customers on which of the few bad apples to avoid. Bob believes drug companies cannot and will not regulate themselves in the absence of the FDA and that 1) there is a long history of corporations hurting their customers to make more profit and that 2) the internet will promote just as much misinformation as information and will thus not alleviate this problem. Again, the object-level disagreement - should we abolish the FDA given the internet? - was not resolved, but the reason for that became utterly obvious: Alice and Bob have *very* different sets of facts about corporate behavior and the nature of the internet.

How to do science: Alice says "you should publish as many papers as possible during your PhD" and Bob replies "paper count is not a good metric for a scientist's impact." It turns out that Alice was giving career advice to Carol in her particular situation while Bob was speaking about things in general. In Carol's particular, bespoke case, it may have been true that she needed to publish as many papers as possible during her PhD in order to have a successful career even though Alice was aware this would create a tragedy-of-the-commons scenario if everyone were to take this advice. Bob didn't realize Alice was giving career advice instead of her prescriptive opinion on the matter (like Bob was giving).

Role playing: Alice says "I'm going to play this DnD campaign as a species of creature that can't communicate with most other species" and Bob replies "but then you won't be able to chat with your fellow party members or share information with them or strategize together effectively." Some awkwardness ensued until it became clear that Alice *wanted* to be unable to communicate with the rest of the party, due to anxiety-related concerns. Realizing this didn't actually reduce the awkwardness, since it was an awkward situation anyway, but Alice and Bob definitely talked past each other until the difference in assumptions was revealed. Had Bob realized what Alice's concerns were to begin with, he probably would not have initiated the conversation: he didn't have a problem with a silent character but simply wanted to ensure Alice understood the consequences, which the discussion revealed she did.

Language prescriptivism: Alice says "that's grammatically incorrect" and Bob replies "there is no 'correct' or 'incorrect' grammar - language is socially constructed!" Alice and Bob proceed to have an extraordinarily unproductive discussion until Alice points out that while she doesn't know exactly how it's decided what is correct and incorrect in English, there *must* be some authority that decides, and that's the authority she follows. While Alice and Bob did not come to an agreement per se, it became clear that what they really disagreed about was whether or not the English language has a definitive authority, not whether or not one should follow the authority assuming it exists.

I'm going to stop here so the post isn't too long, but I very much enjoyed thinking about these circumstances and identifying the "A->B" vs "X->Y" pattern. So much time and emotional energy was wasted that could have been saved had Alice and Bob first established exactly what they were talking about.


Answer by sirjackholland · Jun 29, 2019

One notable aspect in my experience with this is that exhaustion is not exclusively a function of the decision's complexity. I can experience exhaustion when deciding what to eat for dinner, for instance, even though I've made similar decisions literally thousands of times before, the answer is always obvious (cook stuff I have at home or order from a restaurant I like - what else is there?), and the stakes are low ("had I given it more thought, I would have realized I was more in the mood for soup than a sandwich" is not exactly a harrowing loss).

Another aspect to note is that decisions that end up exhausting me usually entail doing work I don't want to do. I never get exhausted when deciding where to hike, for instance, because no matter what, I know I will enjoy myself, even if one spot requires a long drive, or inconvenient preparations, or whatever. One possibility is that part of me recognizes that the correct decision will inevitably cause me to do work I don't want to do. Actually deciding sets whatever work I have to do into motion, while "deliberating" endlessly lets me put it off, which might end up feeling internally like the decision is hard to make. A motivated mind is great at coming up with bogus reasons for why an obvious decision is not so obvious.

A key insight for me was recognizing that my reluctance to do work is pretty much inversely proportional to what I expect the value of its product to be, with that expectation biased towards short-term gains unless I explicitly visualize the long-term consequences. If I realize that the best decision for dinner is to cook, and that reminds me that I need to do dishes and chop vegetables and clean the stove, etc., then I have a hard time "deciding" that cooking is the way to go, because it implies that in the short term, I will be less happy than I am currently. If I think about the scenario where I procrastinate and don't cook, and focus on how hungry I will be and how unpleasant that feeling is, then my exhaustion often fades and the decision becomes clearer.

Thanks for the spot check! I had heard this number (~4 hours per day) as well and I now have much less confidence in it. That most of the cited studies focus on memorization / rote learning seriously limits their generality.

Anecdotally, I have observed soft limits on the amount of "good work" I can do per day. In particular, I can do good work for several hours in a day but - somewhat mysteriously - I find it more difficult to do even a couple of hours of good work the next day. I say "mysteriously" because sometimes the lethargy manifests itself in odd ways, but the end result is always less productivity. My folk-theory-ish explanation is that I have some amount of "good work" resources that only gradually replenish, but I have no idea what the actual mechanism might be, and my understanding is that ego depletion has not survived the replication crisis, so I'm not very confident in this.
