localdeity


To take some version of the opposite side: If we managed to figure out that, say, there was an X% chance per year of lab-leaking something like COVID, and a Y% chance per year of natural origin + wet market crossover producing something like COVID... that would determine the expected-value badness of lab practices and wet market practices, and the respective urgencies of doing something about them.  It wouldn't matter which specific thing happened in 2019.  (For an analogy, if the brakes on your car stopped working for 30 seconds while you were on the highway, this would be extremely concerning and warrant fixing, regardless of whether you managed to avoid crashing in that particular incident.)
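To make the expected-value comparison concrete, here's a quick sketch of the arithmetic. All numbers are made up purely for illustration; the point is that the per-year expected harm of each pathway is what drives policy priorities, regardless of which pathway produced the 2019 event.

```python
# Hypothetical illustration: policy urgency depends on per-year expected harm,
# not on which specific pathway happened in 2019.  All numbers are invented.
p_lab_leak_per_year = 0.02   # X: assumed chance/year of a COVID-scale lab leak
p_natural_per_year = 0.05    # Y: assumed chance/year of a natural spillover
harm = 10_000_000            # assumed deaths from one COVID-scale pandemic

ev_lab = p_lab_leak_per_year * harm
ev_natural = p_natural_per_year * harm

print(f"Expected deaths/year from lab leaks:  {ev_lab:,.0f}")
print(f"Expected deaths/year from spillover:  {ev_natural:,.0f}")
```

With these invented numbers, both pathways warrant attention, and their relative urgency follows from X and Y alone.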

That said, it seems unlikely that we'll get decent estimates on X and Y, and much more unlikely that there would be mainstream consensus on such estimates.  More likely, if COVID is proven to have come from a lab leak, then people will do something serious about bio-lab safety, and if it's proven not to have come from a lab leak, then people will do much less about bio-lab safety; this one data point will be taken as strong evidence about the danger.  So, getting an answer is potentially useful for political purposes.

(Remember: SARS 1 leaked from a lab 4 times.  That seems to me like plenty of evidence that lab leaks are a real danger, unless you think labs have substantially improved practices since then.)

Berkson's Bias seems to be where you're getting a subset of people that are some combination of trait X and trait Y; that is, to be included in the subset, X + Y > threshold.  Here, "> threshold" seems to mean "willing to advocate for regulations".  It seems reasonably clear that "pessimism (about the default course of AI)" would make someone more willing to advocate for regulations, so we'll call that X.  Then Y is ... "being non-libertarian", I guess, since probably the more libertarian someone is, the more they hate regulations.  Is that what you had in mind?

I would probably put it as "Since libertarians generally hate regulations, a libertarian willing to resort to regulations for AI must be very pessimistic about AI."
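The selection effect can be demonstrated with a quick simulation (the traits and threshold are hypothetical stand-ins): pessimism and regulation-tolerance are independent in the general population, but among people selected for "pessimism + regulation-tolerance > threshold", the two traits become negatively correlated.

```python
import random

random.seed(0)
n = 100_000
threshold = 1.5

# X = pessimism about AI, Y = tolerance for regulation.
# Independent standard normals in the full population.
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Berkson-style selection: only people with X + Y > threshold
# end up in the "advocates for regulation" subset.
advocates = [(x, y) for (x, y) in population if x + y > threshold]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation in full population: {corr(population):+.3f}")  # near zero
print(f"correlation among advocates:    {corr(advocates):+.3f}")   # negative
```

This is exactly the "a libertarian who still wants regulation must be very pessimistic" inference: within the selected subset, being low on one trait implies being high on the other.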

You might be interested in Gigerenzer's "bias bias" paper (reviewed here):

Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people’s random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics.

An example from the paper:

Unsystematic Error Is Mistaken for Systematic Error

The classic study of Lichtenstein et al. [about causes of death] illustrates the second cause of a bias bias: when unsystematic error is mistaken for systematic error. One might object that systematic biases in frequency estimation have been shown in the widely cited letter-frequency study (Kahneman, 2011; Tversky and Kahneman, 1973). In this study, people were asked whether the letter K (and each of four other consonants) is more likely to appear in the first or the third position of a word. More people picked the first position, which was interpreted as a systematic bias in frequency estimation and attributed post hoc to the availability heuristic. After finding no single replication of this study, we repeated it with all consonants (not only the selected set of five, each of which has the atypical property of being more frequent in the third position) and actually measured availability in terms of its two major meanings, number and speed, that is, by the frequency of words produced within a fixed time and by time to the first word produced (Sedlmeier et al., 1998). None of the two measures of availability was found to predict the actual frequency judgments. In contrast, frequency judgments highly correlated with the actual frequencies, only regressed toward the mean. Thus, a reanalysis of the letter-frequency study provides no evidence of the two alleged systematic biases in frequency estimates or of the predictive power of availability.

Because I'm not a real mathematician, I'm not going to find the actual limit rigorously; I'll just show that it's at least 50%.

Note that 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1.  Therefore, if we have a bunch of blue-eyed groups of size 1, 2, 4, ..., 2^(n-1), and one red-eyed group of size 2^n, then the overall fraction of snakes that are red-eyed is 2^n / (2^n + 2^n - 1), which, if we divide the numerator and denominator by 2^n, comes out to 1 / (2 - 1/(2^n)).  This is slightly above 1/2, and the limit as n -> ∞ is exactly 1/2.
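The construction above can be checked directly with exact rational arithmetic:

```python
from fractions import Fraction

def red_fraction(n):
    # Blue-eyed groups of sizes 1, 2, 4, ..., 2^(n-1) sum to 2^n - 1;
    # one red-eyed group has size 2^n.
    blue = 2**n - 1
    red = 2**n
    return Fraction(red, red + blue)  # = 1 / (2 - 1/2^n)

for n in (1, 2, 5, 10, 20):
    f = red_fraction(n)
    print(n, f, float(f))

# Always strictly above 1/2, approaching 1/2 from above as n grows:
assert all(red_fraction(n) > Fraction(1, 2) for n in range(1, 30))
```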

Consider how people say, for example, that it's impossible to revolt against the government using just personal firearms, given that the government has nukes, fighter jets etc.

People do say that kind of thing.  Counterarguments:

  • Successful revolts don't need to be capable of defeating the army in a fair fight.  All you need to do is make it sufficiently painful for them to keep fighting that they give up.  I think the Middle East has modern examples of this.
  • A revolt may have some portion of the army on its side, and another portion might refuse to fight their own people.  Nukes in particular—I would be extremely astonished if any government used a large nuke, killing a bunch of civilians, when putting down a rebellion.  (Maybe they'd use very small tactical nukes—equivalent to large conventional bombs—in situations where there'd be no civilian casualties, but I suspect (and hope) that there'd still be strong resistance to breaking the nuclear taboo.  And would there even be an advantage to doing so?  Are the tactical nukes cheaper than the equivalents?  Heh, someone has looked into it: probably not.)

Regarding the first part, here's what comes to mind: Long before brains evolved any higher capacities (for "conscious", "self-reflective", etc. thought), they evolved to make their hosts respond to situations in "evolutionarily useful" ways.  If you see food, some set of neurons fire and there's one group of responses; if you see a predator, a different set of neurons fire.

Then you might define "food (as perceived by this organism)" to be "what tends to make this set of neurons fire (when light reflects off it (for certain ranges of light) and reaches the eyes of this organism)".  Boundary conditions (like something having a color that's on the edge of what is recognized as food) are probably resolved "stochastically": whether something that's near the border of "food" actually fires the "food" neurons probably depends significantly on silly little environmental factors that normally don't make a difference; we tend to call this "random" and say that this almost-food thing has a 30% chance of making the "food" neurons fire.

There probably are some self-reinforcing things that happen, to try[1] to make the neurons resolve one way or the other quickly, and to some extent quick resolution is more important than accuracy.  (See Buridan's principle: "A discrete decision based upon an input having a continuous range of values cannot [always] be made within a bounded length of time.")  Also, extremely rare situations are unimportant, evolutionarily speaking, so "the API does not specify the consequences" for exactly how the brain will respond to strange and contrived inputs.

("This set of neurons fires" is not a perfectly well-defined and uniform phenomenon either.  But that doesn't prevent evolution from successfully making organisms that make it happen.)

Before brains (and alongside brains), organisms could adapt in other ways.  I think the advantage of brains is that they increase your options, specifically by letting you choose and execute complex sequences of muscular responses to situations in a relatively cheap and sensitive way, compared to rigging up Rube Goldberg macroscopic-physical-event machines that could execute the same responses.

Having a brain with different groups of neurons that execute different responses, and having certain groups fire in response to certain kinds of situations, seems like a plausibly useful way to organize the brain.  It would mean that, when fine-tuning how group X of neurons responds to situation Y, you don't have to worry about what impacts your changes might have in completely different situations ABC that don't cause group X to fire.

I suspect language was ultimately built on top of the above.  First you have groups of organisms that recognize certain things (i.e. they have certain groups of neurons that fire in response to perceiving something in the range of that thing) and respond in predictable ways; then you have organisms that notice the predictable behavior of other organisms, and develop responses to that; then you have organisms noticing that others are responding to their behavior, and doing certain things for the sole purpose[1] of signaling others to respond.

Learning plus parent-child stuff might be important here.  If your helpless baby responds (by crying) in different ways to different problems, and you notice this and learn the association, then you can do better at helping your baby.

Anyway, I think that at least the original notion of "a thing that I recognize to be an X" is ultimately derived from "a group of neurons that fire (reasonably reliably) when sensory input from something sufficiently like an X enters the brain".  Originally, the neuronal connections (and the concepts we might say they represented) were probably mostly hardcoded by DNA; later they probably developed a lot of "run-time configuration" (i.e. the DNA lays out processes for having the organism learn things, ranging from "what food looks like" [and having those neurons link into the hardcoded food circuit], through learning to associate mostly-arbitrary "language" tokens to concepts that existing neuron-groups recognize, to having general-purpose hardware for describing and pondering arbitrary new concepts).  But I suspect that the underlying "concept X <--> a group of neurons that fires in response to perceiving something like X, which gates the organism's responses to X" organization principle remains mostly intact.

  1. ^

    Anthropomorphic language shorthand for the outputs of evolutionary selection

Wow, that article has some delicious allegations.

Upon careful consideration, Shapiro’s accounting for the origins of EMDR is questionable. This is because saccades during everyday functioning are physiologically invisible (Moses & Hart, 1987). Rosen (1995) addressed this concern by asking six individuals if they could experience eye movements while walking around and thinking of positive and negative thoughts. None were successful.

After publication of Rosen’s challenge to Shapiro’s origin story she alerted members of an EMDR listserv (traumatic-stress@freud.apa.org, September 12, 1996) that a responsive critique would be published by a “world renowned perceptual psychology researcher.” Shapiro was referring to Robert Welch [...]

Welch’s praise of Shapiro’s sensitivity and diligence, following as it did Shapiro’s praise of his expertise, occurred without either party disclosing a likely conflict of interest: they had a relationship and married (Carey, 2019). Remarkably, a similar failure to disclose involved Shapiro’s earlier marriage in 1969 to Gerald Puk (retrieved March 1, 2021 from https://www.nycmarriageindex.com/) when both were students in Brooklyn, New York. [...]

Licensed in New York State and without academic credentials (PsycInfo, retrieved on March 1, 2021) Puk was not on the faculty at the Professional School of Psychological Studies in California: yet somehow he became a member of Shapiro’s dissertation committee (Shapiro, 1988). As with Welch, Shapiro’s relationship history with Puk remained undisclosed to relevant parties (Anne Hanley, dissertation committee member, personal communication March 10, 2021).

...

It was in 1985 that Shapiro published an article in Holistic Life Magazine and discussed Neuro-Linguistic Programming (NLP) theories on various topics including the importance of eye movement patterns (Shapiro, 1985, pp. 41–43):

Neuro-Linguistic Programming is a technique developed over eight years ago... It has been dubbed the “Super-Achievers” technology because the research team studied the most successful people they could find in law, medicine, business and psychology to see what made them so successful... In NLP, the key is that since people share the same neurological system, responses are predictable, verifiable, and repeatable. In other words, Neuro-Linguistic Programming is scientifically rather than merely theoretically based.

One of the findings of the Neuro-Linguistic Programming research is that all people cross-culturally (with the exception of the Basque nationality) show how they are thinking by the way their eyes move... Even without their saying a word, if you watch their eyes carefully, you can determine whether they are seeing a picture, hearing, or feeling something. As a further refinement, you can tell if they are remembering something or constructing it. Thousands have learned to walk on red-hot coals without injury, using Neuro-Linguistic Programming... Using Neuro-Linguistic Programming, people are shown how to tap into their own unlimited source of personal power, get rid of even the basic fear of fire and change their physiology to walk across the coals. The major dilemma that people are confronted with in Neuro-Linguistic Programming is the question of manipulation and free will. Since the powerful technology allows you to practically “read minds” and have people respond automatically in any way you choose, there is a distinct ethical issue.

(For those who aren't familiar: Wiki on firewalking)

Of course, it is possible that a person who appears to be generally dishonest, and over-credulous (and/or consciously dishonest) about the magic powers of NLP, might have stumbled upon a genuinely correct technique.  But it would seem prudent to, at the very least, discount any evidence that came from that person and anyone connected to her.

If all it is doing is letting you issue commands to a computer, sure, fine. But if it’s letting you gain skills or writing to your memory, or other neat stuff like that, what is to keep the machine (or whoever has access to it) from taking control and rewriting your brain?

This brings to mind the following quotes from Sid Meier's Alpha Centauri (1999).

Neural Grafting

"I think, and my thoughts cross the barrier into the synapses of the machine—just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams the sensibility of the machine invades the periphery of my consciousness. Dark. Rigid. Cold. Alien. Evolution is at work here, but just what is evolving remains to be seen."
– Commissioner Pravin Lal, "Man and Machine"

 

Mind-Machine Interface

"The Warrior's bland acronym, MMI, obscures the true horror of this monstrosity. Its inventors promise a new era of genius, but meanwhile unscrupulous power brokers use its forcible installation to violate the sanctity of unwilling human minds. They are creating their own private army of demons."
– Commissioner Pravin Lal, "Report on Human Rights"

Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight.

I highly recommend ordering an air purifier if you haven't already.  (In California we learned the utility of this from past wildfire seasons.)  Coway Airmega seems to be a decent brand.

Maybe kindness is also like this: there might be benefits to behaving kindly, in some situations. But a mind behaving kindly (pico-pseudokindly?) need not value kindness for its own sake, nor have any basic drive or instinct to kindness.

I feel like this is common enough—"are they helping me out here just because they're really nice, or because they want to get in my good graces or have me owe them a favor?"—that authors often have fictional characters wonder if it's one or the other.  And real people certainly express similar concerns about, say, whether someone donates to charity for signaling purposes or for "altruism".

Also reminds me:

"You don't see nice ways to do the things you want to do," Harry said. His ears heard a note of desperation in his own voice. "Even when a nice strategy would be more effective you don't see it because you have a self-image of not being nice."

"That is a fair observation," said Professor Quirrell. "Indeed, now that you have pointed it out, I have just now thought of some nice things I can do this very day, to further my agenda."

Harry just looked at him.

Professor Quirrell was smiling. "Your lesson is a good one, Mr. Potter. From now on, until I learn the trick of it, I shall keep diligent watch for cunning strategies that involve doing kindnesses for other people. Go and practice acts of goodwill, perhaps, until my mind goes there easily."

Cold chills ran down Harry's spine.

Professor Quirrell had said this without the slightest visible hesitation.
