My current theory is that self-esteem isn't about yourself at all!

Self-esteem is your estimate of how much help/support/contribution/love you can get from others.

This explains why a person needs to feel a certain amount of "confidence" before trying something that is obviously their best bet. By "confidence" we basically just mean "support from other people, or the expectation of it." The kinds of things people usually need "confidence" to do are difficult and carry the risk of public failure and blame, even when they're clearly the best option from an individual perspective.

Basically, AI professionals seem to be trying to manage the hype cycle carefully.

Ignorant people tend to be more all-or-nothing than experts. By default, they'll see AI as "totally unimportant or fictional", "a panacea, perfect in every way" or "a catastrophe, terrible in every way." And they won't distinguish between different kinds of AI.

Currently, the hype cycle has gone from "professionals are aware that deep learning is useful" (c. 2013) to "deep learning is AI and it is wonderful in every way and you need some" (c. 2015?) to "maybe there are problems with AI? burn it with fire! Nationalize! Ban!" (c. 2019).

Professionals who are still working on the "deep learning is useful for certain applications" project (which is pretty much where I sit) are quite worried about the inevitable crash when public opinion shifts from "wonderful panacea" to "burn it with fire." When the public opinion crash happens, legitimate R&D is going to lose funding, and that will genuinely be unfortunate. Everyone savvy knows this will happen. Nobody knows exactly when. There are various strategies for dealing with it.

Accelerate the decline: this is what Gary Marcus is doing.

Carve out a niche as an AI Skeptic (who is still in the AI business himself!). Then, when the funding crunch comes, his companies will be seen as "AI that even the skeptic thinks is legit" and will have a better chance of surviving.

Be Conservative: this is a less visible strategy but a lot of people are taking it, including me.

Use AI only in contexts that are well justified by evidence, like rapid image processing to replace manual classification. That way, when the funding crunch happens, you'll be able to say you're not just using AI as a buzzword, you're using well-established, safe methods that have a proven track record.

Pivot Into Governance: this is what a lot of AI risk orgs are doing.

Benefit from the coming backlash by becoming an advisor to regulators. Make a living not by building the tech but by talking about its social risks and harms. I think this is actually a fairly weak strategy because it's parasitic on the overall market for AI. There's no funding for AI think tanks if there's no funding for AI itself. But it's an ideal strategy for the cusp time period when we're just shifting from blind enthusiasm to blind panic.

Preserve Credibility: this is what Yann LeCun is doing and has been doing from day 1 (he was a deep learning pioneer and promoter even before the spectacular empirical performance results came in).

Try to forestall the backlash. Frame AI as good, not bad, and try to preserve the credibility of the profession as long as you can. Argue (honestly but selectively) against anyone who says anything bad about deep learning for any reason.

Any of these strategies is compatible with saying true things! In fact, assuming you really are an AI expert, the smartest thing to do in the long run is to say only true things, and use connotation and selective focus to define your rhetorical strategy. Reality has no branding; there are true things to say that comport with all four strategies. Gary Marcus is a guy in the "AI Skeptic" niche saying things that are, afaik, true; there are people in that niche who are saying false things. Yann LeCun is a guy in the "Preserve AI Credibility" niche who says true things; when Gary Marcus says true things, Yann LeCun doesn't deny them, but criticizes Marcus's tone and emphasis. Which is quite correct; it's the most intellectually rigorous way to pursue LeCun's chosen strategy.

Re: 2: nonprofits and academics have even stronger incentives than businesses to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. "This new thing has great opportunities and great risks; you need guidance to navigate and govern it" is a great advertisement for universities and think tanks. Which doesn't mean AI, narrow or strong, doesn't actually have great opportunities and risks! But nonprofits and academics aren't immune from the incentives to exaggerate.

Re: 4: I have a different perspective. The loonies who go to the press with "did you know psychiatric drugs have SIDE EFFECTS?!" are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public's loss of reverence, in the long run. Blind reverence for professionals is a freebie, which locally may be beneficial to the public if the professionals really are wise, but is essentially fragile. IMO it's not worth trying to cultivate or preserve. In the long run, good stuff will win out, and smart psychiatrists can just as easily frame themselves as agreeing with the anti-psych cranks in spirit, as being on Team Avoid Side Effects And Withdrawal Symptoms, Unlike All Those Dumbasses Who Don't Care (all two of them).

Some examples of valuable true things I've learned from Michael:

  • Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
  • Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you're not any smarter.
  • Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it "comes out right".) Sometimes the best work of this kind doesn't look grandiose or prestigious at the time you're doing it.
  • The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
  • Science was more efficient in the past (the late 19th through mid-20th centuries).
  • Examples of potentially valuable medical innovations that never see wide application are abundant.
  • A major problem in the world is a 'hope deficit' or 'trust deficit'; otherwise feasible good projects are left undone because people are so mistrustful that it doesn't occur to them that they might not be scams.
  • A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
  • Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out (see the toy simulation after this list). Not *all* conflicts are merely misunderstandings.
  • How intersubjectivity works; "objective" reality refers to the conserved *patterns* or *relationships* between different perspectives.
  • People who have coherent philosophies -- even opposing ones -- have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with "moderates" who take unprincipled but middle-of-the-road positions. Two "bullet-swallowers" can disagree on some things and agree on others; a "bullet-dodger" and a "bullet-swallower" will not even be able to disagree, they'll just not be saying commensurate things.
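
On the game-theory bullets above, here's a minimal simulation sketch of a strategy that expands, wipes out other strategies, and then wipes itself out. The functional form and parameters are invented for illustration (a discrete Lotka-Volterra-style toy), not anything Michael specified:

```python
# Toy dynamics (invented parameters): an "exploiter" strategy grows by
# consuming cooperators, exterminates them, and then collapses for lack
# of anything left to exploit.

cooperators, exploiters = 1000.0, 10.0

for step in range(41):
    consumed = 0.001 * cooperators * exploiters               # exploitation this round
    cooperators = max(0.0, cooperators * 1.05 - consumed)     # reproduce, then get eaten
    exploiters = max(0.0, exploiters * 0.9 + 0.5 * consumed)  # decay unless fed
    if step % 8 == 0:
        print(f"step {step:2d}: cooperators={cooperators:7.1f}, "
              f"exploiters={exploiters:6.1f}")

# Exploiters boom while cooperators crash; once the cooperators are gone,
# the exploiter population decays toward zero -- the strategy "wipes
# itself out," which is the game-theoretic gloss on "evil" above.
```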


I'm not actually asking for people to do a thing for me, at this point. I think the closest to a request I have here is "please discuss the general topic and help me think about how to apply or fix these thoughts."

I don't think all communication is about requests (that's a kind of straw-NVC), only that when you are making a request, it's often easier to get what you want by asking than by indirectly pressuring.

That's flattering to Rawls, but is it actually what he meant?

Or did he just assume that you don't need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?

Can you explain why return on cash vs. return on equity matters?

I'm struck by the assumption in this essay that you have a clear distinction between your own values and other people's.

I think that maintaining a clear sense of personal identity can be difficult, and that not everyone manages to hold on to their own perspective. I am concerned that this might be especially hard in an era of social media, when opinions are shared almost as soon as they are formed. Think of a blog/tumblr/fb that consists almost entirely of content copied from other sources: it is nominally a space curated/created by "you", but really it is a lot of other people's thoughts aggregated with very little personal modification. That could be a recipe for really poor internal coherence.

It's pretty standard psychologist's advice to have a journal where you write truly private reflections, shared with literally nobody else. I imagine this helps in constructing a self with boundaries.

Relatedly, "self-affirmation" (really kind of a misnomer: it means writing essays about what values are priorities for you) has a large psychology literature showing lots of good effects, and I find it extremely helpful for my own thoughts. A lot of self-help seems to boil down to "sit down and write reflections on what your priorities are." Complice is this in productivity-app form, The Desire Map is this in book form, etc.

Note that the examples in the essay of mechanisms that produce inefficiency are union work rules, non-compete agreements between firms, tariffs, and occupational licensing laws. The first three are not federal regulations on industries, and so would not show up in a comparison of industry dynamism vs. regulatory stringency.

Ok, this is a counterargument I want to make sure I understand.

Is the following a good representation of what you believe?

When you divide GDP by the price of a commodity with nearly fixed supply (like gold or land), we'd expect that commodity's price to rise over time in a society that's getting richer. In other words, if you have better tech and better, more abundant goods, but not more gold or land, you'd expect other goods to become cheaper relative to gold or land. Thus a GDP/gold or GDP/land ratio that doesn't increase over time is totally consistent with a society whose "true" wealth is increasing, and so doesn't indicate stagnation.
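
If that's the claim, here's a toy numeric version of it (all numbers invented for illustration): hold the gold stock fixed, let real output grow, and suppose gold's price in goods rises in proportion to output; GDP measured in gold then stays flat even as society gets richer.

```python
# Toy illustration (invented numbers): with a fixed gold stock, a society
# getting richer can show a completely flat GDP-measured-in-gold series.

years = [2000, 2010, 2020]
real_output = [100.0, 150.0, 225.0]  # baskets of goods produced per year

for year, output in zip(years, real_output):
    # Simplifying assumption: gold's price in goods rises in proportion
    # to real output, because the gold supply is (nearly) fixed.
    gold_price = output / 100.0          # baskets of goods per ounce
    gdp_in_gold = output / gold_price    # annual output valued in ounces
    print(f"{year}: output={output:.0f} baskets, "
          f"gold price={gold_price:.2f} baskets/oz, "
          f"GDP in gold={gdp_in_gold:.0f} oz")

# Prints "GDP in gold=100 oz" every year: the ratio is flat even though
# real output has more than doubled, so a flat GDP/gold series doesn't
# by itself indicate stagnation.
```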
