ryan_b

Sequences

National Institute of Standards and Technology: AI Standards

Comments

Is Rhetoric Worth Learning?

Just over three years after this post was published, I returned to it and switched my vote from a regular upvote to a strong upvote.  The post is well written and engaging, and it appears to me that it continues to be highly relevant.  The proximate cause was the post over at the EA forum about an EA debate competition; there were a lot of well-articulated and popular concerns about debate as an activity, the best of which, if I understand it correctly, rests on the following true concern:

Methods of communicating that are not truth-seeking compete for our mastery with methods that are. Which is to say, by spending time on symmetric weapons like rhetoric, we forsake time we could spend advancing the truth.

I think this is a mistake, and that the value of rhetoric, spoken or written, to the pursuit of truth is being neglected. I publicly register my intent to write a post on this.

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

I would like to register a preference for resilient over robust. In everyday language, robust implies that the thing you are talking about is very hard to change, whereas resilient implies that it recovers even when changes are applied. So the outcome is robust because the processes are resilient.

I also think it would be good to align with the logistics and systems engineering literature suggested in the Google link, but how regular people talk is my true motivation: I expect that getting this usage to change will involve talking to a lot of people who are not experts in any technical literature.

Predictive Coding has been Unified with Backpropagation

I see that the citations for this already include the Neural ODE paper and libraries, which means the whole pipeline also has access to all our DiffEq tricks.

In terms of timelines, this seems bad: unless there are serious flaws in this line of work, we just plugged our very best model of how the only extant general intelligences work into our fastest-growing field of capability.
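
As a concrete illustration of what "access to all our DiffEq tricks" means, here is a minimal sketch (my own, not from the paper or its citations) using torchdiffeq, the library released alongside the Neural ODE paper: the ODE solver becomes a differentiable layer, so solver machinery like adaptive step sizes and adjoint-method gradients drops straight into an ordinary training loop.

```python
# A minimal sketch (my illustration, not from the post): a Neural ODE layer
# built with torchdiffeq, where gradients flow back through the ODE solver.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """Parameterizes the dynamics dy/dt = f(t, y) with a small MLP."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc(dim=2)
y0 = torch.randn(8, 2)            # batch of initial states
t = torch.linspace(0.0, 1.0, 10)  # times at which to read out the solution
ys = odeint(func, y0, t)          # shape (10, 8, 2); differentiable end to end
loss = ys[-1].pow(2).mean()       # any downstream loss on the final state
loss.backward()                   # gradients flow back through the solver
```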

A New Center? [Politics] [Wishful Thinking]

Meta level: strong upvote, because I strongly endorse this kind of thinking (actionable-ish, focused on coordination problems); I am also very excited that we are now showing signs of being able to tackle politics reliably without tripping over our traditional taboo.

Object level: I wonder if you'd consider revising your position on the not-a-party point. Referring to your comment else-thread: 

Instead, the proposal is to organize a legible voting bloc. More like "environmentalists" than "the green party".

Environmentalists are a movement, and not an organization; the proposal is for an organization. They are a single-topic group that tackles a narrow range of policies; the proposal shows no intention of isolating itself to a narrow range of policies.

What you have proposed is an organization that will recruit voters and establish internal consensus on a broad range of policies, with the goal of increasing their power as voters, all while competing directly with the two major parties on values. Finally, there are no environmentalist kingmaker organizations precisely because there are lots of environmental organizations, which means the positions of individual environmental organizations are not particularly meaningful in elections; this means your organization will need to compete with, or co-opt voters from, other organizations with similar values and goals.

I put it to you that the most natural fit for what you are proposing is a new political party which chooses not to put candidates on the ballot.

This is an ingenious strategy, in my view: by not advancing candidates, the organization is liberated from the focus on winning campaigns, and it is the focus on winning campaigns that drives most of the crappy behavior from the major parties.  At the same time, creating a legible bloc of voters does a marvelous job of avoiding direct competition while capitalizing on the short-term incentives direct competition creates.

This looks to me very much like a political party that takes the short-term hit of not directly holding office in exchange for the freedom to place longer-term bets on values and policy overall.  As you observed with third-party viability, winning office is unlikely and so not even trying is not much of a hit, and the potential upside is big.

A New Center? [Politics] [Wishful Thinking]

The short answer is the same thing that prevents the target audience from joining the reds or the blues and influencing them in the direction they would prefer: too much work.

But based on the idea so far, I claim this is a requirement for effectiveness. In order to get either party to change its behavior, it needs a good understanding of what this group of swing voters wants, and that requires getting an inside view.

It is much, much harder to persuade a group of people than it is to simply tell them what they want to hear.  You will be encouraged to know that this is the formal position of virtually all political operatives, because their unit of planning is an election campaign and research shows that is too short a time to effectively persuade a population of voters.

It would also be super weird if, when targeting disaffected voters in the middle, there were no converts from the disaffected margins of either major party (who presumably will still naturally advocate for the things that drew them to their former party in the first place, which is almost the same as a true believer in the party advocating). This too is a desirable outcome.

People Will Listen

I have no idea if this is the answer, but there's a cluster of investing discussion on the EA side around mission hedging. That may be relevant.

Generalizing Power to multi-agent games

I find this line of research tremendously exciting, and look forward to every new post in this vein.

As ever, I appreciate the ease with which this line of work can be pointed at problems beyond AI. It feels intuitively like power-scarcity will allow us to get finely graded quantitative results for all sorts of applications.

Rationalism before the Sequences

I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:

I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments.

While I agree, I also fully expect the list of environments in which we are able to think clearly to expand over time as the art advances. There are two ways in which I think shaping the environment fails as an alternative strategy: first, we cannot advance the art's power over a new environment without testing ourselves in that environment; second, there are tail risks to consider, which is to say we will inevitably have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent actions like a robbery, or medical emergencies like someone else choking or having a seizure.

I strongly expect that the ability to think clearly in extreme environments would pay off in less extreme ones. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse one; if we are confident in our ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions there, thus making the worse situation less likely, and so on.

Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much-sought-after investor. In fact, I feel like all of the discussion around entrepreneurship falls into this category - the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote, it comes back as "avoid environments with huge upside", which clearly doesn't scan (but is itself also a trope).

As a final note - and I emphasize up front that I don't know how to square this exactly - I feel like there should be some correspondence between bad environments and bad problems. Consider that one of the motivating problems for our community is X-risk, a suite of problems that are by default too huge to wrap our minds around, too horrible to grapple with emotionally, and so on. In short, they also meet the criteria for reliably causing rationality to fail, yet this motivates us to improve our arts to deal with them. Why should problems be treated the opposite way from environments?

So I think the Homo Novis distinction comes down to them being in possession of a fully developed art already; we are having to make do with an incomplete one.

For now.

Logan Strohl on exercise norms

I strongly agree with the claim, even if we differ on the motivations. I cultivate a sense of shame myself.

Come to think of it, I also deploy my sense of shame with respect to exercise. Following on Rob's questions, it could probably be considered private.

Logan Strohl on exercise norms

Welp, I've clearly botched this, for which I apologize. To start with, I never meant to make any assumptions about what Logan was thinking, but I can clearly see how that was nonetheless what I communicated. This was an unforced error on my part.

I can't access the In Defense of Shame post because I don't have Facebook, but I'd be keen to read it - do you know if it was reposted anywhere else? I was unable to locate it at Agenty Duck or here. However, if it is about the book In Defense of Shame, then I was talking about the first of the two dogmas mentioned (which the authors reject).

What I meant to be talking about was the drift in the language between past and present, though I now see that Logan's use of shame was no more standard than mine. From the Shame Processing link, I see this:

According to me, shame is for keeping your actions in line with what you care about. It happens when you feel motivated to do something that you believe might damage what is valuable (whether or not you actually do the thing).

Shame indicates a particular kind of internal conflict. There's something in favor of the motivation, and something else against it. Both parts are fighting for things that matter to you.

This is very interesting: on the one hand, it is closer to what I mean by guilt than what I mean by shame; on the other hand, it's about reconciling competing priorities, which is supposed to be one of shame's attributes over guilt.

I'm sad about the lack of a social element, but I was sad about that beforehand.

Regarding social and private shame: I think I agree that a utopian society would make use of social shame, but there are a bunch of conditions attached to enabling that good use which we currently lack. That being said, I'll consider the problem; I have an ongoing related reading list that should let me come to grips with the idea better.

Private shame is an interesting way to put it; even in the sense in which I was using shame, I'm not sure I'd oppose a notion of private shame. It's really the suggestion of exclusively private shame, or anti-social shame, with which I would quibble.
