From Twitter:

If you listened to my podcast w/Michael Sandel, you know we have very different views on whether markets are "degrading"

One thing I didn't mention to him: This bit in his book cracked me up -- because I remember my friends & I found this aspect of Moneyball SO HEARTWARMING <3 pic.twitter.com/9W6Op30vF8

— Julia Galef (@juliagalef) December 10, 2020

I haven’t actually seen Moneyball, but it does sound heartwarming, and I have had to hide my tears when someone described a payment app their company was working on, so I’m probably in Julia’s category here.

If I didn’t feel this way though, then reading this I might imagine it as some alien nerdly aberration, and not a way that I could feel from the inside, or that could ever seem like the ‘right’ way to feel unless I became brain-damaged. Which I think is all wrong—such feelings seem to me to be a warm and human response to appreciating the situation in certain ways. So I want to try to describe what seems to be going on in my mind when my heart is warmed by quantitative methods and efficient algorithms.

When using good quantitative methods makes something better, it means that there wasn’t any concrete physical obstacle to it being better in the past. We were just making the wrong choices, because we didn’t know better. And often suffering small losses from it at a scale that is hard to imagine.

Suppose the pricing algorithm for ride sharing isn’t as good as it could be. Then day after day there will be people who decide to walk even though they are tired, people who wait a bit longer somewhere they don’t feel safe, countless people who stand in their hallway a bit longer, people who save up their health problems a bit more before making the expensive trip to a doctor, people who decide to keep a convenient car and so have a little bit less money for everything else. All while someone who would happily drive each of them, at a price they would happily pay, lives nearby, suffering for lack of valuable work.

I’m not too concerned if we make bad choices in baseball, but in lots of areas, I imagine that there are these slow-accreting tragedies, in thousands or millions or billions of small inconveniences and pains accruing each day across the country or the world. And where this is for lack of good algorithms, it feels like it is for absolutely nothing. Just unforced error.

Daily efforts and suffering for nothing are a particular flavor of badness. Like if someone erroneously believed that it was important for them to count to five thousand out loud at 10am each day, and every day they did this—if they traveled they made sure there would be somewhere non-disturbing to do it, if they stayed up late they still got up by 10am, and if they were in the middle of something they stepped out to do it—there would be a particular elation in their escaping this senseless waste of their life, perhaps mixed with sorrow for what had been senselessly lost.

Also, having found the better method, you can usually just keep using it at no extra cost forever. So it feels reelingly scalable in a way that a hero fighting a bad guy decidedly does not. This feels like suddenly being able to fly, or walk through walls.

So basically, it is some combination of escape from a senseless corrosion of life, effortlessly, at a scale that leaves me reeling.

Another thing that might be going on, is that it is a triumph of what is definitely right over what is definitely wrong. Lots of moral issues are fraught in some way. No humans are absolutely bad and without a side to the story. But worse quantitative methods are just straightforwardly wrong. The only reason for picking baseball players badly is not knowing how to do it better. The only reason for using worse estimates for covid risk is that you don’t have better ones. So a victory for better quantitative methods is an unsullied victory for light over darkness in a way that conflicts between human forces of good and bad can’t be.

Yet another thing is that a victory for quantitative methods is always a victory for people. And if you don’t know who they are, that means that they quietly worked to end some ongoing blight on humanity, and did it, and weren’t even recognized. Often, even the good they did will look like a boring technical detail and won’t look morally important, because saving every American ten seconds doesn’t look like saving a life. And I’m not sure if there is anything more heartwarming than someone working hard to do great good, relieving the world of ongoing suffering, knowing that neither they nor what they have given will be appreciated.
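To make that last point concrete, here is a rough back-of-the-envelope sketch (the population and lifespan figures below are my own assumptions, not numbers from the post): ten seconds saved per American adds up to more than one entire human lifetime of time.

```typescript
// Back-of-the-envelope: how much time is "ten seconds per American"?
// Assumed figures (not from the post): ~330 million Americans, 80-year lifespan.

const americans = 330_000_000;
const secondsSavedEach = 10;
const secondsPerLifetime = 80 * 365.25 * 24 * 3600; // ≈ 2.52 billion seconds

const totalSecondsSaved = americans * secondsSavedEach; // 3.3 billion seconds
const lifetimesSaved = totalSecondsSaved / secondsPerLifetime;

console.log(lifetimesSaved.toFixed(2)); // ≈ 1.31 lifetimes' worth of time
```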

15 comments

I think of it like this. Quantitative methods scale. So even if they are only marginal wins, they can be levered to the hilt for big wins. Our System 1 doesn't really understand the concept of things varying by more than a factor of 100, so it's hard for people to have appropriate responses to such things.

I don't think quantitative methods are always a victory for people. They are certainly a victory for the entity using them, helping it achieve its goals, but they are not necessarily good for humanity as a whole. My goals don't align with Google's, so every bit of increase in YouTube's ability to recommend me videos that successfully waste my time is a negative for me.

saving every American ten seconds doesn’t look like saving a life

 

I like this framing. I agree with the sentiment quantitatively and morally, but still have a hard time internalizing it, and probably always will.

Yes and no.

Most social changes have too many effects to be pure wins, so a widespread shift to quantification will likely cause some harm in addition to benefits.

The WEIRDest People in the World has some hints about the trade-offs. E.g. in many cultures, people don't buy from the merchant who offers them the best deal. They buy from the merchant to whom they're most closely connected (as in kin). A switch to more competitive markets makes trade more efficient, at the cost of weakening some social bonds.

The Institutional Revolution: Measurement and the Economic Emergence of the Modern World, by Douglas Allen, argues that trends of this nature played an important role in the industrial revolution. In particular, the practice of honor duels seems to have ended due to better ways of measuring how honorable a person is.

See also Propagating Facts Into Aesthetics. I think there are a bunch of reasons why quantitative methods are heartwarming, but there's something of a skill of integrating those reasons into your emotional response.

Thanks for this post! I wasn't disgusted by the quantitative methods in the Moneyball example, but it didn't feel heartwarming either. With your explanation of what it feels like from the inside, I think I better get how it could be heartwarming, and I like it.

Thank you for writing this! I especially resonate with the unsullied victory part.

And where this is for lack of good algorithms, it feels like it is for absolutely nothing. Just unforced error.

This is where I feel differently. Not knowing a good algorithm is a good reason not to be able to do something. Brainpower is a limited resource. It feels no more like an unforced error than being unable to do something due to lack of energy.

 

And, given background singularitarian assumptions, a sufficiently smart AI could bootstrap self-replicating nanotech and make a radically utopian transhumanist future in a matter of days. From this point of view, anything resembling normality is entirely due to lack of good algorithms.

Love this! Very much agree. I do work on improving pricing methods in my day job, but I hadn’t been equipped with the emotional lens that this post describes - so this is useful to me (and just nice!). I’m gonna share it with people at work.

Speaking of little tragedies, some tragedies got me thinking a long time ago.

My biggest one was the fact that most programming languages (1) aren't compatible... Python doesn't interop with C#, which doesn't interop with C++... so people keep reinventing the wheel in different languages, and only rarely is a job done well; and (2) popular languages aren't powerful or extensible (or efficient) enough - e.g. I made a prototype unit inference engine for an obscure language in 2006 and still today not a single one of the popular languages has a similar feature. So I set out to fix these problems 13 years ago in my free time... and I'm still stuck on these problems today. I wished so much I could spend more time on it that in 2014 I quit my job, which turned out to be a huge mistake, but never mind. (There are web sites for my projects which go unnoticed by pretty much everyone. My progress has been underwhelming, but even when I think I've done a great job with great documentation and I've tried to publicize it, it makes no difference to the popularity. Just one of life's mysteries.)

Anyway, I've come to think that there are actually lots of similar problems in the world: problems that go unsolved mainly because there is just no way to get funding to solve them. (In a few cases maybe it's possible to get funding, but the ideas man just isn't a businessman, so it doesn't happen... I don't think this is one of those cases.) Any given problem whose solution is difficult and doesn't match up with any capitalist business model is probably just not going to be solved, or it will be solved in a very slow and very clumsy way.

I think government funding for "open engineering" is needed, where the work product is a product, service, or code library, not a LaTeX jargonfest in a journal. Conventional science itself seems vaguely messed up, too; I've never seen the sausage get made so I'm unfamiliar with the problems, but they seem rather numerous and so I would qualify the previous statement by saying we need open engineering that doesn't work as badly as science.

UBI might work as an alternative. It would lack the motivating structure of a conventional job, but if I could find one other UBI-funded person who wanted to do the same project, maybe we could keep each other motivated. I noticed a very long time ago that most successful projects have at least two authors, but still I never found a second person who wanted to work on the same project and earn zero income.

There are web sites for my projects which go unnoticed by pretty much everyone. 

When you want to create change, you have the choice to engage with the existing system or to do something outside of the system.

Here you made a choice to be outside of the system, and thus you don't affect the system. Many languages have open design processes, and engaging with those is the key to creating change.

We don't live in a world where all the programming languages we have are closed source with closed language design.

It has not escaped my notice that hopping on a bandwagon is an easier way to gain attention, but a lot of people have had success by starting their own projects.

How is what I did different from Ruby, Python, Vue, Unison, V, Nim, or any of those projects where people make general-purpose libraries to supplement the standard libraries? And in particular, how is my LeMP front-end for C# different from C++, which began as a C preprocessor called "C with Classes"?

A tempting answer is that I was simply 20 years too late to be able to do something new, but V and Vue are quite recent examples. In any case, if we're talking about a project like LES - I am unaware of anything else like it, so which existing project should I have engaged with in order to make it a success? I did try to engage in 2016 with the WebAssembly CG, but that was a flop, as the other members mostly chose not to participate in the conversation I tried to start.

I did try to engage in 2016 with the WebAssembly CG, but that was a flop, as the other members mostly chose not to participate in the conversation I tried to start.

If you engage with an existing project you actually have to make good arguments that convince other people. That isn't easy, but it's a very different problem than no businessman wanting to fund it.

In any case, if we're talking about a project like LES - I am unaware of anything else like it, so which existing project should I have engaged with in order to make it a success? 

A new project needs a value proposition that allows people to decide that it solves their problems better than other solutions. In your description here you haven't articulated any value proposition that would make a person prefer to write something in your framework.

The page about LES doesn't say anything about who might have problems that LES solves, or what kind of problems those happen to be.

In contrast, Vue.js has a clear target audience. It's for people who want to build larger HTML/JS projects and thus need a JS framework to manage the complexity, and it promises to be approachable, versatile, and performant, which are things that some of those users want.

So, how is that different from JSON? I could take the elevator pitch at JSON.org and change some words to make it about LES:

LES (Loyc Expression Syntax) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is a superset of JSON that looks similar to numerous other programming languages. LES is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make LES an excellent data-interchange language, but also a good basis for DSLs and new programming languages.

LES is built on three structures:

  • Blah blah blah

I guess JSON had some additional evangelism which LES does not, but I don't know what exactly made it popular. You might say "well, JSON is valid JavaScript" but in practice that's usually not relevant, as it is unsafe to eval() JSON, and it is virtually unheard of to go in the opposite direction and write JavaScript data in JSON style. In practice, JSON is similar to JS but is clearly distinct, just as LES is similar to JS but distinct.
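To illustrate the eval() point, here is a minimal sketch (the strings are invented for illustration): eval() will run whatever code is embedded in a JSON-looking string, while JSON.parse() only accepts plain data.

```typescript
// Why eval()-ing JSON-shaped text is unsafe, while JSON.parse is not.
// The example strings here are made up for illustration.

const trusted = '{"name": "example", "ok": true}';

// The old trick: wrap the text in parentheses so eval() treats it as an
// expression instead of a block.
const viaEval = eval("(" + trusted + ")"); // works, but executes code
const viaParse = JSON.parse(trusted);      // works, and only parses data
console.log(viaEval, viaParse);

// Attacker-controlled input is where the two diverge:
const hostile = '({"x": (console.log("arbitrary code ran"), 42)})';
// eval(hostile)       -> runs the embedded code, then yields {x: 42}
// JSON.parse(hostile) -> throws a SyntaxError; no code is executed
```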

To put it another way: JSON provides data interoperability. It seems like no one has to explain why this is good; it's just understood. So I am puzzled why the same argument for code falls flat, even though I see people gush about things like "homoiconicity" (which LES provides, btw) without having to explain why that is good.
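For readers unfamiliar with the term, here is a generic sketch of the "code as data" idea behind homoiconicity (this is not LES syntax, just an illustration in ordinary TypeScript): when code is represented as a plain data structure, it can be serialized, inspected, and shipped around exactly like JSON data.

```typescript
// Code as data: a tiny expression language stored in ordinary arrays,
// so "programs" can round-trip through JSON like any other data.

type Expr = number | [string, ...Expr[]]; // e.g. ["+", 1, ["*", 2, 3]]

function evaluate(e: Expr): number {
  if (typeof e === "number") return e;
  const [op, ...args] = e;
  const vals = args.map(evaluate);
  switch (op) {
    case "+": return vals.reduce((a, b) => a + b, 0);
    case "*": return vals.reduce((a, b) => a * b, 1);
    default: throw new Error(`unknown operator: ${op}`);
  }
}

const program: Expr = ["+", 1, ["*", 2, 3]];
const shipped = JSON.stringify(program);    // send it anywhere JSON goes
console.log(evaluate(JSON.parse(shipped))); // 7
```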

P.S. no one disagreed with my arguments at the WebAssembly CG, so don't be too quick to judge my arguments as bad.

P.P.S. and to be clear, I don't expect the average developer to get it at this point in time, but the argument apparently fell flat even among language designers at FoC. Nobody said they didn't understand, but no interest was expressed either.

In the client/server world, the fact that you need to package up data in your frontend so it can be read in the backend is an obvious problem. On the other hand, few developers have the problem of passing code between applications. The practical cases that come to my mind just use JavaScript. The cross-platform code you write for CouchDB, for example, is in JavaScript.

You might say "well, JSON is valid JavaScript" but in practice that's usually not relevant, as it is unsafe to eval() JSON

That's not my experience. While you might not want to call eval() in production, it seems very useful for testing purposes to be able to copy-paste the data that you send and make an object out of it.

In 2002, when JSON was first developed and started finding adoption, I would expect that people used eval() a lot.

P.S. no one disagreed with my arguments at the WebAssembly CG, so don't be too quick to judge my arguments as bad.

In this context a good argument is one that convinces people that it's valuable to adopt something, and your arguments didn't do that.