Gordon Seidoh Worley

If you are going to read just one thing I wrote, read The Problem of the Criterion.

More AI-related stuff is collected over at PAISRI

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

I love stories like this. It's not immediately obvious to me how to translate them to AI (like, what is the equivalent for AI of what the Wright brothers did?), but I think hearing these stories helps develop the mindset that will create the kinds of precautions necessary to work with AI safely.

Can we measure this somehow? It seems like something someone would have already studied. For all the perceived value of open source, does it actually generate a lot of economic value? Probably, but until it's quantified we're just arguing from intuition.

My own guess is that open source provides about average value, and the real high-value work comes from engineers building things you've probably never heard of, like some obscure performance improvement or new feature that increases conversion rates by half a percent for some large organization and thus produces tens of millions of dollars in revenue for one FTE-quarter's worth of work. For this kind of work, maybe it really does help to be in person, because it requires knowing a large amount of context about the business in order to effect the necessary changes in the code. Open source, by contrast, depends more on problems that are overdetermined, and so just a matter of someone smart working on them, and thus less coordination is needed.

This post is interesting, but I think it doesn't do enough to propose causal mechanisms that would help folks really think about this. It presents a lot of worrying trends, which are good to worry about and an important start, but for me the post is missing something it really needs. So, let me see if I can provide a simple story that might make sense of the trend and give us some clues about how much to worry.

Lately, times have been getting harder for a large number of folks in Western countries. There's less slack in the system. Historically, left-coded positions have existed only when there's sufficient slack, since people don't care about equity or even equality when they aren't sure they'll have enough grain to survive the winter; all they want is to at least get by. We may not be on the verge of actual famine, but despite rising absolute wealth, relative perceptions of wealth are falling among the working class and some of the professional class. Some of this is real, and some of it is the hedonic treadmill: a failure to recognize the tide that has lifted all boats. Regardless, the experience of decline, even if it is only relative to other people in the same society, leads to a loss of purpose and a desire for a return to the good old days. The political implications follow naturally.

I have a personal anecdote you might find interesting.

All through elementary school I seemed to be the smartest kid in every situation. Not surprising: my IQ scores came in around 145, which puts me at +3 standard deviations, or roughly the 99.87th percentile; put another way, I should expect only about 1 or 2 in every 1,000 people to be as smart as or smarter than me. The entire population of the school was under 800 kids across all grades, with about 120 in my grade, so it's not unexpected that I never met anyone as smart.
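As a back-of-the-envelope check on these tail figures, here is a minimal sketch (assuming IQ is normally distributed with mean 100 and SD 15, so an IQ of 145 sits at z = +3):

```python
import math

def fraction_above(sd: float) -> float:
    """Fraction of a normal population at or above `sd` standard deviations.

    Uses the complementary error function: P(Z >= z) = erfc(z / sqrt(2)) / 2.
    """
    return 0.5 * math.erfc(sd / math.sqrt(2))

p = fraction_above(3.0)
print(f"P(Z >= 3) = {p:.5f}")                 # about 0.00135, i.e. ~1.35 per 1,000
print(f"Out of 8 billion: ~{p * 8e9:,.0f}")   # on the order of 10 million people
```

The common "99.7%" figure is the two-tailed rule of thumb (within plus or minus 3 SD); the one-tailed fraction above +3 SD is about 1.35 per 1,000.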

It wasn't until middle school, in 8th grade, that I met someone definitely smarter than me. To make matters worse, he was two years younger than me, in 6th grade. But that was thankfully just in math (he was the only person to solidly outperform me on my school's Mathcounts team). So I was able to keep up the charade that I was the smartest kid in school; there was just this dweeb who was some sort of math savant. So much cope.

I was able to keep this up through high school. There'd be kids who were smarter than me in some narrow domain, but I was able to hold onto the idea that I might well be the smartest of them all in general.

Then I met @Eliezer Yudkowsky and was humbled. I mean, not at first. It took a few years of seeing him operate up close (can you be up close online?), but I eventually had to accept that I was outclassed. And of course I should be: there are roughly 10 million people in the world with an IQ the same as or higher than mine, and that's a helluva lot of people. I'm just a +3 scrub living in the +4's world.

Only, not quite. As I eventually learned, being smarter, at least for humans, is not always correlated with better life outcomes. I saw people I was smarter than doing better than me: getting promotions ahead of me, making more money, etc. Since I was young I'd put all my eggs in the IQ basket, and then sometime in my mid-twenties I found out that was a mistake for all but a tiny minority of people.

As you note, I had to learn how to make the most of my comparative advantage. And this has only become more important as I've aged, because my fluid intelligence has definitely started to fall off despite my best efforts to prevent it. Without the help of something like ChatGPT, I may well never write better code than I did in the past or come up with cleverer proofs of mathematical propositions. So I've really leaned into finding other ways to excel, because there's always going to be someone younger, smarter, and faster than me. And at least for now, that's enough.

I like some of the other answers, but they aren't phrased how I would explain it, so I'll add my own. This is something like the cybernetics answer to your question.

The world is made of "stuff". This "stuff" is a mixed soup with no inherent divisions. But then some of this stuff gets organized into negative feedback processes, and the telos of those processes creates meaning when they extract information from sensors. Extracting information distinguishes this from that, so that the stuff thus organized can do things. From this we get the basis of what we call minds: stuff arranged to perform negative feedback, generating information about the world via observation, which models the world in order to change behavior. Stack enough of these minds up and you get interesting things like plants and animals and machines.

So the brain, though kind of weird to think about, is just this kind of control system, or rather an agglomeration of control systems, that is able to do things like map the territory it finds itself in.

I try to cover this topic in some depth in this chapter of my in-progress book, here.

This is an excellent point that I think is under-appreciated, especially by would-be and new rationalists.

It's really tempting to dismiss stuff that looks like it shouldn't work. And to some extent that's fair, but only because most stuff doesn't work, including the stuff that looks like it should. Things have to be tried, and even then our best attempts at controlled experiments sometimes return inconclusive results. Determining causal relationships is hard, and when you find something that seems to work, sometimes you just have to go with it whether it makes sense or not, since reality is going to be how it is whether or not it fits within your model.

Meanwhile, we've got to get on with the project of living our best lives whether or not the things we do seem like they should lead to winning. If you want to win, you've sometimes got to be willing to take the status hit, get out there, and do something weird that people will think is nuts to try, because it sometimes works. It doesn't mean throwing out everything you know, but it does mean bothering to go live in the real world, where things are messy and you can't always figure out what's up.

Time will tell. If you keep doing crazy stuff after it becomes clear it doesn't work, sure, that's a mistake. But it's also a mistake not to check. If you never verified that you don't have psychic powers, that you can't teleport, that healing crystals don't work, etc., then you're also going to miss out on things like weird therapy modalities that do work for some people for unclear reasons, and idiosyncratic dietary changes that dramatically improve your life but would make someone else's worse.

As you point out, the leading edge is not respectable. Being on the leading edge has costs, but also rewards. It's a test of one's strength as a rationalist to be able to go all in on something with high EV but low probability to see if it might work. The only real failure is the failure to update once the evidence comes in.

Simulating rejection can help you overcome fear of rejection by making it feel safe, but it will only take you so far because, if you fear rejection, there is likely a "stuck" memory causing you to have something like a strong prior for expecting bad outcomes from rejection. To address such stuck memories, consider memory reconsolidation.

https://www.lesswrong.com/shortform

You can also get them on the allPosts page by changing your display settings.

FWIW the title raised a red flag for me, leading me to expect a poorly reasoned take. I'm not sure why. Reading the Overview section, my expectations were immediately reset. Possibly some of the downvotes are just people reacting to the title without reading, or failing to update their sentiment even after reading (I'd like to believe that doesn't happen on LessWrong, but I'm sure it does, especially on long posts that many are likely to skim or not read).

Like it. Seems like another way of saying that sometimes what you really need is more dakka. Tagged the post as such to reflect that.
