mike_hawke

Point well taken that technological development and global dominance were achieved by human cultures, not individual humans. But I claim that it is obviously a case of motivated reasoning to treat this as a powerful blow against the arguments for fast takeoff. A human-level AI (able to complete any cognitive task at least as well as you) is a foom risk unless it has specific additional handicaps. These might include:
- For some reason it needs to sleep for a long time every night.
- Its progress gets periodically erased due to random misfortune or enemy action.
- It is locked into a bad strategic position, such as having no cognitive privacy from overseers.
- It can't copy itself.
- It can't gain more compute.
- It can't reliably modify itself.

I'll be pretty surprised if we get AI systems that can do any cognitive task that I can do (such as making long-term plans and spontaneously correcting my own mistakes without having them pointed out to me) but that can also only improve themselves very slowly. It really seems like, if I were able to easily edit my own brain, then I would be able to increase my abilities across the board, including my ability to increase my abilities.

The part about airports reminds me of "If All Stories were Written Like Science Fiction Stories" by Mark Rosenfelder: 
https://www.bzpower.com/blogs/entry/58514-if-all-stories-were-written-like-science-fiction-stories/
 

No one else has mentioned The Case Against Education by Bryan Caplan. He argues that, beyond reading and arithmetic, schooling is mostly for signaling employable traits like conscientiousness, not for learning. I think Zvi Mowshowitz and Noah Smith had some interesting discussion about this years ago. Scott Alexander supposes that another secret purpose of school is daycare. Whatever the real purposes are, they will tend to be locked into place by laws. Richard Hanania has written a bit about what he thinks families might choose instead of standard schooling if the laws were relaxed.

Without passing judgment on this, I think it should be noted that it would have seemed less out of place when the Sequences were fresh. At that time, the concept of immaterial souls and the surrounding religious memeplexes seemed to be genuinely interfering with serious discussion about minds.

However, and relatedly, there was not a lot of cooking discussion on LW in 2009, and this tag was created in 2020.

I'm out of the loop. Did Daniel Kokotajlo lose his equity or not? If the NDA is not being enforced, are there now some disclosures being made?

Thanks for the source.

I've intentionally made it difficult for myself to log into Twitter. For the benefit of others who avoid Twitter, here is the text of Kelsey's tweet thread:

I'm getting two reactions to my piece about OpenAI's departure agreements: "that's normal!" (it is not; the other leading AI labs do not have similar policies) and "how is that legal?" It may not hold up in court, but here's how it works:

OpenAI like most tech companies does salaries as a mix of equity and base salary. The equity is in the form of PPUs, 'Profit Participation Units'. You can look at a recent OpenAI offer and an explanation of PPUs here: https://t.co/t2J78V8ee4

Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.

And that onboarding paperwork says you have to sign termination paperwork with a 'general release' within sixty days of departing the company. If you don't do it within 60 days, your units are cancelled. No one I spoke to at OpenAI gave this little line much thought.

And yes this is talking about vested units, because a separate clause clarifies that unvested units just transfer back to the control of OpenAI when an employee undergoes a termination event (which is normal).

There's a common legal definition of a general release, and it's just a waiver of claims against each other. Even someone who read the contract closely might be assuming they will only have to sign such a waiver of claims.

But when you actually quit, the 'general release'? It's a long, hardnosed, legally aggressive contract that includes a confidentiality agreement which covers the release itself, as well as arbitration, nonsolicitation and nondisparagement and broad 'noninterference' agreement.

And if you don't sign within sixty days your units are gone. And it gets worse - because OpenAI can also deny you access to the annual events that are the only way to sell your vested PPUs at their discretion, making ex-employees constantly worried they'll be shut out.

Finally, I want to make it clear that I contacted OpenAI in the course of reporting this story. So did my colleague Sigal Samuel. They had every opportunity to reach out to the ex-employees they'd pressured into silence and say this was a misunderstanding. I hope they do.

Even acknowledging that the NDA exists is a violation of it.

This sticks out pretty sharply to me.

Was this explained to the employees during the hiring process? What kind of precedent is there for this kind of NDA? 

There are things I would buy if they existed. Is there any better way to signal this to potential sellers, other than tweeting it and hoping they hear? Is there some reason to believe that sellers are already gauging demand so completely that they wouldn't start selling these things even if I could get through to them? 

Would I somehow feel this problem less acutely if I had never been taught Fahrenheit, Celsius, or Kelvin; and instead been told everything in terms of gigabytes per nanojoule? I guess probably not. Inconvenient conversions are not preventing me from figuring out the relations and benchmarks I'm interested in.
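To underline that point, the conversions themselves are mechanical. Here is a minimal Python sketch of the standard formulas (function names are just illustrative); the unit arithmetic is clearly not the bottleneck:

```python
# Standard temperature-unit conversions; the arithmetic itself is trivial.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

# A couple of benchmarks mentioned below: a fever and boiling water.
print(celsius_to_fahrenheit(39))   # 102.2
print(celsius_to_kelvin(100))      # 373.15
```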

It's important to remember, though, that I will be fine if I so choose. After all, if the scary impression were the real thing, then it would appear scary to everyone.

 

Reading this makes me feel some concern. I think it should be seriously asked: Would you be fine if you hypothetically chose to take a gap year or drop out? Those didn't feel like realistic options for me when I was in high school and college, and I think this ended up making me much less fine than I would have been otherwise. Notably, a high proportion of my close friends in college ended up dropping out or having major academic problems, despite being the smartest and most curious people I could find.

My experiences during and after college seemed to make a lot more sense after hearing about ideas like credential inflation, surplus elites, and the signaling model. It seems plausible that I might have made better decisions if I had been encouraged to contemplate those ideas as a high schooler.

In measuring and communicating about the temperature of objects, humans can clearly and unambiguously benchmark things like daily highs and lows, fevers, snow, space heaters, refrigerators, a cup of tea, and the wind chill factor. We can place thermometers and thereby say which things are hotter than others, and by how much. Daily highs can overlap with fevers, but neither can boil your tea.
 

But then I challenge myself to estimate how hot a campfire is, and I'm totally stuck.

It feels like there are no human-sensible relationships once you're talking about campfires, self-cleaning ovens, welding torches, incandescent filaments, fighter jet exhaust, solar flares, Venus, Chernobyl reactor #4, the anger of the volcano goddess Pele, fresh fulgurites, or the boiling point of lead. Anything hotter than boiling water has ascended into the magisterium of the Divinely Hot, and nothing more detailed can be said of it by a mortal. If I were omnipotent, omniscient, & invulnerable, then I could put all those things in contact with each other and watch which way the heat flows. But I am a human, so all I can say is that anything on that list could boil water.
