I'm an admin of LessWrong. Here are a few things about me.
Randomly: if you ever want to talk to me for an hour about anything you like, I'm happy to be paid $1k to do that.
I'm not sure I understand this part actually – could you elaborate? Is this your concern with the OGI model, or with your salary-only-for-the-first-N-employees idea?
This is a concern I am raising with my own idea.
But since the first N employees usually get to sign off on major decisions, why would they go along with such an agreement?
I'm imagining a world where a group of people step forward to take a lot of responsibility for navigating humanity through this treacherous transition, and do not want themselves to be corrupted by financial incentives (and wish to accurately signal this to the external world). I'll point out that this is not unheard of: Altman literally took no equity in OpenAI (though IMO he was eventually corrupted by the power nonetheless).
Couldn't we just... set up a financial agreement where the first N employees don't own stock and have a set salary?
My main concern is that they'll have enough power to be functionally wealthy all the same, or be able to get wealth via other means (e.g. Altman with his side hardware investment / company).
Sounds like a great empirical test!
Appreciate the example. I remember reading that retweet!
At the time it sounded plausible to me, and I assumed it was accurate about certain industries.
I'm interested in understanding a bit more about what's going on here. Are we sure you're talking about the same kinds of companies? I'd guess you're dealing with companies in the range of 2k-20k employees, whereas I think CrowdStrike was substantially affecting companies in the range of 20k-200k employees (or at least that's what I thought of when I saw this tweet), where I imagine auditors have to use much more broad-brush tools to do auditing.
The sorts of companies I imagine as having this kind of broad-strokes audit are extremely broad service industries – airlines, trains, grocery stores, banks, hospitals – where my impression is that they often use very old software and buggy hardware due to their overwhelming size and sloth, and where I suspect a lot of decisions get made by doing the minimum possible thing required to meet some formal requirement.
Follow-up: Michael Trazzi wrapped up after 7 days, after fainting twice and two doctors saying he was getting close to being in a life-threatening situation.
(Slightly below my modal guess, but also his blood glucose level dropped unusually fast.)
FAO @Mikhail Samin.
That's a healthy hypothesis to track.
I recall a rationalist I know chiding Eliezer for his bad tweeting; Eliezer asked him to point to an example of a recent tweet that was bad, and the rationalist failed to find anything especially bad.
Perhaps this has changed in the 2-3 years since that event. But I'd be interested in an example of a tweet you (lc) thought was bad.
Oh okay. I don't find this convincing; consistent with my position above, I'd bet that in the longer term we'd do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead. (I think it's really embarrassing that we don't have better things in their place, especially after the industrial revolution.) I don't think I can argue well for that position right now; I'll need to think on it more (and maybe write a post on it when I've made some more progress on the reasoning).
(Obvious caveat that we actually only have like 0.5-3 decades left of being humans, so the above 'centuries' isn't realistic.)
Curated! I thought this walked through a lot of the relevant considerations helpfully, and I liked the reframing of ask and guess cultures (and the idea that there can be many levels of echoes being tracked).