FWIW I normally eat dinner around 6, go to bed 5 hours later at 11pm, and eat my next meal 8.5 hours later at 7:30am, at which point "break-fast" is certainly the right word, since I haven't eaten for 13.5 hours. Contrast that with breakfast, which only has to last me 5 hours (until lunch at 12:30pm), and lunch, which again only has to last me 5.5 hours (until 6pm).
People say that meta-analyses can weed out whatever statistical vagaries there may be in individual studies; but looking at that graph from the meta-study of saturated fat, I'm just not convinced of that at all. Like, relative risk of CVD events suddenly goes from 0.2 to 0.8 at a threshold of 9%, and then just stays there? Relative risk of stroke goes from 0.6 at 9% to 0.9 at 12% and then down to 0.5 at 13%? Does that say to you, "more saturated fat is bad", or "there's a statistical anomaly causing this jump"?
The "purpose" of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art.
Not only skill level, but usually physical capability level (as proxied by weight and sex) as well. As an aside, although I'm not at all knowledgeable about martial arts or MMA, it always seemed like an interesting thing to do might be to use some sort of Elo system for fighting as well: a really good lightweight might end up fighting a mediocre heavyweight, and the overall winner for a year might be the person in a given <skill, weight, sex> class with the highest Elo rating. The only real reason to limit the Elo gap between contestants would be if there were a higher risk of injury, or if the resulting fights were consistently just boring. But if GGP is right that a big upset isn't unheard of, it might be worth 9 boring fights for 1 exciting upset.
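To make the Elo idea concrete, here's a minimal sketch of the standard chess-style rating update applied to a hypothetical cross-weight-class fight; the K-factor of 32 and the 400-point scale are the usual chess defaults, not anything from an actual fighting organization:

```python
# Sketch of Elo ratings for a hypothetical fight league.
# Constants are standard chess defaults (K=32, 400-point scale).

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that fighter A beats fighter B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return new (r_a, r_b) ratings after one fight (draws omitted)."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    # The winner gains more when the result was an upset; total rating
    # points are conserved between the two fighters.
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# A really good lightweight (1700) beats a mediocre heavyweight (1500):
lw, hw = update(1700, 1500, a_won=True)
```

Matchmaking would then just pair fighters with nearby ratings, regardless of weight class, subject to whatever injury-risk cap on the rating gap the league chose.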
I like the MVP! One comment re the idea of this becoming a larger thing in journalism, in relation to Goodhart's Law ("Once a measure becomes a target, it ceases to be useful as a measure"):
For example, even now, how much of the "85% chance Russia gains territory" is pure "wisdom of crowds" placing bets based on knowledge, and how much is the Kremlin buying "Russia gains territory" shares, in an effort to convince people that things will go well for them? If the NYT and the Washington Post -- and then Senators -- regularly quoted prediction markets, you can bet the latter would go into overdrive.
I was chatting with a friend of mine who works in the AI space. He said that the big thing that got them to GPT-4 was the data set, which was basically the entire internet. But now that they've given it the entire internet, there's no easy way for them to go further along that axis; the next big increase in capabilities would require a significantly different direction than "more text / more parameters / more compute".
Thanks for these, I'll take a look. After your challenge, I tried to think of where my impression came from. I've had a number of conversations with relatives on Facebook (including my aunt, who is in her 60s) about whether GPT "knows" things; but it turns out so far I've only had one conversation about the potential of an AI apocalypse (with my sister, who started programming 5 years ago). So I'll reduce confidence in my assessment re what "people on the street" think, and try to look for more information.
Re Hacker News -- one of the tricky things about "taking the temperature" on a forum like that is that you only see the people who post, not the people who are only reading; and unlike here, you only see the scores for your own comments, not those of others. It seems like what I said about alignment did make some connection, based on the up-votes I got; I have no idea how many upvotes the dissenters got, so I have no idea if lots of people agreed with them, or if they were the handful of lone objectors in a sea of people who agreed with me.
Can you give a reference? A quick Google search didn't turn anything like that up.
To me it's an attempt at the simple, obvious strategy of telling people ~all the truth he can about a subject they care a lot about and where he and they have common interests. This doesn't seem like an attempt to be clever or explore high-variance tails. More like an attempt to explore the obvious strategy, or to follow the obvious bits of common-sense ethics, now that lots of allegedly clever 4-dimensional chess has turned out stupid.
But it does risk giving up something. Even the average tech person on a forum like Hacker News still thinks the risk of an AI apocalypse is so remote that only a crackpot would take it seriously. Their priors regarding the idea that anyone of sense could take it seriously are so low that any mention of safety seems to them a fig-leaf excuse to monopolize control for financial gain; as believable as Putin's claims that he's liberating Ukraine from Nazis. (See my recent attempt to introduce the idea here.) The average person on the street is even further away from this, I think.
The risk then of giving up "optics" is that you lose whatever influence you may have had entirely; you're labelled a crackpot and nobody takes you seriously. You also risk damaging the influence of other people who are trying to be more conservative. (NB I'm not saying this will happen, but it's a risk you have to consider.)
For instance, personally I think the reason so few people take AI alignment seriously is that we haven't actually seen anything all that scary yet. If there were demonstrations of GPT-4, in simulation, murdering people due to mis-alignment, then this sort of a pause would be a much easier sell. Going full-bore "international treaty to control access to GPUs" now introduces the risk that, when GPT-6 is shown to murder people due to mis-alignment, people take it less seriously, because they've already decided AI alignment people are all crackpots.
I think the chances of an international treaty to control GPUs at this point are basically zero. I think our best bet for actually getting people to take an AI apocalypse seriously is to demonstrate an un-aligned system harming people (hopefully only in simulation), in a way that people can immediately see could extend to destroying the whole human race if the AI were more capable. (It would also give all those AI researchers something more concrete to do: figure out how to prevent this AI from doing this sort of thing, and figure out other ways to get this AI to do something destructive.) Arguing to slow down AI research for other reasons -- for instance, to allow society to adapt to the changes we've already seen -- will give people more time to develop techniques for probing (and perhaps demonstrating) catastrophic alignment failures.
Sorry -- that was my first post on this forum, and I couldn't figure out the editor. I didn't actually click "submit", but accidentally hit a key combo that it interpreted as "submit".
I've edited it now with what I was trying to get at in the first place.
Hey! As an Evangelical Christian whose church sends out church plants fairly regularly, I appreciated the basically sympathetic outside-in view of ourselves. Love this: "The role of a pastor is to enable Jesus to take as many shots on goal as possible."
If I could add a bit of extra perspective:
If there's one weakness of the piece, it's the implication about the percentage of narcissists. You state that it's the sort of job that would be attractive to narcissists, which is certainly true. And it's undeniable that narcissists occasionally end up in positions of power (Mars Hill is a great example). But there's an unstated implication, therefore, that a high (though unspecified) percentage of people in church plants are narcissists, because you don't see anything in particular preventing it.
There are several filters, the big one being that it's just a lot of work. You're expected to work long hours, be humble, put up with all kinds of criticism, be willing to do low-level service, etc. You're going to have a hard time doing your plant without that initial "support team", and you're going to have a hard time finding an enthusiastic "support team" without playing the role. There are, on the whole, far easier ways to run your petty kingdom than by doing a church plant.
Which isn't to say it doesn't happen. From what I know, cancer-like mutations which cause unlimited cell growth happen all the time; after all, uniform cooperation of every cell in the body is an evolutionarily unstable equilibrium. But the body has mechanisms to detect and counter these. What we call cancer only occurs when a mutation has managed to evade the body's defenses. I think a similar process has happened when a genuine narcissist's church plant gains significant traction.