I agree that points 12 and 13 are at least mildly controversial. From the PoV of someone adopting these rules, it'd be enough if you changed the "will"s to "may"s.
By and large, the fewer points that are binding for the market creator, the easier it is to adopt the rules. I'm fine with a few big points being strongly binding (e.g. #15), and also fine with the more aspirational points where "Zvi's best judgement" automatically gets replaced with "Vitor's best judgement". But I'd rather not commit to some minutiae I don't really care about.
(It's more about "attack surface" or maybe in this case we should say "decision surface" than actual strong disagreement with the points, if that makes sense?)
Very interesting read, thank you!
How did you end up doing this work? Did you deliberately seek it out? What are teachers, probation officers and so on (everyone who is not a guard) like? What drives them?
This kind of thing (optimal hardware layout, for example) has existed for decades. It sounds a lot less impressive when you sub out "AI" for "algorithm".
"for certain aspects of computer science, computer scientists are already worse than even naive sorting algorithms". Yes, we know that machines have a bunch of advantages over humans. Calculation speed and huge, perfect memory being the most notable.
Where on earth are you pulling those predictions about GPT-5 and 6 from? I'd take the other side of that bet.
The original chart is misleading in more ways than one. Facebook, Netflix et al. might be household names now, but this has more to do with their staying power and network effects than any sort of exceedingly fast adoption.
I also suspect that ChatGPT has a bunch of inactive accounts, as it's essentially a toy without an actual use case for most people.
"Recognise that almost all the Kolmogorov complexity of a particular simulacrum is dedicated to specifying the traits, not the valences. The traits — polite, politically liberal, racist, smart, deceitful — are these massively K-complex concepts, whereas each valence is a single floating point, or maybe even a single bit!"
A bit of a side note, but I have to point out that Kolmogorov complexity in this context is basically a fake framework. There are many notions of complexity, and there's nothing in your argument that requires Kolmogorov specifically.
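To make this concrete, here's a toy sketch (the strings and function are my own illustration, not from the post): any computable stand-in for description length, such as compressed size, exhibits the same trait/valence asymmetry, which is exactly why the argument doesn't need Kolmogorov complexity specifically.

```python
import zlib

def description_length(s: str) -> int:
    # Compressed size as a crude, computable stand-in for "complexity".
    # Any such notion supports the trait/valence asymmetry equally well;
    # nothing about the argument requires Kolmogorov specifically.
    return len(zlib.compress(s.encode()))

# A trait is a big bundle of concepts (this string is a mere gesture at it).
trait = ("a disposition to deceive: modeling others' beliefs, choosing "
         "statements that induce false beliefs, maintaining plausible "
         "deniability, tracking what has already been said...")

# A valence is a single number flipping the trait's sign.
valence = "+1.0"

print(description_length(trait) > description_length(valence))  # True
```

The point survives swapping in any other complexity measure (gzip size, minimum description length, parameter count): the asymmetry is between the two objects, not a property of the particular measure.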
It seems to me that you are attempting to write a timeless, prescriptive reference piece. Then a paragraph sneaks in that is heavily time- and culture-dependent.
I'm honestly not certain about the intended meaning. I think you intend mask-wearing to be an example of a small and reasonable cost. As a non-American, I'm vaguely aware of what Costco is, but I don't know if there's some connotation or reference to current events that I'm missing. And if I'm confused now, imagine someone reading this in 2030...
Without getting into the object-level discussion, I think such references have no place in the kind of post this is supposed to be, and should be cut or made more neutral.
You didn't address the part of my comment that I'm actually more confident about. I regret adding that last sentence, consider it retracted for now (I currently don't think I'm wrong, but I'll have to think/observe some more, and perhaps find better words/framing to pinpoint what bothers me about rationalist discourse).
"It's analogous to a customer complaining 'if Costco is going to require masks, then I'm boycotting Costco.' All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included."
Just a small piece of feedback. This paragraph is very unclear, and it brushes on a political topic that tends to get heated and personal.
I think you intended to say that the norms you're proposing are just the basic cost of entry to a space with higher levels of cooperation and value generation. But I can as easily read it as your norms being an arbitrary requirement that destroys value by forcing everyone to visibly incur pointless costs in the name of protecting against a bogeyman that is being way overblown.
This unintended double meaning seems apt to me: I mostly agree with the guidelines, but also feel that rationalists overemphasize this kind of thing and discount the costs being imposed. In particular, the guidelines are very bad for productive babbling / brainstorming, for intuitive knowledge transfer, and other less rigorous ways of communicating that I find really valuable in some situations.
One thing I've read somewhere is that people who sign but aren't deaf tend to use sign language in parallel with spoken language. That's an entire parallel communication channel!
Relatedly, rationalists lean quite heavily towards explicit ask/tell culture. This is sometimes great, but often clunky: "are you asking for advice? I might have some helpful comments, but I'm not sure if you actually want people's opinions, or if you just wanted to vent."
Combining these two things, I see possible norms evolving where spoken language is used for communicating complex thoughts, and signing is used for coordination, cohesion, making group decisions (which is often done implicitly in other communities). I think there's a lot of potential upside here.