I've been a long-time user of book darts and highly recommend them.
The one other downside is that if a dart on the page catches on something and rotates, the clip can cut into the page edge. This can generally be avoided by pushing each dart all the way onto the edge of the page and taking care not to let anything drag along the page edges of a book with darts in it.
Do co-ops scale? I would guess they may not. If many firms are larger than the size that co-ops effectively scale to, then we would see more traditional firms than co-ops.
This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control by the end of 2027.
I think this isn't a strong enough statement. Yes, the median narrative is longer. However, even the modal narrative ought to include at least one unspecified obstacle occurring. In a three-year plan, the most frequent scenarios have something go wrong.
I think it is interesting that you think it is not very neglected. I assume you think that because languages like Rust, Kotlin, Go, Swift, and Zig have received various levels of funding, and because academic funding supports languages like Haskell, Scala, Lean, etc.
I suppose that is better than nothing. However, from my perspective, that is mostly funding the wrong things and even funding some of those languages inadequately. As I mentioned, Rust and Go show signs of being pushed to market too soon in ways that will be permanently harmful to the developers using them. Most of those languages aren't improving programming languages in any meaningful way. They are making very minor changes at the margin. Of the ones I listed, I would say only Rust and Scala have made any real advances in mainstream languages, and Scala is still mired in many problems because of the JVM ecosystem. On the other hand, the Go language has been heavily funded and pushed by Google and has set programming languages back significantly.
I would say there is almost no path to funding a language that is both meant for widespread general use and pushes languages forward. Many of the languages that have received funding did so by luck and were funded too late in the process and underfunded. There is no funding that actually seeks out good early-stage languages and funds them.
Also, many of those languages got funding by luck. Luck is not a funding plan.
Thanks for the summary of various models of how to figure out what to work on. While reading it, I couldn't help but focus on my frustration about the "getting paid for it" part. Personally, I want to create a new programming language. I think we are still in the dark age of computer programming and that programming languages suck. I can't make a perfect language, but I can take a solid step in the right direction. The world could sure use a better programming language if you ask me. I'm passionate about this project. I'm a skilled software developer with a longer career than all the young guns I see. I think I've proved with my work so far that I am a top-tier language designer capable of writing a compiler and standard library. But this is almost the definition of something you can't and won't be paid for, at least not until you've already published a successful language. That fact greatly contributes to why we can't have better programming languages. No one can afford to let them incubate as long as needed. Because of limited resources, everyone has to push to release as fast as possible. Unlike other software, languages have very strict backward-compatibility requirements, so a language rushed to market can never fix its early mistakes or make the design changes needed to support new features, and those problems only compound as the language grows over time.
I'm confused by the judges' lack of use of the search capabilities. I think we need more information about how the judges were selected. It isn't clear to me that they are representative of the kinds of people we would expect to be acting as judges in future scenarios of superintelligent AI debates. For example, a simple and obvious tactic would be to ask both AIs what one ought to search for in order to verify their arguments. An AI that can make very compelling arguments still can't change the true facts known to humanity to suit its needs.
This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.
As someone who believes in moral error theory, I have problems with the moral language ("responsibility to lead ethical lives of personal fulfillment", "Ethical values are derived from human need and interest as tested by experience.").
I don't think it is true that "Life’s fulfillment emerges from individual participation in the service of humane ideals" or that "Working to benefit society maximizes individual happiness." Rather, I would say some people find some fulfillment in those things.
I am vehemently opposed to the deathist language of "finding wonder and awe in the joys and beauties of human existence, its challenges and tragedies, and even in the inevitability and finality of death." Death is bad and should not be accepted.
I assume there are other things I would disagree with, but those are a few that stand out when skimming it.
I agree with your three premises. However, I would recommend using a different term than "humanism".
Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as "humanism" but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of "human flourishing as the standard of value"?
I did watch this interview, but not his other videos. It does start with the intro from that trailer. However, I did not see it as reflecting a personality cult. Rather, it seemed to me that it was trying to establish Eliezer's credibility and authority to speak on the subject for people who don't know who he is. You have to remember that most people aren't tracking the politics of the rationality community. They are very used to an introduction that hypes up the guest. Yes, it may have been a bit more hyperbolic than I would like, but given how much podcast/interview guests are hyped on other channels, and the extent to which Eliezer really is an expert on the subject (much more so than many book authors who get interviewed), it was probably necessary to lay it on strong.