I write in the books I read, so I just flip to about where I stopped underlining or taking notes.
Couldn't you just ask, "What would you estimate the probability is that I won the lottery, as of before I asked you this question?" Or perhaps ask it a thousand randomly generated questions, with the one you actually want answered mixed in. There would be almost no information content in the real question if your oracle knew there was a 99.9% chance any given question was randomly generated.
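To make that intuition concrete, here's a rough sketch of the decoy setup; the batch size, the placeholder decoy generator, and the entropy calculation are just my illustrative assumptions, not anything required by the idea.

```python
import math
import random

# Hide the one real question among a large batch of randomly generated decoys,
# so the oracle can't tell which question the asker actually cares about.
N_DECOYS = 999  # any single question is the real one with probability 1/1000

def ask_oracle(questions):
    """Placeholder for the oracle: here it just returns a random 'answer' per question."""
    return {q: random.random() for q in questions}

real_question = "What is the probability I won the lottery?"
decoys = [f"Randomly generated question #{i}" for i in range(N_DECOYS)]

batch = decoys + [real_question]
random.shuffle(batch)
answers = ask_oracle(batch)

# From the oracle's perspective, each question is "the real one" with probability p,
# so any single question carries at most the binary entropy of p in evidence about
# which topic the asker actually cares about.
p = 1 / (N_DECOYS + 1)
bits_per_question = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(f"~{bits_per_question:.4f} bits leaked per question about which one is real")
```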
I agree with your first post.
However, I think the part of empathy that people will say you're missing is that you're putting "yourself" in someone else's shoes, which is only half of it. Imagine you were isekai'd away into someone else's life, where you had a history of being ineffective, unaccomplished, and self-pitying, with problems that could be solved quickly with relatively little objective effort. You can probably easily imagine how you'd quickly take actions to at least fix most of the low-hanging fruit in this situation: clean your room, get a job, do at least some exercise, remove the nail, and all that.
But if you put yourself in the position of that person, including their internal mental state, possible brain chemistry issues, history of failure despite attempts to fix it, and possibly limited inherent capacities relative to yours, the situation they are in would probably seem a lot less fixable. If you put yourself exactly in their position, with the same mental state and capacities, you would be doing exactly what they are doing now.
For the woman whose pain is "not about the nail", there has to be something going on in her own head, whether it's a history of trauma or a history of repeatedly failing to address the problem to the point that it has become painful to even try, that is stopping her from addressing it. Otherwise she would just fix it, no? To empathize with her isn't, then, to imagine yourself as that woman with a nail in your head, but to imagine what it's like to be her, including whatever it is that's preventing her from solving her own problem.
This sort of empathy might be more useful for understanding people, which can help you achieve your own goals better; there's always a need to make friends and influence people, after all. Otherwise you're right: putting yourself in other people's shoes (and the ones who demand empathy are probably more likely to be pathetic), then seeing all the relatively easy things they could do to make their lives significantly better, reasonably results in what you described in your first post.
Thank you for the article. I'll give it a read.
It's not an easy answer. I'm a self-interested person, and I realized a while ago that many of my most productive and interesting relationships, both personal and in business, are the direct result of my activity on the internet. I already waste a lot of time writing out my thoughts in comments, sometimes in long form, so I figure if I'm going to be reacting to things publicly, I might as well do so in the form of a blog where others might pick up on it. If that results in something good for me (influence, relationships, a demonstration of the kind of niche intellectual ability that the right sort of people find interesting), then that's not a small part of my motivation.
At the same time, I have more naive views about the virtue of just doing things for their own sake. Writing is definitely an excellent tool for sharpening your own thinking, as it forces you to communicate in a way that makes sense to other people, which in turn forces your own ideas to make sense to you. The problem with this line of thinking is that I've never been an exemplary writer in any sense, although hopefully I am better and more self-motivated than I used to be. I'm not satisfied with what I can currently write long-form and unassisted, which causes a sort of writer's block that I really hate.
I'm integrating the advice of other people into what I'm planning to do, and hopefully with enough effort I'll be able to produce (with critique, but not rewriting, by AI) something that satisfies both my desire to write for its own sake and my desire to produce something other people might actually want to read. I also have the annoying consideration of time-efficiency: I by no means spend my time maximally efficiently, but struggling through writing burns a lot of willpower, which ends up costing me a lot of time elsewhere.
I think the night-watchman concept is interesting, and probably is the ideal goal of alignment absent a good idea of what any other goal would ultimately lead to, but this post smuggles in concepts beyond the night watchman that would be very hard for anyone to swallow.
"Keeping the peace" internationally is pretty ambiguous, and I doubt that any major nation would be willing to give up the right of invasion as a last resort. Even if prevention of rogue super intelligence is seen as desirable, if preventing it also entails giving up a large amount of your current power, then I think world leaders will be pretty reluctant. The same can be said for "underhanded negotiation tactics", which is both ambiguous, and not something most nations would want to give up. Most tactics in negotiation are underhanded in some sense, in that you're using leverage you have over the other person to modify their actions.
The prevention of premature claims to space seems completely unnecessary as well. If the UK actually did something like sign over parts of Canada in return for a claim to the entire Milky Way, by now such a claim would be ignored completely (or maybe a small concession would be made for altering it) considering the UK has almost no space presence compared to the US, EU, China and Russia. The Treaty of Tordesillas was frequently renegotiated by Spain and Portugal, and almost completely ignored by the rest of the world.
Essentially, I think this idea smuggles in a lot of other poison-pill, or simply unnecessary, ideas that would ultimately defeat the practicality of implementing a night-watchman ASI at all. Either extinction is on the table, and we shouldn't be giving ASI the power and drive to settle international conflicts, or it isn't, and we should be a lot more ambitious in the values and goals we assign.
This post is timed perfectly for my own issue with writing using AI. Maybe some of you smart people can offer advice.
Back in March I wrote a 7,000-word blog post about The Strategy of Conflict by Thomas Schelling. It did decently well considering the few subscribers I have, but the problem is that it was (somewhat obviously) written in huge part with AI. Here's the conversation I had with ChatGPT. It took me about 3 hours to write.
This alone wouldn't be an issue, but it is because I want to consistently write my ideas down for a public audience. I read frequently on very niche topics and comment often on the r/slatestarcodex subreddit, sometimes in comment chains totaling thousands of words. The ideas discussed are usually quite half-baked, but I think they can be refined into something that other people would want to read, while also letting me clarify my own opinions in a more formal way than how they exist in my head.
The guy who wrote the "Why I'm not a Rationalist" article that some of you might be aware of wrote a follow-up article yesterday, largely centered around a comment I made. He has this to say about my Schelling article: "Ironically, this commenter has some of the most well written and in-depth content I've seen on this website. Go figure."
This has left me conflicted. On one hand, I haven't really written anything in the past few months because I'm trying to contend with how I can actually write something "good" without relying so heavily on AI. On the other, if people are seeing this lazily edited article as some of the most well-written and in-depth content on Substack, maybe it's fine? If I just put a little more effort into post-editing, cleaning up the em dashes and the standard AI comparisons ("it's not just this, it's this"), I think I'd be able to write a lot more frequently, and at a higher quality than I could manage on my own. I was a solid ~B+ English student, so I'm well aware that my writing skill isn't anything exemplary.
I even agree with the conclusion of this article: that when someone notices they're reading something written or edited by AI, it's a serious negative signal and probably not worth spending the time to read further. I even got into a discussion earlier this week with someone who used AI to edit their book, expressing that exact same sentiment.
So what do I do here? I want to write things, but I don't seem to be able to do so well on my own. What I "wrote" with AI seems to have been good enough to attract readers (and at the very least I think I can say all the ideas communicated were my own, not GPT's), so why not write more with it? For someone to say it's some of the most well-written and in-depth content is somewhat depressing, since it means the AI's writing, and not my own, is what attracted people. But if that's what people like, who am I to disagree?
As far as improving my writing style goes: I read frequently and I try to comment an intelligent thought on everything I read (either in the margins of a book or in the comment section underneath an essay), but what more can I do? If this is a process that won't leave me a good writer within the next ~5 years, won't AI just get better at writing by then anyway? Wouldn't it make more sense to get used to utilizing AI for my writing now?
Apologies if this is unrelated, but I've been thinking about this since the blog post I mentioned yesterday, and the advice on the bottom of this post seems relevant to my situation.
Unrelated, but where on earth is the place pictured at the bottom of https://ifanyonebuildsit.com/praise ? It doesn't really look like any recognizable night image, so I suspect it's AI-generated, but maybe I'm wrong.
I'll admit that I find these pretty funny. Not the jokes themselves, but the fact that ChatGPT rates them as funny.
Don't prediction markets also serve as a tool for actually valuing a prediction? Without a clear metric to judge the likelihood of a prediction at the time it was made (as is the case with one-off real-world predictions like elections or what a politician will do), I'm liable to consider the guy who predicted the sun would come up tomorrow and the guy who predicted the market would drop 8% last week as having equal success rates.
We need something to judge a prediction against; otherwise people would just go for easy predictions that merely sound difficult in hindsight, and when they get them right most of the time, tout their "impressive" prediction record. If we can instead say "the market was already predicting this would happen with 95% certainty when you predicted it," we know their prediction wasn't anything out of the ordinary, or particularly valuable. Likewise, if the market predicted something with lower certainty, say 50%, and our predictor predicted it with 95% certainty, was right, and did this consistently, we could consider their predictions more valuable than chance, with high alpha.
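To put a number on this, here's a rough sketch of scoring a predictor against the market's price at the time of each prediction; the records below are made-up figures, and the choice of Brier scores as the metric is my own illustrative assumption.

```python
# Compare a predictor's probabilities to the market's, using Brier scores (lower is better).
def brier(prob, outcome):
    """Squared error between a stated probability and the 0/1 outcome."""
    return (prob - outcome) ** 2

# Each record: (market price at prediction time, predictor's probability, did it happen?)
records = [
    (0.95, 0.95, 1),  # easy call: market already at 95%, so matching it earns no credit
    (0.50, 0.95, 1),  # far more confident than the market, and right
    (0.50, 0.95, 1),
    (0.60, 0.20, 0),  # confident against the market, and right again
]

market_score = sum(brier(m, y) for m, _, y in records) / len(records)
predictor_score = sum(brier(p, y) for _, p, y in records) / len(records)

print(f"market Brier score:    {market_score:.3f}")
print(f"predictor Brier score: {predictor_score:.3f}")
# A predictor whose average score is consistently lower than the market's is the one
# with real alpha; matching the market on near-certain calls adds nothing.
```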
Maybe there's some political commentator out there who analyzes Trump's claims and expectations clearly and assigns more accurate probabilities than the market does (and bets on them); then we can rely on that commentator in the future for predicting Trump's actions, "beating" the market. We couldn't have found that person without the market in the first place, so even if the market itself doesn't communicate much information about the underlying probability, it can communicate information about who is better than the market at estimating it.
The comparison between church founders and startup founders is accurate. In startup communities, The Purpose Driven Church is a well-known manual for building startup culture, attracting dedicated employees, and raising capital. I know more than one founder who claims it was by far the most useful book for creating their company, beating out all the books that are literally about creating startups.