Yep, that all sounds right. In fact, a directed graph can be called transitive if... well, take a guess. And k-uniform hypergraphs (edit: not k-regular, that's different) correspond to k-ary relations.
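(Spelling out the guess, since it's cute: a digraph is transitive exactly when its edge set, read as a binary relation, is transitive. A throwaway Python sketch, all names mine:)

```python
# A digraph as a set of ordered pairs; "transitive" means
# (a, b) and (b, c) in E imply (a, c) in E -- the relation definition verbatim.
def is_transitive(edges: set[tuple[int, int]]) -> bool:
    return all((a, d) in edges
               for (a, b) in edges
               for (c, d) in edges
               if b == c)

assert is_transitive({(1, 2), (2, 3), (1, 3)})
assert not is_transitive({(1, 2), (2, 3)})  # missing the shortcut (1, 3)
```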
Here's another thought for you: adjacency matrices. There's a one-to-one correspondence between square matrices (over whatever your weights live in) and edge-weighted directed graphs on labeled vertices. So large chunks of graph theory could, in principle, be described using matrices alone. We only choose not to do that out of pragmatism.
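(To make that concrete, a quick numpy sketch of my own: the matrix is the graph, and a graph question, counting walks, turns into plain matrix algebra.)

```python
import numpy as np

# A weighted digraph on vertices {0, 1, 2}: entry A[i, j] is the weight
# of the edge i -> j, with 0 meaning "no edge".
A = np.array([[0, 5, 0],
              [0, 0, 2],
              [1, 0, 0]])

B = (A > 0).astype(int)                # forget weights, keep structure
walks2 = np.linalg.matrix_power(B, 2)  # (B^k)[i, j] counts length-k walks
print(walks2[0, 2])                    # 1, the lone walk 0 -> 1 -> 2
```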
(I've also heard of something even more general called matroid theory. Sadly, I never took the time to learn about it.)
such as the Continuum Hypothesis, which is conjectured to be independent of ZFC.
It's in fact known to be independent of ZFC: Gödel (1940) showed that ZFC can't disprove CH, and Cohen (1963) showed that it can't prove it either. Sources: Devlin, The Joy of Sets; Folland, Real Analysis; Wikipedia.
Not the way I'd use those words, nope. The first is a low bar; the second is extremely high, and has a specific emotional reaction built into it. I haven't seen any plausible vision of 2040 that I'd enthusiastically endorse, whether it's business-as-usual or dismantling stars, but it's not hard to come up with futures that are preferable to the end of love in the universe.
If you can’t enthusiastically endorse that outcome, were it to happen, then you should be yelling at us to stop.
I don't think it's that simple. I'm not enthusiastic about transhumanism, so I can't enthusiastically endorse that outcome, but I can't bring myself to say, "Don't build AI because it'll make transhumanism possible sooner." If anything, I expect that having a friendly-to-everyone ASI would make it a lot easier to transition into a world where some people are Jupiter-brained.
I am quite willing to say, "Don't build AI until you can make sure it's friendly-to-everyone," of course.
This comment has been tumbling around in my head for a few days now. It seems to be both true and bad. Is there any hope at all that the Singularity could be a pleasant event to live through?
now a bunch of robots can do it. as someone who has a lot of their identity and their actual life built around “is good at math,” it’s a gut punch. it’s a kind of dying. [...] multiply that grief out by *every* mathematician, by every coder, maybe every knowledge worker, every artist… over the next few years… it’s a slightly bigger story
Have there been any rationalist writings on this topic? This cluster of social dynamics, this cluster of emotions? Dealing with human obsolescence, the end of human ability to contribute, probably the end of humans being able to teach each other things, probably the end of humans thinking of each other as "cool"? I've read Amputation of Destiny. Any others?
Let's not forget that the AI action plan will be on the President's desk by Tuesday, if it isn't already.
I have to wake up to that every morning. Now you do, too.
- I don't understand the concept of "internal monologue".
I have a hypothesis about this. Most people, most of the time, are automatically preparing to describe, just in case someone asks. You ask them what they're imagining, doing, or sensing, and they can just tell you. The description was ready to go before you asked the question. Sometimes, these prepared descriptions get rehearsed; people imagine saying things out loud. That's internal monologue.
There are some people who do not automatically prepare to describe, and hence have less internal monologue, or none. Those people end up having difficulty describing things. They might even get annoyed (frustrated?) if you ask them too many questions, because answering can be hard.
(I wonder how one might test whether or not a person automatically prepares to describe. The ability to describe things quickly is probably measurable, and one could compare that to self-reports about internal monologue. If there were no correlation, that'd be evidence against this hypothesis.)
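(If anyone ever runs that, the statistics are the easy part. A sketch with made-up numbers, just to show the shape of the test:)

```python
from scipy.stats import pearsonr

# Hypothetical per-subject measurements, invented for illustration:
# seconds to produce a description on demand, and self-reported
# internal-monologue strength on a 1-7 scale.
describe_latency = [1.2, 0.8, 2.5, 3.1, 0.9, 2.8]
monologue_rating = [6, 7, 2, 1, 6, 3]

r, p = pearsonr(describe_latency, monologue_rating)
print(f"r = {r:.2f}, p = {p:.3f}")
# A strong negative r (fast describers report more monologue) would
# support the hypothesis; r near zero would count against it.
```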
"Stuff that nobody wants"? Like what? If you're referring to AI itself... Well, a lot of people want AI to solve medicine. All of it. Quickly. Usually, this involves a cure for aging. Maybe that could be done by an AI that poses no threat... but there are also people who want a superintelligence to take over the world and micromanage it into a utopia, or who are at least okay with that outcome. So "stuff that nobody wants" doesn't refer to takeover-capable AI.
If you're referring to goods and services that AIs could provide for us... Is there an upper limit to the amount of stuff people would want, if it were cheap? If there is one, it's probably very high.