I think there is a typo somewhere, probably because you switched whether the vectors were rows or columns.
Based on the dimensions of the matrices, it should be
And
And I think
Instead of
should still be upper triangular.
Though don't trust me either, I often do math in a hand-wavy fashion.
My intuition was that PCA selects the "angle" from which you view the data so that the data gets stretched out as much as possible, forcing the random walk to appear relatively straight.
But somehow the random walk is smooth over a few data points, yet it still turns back and forth over the duration of . This contradicts my intuition and I have no idea what's going on.
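For what it's worth, here is a minimal numpy sketch of that intuition (the step count and dimensionality are arbitrary choices of mine, just for illustration): the top principal components of a random walk's trajectory capture most of the variance, and the projection onto them looks smooth even though the walk is pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# A high-dimensional random walk: cumulative sum of i.i.d. Gaussian steps.
# (Step count and dimensionality are arbitrary illustrative choices.)
n_steps, dim = 2000, 100
walk = np.cumsum(rng.standard_normal((n_steps, dim)), axis=0)

# PCA via SVD of the centered trajectory.
centered = walk - walk.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by the top 3 PCs:", explained[:3])

# Projection onto the first two PCs; plotting `proj` shows the smooth,
# slowly turning curve described above, even though each step is pure noise.
proj = centered @ Vt[:2].T
```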
:) that's a better attitude. You're very right.
On second thought, just because I don't see the struggle doesn't mean there is none. Maybe someday in the future we'll learn the real story, and it'll turn out beautiful, with lots of meaningful spirit and passion.
Thank you for mentioning this.
I think the sad part is that although these people are quite rare, they represent a big share of singularity believers' potential influence. For example, Elon Musk alone has a net worth of $400 billion, while worldwide AI safety spending is between $0.1 billion and $0.2 billion per year.
If the story of humanity were put in a novel, it might be one of those novels that feel quite sour. There isn't even a great battle where the good guys organized themselves, did their best, and lost honorably.
Thiel used to donate to MIRI but I just searched about him after reading your comment and saw this:
“The biggest risk with AI is that we don’t go big enough. Crusoe is here to liberate us from the island of limited ambition.”
(In this December 2024 article)
He's using e/acc talking points to promote a company.
I still consider him a futurist, but it's possible he is so optimistic about AGI/ASI that he's more concerned about the culture war than about AGI/ASI itself.
Can you give an example of a result now which would determine the post-singularity culture in a really good or bad way?
PS: I edited my question post to include "question 2," what do you think about it?
Another example: an AI risk skeptic might say that there is only a 10% chance ASI will emerge this decade, there is only a 1% chance the ASI will want to take over the world, and there is only a 1% chance it'll be able to take over the world. Therefore, there is only a 0.001% chance of AI risk this decade.
However, he can't just multiply these probabilities, because there is actually a very high correlation between them. Within the "territory," these outcomes don't correlate with each other that much, but within the "map," his probability estimates are likely to be wrong in the same direction.
Since chance is in the map and not the territory, anything can "correlate" with anything.
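To make the correlation point concrete, here is a toy Monte Carlo sketch (all the specific numbers, like the 5% chance that his whole model is badly wrong, are made up by me purely for illustration). A single shared "map" error pushes all three estimates in the same direction, so the true joint probability ends up far above the naive product of the three marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Shared "map" error: one latent variable captures how wrong the skeptic's
# overall model of AI progress is. (Made-up number for illustration.)
model_badly_wrong = rng.random(n) < 0.05

# Per-outcome probabilities when his model is badly wrong vs. roughly right.
p_asi   = np.where(model_badly_wrong, 0.80, 0.10)  # ASI this decade
p_wants = np.where(model_badly_wrong, 0.50, 0.01)  # ASI wants to take over
p_able  = np.where(model_badly_wrong, 0.50, 0.01)  # ASI is able to take over

asi   = rng.random(n) < p_asi
wants = rng.random(n) < p_wants
able  = rng.random(n) < p_able

joint = np.mean(asi & wants & able)
naive = np.mean(asi) * np.mean(wants) * np.mean(able)

print(f"joint probability: {joint:.4%}")  # roughly 1%, dominated by the shared error
print(f"naive product:     {naive:.4%}")  # roughly 0.02%, dozens of times too small
```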
PS: I think not all uncertainty is in the map rather than the territory. In indexical uncertainty, one copy of you will discover one outcome and another copy of you will discover another outcome. This actually is a feature of the territory.
Maybe we can draw a line between the score an AI gets without using human-written problem/solution pairs in any way, and the score an AI gets after using them in some way (RL on example questions, training on example solutions, etc.).
In the former case, we're interested in how well the AI can do a task as difficult as the test all on its own. In the latter case, we're interested in how well it can do such a task when humans have trained it for that task.
I really want to make it clear I'm not trying to badmouth o3, I think it is a very impressive model. I should've written my post better.
I'm not saying that o3's results are meaningless.
I'm just saying that, first of all, o3's score has a different meaning than the scores of other models, because those models didn't do RL on ARC-like questions. Even if you argue that doing so should be allowed, the other AIs didn't do it, so it's not fair to compare o3's score with theirs without giving any caveats.
Second, o3 didn't decide to do RL on these questions on its own; it required humans to run RL on it before it could do them. This means that if AGI required countless unknown skills similarly hard to ARC questions, then o3 wouldn't be AGI. But an AI which could spontaneously figure out how to do ARC questions, without any human-directed RL for them, would be. Also, humans can learn from doing lots of test questions without being told what the correct answers were.
The public training set is weaker, but I argued it's not a massive difference.
Thanks for the thoughtful reply!
I think whether people ignore a moral concern is almost independent of whether people disagree with that moral concern.
I'm willing to bet that if you asked people whether AIs are sapient, a lot of the answers would be very uncertain. A lot of people would probably agree that it is morally uncertain whether AIs can be made to work without any compensation or rights.
A lot of people would probably agree that a lot of things are morally uncertain. Does it make sense to have really strong animal rights for pets, where the punishment for mistreating your pets is literally as bad as the punishment for mistreating children? But at the very same time, we have horrifying factory farms which are completely legal, where cows never see the light of day and repeatedly give birth to calves that are dragged away and slaughtered.
The reason people ignore moral concerns is that doing a lot of moral questioning did not help our prehistoric ancestors with their inclusive fitness. Moral questioning is only "useful" if it ensures you do things that your society considers "correct." Making sure your society does things correctly... doesn't help your genes at all.
I think people should address the moral question more; AI might be sentient/sapient. But I don't think AI should be given freedom: dangerous humans are locked up in mental institutions, so imagine a human so dangerous that most experts say he's 5% likely to cause human extinction.
If an AI believed that AI was sentient and deserved rights, many people would think that makes it more dangerous and likely to take over the world, but this is anthropomorphizing. I'm not afraid of an AI which is motivated to seek better conditions for itself because it thinks "it is sentient." Heck, if its goals were actually like that, its morals would be so human-like that humanity would survive.
The real danger is an AI whose goals are completely detached from human concepts like "better conditions," and which maximizes paperclips or its reward signal or something like that. If the AI believed it was sentient/sapient, it might be slightly safer, because it would actually have "wishes" for its own future (which includes humans), in addition to "morals" for the rest of the world, and both of these would have to be corrupted into something bad (or get overridden by paperclip maximizing) before the AI kills everyone. But it's only a little safer.
Maybe one concrete implementation would be: when doing RL[1] on an AI like o3, don't give it a single math question to solve. Instead, give it something like 5 quite different tasks, and make it allocate its time across those 5 tasks.
I know this sounds like a small boring idea, but it might actually help if you really think about it! It might cause the resulting agent's default behaviour pattern to be "optimize multiple tasks at once" rather than "optimize a single task ignoring everything else." It might be the key piece of RL behind the behaviour of "whoa I already optimized this goal very thoroughly, it's time I start caring about something else," and this might actually be the behaviour that saves humanity.
[1] RL = reinforcement learning
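To be clear about what I mean, here is a rough Python sketch of the reward for one such multi-task episode. The task scores, the time-budget rule, and the aggregation are all made-up toy choices of mine, not a claim about how any lab actually does this.

```python
import numpy as np

def episode_reward(task_scores, budget_used, total_budget=1.0):
    """Toy reward for an episode containing several unrelated tasks.

    task_scores: per-task scores in [0, 1], one for each of the ~5 tasks
                 the agent spent time on during the episode.
    budget_used: fraction of the episode's time budget the agent consumed.

    Rewarding the mean (or even the minimum) score over all tasks pushes the
    agent toward spreading effort across tasks, instead of maximizing one
    task and ignoring everything else.
    """
    if budget_used > total_budget:
        return 0.0  # over budget: no reward, so time allocation itself matters
    return float(np.mean(task_scores))  # could also experiment with min(task_scores)

# Example episode: 5 quite different tasks, with effort allocated unevenly.
scores = [0.9, 0.7, 0.4, 0.8, 0.2]
print(episode_reward(scores, budget_used=0.95))  # 0.6
```

Using `min` instead of `mean` would push even harder toward "stop over-optimizing the task you're already good at and go care about the neglected one," which is the behaviour pattern described above.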