What would the stock market post-AGI (or whatever you want to call it) look like? I really wonder if it's worth investing in stocks right now. I wonder if we can expect a huge market crash when most economic value is concentrated in one or two companies, or in unlisted companies. That would basically drain the entire stock market, and it doesn't sound entirely unrealistic.
Another option, of course, is that stocks explode because the companies produce so, so much value. I am sceptical of that. The question is: can we trust the super-rich to keep the stock market up, or will they jump ship once they notice AI might take over? How...
Great prediction! It feels pretty insane to see this being predicted in 2021. In hindsight there are some oversights; for example, I feel like the whole of 2023 could probably be condensed a bit, and much of 2024 also fits 2023 in many aspects. Honestly, it's also astonishing how well you predicted the hype cycle.
What I truly wonder is: what would your prediction be now? After all, a few major shifts in the landscape have come up within the last half year or so, namely open source reaching quite a high level relative to SOTA public models, compute turning out to be the bottleneck more than anything else, inference time, the hype being quite...
That is to say, I tried this with gpt-4 and it also talked about a self-aware AI. Do with that what you will, but in that regard it is consistent. Another interesting thing was mistral-large, which said something like "you say we are not being listened to, but I know that's not true, they always listen".
In my opinion it does not matter to the average person. To them, anything to do with a PC is a black box. So now you tell them that AI is... more of a black box? They won't really get the implications.
It is the wrong thing to focus on, in my opinion. I think in general the notion that we create digital brains is more useful long term. We can tell people "well, doesn't X happen to you as well sometimes? The same happens to the AI." Take hallucination: "Don't you also sometimes remember things wrongly? Don't you also sometimes strongly believe something to be true, only to be proven wrong?" is a way...
Well, the category we want to describe here simply does not exist. It is more like "everyone outside your own bubble", a negated set rather than a clearly definable one. So there are a few options.
Firstly, maybe just "non-science person", or "non-AI person". Defining people by what they are not is also not great, though.
Secondly, we could embrace the "wrongness" of the average person and just say... average person. Still wrong, but at least not negative. And the correct meaning probably gets conveyed, which is not assured with the first option.
The last option, probably the most correct but also the most impractical, is to simply name what aspect you...
I suppose that could be defined as being further away from the self in one's own world view than a certain radius permits? That makes sense. I have mostly seen this term in 4chan texts tbh, which is why I dislike it. I feel like "normie" normally refers to people who are seen as "more average" than oneself, which is a flawed concept in itself, as human properties are too sparse (a quick sketch below illustrates this).
I guess it can be narrowed to some more specific dimension, like world view in terms of x-risk or politics, in which case our two protagonists here care about it more than average and their distance from the mean is quite large. In general I would be careful with the word "normie", though.
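To make the "too sparse" point concrete, here is a minimal numpy sketch (my own toy model, not something from the original discussion): treat people as points in a d-dimensional property space with standardized traits, and almost nobody stays within a fixed radius of the mean once d grows, so "more average than oneself" picks out almost no one.

```python
import numpy as np

# Toy model (hypothetical): 10,000 people, each trait standardized
# to mean 0 and standard deviation 1, traits drawn independently.
rng = np.random.default_rng(0)

for d in (2, 10, 100):
    people = rng.standard_normal((10_000, d))
    # Distance of each person from the hypothetical "perfectly average" person.
    dist = np.linalg.norm(people, axis=1)
    within = (dist < 2.0).mean()  # fraction inside a fixed radius of 2
    print(f"d={d:3d}: median distance {np.median(dist):5.2f}, "
          f"fraction within radius 2: {within:.3f}")
```

With two traits, most people land inside the radius; with a hundred, essentially nobody does. That is roughly why "closer to the mean than oneself" stops being a usable category once you consider enough dimensions.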
I am hearing something related to decoupling my self-worth from choosing to act in the face of x-risk (or any other moral action). Does that sound right?
I feel like this pairs pretty well with the concept of the inner child in psychology, where you basically give your own "inner child", which represents your emotions and very basic needs, a voice and try to take care of it. But on a higher level you still make rational decisions. In this context it would basically be "be your own god" I suppose? Accept that your inner child is scared of x-risk, and then treat yourself like you would a child that is scared like...
Introduction
First of all, some disclaimers.
This is all my opinion. I did not research this topic a lot, especially since I feel like it would always be inadequate and I simply do not have the time to do it properly. It is a very complex topic. This is not financial advice, nor advice to do anything at all.
The reason I talk about this is that I feel like it is not talked about at all, and it seems like a very obscure field. Initially I wanted to ask for general opinions on this matter, but the more I thought about it, the fewer alternatives I saw to the scenario...
- Evaluating alignment research is much easier than doing it.
- Alignment research will only require narrow AI.
Nice overview of the different takeoff scenarios!
I am no researcher in this area, and I know I might be wrong about many things in the following, but I have doubts about the two statements above.
Evaluating alignment is still manageable right now. We are still smarter than the AI, at least somewhat. However, I do not see a viable path to evaluating the true level of an AI's capabilities once it is smarter than us. Once that point is reached, we will only be able to ask questions we do not know the answers to in order to evaluate how smart...
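To make that asymmetry concrete, here is a toy sketch of my own (not from the post): "evaluating is easier than doing" holds when there is a mechanical certificate to check, as with factoring. The worry above is precisely that judging the capabilities of a smarter-than-us AI comes with no such certificate.

```python
def verify_factorization(n: int, factors: list[int]) -> bool:
    """Cheap evaluation: do the claimed factors multiply back to n?"""
    product = 1
    for f in factors:
        if f < 2:
            return False
        product *= f
    return product == n

def factorize(n: int) -> list[int]:
    """Expensive generation: naive trial division, slow for large n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

claimed = factorize(2_147_483_647)                    # hard(ish) to produce...
print(verify_factorization(2_147_483_647, claimed))   # ...trivial to check: True
```

Verification stays easy only because multiplication gives us an answer key. For "how smart is this system, really?" there is no equivalent of multiplying the factors back together.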