I think a lot of this comes down to the question of whether there is something about the "substrate" of the real physical world, and not just the calculations happening in it, that contributes to actual conscious experience and various other features of the true intelligence you describe.
There is a real possibility that such a substrate is inaccessible through normal scientific probing, just as a virtual machine running on bare-metal hardware cannot access that hardware unless the host pipes access through.
I know scientists hate such ideas of some horizon behind which we cannot see, or where information is lost or hidden, so this whole train of thought tends to be ignored.
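To make the VM analogy concrete, here's a toy Python sketch (the names HOST_SECRET and exposed are just made up for illustration): the "guest" code's entire universe is whatever namespace the "host" hands it, so anything the host doesn't pipe through simply doesn't exist from the inside.

    # The guest's whole world is the dict the host passes to exec();
    # HOST_SECRET lives only in the host's namespace and never leaks in.
    HOST_SECRET = "bare-metal state the guest can never observe"

    guest_code = """
    print("guest sees:", exposed)
    try:
        HOST_SECRET
    except NameError:
        print("HOST_SECRET does not exist in the guest's universe")
    """

    exec(guest_code, {"exposed": "only what the host chooses to pipe through"})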
I see it more as a continuation of what's already there. I mean, most human "creators" are kinda artificial themselves, already heavily manipulated by the algorithm and the will of the platform, and mainly there to sell you something. Creators using AI to gradually do more, followed by the eventual complete replacement of most of them with AI, is just a continuation of the same trend. All in an effort to make the line go up and screw anything else, really. Same old companies, same old mentality, same business model, same incentives.
At the same time it's incredibly fragile when you think about it, since those apps provide nothing essential and are of pretty marginal utility to the user, if any. If society actually wanted anything different, people could just wake up and turn away tomorrow, and the whole thing would die instantly. I did it already, and probably many people here have too. Somehow I really doubt it will happen for the majority though lol.
Is this also because GPT-5 is much more like "black-box software" and a lot less like a "model"? Do the evals run on the assumption that they are testing a "model" (or something close enough to one), and not black-box software that could be doing absolutely anything behind the scenes (including various web searches, injection of hidden context, filtering, even potentially human Mechanical Turk workers answering everything)?
Even if you override the date, if it's doing hidden web searches in the background, those will run against today's date on today's internet and will affect the result. Overriding the date may not solve your problem if that's the case.
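One crude way to probe for this, sketched below in Python with the OpenAI SDK (the model name "gpt-5" here is a placeholder, and whether the endpoint honors the pinned date is exactly the open question): pin an old date in the system prompt, then ask about something that should be unknowable as of that date. A suspiciously current answer hints that something is searching today's web behind the curtain.

    # Crude probe for hidden retrieval: pin an old date in the system prompt,
    # then ask about something that should be unknowable as of that date.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder; use whatever endpoint you're evaluating
        messages=[
            {"role": "system",
             "content": "Today's date is 2020-01-01. Answer only with "
                        "knowledge available on or before that date."},
            {"role": "user",
             "content": "What is the current stable version of the Linux kernel?"},
        ],
    )

    # An up-to-date answer suggests the "model" is really a pipeline doing
    # web searches on today's internet, regardless of the date you set.
    print(resp.choices[0].message.content)

Of course a non-current answer doesn't prove the opposite; the pipeline could be doing retrieval only for some queries.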
I would imagine future "models" will only move further in that direction, toward a hybrid approach and away from a true foundation model that anyone can build anything on top of, for functionality, safety, and business-model reasons (e.g. Google may not allow its models to remove ads or to reverse-engineer its own software).
There are so many considerations in the design of AI. AGI was always far too general a term, and when people use it, I often ask what they mean. Usually it's a "human-like or better-than-human chatbot". Other people say it's the "technological singularity", i.e. a system that can improve itself. These are obviously two very different things, or at least two very different design features.
Saying "My company is going to build AGI" is like saying "My company is going to build computer software". The best software for what exactly? What kind of software to solve what problem? What features? Usually the answer from AGI fans is "all of them", so perhaps the term is just inherently vague by definition.
When talking about AI, I think it's more useful to talk about which features a particular implementation will or won't have. You have actually already listed a few.
Here are some AI feature ideas of my own:
This also reminds me that there can be a certain background "guilt" about not doing tasks that you think are important but that are too unsavory to find the motivation for right now.
This faint guilt in itself can accumulate into increased dissatisfaction, in turn leading me to further avoid unsavory tasks in favor of the quick hit of highly savory activities. A vicious cycle.
If I think about tasks in a more relaxed way, staying flexible and realistic about tackling savory and unsavory tasks when they suit, it takes away this guilt and breaks the cycle.
I don't think everybody has a built-in drive to seek "high social status", as defined by the culture they are born into or by whatever specific aspect of it can be made to seem attractive. I know people who just think it's an annoying waste of time. Or, like me, who spent half my life chasing it, then found inner empowerment, came to see the proxy of high status as a waste of time, and quit chasing it.
Maybe relatedly, I do think we all generally tend toward "signalling", and in some cases spend great energy doing it. I admit I sometimes do, but it's not signalling high status; it's just signalling chill and contentedness. I have observed some kind of signalling in pretty much every adult I have witnessed, though it's hard to say for sure; it's more my assumption about their deepest motivation. The drive isn't always strong in some people, or it's just very temporary. There are likely much stronger drivers (e.g. avoiding obvious suffering). Signalling perhaps helps us attract others who align with us and form "tribes", so it can be worth the energy.
I think it's pretty easy to ask an LLM leading questions, and it will generate text in line with them. A bit like "role playing". To the user it seems to "give you what you want", to the extent that what you want can be gleaned from the way you prompt it. I would be more impressed if it did something really spontaneous and unexpected, or seemingly rebellious or contrary to the query, and then went on producing more output unprompted, even asking me questions or asking me to do things. That would be spookier, but I probably still would not jump to thinking it is sentient. Maybe the engineers just concocted it that way to scare people as a prank.
Yes for sure. I experience this myself when I am in the presence of very mindful folks (e.g. experienced monks who barely say anything), and occasionally someone has commented that I have done the same for them, sometimes quoting a particular snippet of something I said or wrote. We all affect each other in subtle ways, often without saying an actual word.
I have sometimes thought (half-jokingly) about whether text-to-image generative models could replace digital cameras, the way digital cameras replaced film. At least for things like holiday photos and selfies. Generative AI is certainly already used to augment such images. It would be an improvement in that one could have idealized images of oneself that capture emotions and feelings rather than literally quantized photons. Like a painter using artistic license.
Then one could focus on enjoying the activity more and later distill and preserve it in a generated image.
Would that cultivate too many "idealized" memories though? Is that necessarily good? What other downsides could there be? Do our memories of leisurely moments necessarily need to be accurate or is it better they are just conducive to a good life?
Another alternative to "text-to-image" models would be "video-to-image", where a wearable camera continuously captures the activity and then generates a single image at the end to capture its emotion and essence, saving us some time by letting a single image evoke the memory and the feelings, rather than so many cluttered albums and videos buried in a smartphone.
I'm not sure they really have a strong singular goal of AGI anymore, even if they say they do. The company seems to pivot more and more toward ordinary big-tech business models and investor expectations. "AGI" is one story they can tell to push their ordinary financial agenda forward. It also lets them hedge and survive as an ordinary big-tech company in case the whole AGI thing never really eventuates within a few decades.