creating real world events, ‘showing up as real humans’ and forming real relationships.
I've been saying for years (think I'm the original source actually):
How it started: pics or it didn't happen
How it's going: IRL or it didn't happen
There appears to be a window of opportunity for people to become known as legitimately human. Given the speed of improvement in AI influencers, actors, etc., I predict that window will close in less than two years, at which point it may be all but impossible to "prove" you are human online.
I really wonder how much of a push towards recreating in-person interactions will occur. Will there be a return to a variant of Google's original trust-based search algorithm, and if so, who will count as high trust and how will that trust be calculated? I am very interested in this particular aspect of the AI-driven changes to society.
What were the big hits for applied AI this year? Were any of the big medical discoveries helped by AI tools? Not theoretical or "new study shows this will maybe happen eventually," but were there any actual, tangible, AI-driven life improvements for normies this year?
We can steal [China’s] very best manufacturing engineers, and put them to work here.
I am not confident this strategy is viable. Given that the Wikipedia page lists over 40 convictions for espionage, primarily in tech (the list of accusations is far longer), and that the rate at which such people are discovered is increasing, it seems more likely that any company trying to hire defectors from China is taking on a high risk. I am insufficiently skilled to weigh the risk against potential gains given the uncertainties involved, but from the scope of the problem and the risks that I do comprehend, I would err on the side of caution.
A perhaps superior strategy is to encourage Europe to hire these engineering experts, which keeps the spies away from frontier technologies and still gets the honest developers away from China. It also has the knock-on effect of supporting Europe's AI progress.
On #3, not having read Mo's book, what helps get me thinking the same way (though not as often as I would like) is "What does it feel like to be wrong? It doesn't feel bad; that's for when you know you are wrong. While you are still in the act of being wrong, it feels exactly like being right." I first encountered that as a meme and don't know the source to credit appropriately, but remembering the concept has helped me quite a bit. Mo's phrasing seems good, I shall add it to my box. The tool I am still working on is remembering to ask myself.
On a related note, I have discovered the useful trick that after writing a text message or comment or post, my brain cannot tell the difference between deleting it and posting/sending it.
Not the way you think. You are seeing the mask and trying to understand it, while ignoring the shoggoth underneath.
The topic has gotten a lot of discussion, but from the relevant context of the shoggoth, not the irrelevant point of the mask. Every post about how we know when an AI is telling the truth versus repeating what is in its training data, the talk of p-zombies, etc., all of that is directly struggling with your question.
Edit: to be clear, I am not saying that the fact AIs make mistakes shows they're inhuman. I am saying look at the mistake and ask yourself: what does the fact that the AI made this specific mistake, and not a different one, tell you about why the AI thought the mistake was correct? /edit
Here is an example. AlphaGo beat all the best players in the world at Go. They described its thinking as completely alien and were emotionally distraught at how badly they lost. A couple of months later, multiple mediocre players obliterated it repeatedly and reliably. It turned out the AI didn't know it was playing a game on a board whose conditions persist from turn to turn; it didn't, and doesn't, understand that there are regions of the board controlled by the stones surrounding them. It can figure out the best next move without that understanding. It now wins all the time again, but it still doesn't understand the context. There is no way to confirm it knows it is playing a game.
Going directly at images: why are you so sure DALL-E knows what an image is or what it is doing? Why do you think it knows there is a person looking at its output, versus an electric field pulsing in a non-random way? Does it understand we see red but not infrared? Is it adding details only tetrachromats can see? No one has the answers to these questions. No one has a technique with a plausible method of working to get the answers.
Your image prompt might be creating a pattern in binary, or in hexadecimal color codes, but it is probably a bizarre set of tokens that fit like Cthulhu-esque jigsaw pieces in a mosaic, using more relationships than human minds can comprehend. I saw a claim that GPT-4 broke language out into a table of relationships with more than 36,000 dimensions. It ain't using Hooked on Phonics, but it certainly can trick you into believing it does. That tricking you is the mask. The 36,000-dimensional map of language is part of the shoggoth, not the whole thing.
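To make that concrete, here is a toy sketch of what "relationships in a high-dimensional table" means. The vectors are made up (8 dimensions, hand-nudged), nothing like a real model's learned weights, but the mechanic is the point: the model relates tokens by geometric distance between vectors, not by spelling or sound.

```python
import numpy as np

# Made-up toy embeddings; a real model learns vectors with thousands of
# dimensions from training data rather than 8 hand-typed ones.
rng = np.random.default_rng(0)
vocab = ["cat", "kitten", "dog", "catalog", "category"]
emb = {w: rng.normal(size=8) for w in vocab}

# Nudge vectors so semantically related words sit close together,
# while words that merely share letters ("cat" vs "catalog") do not.
emb["kitten"] = emb["cat"] + 0.1 * rng.normal(size=8)
emb["dog"] = emb["cat"] + 0.5 * rng.normal(size=8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for w in ["kitten", "dog", "catalog", "category"]:
    print(f"cat vs {w}: {cosine(emb['cat'], emb[w]):.2f}")

# The similarity is purely geometric, driven by where the vectors ended up,
# and has nothing to do with how a word looks or sounds to a human reader.
```

"kitten" lands near "cat" while "catalog" does not, even though the latter shares more letters; multiply that by tens of thousands of dimensions and you have the map, not the phonics.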
To make the mask slip in images, give it tasks that rely on relationships, not facades. For example, and I apologize if you are easily offended, do a search on hyper muscle growth. You will get porn, sorry. But in it you will find images with impossible mass and realistic internal anatomy. The artists understand skeletons and where muscle attaches to limbs. Drop some of the most extreme art into Sora or Nano Banana or Grok and animate it. The skeletons lose all coherence. The facade, the skin and limbs, moves, but what is going on inside cannot be. Skeletons don't do what the image generators will try, because the generators can't see the invisible, and skeletons are invisible. For a normally proportioned human that's irrelevant; for impossible proportions it matters. 3D artists draw skeleton wireframes and then layer polygons above them so the range of motion fits what is possible and correct. AI copies the training data and extrapolates. Impossible shapes cause the mask to slip: it doesn't know what a body is, it thinks we're blobs that move and have precise shapes.
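For the skeleton-first workflow, here is a minimal sketch (hypothetical bone lengths and limits, not from any real rig) of the constraint a 3D artist's pipeline enforces: joint angles get clamped to anatomically possible ranges, and the visible surface can only go where the bones put it. An image or video generator extrapolating pixels has no equivalent constraint.

```python
import math

# Hypothetical two-bone arm: upper arm and forearm lengths in arbitrary units.
UPPER, FOREARM = 0.30, 0.25

def hand_position(shoulder_deg, elbow_deg):
    """Forward kinematics: where the hand ends up, given joint angles.

    The elbow is clamped to a plausible human range (0-150 degrees of
    flexion), so no requested pose can put the hand somewhere the
    skeleton forbids.
    """
    elbow_deg = max(0.0, min(150.0, elbow_deg))
    s = math.radians(shoulder_deg)
    e = math.radians(shoulder_deg + elbow_deg)
    elbow = (UPPER * math.cos(s), UPPER * math.sin(s))
    hand = (elbow[0] + FOREARM * math.cos(e),
            elbow[1] + FOREARM * math.sin(e))
    return hand

# Ask for an impossible 300-degree elbow bend; the rig simply refuses,
# and the skin mesh layered over these bones refuses with it.
print(hand_position(45, 300))
```

The generator, by contrast, only ever saw the skin in its training data, so it will happily animate the 300-degree bend.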
Monsters are another one. Try a displacer beast: a cat with six legs, four shaped like traditional feline front legs and two that are rear legs, plus two tentacles coming off its shoulders that are like the arms (not tentacles) of a squid, with the barbed diamond-shaped pads. The difference between tentacle motion and leg motion is unknown to AI because it relies on what is unseen, the skeleton underneath. Again, it thinks a monster is a blob that moves.
Getting to your architecture example, you see this in window and door placement. There is no understanding of the relationships or the 3D nature of the space. Instead, it knows walls have doors and windows, that doors are more likely down low, and that windows are more likely up high. So it adds them. But it doesn't understand space or function.
When you see people talk about AI slop articles using the "it's not X, it's Y" pattern, or triplets, or em dashes, this is the same topic. This is how the AI knows what it knows, and why it thinks that is good writing yet uses it in ways humans don't, even though it got the pattern out of human-made training data. Same topic, different application.
People really are talking about it a lot
Yes, the data backs you up. In 2022, studies were showing that people had limited trust in AI, and even that varied by field. In 2023, a study came out showing that in blind trials patients overwhelmingly preferred AI chatbots over human doctors (https://bytefeed.ai/ai-chatbots-bedside-manner-preferred-over-conventional-doctors-by-shocking-margin-according-to-blind-study/). In 2024 and 2025, we got studies showing AI outperforms human doctors but patients still didn't trust it so long as they knew it was AI; again, in blind studies patients preferred the AI (https://www.kcl.ac.uk/news/doctors-stay-ai-assists-new-study-examines-public-perceptions-of-ai-in-healthcare). Doctors don't like AI and don't like doctors who use AI (https://carey.jhu.edu/articles/doctors-who-use-ai-are-viewed-negatively-their-peers-new-study-shows).
What is the latest on how the enteric nervous system plays into cognition and memory (relevant here to your topic)? I have seen a lot of research on its role in behavior, especially disorders like anorexia, bulimia, gambling addiction, etc. Given that it has half the number of neurons the brain does, one thing that makes me hesitant about cryonics and digital personalities is the thought that people might only be getting two-thirds or less of themselves, because only the brain is being considered. But data on it is not my specialty.
I see three issues with your argument, two that don't change anything meaningful and one that does.
The two minor points:
You presume a world without preference for human-made works. I posit this is a world that cannot exist under any circumstances where humans also exist. We are a species that pays a premium for art made by elephants and other animals, that shows off photos of our children and their accomplishments to people who we know don't care. We had pet rocks. The drive to value that which is valueless is, for whatever reason, deeply embedded in us. It is not going anywhere. More importantly, acknowledging this does not in any way weaken your point; it complicates the math a little, but the outcomes are all the same. Denying this point, however, makes you appear fundamentally off base on human psychology, and that does weaken your persuasiveness.
Second, you posit the AI would rent the machine at the exact cost of its output, making zero profit. That needs an explanation. Presuming the AI has a goal other than "use all current funds to make potatoes but don't grow the amount produced over time," it will want some level of profit to achieve that goal. Even a pure potato-output maximizer wants to save up for more machines in the future and be prepared for inflation, market changes, etc. If it really has no profit, loses the ability to rent the machine the first time the market swings, and thus goes bankrupt, it's not a very smart superintelligence. I assume it will predict swings and keep the bare minimum needed for its goals, so razor-thin margins that look crazy to us could be generous to it. That's fine. But zero needs justification. 49,500 is functionally the same as 50,000 for your core argument, but it resolves this.
The one that seems to matter, though, is "eke out a living on 5 potatoes a day." The Amish are doing better than that. So are the Mennonites, the Inuit, the Sentinelese, etc. People can and will carve out enclaves where life works: special economic zones where AI doesn't exist. Maybe that looks like North Korea, maybe it looks like Pennsylvania, and maybe it looks like a patchwork of everything in between. Also, energy and logistics are a hard problem. We could not implement full robotics today, even if the tech were 100% ready, because most of the world doesn't have access to reliable electricity, and even in the developed world we don't have spare capacity. You seem to need additional bullets that cover: robotics and energy production are solved such that no part of the economy is constrained by either; enclaves like the Amish are not included in this assessment; etc. Your scenario only addresses those humans who try to compete with AI, not those who walk away, go off the grid, and make their own economy. They already exist; why do you assume they will stop existing? Maybe this is two issues also.
Thanks for the confirmation on the navy. But you really, really need to check your data on transit costs.
People have already admitted doing this. The popups requesting authorization came too fast, so they stopped reading and just granted authority. This includes executives at AI companies. Again, that was last year, not in the future.