Gavin Runeblade
Gavin Runeblade has not written any posts yet.

creating real world events, ‘showing up as real humans’ and forming real relationships.
I've been saying for years (I think I'm actually the original source):
How it started: pics or it didn't happen
How it's going: IRL or it didn't happen
There appears to be a window of opportunity for people to become known as legitimately human. Given the speed of improvement in AI influencers, actors, etc., I predict that window will close in less than two years, at which point it may be all but impossible to "prove" you are human online.
I really wonder how much of a push toward recreating in-person interactions will occur. Will there be a return to a variant of Google's original trust-based search algorithm? Who would count as high-trust, and how would that be calculated? I am very interested in this particular aspect of the AI-driven changes to society.
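For concreteness, here is a minimal sketch of what such a trust calculation might look like: a PageRank-style propagation over a graph of who vouches for whom, seeded by accounts verified human in person. This is purely an illustration of the idea; the function, graph, and parameter values are all hypothetical, not a description of Google's algorithm or any real system.

```python
# Hypothetical sketch: trust propagation over an endorsement graph,
# loosely in the spirit of PageRank. Seed accounts are those verified
# human out-of-band (e.g. met in person); trust flows outward from them.

def propagate_trust(endorsements, seeds, damping=0.85, iterations=20):
    """endorsements: dict mapping each account to the accounts it vouches for.
    seeds: accounts verified human in person, given a fixed trust baseline."""
    nodes = set(endorsements) | set(seeds)
    for vouched in endorsements.values():
        nodes |= set(vouched)
    trust = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    for _ in range(iterations):
        updated = {}
        for n in nodes:
            # Only seeds get a baseline; everyone else inherits trust
            # solely from the accounts that vouch for them.
            base = (1 - damping) if n in seeds else 0.0
            inherited = sum(
                trust[voucher] / len(vouched)
                for voucher, vouched in endorsements.items()
                if n in vouched
            )
            updated[n] = base + damping * inherited
        trust = updated
    return trust

# Toy example: two in-person-verified accounts vouching outward.
graph = {"alice": ["carol"], "bob": ["carol", "dave"], "carol": ["dave"]}
print(propagate_trust(graph, seeds={"alice", "bob"}))
```

The open question in the comment above remains the hard part: who gets to be a seed, and what stops endorsement farming.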
What were the big hits for applied AI this year? Were any of the big medical discoveries helped by AI tools? Not theoretical or "new study shows this will maybe happen eventually," but were there any actual, tangible, AI-driven life improvements for normies this year?
We can steal [China’s] very best manufacturing engineers, and put them to work here.
I am not confident this strategy is viable. Given that the Wikipedia page lists over 40 convictions for espionage, primarily in tech (the accusations list is far longer), and that the rate of discovery of such people is increasing, it seems more likely that any company that tries to hire defectors from China is taking on a high risk. I am insufficiently skilled to weigh the risk against potential gains given the uncertainties involved, but from the scope of the problem and the risks that I do comprehend, I would err on the side of caution.
A perhaps superior strategy is to encourage Europe to hire these engineering experts, which keeps the spies away from frontier technologies and still gets the honest developers away from China. It also has the knock-on effect of supporting Europe's AI progress.
On #3: not having read Mo's book, what helps get me thinking the same way (though not as often as I would like) is "What does it feel like to be wrong? It doesn't feel bad; that's for when you know you are wrong. While you are still in the act of being wrong, it feels exactly like being right." I first encountered this as a meme and don't know the source to credit appropriately. But remembering the concept has helped me quite a bit. Mo's phrasing seems good; I shall add it to my box. The tool I am still working on is remembering to ask myself.
On a related note, I have discovered the useful trick that after writing a text message or comment or post, my brain cannot tell the difference between deleting it and posting/sending it.
Not the way you think. You are seeing the mask and trying to understand it, while ignoring the shoggoth underneath.
The topic has gotten a lot of discussion, but from the relevant context of the shoggoth, not the irrelevant point of the mask. Every post about how we know when an AI is telling the truth versus repeating what is in its training data, the talk of p-zombies, etc.: all of that is directly struggling with your question.
Edit: to be clear, I am not saying the fact that AIs make mistakes shows they're inhuman. I am saying: look at the mistake and ask yourself what does the fact that the AI made this specific mistake and...
Yes, the data backs you up. In 2022, studies were showing that people had limited trust in AI, and even that varied by field. In 2023, a study came out showing that in blind trials patients overwhelmingly preferred AI chatbots over human doctors (https://bytefeed.ai/ai-chatbots-bedside-manner-preferred-over-conventional-doctors-by-shocking-margin-according-to-blind-study/). In 2024 and 2025 we got studies showing AI outperforms human doctors, but patients still didn't trust it so long as they knew it was AI; again, in blind studies patients prefer AI (https://www.kcl.ac.uk/news/doctors-stay-ai-assists-new-study-examines-public-perceptions-of-ai-in-healthcare). Doctors don't like AI and don't like doctors who use AI (https://carey.jhu.edu/articles/doctors-who-use-ai-are-viewed-negatively-their-peers-new-study-shows).
What is the latest on how the enteric nervous system plays into cognition and memory (relevant here to your topic)? I have seen a lot of research on its role in behavior, especially disorders like anorexia, bulimia, gambling addiction, etc. Given it has half as many neurons as the brain, one thing that makes me hesitant about cryonics and digital personalities is the thought that people may only be getting two-thirds or less of themselves, because only the brain is being considered. But the data on it is not my specialty.
I see three issues with your argument, two that don't change anything meaningful and one that does.
The two minor points:
You presume a world without preference for human-made works. I posit this is a world that cannot exist under any circumstances where humans also exist. We are a species that pays a premium for art made by elephants and other animals, that shows off photos of our children and their accomplishments to people who we know don't care. We had pet rocks. The drive to value that which is valueless is, for whatever reason, deeply embedded in us. It is not going anywhere. More importantly, acknowledging this does not in any way...
Thanks for the confirmation on the navy. But you really, really need to check your data on transit costs.
People have already admitted doing this. The popups requesting authorization came too fast, so they stopped reading and just granted authority. This includes executives at AI companies. Again, that was last year, not in the future.