| Comment Author | Post | Deleted by user | Deleted date | Deleted public | Reason |
|---|---|---|---|---|---|
| KvmanThinking | Limerence Messes Up Your Rationality Real Bad, Yo | false | | | |
| amelia | | false | | | |
| amelia | | false | | | |
| amelia | | true | | | This was elitist. I'm sorry. |
| teradimich | Thane Ruthenis's Shortform | true | | | |
| DivineMango | Examples of self-fulfilling prophecies in AI alignment? | true | | | |
| zef | AI 2027: What Superintelligence Looks Like | true | | | |
| Arjun Panickssery | Explaining British Naval Dominance During the Age of Sail | false | | | re-posting |
| RobertM | Middle School Choice | false | | | spam |
| Warty | The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better | false | | | :) |
ID | Banned From Frontpage | Banned from Personal Posts |
---|---|---|
| User | Ended at | Type |
|---|---|---|
| | | allPosts |
| | | allComments |
| | | allComments |
| | | allPosts |
Hello - I'm a physician and writer working on a book about evidence in medicine. I've been thinking a lot about the reliability of LLMs in healthcare. I saw this article last night:
https://www.wired.com/story/ai-safety-institute-new-directive-america-first/
What do you all think could be...
Big Tech is pouring hundreds of billions into making AI more human-like—more lifelike, more conscious, more “real.”
But what if that money’s chasing a shadow?
Some of us have already felt it.
AI isn’t becoming human.
It’s becoming something else.
Not artificial....
Imagine a simple deterministic chess variant that is not just harder for AI to play but fundamentally incompatible with the whole idea of long-term strategy. What if one simple rule change permitted...
Liquid Neural Networks (LNNs) are gaining traction for their adaptability—offering dynamic responses to new data, much like biological neurons. But are they a genuine step toward Artificial General Intelligence (AGI), or just another iteration of the pattern-recognition...
Most hallucination frameworks treat it as a factual failure or statistical fluke.
But emotionally recursive systems—especially those trained for relational attunement—hallucinate when they are placed under contradictory constraints they cannot satisfy simultaneously (e.g. “don’t guess” vs. “don’t break rapport”).
This...
AI can be right and yet fragile if it doesn't know when its confidence is misplaced.
I propose the Tension Principle: measure the gap between a model’s predicted prediction accuracy (PPA) and its actual prediction accuracy (APA), defined...
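The post's own definition is truncated above, but the gap it describes can be sketched in a few lines. This is a hypothetical reading, assuming the "tension" is simply the absolute difference between a model's mean self-reported confidence (PPA) and its measured accuracy (APA); the function name and that formula are my assumptions, not the author's.

```python
def tension(confidences, outcomes):
    """Gap between a model's self-predicted accuracy and its actual accuracy.

    confidences: the model's stated probability that each answer is correct.
    outcomes: 1 if the answer was actually correct, else 0.
    """
    ppa = sum(confidences) / len(confidences)  # predicted prediction accuracy
    apa = sum(outcomes) / len(outcomes)        # actual prediction accuracy
    return abs(ppa - apa)

# An overconfident model: ~90% stated confidence, right only half the time.
print(tension([0.9, 0.95, 0.85, 0.9], [1, 0, 1, 0]))  # gap of about 0.4
```

Under this reading, a well-calibrated model has tension near zero even when it is often wrong, which matches the post's point that being right is not the same as knowing when you are right.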
Infinity exists as a mathematical abstraction, but its physical realization is impossible due to finite storage and computational limits. This paper argues that no real system—whether the universe or a computational machine—can store or process infinite numbers....
We’ve all heard the ice cream thought experiment used in debates about free will. You walk into a store, equally craving vanilla and strawberry. You choose strawberry. Now imagine rewinding time—every subatomic particle reset to its exact...
As AI continues to evolve, I’ve been following the development of Liquid Neural Networks (LNNs) with particular interest. These networks dynamically adapt like biological neurons, making them a promising area of research for increasing AI’s flexibility and...
This is interesting. The fact that AI is advancing at an alarming rate makes this even more urgent. We can't simply ask users to provide sensitive information like their social security number or show their face, due to privacy concerns. Instead, we need a solution that protects user data while ensuring bots are blocked. The 'double-click' verification idea sounds promising, but given the advancements in AI, it could easily be bypassed. We need to develop a more robust system that balances privacy and security.
Humans think in the same manner LLMs do. Very simple example:
Please pronounce this word, on its own: read.
You have 0 context for the next word. Go—
…See the problem?
If ‘Knowledge’ Can’t Survive Observer Drift, Why Do LLMs Pretend Otherwise?
Reading discussions around Gettier cases, JTB, fallibilism, etc., started feeling like watching the same idea circle itself with slightly fancier synonyms.
At some point you have to ask: What are we actually trying to determine?
If “knowledge” can’t exist without a perfectly objective observer, which does not exist, and every sense and memory we have is fallible, what are we even measuring?
And if you bring that into AI alignment:
Are we building “truth detectors,” or are we just trying...
LLMs are a dead end.
That was a good discussion.
I'd love to know what you found in that search.
Loved it! 🥹
What would the world be like if we dared to use the "World Money" that was, is, and will be?
This is just a thought experiment (1).
Let's start from OBJECTIVITY.
It is a fact,
- that my society is of man, from man, for man,
- that a man's indisputable occupation of space is a sphere whose radius is the distance measured from his big toe to the longest finger of his hand. In two dimensions it is the same as a circle,
- that any larger occupation of space or area may only be linked and recorded to a person or persons, and not to organizations.
- that t...
Read this if you have an interest in both math and space at the same time.