If OP is trying to simulate a capable robot which Claude controls, then I think the benefit of the doubt should be pretty much non-existent. Even asking clarifying questions etc. should be out, in my opinion.
Position on the road and changing speed is a big one that not everyone notices. I have little faith in turn signals, given that people regularly fail to use them, and occasionally you see someone who has left their signal on but isn't turning. Usually a driver will slow down a bit to make a turn and shift their position on the road slightly, even a very subtle change (four inches one way or the other), often quite a long way ahead of the turn. I often notice it subconsciously rather than explicitly.
Your token system (and general approach) sounds a lot like Alpha School - is it influenced by them at all?
I found the claim that "Experts gave these methods a 40 percent chance of eventually enabling uploading..." very surprising, as I thought there were still some major issues with the preservation process, so I had a quick look at the study you linked.
From the study:
...For questions about the implications of static brain preservation for memory storage, we used aldehyde-stabilized cryopreservation (ASC) of a laboratory animal as a practical example of a preservation method that is thought to maintain ultrastructure with minimal distortions across the entire brain [24]. Additionally, we asked participants to imagine it was performed under ideal conditions and was technically successful, deliberately discarding the fact that procedural variation
The farmkind website you linked to is unable to provide a secure connection and both my browsers refuse to go to it. If you are involved in the setup of the site or know the people who are, it's worth trying to fix that.
I've been thinking about this mental shift recently using a toy example: a puzzle game I enjoy. The puzzle game is similar to sudoku, but involves a bit of simple mental math. The goal is to find all the numbers in the shortest time. Sometimes (rarely) I'm able to use just my quickest 2-3 methods for finding numbers and not have to use my slower, more mentally intensive methods. There's usually a moment in every game when I've probably found the low-hanging fruit but I'm tempted to re-check whether any of my quick methods can score me any more numbers, and I have to tell myself "OK, I have...
It sounds like April 1st acted as a sense-check for Claudius, prompting it to consider: "Am I behaving rationally? Has someone fooled me? Are some of my assumptions wrong?"
This kind of mistake seems to happen in the AI village too. I would not be surprised if future scaffolding attempts for agents include a periodic prompt to check current information and consider the hypothesis that a large and incorrect assumption has been made.
I think partly what you're running into is that we live in a postmodern age of storytelling. The classic fairytales where the wicked wolf dies (three little pigs)(red riding hood) or the knight gets the princess after bravely facing down the dragon (George and the dragon) are being subverted because people got bored of those stories, and they wanted to see a twist, so we get something like Shrek - The ogre is hired by the king to rescue the princess from the dragon, but ends up rescuing the princess from the king.
The original archetypes DID exist in stories, but they are rarely used today without some kind of twist. This has...
I wouldn't class most of the examples given in this post as stereotypical male action heroes.
Rambo was the first example I thought of, then most roles played by Jason Statham, Bruce Willis, Arnold Schwarzenegger or Will Smith. I also don't think the stereotype is completely emotionless, just violent, tough and motivated, capable of anything. Such characters tend to have fewer vulnerable moments and only cry when someone they love dies, or something similar. They don't cry when their plans suffer setbacks or when they're upset by an insult someone shouts at them, like normal people might. They certainly don't cry when they lose their keys, forget somebody's birthday, or feel pressure to do well in an exam.
Has anyone here had therapy to help handle thoughts of AI doom? How did it go? What challenges did you face explaining it or being taken seriously, and what kind of therapy worked, if any?
I went to a therapist for two sessions and received nothing but blank looks when I tried to explain what I was trying to process. I think it was very unfamiliar ground for them and they didn't know what to do with me. I'd like to try again, so if anyone here has guidance on what worked for them, I'd be interested.
I've also started basic meditation, which continues to be a little helpful.
I'm not sure if this is the right place to post, but where can I find details on the Petrov day event/website feature?
I don't want to sign up to participate if (for example) I am not going to be available during the time of the event, but I get selected to play a role.
Maybe the lack of information is intentional?
I feel that human intelligence is not the gold standard of general intelligence; rather, I've begun thinking of it as the *minimum viable general intelligence*.
On evolutionary timescales, virtually no time has elapsed since hominids began trading, using complex symbolic thinking, making art, hunting large animals, etc., and here we are, a blip later, in high technology. The moment we reached minimum viable general intelligence, we started accelerating to dominate our environment on a global scale, despite relatively meagre increases in intelligence within that time: evolution acts over much longer timescales and can't keep pace with our environment, which we're modifying at an ever-increasing rate.
Moravec's paradox suggests we are in fact highly adapted to the task of interacting with the physical world (as basically all animals are) and have some half-baked logical thinking systems tacked on to that base.
You asked for predictions at the start, so here was mine:
"I expect it to have some difficulty recognising everything in the pictures, and to miss approximately 1 step in the process (like not actually turning the kettle on). Ultimately I expect it to succeed in a sub-par way. 90% chance."
It did worse than I predicted.
The image recognition was significantly worse than I imagined, and Claude had to be helped along at most stages of the process. The transcript reads like someone with vision problems trying to guide you. Claude was mostly OK at constructing the series of actions needed for the actual act of making coffee, but had a...