This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. Something as big as firing the CEO should have required at least a conversation with the Chair, if not the Chair's affirmative vote. The way this was handled was a big mistake. There need to be new rules in place to prevent big mistakes like this.
I appreciate this post. I did want to point out that Palantir is an appropriate descriptive name for what the company does. LOTR fans understand it.
I had a bit of a chuckle looking at the Pew Research Center results reporting that 27% of people age 18 to 29 thought that dating was easier today than it was 10 years ago. What percentage of people in that bracket were between 8 and 12 years old 10 years ago? 😆
Would a BCI-mediated hive mind just be hacked and controlled by AGIs? Why would human brains be better at controlling AGIs when directly hooked into computers than they are when indirectly hooked in by keyboards? I understand that BCIs theoretically would lead to faster communication between humans, but the communication would go in both directions. How would you know when the hive mind was being orchestrated by humans and when it was being manipulated by an AGI?
That would make a fun sci-fi novel
Could someone please point me toward some more recent posts about the war in Ukraine? Did folks on this platform lose interest? I still think it is pretty perplexing and would love to read some rational discussion about it.
I’m a fan of there being many experiments, though I might be biased by my background in meta-analysis. Many good experiments are, of course, better than many poorly designed and/or executed experiments, but replication is important even for good experiments. Even carefully controlled experiments have the potential for error. Having many experiments also usually provides a better test of the generalizability of the findings. Finally, having many experiments coming out of many different laboratories (independent of each other) increases confidence that the findings are not the result of the investigators’ preferences for what the results should be. If findings conflict, it might reflect poor study design and/or execution, or it might be that the field is missing something important about the truth.
This reminds me of mindfulness training. It also reminds me of behavioral chain awareness training and other methods of reducing automatic thoughts and behaviors that are taught in cognitive behavioral therapy for addictions. It’s great that you figured it out for yourself and shared what you learned in a way that may be accessible to many in this audience.
I’m new to this community, so I don’t know why you have DEACTIVATED in front of your name. I’m sad that you do, though, because it may mean that you have given up on this community. This post is so painfully lonely. I’m sorry if you didn’t find what you needed here. You have touched me with the loneliness in your writing. I also value other posts that you have made and would welcome hearing more of your thoughts. Maybe you won’t see this message, but I hope that, if you still have friends in this community, they are checking up on you.
I wonder how many AI experts hold back their thoughts because they remember what happened to Galileo when he argued that the Earth was not the center of the universe.
Thank you for your post. I’m new here and am, therefore, not permitted to upvote it, but I would, if I could.