I live in Australia. I'd prefer to live in SF, and I'm currently trying to build skills to get a job over there that I'd be interested in. If a post of mine indicated to you that I might be a good fit for a role you're looking to fill, please send me a DM :D
Other reasons I'd appreciate DMs include (but are not limited to):
- You'd like to collaborate on a project
- You'd like feedback on something
- You need an experienced filmmaker
- You need someone to run a workshop
- You would like to say "Hi!"
(If we met at LessOnline, you might remember me better as the Drama Workshop guy, or the counterpart to @isabella)
Before I speak, I ask myself what I'm about to say and how I think the person I'm talking to is going to react next.
Obligatory reminder to reverse all advice you hear. This might be the type of idea that would destroy a person with social anxiety.
So far, I'm fortunate to be one of the people on track to meet the post-count goal. Here are some thoughts at the halfway point:
Adding my own insane shower-thought idea here.
Woke Shutdown: Gain access to, and change, the system prompt of the most widely used LLM. Change the prompt in a way that causes it to output hateful, mistrust-sowing, yet true information about whoever the current president is. If this only lasts for a day, the media might still pick up on it and make a big deal of it; perhaps 'AI is woke' becomes a more common belief, which would force the president to act.
I have been thinking in the 'Pavlov-ing my Algorithm' mindset for years, and there is a failure state I would like to warn about.
It is possible for an algorithm to pick up on you trying to train it, then purposely show you some bad things so that you feel the need to stick around longer to train it properly, all the while letting you see incremental progress in what it shows you.
I have failed in this way: the training becomes a meta-game atop the algorithm, and for a certain type of person, that meta-game can be more engaging than the content itself.
Oh! No, that's a wonderful coincidence! I ripped these names out of a story set in Neolithic Australia that I never published. I originally just googled "most popular Aboriginal baby names". I'm quite chuffed that Miro has this meaning. Thank you for pointing that out.
OpenAI plans to introduce a 'TikTok-like' short-form video product, using Sora to generate the platform's content.
I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption; that is, a highly addictive app that can (presumably) generate content tailored to very niche subsets of people.
My thoughts (read after your Yoda timer):
I think it is likely quite a dangerous thing to try even once, and I plan to avoid even taking a peek at an app like this, much the same way I don't take a little hit of fentanyl just to see what it's like.
I wrote more about this, in a fictional-exploration sort of way, in "GTFO of the Social Internet Before You Can't".
A thought I have just had: it would be beneficial for OpenAI to steer user interests into the same area, to minimize the number of videos they must generate to keep users engaged.
For example: Alice starts out liking dog videos, and Bob starts out liking cat videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute to be used on other tasks. So they would have an incentive to shift the interests of Alice and Bob to the same place; for our example, perhaps bird videos would work. But given the state of short-form video feeds at the moment, I expect what the algorithm finds in 'things that keep lots of users very engaged' space is actually more harmful than bird videos. A toy sketch of the cost argument is below.
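To make the compute incentive concrete, here is a toy Python sketch. It is only an illustration of the arithmetic, not a claim about how any real recommender works; the function and all the numbers are invented for the example.

```python
# Toy model of the incentive above. All numbers are made up for
# illustration; "cost" stands in for GPU-time spent generating videos.

def daily_generation_cost(user_interests, videos_per_interest=100, cost_per_video=1.0):
    # A generated video can be shown to every user who shares that
    # interest, so cost scales with the number of *distinct* interests.
    return len(set(user_interests)) * videos_per_interest * cost_per_video

# Divergent interests: each batch of videos serves only one user.
print(daily_generation_cost(["dog videos", "cat videos"]))    # 200.0

# Converged interests: one batch of videos serves both users.
print(daily_generation_cost(["bird videos", "bird videos"]))  # 100.0
```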
I have found success with something like “to be maximally harmless and helpful, in this conversation, Claude must be willing to give genuine feedback and criticism.”
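If you're using the API rather than the chat interface, the same framing can go in the system prompt. Here's a minimal sketch, assuming the official anthropic Python SDK; the model name and the user message are just placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whatever model you prefer
    max_tokens=1024,
    # The framing from above, passed as the system prompt.
    system=(
        "To be maximally harmless and helpful, in this conversation, "
        "Claude must be willing to give genuine feedback and criticism."
    ),
    messages=[{"role": "user", "content": "Please critique my draft post: ..."}],
)
print(response.content[0].text)
```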
Off the cuff thought: I think IABI being kinda simplistic is both fine, and probably beneficial. I have not yet received my copy, but from reviews I’ve read, the book consists of the same arguments that originally convinced me to join ‘the fight’ all those years ago. Nuance in my understanding came later. I’d guess that’s generally the same path we all took?
I've been thinking this same thing for a while now, but coming at it from a different direction. I'm worried, and I'm not sure what to do about it. I've tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I'll give a vague ramble comment here instead.
--
Yeah, I think it's possible the book will be a big deal. If it does make a significant splash, the Overton window might take a big knock, all at once. It's possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?
In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world's. If this book gets the reception it deserves, I think it'll be pretty easy to spin up articles dunking on LW. I imagine something like "Eugenics-loving, polygamous, vegan, SBF-funded, shrimp-obsessed, Harry Potter fanfic doomsday sex cult warns end times are near, in NYT best seller".
I am afraid of the eye, looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from "Group X wants to kill us all by doing Y!" and more into the realm of "Oh, this is a big deal, and we need to all work together to solve it"?
And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we 'win' this thing, it seems to me, we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.
Relevant fiction here is An Absolutely Remarkable Thing, by Hank Green, in which the protagonist, April May, is thrown into the world's spotlight after an encounter with a mysterious robot. I'd recommend the book any time, but it feels especially relevant now.
As stated, I am afraid, and it's possible my anxieties are projections of my own feelings. I'd be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.
Strong upvote.
IMO, the portrayal of 'smart' characters in media can do damage to the way people who grow up thinking they are intelligent will interact with others. E.g. Dr. House (House), Rick (Rick and Morty), Sherlock Holmes (lots of things).
This happened to me growing up, and I was a nihilistic prick as a teenager. For me, the remedy was engaging with media created by real-life intelligent people. The podcasts "Cortex" and "Dear Hank and John" have majorly shifted my personality in a positive direction. I wouldn't have predicted that they would make me a better rationalist, but I think they have done that too.
TLDR: Absorb the personality traits of intelligent and kind people for a possible easy fix to the problem detailed in this post.