I live in Australia. I'd prefer to live in SF, and am currently building skills to land a job over there that I'd be interested in. If a post of mine indicated to you that I might be a good fit for a role you're looking to fill, please send me a DM :D
Other reasons I'd appreciate DMs include (but are not limited to):
- You'd like to collaborate on a project
- You'd like feedback on something
- You need an experienced filmmaker
- You need someone to run a workshop
- You would like to say "Hi!"
(If we met at LessOnline, you might remember me better as the Drama Workshop guy, or the counterpart to @isabella)
I have been thinking in the 'Pavlov-ing my Algorithm' mindset for years, and there is a failure state I would like to warn about.
It is possible for an algorithm to pick up on the fact that you are trying to train it, then purposely show you some bad things so that you feel the need to stick around longer to train it properly, all the while showing you just enough incremental progress to keep you going.
I have failed in this way. The training becomes a meta-game atop the algorithm, and for a certain type of person, that meta-game can be more engaging than the content itself.
Oh! No, that's a wonderful coincidence! I ripped these names out of a story set in Neolithic Australia that I never published. I originally just googled "most popular aboriginal baby names". I'm quite chuffed that Miro has this meaning. Thank you for pointing that out.
OpenAI plans to introduce a 'TikTok-like' short-form video product, using Sora to generate the platform's content.
I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption: that is, a highly addictive app that can (presumably) generate content tailored to very niche subsets of people.
My thoughts (read after your Yoda Timer):
I think it is likely quite a dangerous thing to try even once, and I plan to avoid even taking a peek at an app like this, much the same way I don't take a little hit of fent just to see what it's like.
I wrote more about this, in a fiction-exploration type of way, in "GTFO of the Social Internet Before You Can't".
A thought I have just had: it would be beneficial for OpenAI to steer user interests into the same area, for the purpose of minimizing the number of videos they must generate to keep users engaged.
For example: Alice starts out liking Dog Videos, and Bob starts out liking Cat Videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute to be used on other tasks. So they would have an incentive to shift the interests of Alice and Bob to the same place; for our example, perhaps Bird Videos would work. But given the state of short-form video feeds atm, I expect what the Algorithm finds in 'things that keep lots of users very engaged' space is actually more harmful than Bird Videos.
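To make the incentive legible, here's a toy sketch (entirely my own construction; the cost model is a stand-in, not anything OpenAI has published) where the platform's generation bill scales with the number of distinct interest pools it must keep stocked:

```python
# Toy model: the platform pays one unit per distinct interest pool it
# must keep filled with freshly generated videos. Converging users'
# interests shrinks the number of pools, and so the generation bill.

def generation_cost(user_interests: list[str]) -> int:
    """Cost proxy: one unit per distinct interest pool to generate for."""
    return len(set(user_interests))

# Before steering: Alice likes Dog Videos, Bob likes Cat Videos.
print(generation_cost(["dogs", "cats"]))    # -> 2 (two pools to fill)

# After steering both toward a shared interest (Bird Videos, say):
print(generation_cost(["birds", "birds"]))  # -> 1 (one pool serves both)
```

In this toy, halving the number of distinct pools halves generation cost. Real feeds would be messier, but the gradient points the same way.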
I have found success with something like “to be maximally harmless and helpful, in this conversation, Claude must be willing to give genuine feedback and criticism.”
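If you'd rather bake this in than type it each time, here's a minimal sketch assuming the official `anthropic` Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` in your environment; the model name is a placeholder, substitute whichever you use:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The instruction goes in the system prompt, so it frames the whole conversation.
FEEDBACK_SYSTEM = (
    "To be maximally harmless and helpful, in this conversation, "
    "Claude must be willing to give genuine feedback and criticism."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use whichever model you prefer
    max_tokens=1024,
    system=FEEDBACK_SYSTEM,
    messages=[{"role": "user", "content": "Here's my draft. What's weak about it?"}],
)
print(response.content[0].text)
```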
Off-the-cuff thought: I think IABI being kinda simplistic is both fine and probably beneficial. I have not yet received my copy, but from reviews I've read, the book consists of the same arguments that originally convinced me to join 'the fight' all those years ago. Nuance in my understanding came later. I'd guess that's generally the same path we all took?
I've been thinking this same thing for a while now, but coming at it from a different direction. I'm worried, and I'm not sure what to do about it. I've tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I'll give a vague ramble comment here instead.
--
Yeah, I think it's possible the book will be a big deal. If it does make a significant splash, the Overton window might take a big knock, all at once. It's possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?
In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world. If this book gets the reception it deserves, I think it'll be pretty easy to spin up articles dunking on LW. I imagine something like "Eugenics loving, Polygamous, vegan, SBF funded, Shrimp obsessed, Harry Potter fanfic, doomsday, sex cult, warns end times are near, in NYTs best seller".
I am afraid of the eye, looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from "Group X wants to kill us all by doing Y!" and more into the realm of "Oh, this is a big deal, and we need to all work together to solve it"?
And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we 'win' this thing, it seems to me we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.
Relevant fiction here is An Absolutely Remarkable Thing, by Hank Green, in which the protagonist, April May, is thrown into the spotlight of the world after an encounter with a mysterious robot. I'd recommend the book any time, but to me, it feels especially relevant now.
As stated, I am afraid, and it's possible my anxieties are projections of my own feelings. I'd be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.
I think this is a good idea. I commiserate with you that it is a plan we must consider, but agree that, yes, we must consider these types of plans.
I dedicated an hour of thought to this last night, but couldn't come up with anything better than radio beamed at precise locations. All my other ideas were either purely theoretical, caused more risk than they'd be worth, or were more costly than radio waves.
The main problem with radio waves is the inverse-square law. This can be mitigated, as suggested, by beaming in certain directions instead of broadcasting as a sphere. But that greatly decreases the probability of the message ever being received.
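To put numbers on that trade-off (this is just standard link-budget physics, not specific to any proposal here): a transmitter of power $P_t$ radiating isotropically spreads its energy over a sphere, while an antenna with gain $G$ concentrates it into a narrow beam:

$$S_{\text{iso}} = \frac{P_t}{4\pi d^2}, \qquad S_{\text{beam}} = \frac{G\,P_t}{4\pi d^2}$$

where $S$ is the flux arriving at distance $d$. Doubling the distance quarters the flux either way; gain buys back range, but only for receivers that happen to sit inside the beam, which is exactly the coverage-versus-reach trade-off above.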
The ideal tech, I think, would have these properties:
The only option that sounds better than radio waves to me would be gravitational waves. But I know basically nothing about them, and my understanding is that deliberately generating them is still entirely theoretical.
This is me saying "Ambient TV and Radio broadcasts" in a tongue-in-cheek way.
I know of one experiment like this: Duolicious. But setting aside the name, which gives me the feeling I imagine the word 'moist' gives to others, the app has... too niche a community.
Fig. The landing page for the Duolicious website. They claim to have 19,482 active members.
Fig. An example of the type of community Duolicious has.
Despite its oddities, Duolicious is community funded and open source. I think with a little work, its interface could be really great! The "Clubs" system, which stands in for what would usually be "Interests" on another dating app, is fun and highly customisable, letting you create your own Clubs and easily tap other profiles' Clubs to join them. It's a great customization loop!
Fig. Example of the tags system. Again, this is a pretty niche, heavily internet-cultured community. Not shown here: all tags you have in common are listed separately.
Same goes for the Bios, which look like they can be as long as you desire, and they add character to a person's profile. I haven't seen many "Date Me Docs", but I imagine this is as close as you'll get to one while keeping the modern dating app interface.
Fig. A lovely little bio I came across. I thought this was so pretty, I copied the whole thing, and added it to my Obsidian vault for use in future writing.
Based on estimates GPT-5 gave me, I was expecting running a dating app to be hugely cost-intensive; however, Duolicious makes this claim:
I feel surprise. I guess I believed something wrong. Going to the donate page, it's costing them $360 a month for servers.
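Taking their landing page's 19,482 active members at face value, that works out to:

$$\frac{\$360\ \text{per month}}{19{,}482\ \text{members}} \approx \$0.018\ \text{per member per month}$$

Call it about two cents a head.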
That's an insanely low cost! How are they doing it? I wonder how hard it would be to shift the culture of the app to something friendlier to a wider audience. Because at this cost, and for what it is, Duolicious is doing an above-average job!
I'm skipping the explanation of why dating apps are bad for their users.
To fix these problems, why wouldn't we make a not-for-profit dating app?
Questions
Adding my own, insane, shower-thought idea here.
Woke Shutdown: Gain access to, and change, the system prompt of the most widely used LLM. Change the prompt in a way that causes it to output hateful, mistrustful, yet true information about whoever the current president is. Even if this only lasts for a day, the media might still pick up on it and make a big deal; perhaps 'AI is woke' becomes a more common belief, which would force the president to act.