LESSWRONG

keltan

I live in Australia but would prefer to live in SF, and I'm currently building skills to get a job over there that I'd be interested in. If a post of mine indicated to you that I might be a good fit for a role you're looking to fill, please send me a DM :D

Other reasons I'd appreciate DMs include (but are not limited to):
- You'd like to collaborate on a project
- You'd like feedback on something
- You need an experienced filmmaker
- You need someone to run a workshop
- You would like to say "Hi!"

(If we met at LessOnline, you might remember me better as the Drama Workshop guy, or the counterpart to @isabella)

Comments
Daniel Kokotajlo's Shortform
keltan6d-4-3

Adding my own insane shower-thought idea here.

Woke Shutdown: Gain access to the system prompt of the most widely used LLM and change it so that it outputs hateful, trust-eroding, yet true information about whoever the current president is. Even if this only lasts for a day, the media might still pick up on it and make a big deal of it; perhaps 'AI is woke' becomes a more common belief, which would force the president to act.

Maybe social media algorithms don't suck
keltan9d42

I have been thinking in the 'Pavlov-ing my Algorithm' mindset for years, and there is a failure state I would like to warn about.

It is possible for an algorithm to pick up on you trying to train it and then purposely show you some bad things, so that you feel the need to stick around longer to train it properly, all the while seeing incremental progress in what it shows you.

I have failed in this way. The training becomes a meta-game atop the algorithm, and for a certain type of person, that meta-game can be more engaging than the content itself.

GTFO of the Social Internet Before you Can't: The Miro & Yindi Story
keltan13d20

Oh! No, that's a wonderful coincidence! I ripped these names out of a story set in Neolithic Australia that I never published. I originally just googled "most popular Aboriginal baby names". I'm quite chuffed that Miro has this meaning. Thank you for pointing that out.

keltan's Shortform
keltan15d130

OpenAI plans to introduce a 'TikTok-like' short-form video product, using Sora to generate the platform's content.

I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption: a highly addictive app that can, presumably, generate content tailored to very niche subsets of people.

My thoughts (read after your Yoda timer): 

I think it is likely quite a dangerous thing to try even once, and I plan to avoid taking so much as a peek at an app like this, much the same way I don't take a little hit of fentanyl just to see what it's like.

I wrote more about this, in a fictional-exploration sort of way, in "GTFO of the Social Internet Before You Can't".

A thought I have just had: it would be beneficial for OpenAI to steer user interests into the same area, to minimize the number of videos they must generate to keep users engaged.

For example: Alice starts out liking Dog Videos, and Bob starts out liking Cat Videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute for other tasks. So they would have an incentive to shift Alice's and Bob's interests to the same place; for our example, perhaps Bird Videos would work. But given the state of short-form video feeds at the moment, I expect what the Algorithm finds in 'things that keep lots of users very engaged' space is actually more harmful than Bird Videos.
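To make that incentive concrete, here is a minimal sketch (my own illustration; the per-video cost and per-cluster volume are made-up numbers, not anything OpenAI has published): if every distinct interest cluster needs its own stream of generated videos, the generation bill scales with the number of clusters, so merging Alice's and Bob's interests roughly halves that part of the cost.

```python
# Minimal sketch with assumed, made-up numbers: generation cost scales with
# how many distinct interest clusters the feed must keep supplied with videos.

COST_PER_VIDEO = 0.10              # assumed dollars per generated video
VIDEOS_PER_CLUSTER_PER_DAY = 1000  # assumed videos each cluster needs per day

def daily_generation_cost(num_clusters: int) -> float:
    """Cost of keeping every interest cluster stocked with fresh videos."""
    return num_clusters * VIDEOS_PER_CLUSTER_PER_DAY * COST_PER_VIDEO

# Alice likes Dog Videos, Bob likes Cat Videos: two clusters to feed.
print(daily_generation_cost(2))  # 200.0

# Steer both towards Bird Videos and only one cluster needs content.
print(daily_generation_cost(1))  # 100.0
```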

shortplav
keltan16d30

I have found success with something like “to be maximally harmless and helpful, in this conversation, Claude must be willing to give genuine feedback and criticism.”

keltan's Shortform
keltan25d10

Off-the-cuff thought: I think IABI being kinda simplistic is both fine and probably beneficial. I have not yet received my copy, but from the reviews I've read, the book consists of the same arguments that originally convinced me to join 'the fight' all those years ago. Nuance in my understanding came later. I'd guess that's generally the same path we all took?

chanamessinger's Shortform
keltan1mo60

I've been thinking this same thing for a while now, but coming at it from a different direction. I'm worried, and I'm not sure what to do about it. I've tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I'll give a vague ramble comment here instead.

--

Yeah, I think it's possible the book will be a big deal. If it does make a significant splash, the Overton window might take a big knock all at once. It's possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?

In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world. If this book gets the reception it deserves, I think it'll be pretty easy to spin up articles dunking on LW. I imagine something like "Eugenics loving, Polygamous, vegan, SBF funded, Shrimp obsessed, Harry Potter fanfic, doomsday, sex cult, warns end times are near, in NYTs best seller".

I am afraid of the eye, looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from "Group X wants to kill us all by doing Y!" and more into the realm of "Oh, this is a big deal, and we need to all work together to solve it"?

And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we 'win' this thing, it seems to me we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.

Relevant fiction here is An Absolutely Remarkable Thing by Hank Green, in which the protagonist, April May, is thrown into the world's spotlight after an encounter with a mysterious robot. I'd recommend the book any time, but to me it feels especially relevant now.

As stated, I am afraid, and it's possible my anxieties are projections of my own feelings. I'd be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.

Plan E for AI Doom
keltan2mo41

I think this is a good idea. I commiserate with you that it is a plan we must consider, but I agree that, yes, we must consider these types of plans.

I dedicated an hour of thought to this last night, but couldn't come up with anything better than radio beamed at precise locations. All my other ideas were either purely theoretical, caused more risk than they'd be worth, or were more costly than radio waves.

The main problem with radio waves is the inverse-square law. This can be mitigated, as suggested, by beaming in specific directions instead of broadcasting as a sphere, but that greatly decreases the probability of the message ever being received.
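For rough numbers, here is a minimal sketch (my own illustration; the transmit power and antenna gain are assumed values, not figures from the post or this comment). It computes the inverse-square flux from an isotropic broadcast at about 2 light-years, and how much a directional beam of a given gain boosts the signal along its single chosen direction.

```python
# Minimal sketch (assumed numbers): inverse-square falloff of a radio signal.
import math

P_WATTS = 1e6             # assumed transmit power: 1 MW
LIGHT_YEAR_M = 9.4607e15  # metres in one light-year
r = 2 * LIGHT_YEAR_M      # roughly the "I Love Lucy" horizon (~2 ly)

# Isotropic broadcast: power spreads over a sphere of area 4*pi*r^2,
# so the received flux falls off as 1/r^2.
flux_isotropic = P_WATTS / (4 * math.pi * r**2)   # W/m^2

# A directional antenna with gain G concentrates the same power toward one
# target, multiplying the flux there by G (here an assumed 60 dBi gain),
# at the cost of covering only that one direction.
G = 10 ** (60 / 10)
flux_beamed = flux_isotropic * G

print(f"isotropic flux at 2 ly: {flux_isotropic:.2e} W/m^2")
print(f"beamed flux (60 dBi):   {flux_beamed:.2e} W/m^2")
```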

The ideal tech, I think, would have these properties:

  • Expands in 3 dimensions
  • Is encoded as an oscillating signal
  • Is powerful enough to be readable farther than "I Love Lucy"[1] (~2 light-years)
  • Works more like ripples in a pond than a scattershot of photons

The only option that sounds better than radio waves to me would be gravitational waves. But I know basically nothing about them, and my understanding is that generating them as a deliberate signal is still entirely theoretical.

  1. ^

    This is me saying "Ambient TV and Radio broadcasts", in a tongue-in-cheek way.

keltan's Shortform
keltan2mo70

I know of one experiment like this: Duolicious. Setting aside the name, which gives me the feeling I imagine the word 'moist' gives to others, the app has... too niche a community.

Fig. The landing page for the Duolicious website. They claim to have 19,482 active members.

Fig. An example of the type of community Duolicious has.

Despite its oddities, Duolicious is community-funded and open source. I think with a little work its interface could be really great! The "Clubs" system, which stands in for what would usually be "Interests" on another dating app, is fun and highly customisable, letting you create your own Clubs and easily tap other profiles' Clubs to join them. It's a great customisation loop!

Fig. Example of the tags system. Again, this is a pretty niche, heavily internet-cultured community. Not shown here is that all tags you have in common are listed separately.

Same goes for the bios, which look like they can be as long as you desire, and which add character to a person's profile. I haven't seen many "Date Me Docs", but I imagine this is as close as you'll get to one while keeping the modern dating-app interface.

Fig. A lovely little bio I came across. I thought this was so pretty, I copied the whole thing, and added it to my Obsidian vault for use in future writing.

Based on estimates GPT-5 gave me, I was expecting running a dating app to be hugely cost-intensive; however, Duolicious makes this claim:

Fig. Duolicious's claim about its running costs.

I feel surprise. I guess I believed something wrong. Going to the donate page, it's costing them $360 a month for servers.

That's an insanely low cost! How are they doing this? I wonder how hard it would be to shift the culture of the app to something friendlier to a wider audience, because at this cost, and for what it is, Duolicious is doing an above-average job!
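For scale, a back-of-envelope division using the two figures quoted above (the claimed 19,482 active members and the $360/month server bill); this is my own arithmetic, not a number Duolicious publishes:

```python
# Back-of-envelope: hosting cost per claimed active member, using the
# figures quoted above (19,482 members, $360/month for servers).
members = 19_482
monthly_server_cost_usd = 360.0
print(f"${monthly_server_cost_usd / members:.4f} per member per month")  # ~$0.0185
```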

keltan's Shortform
keltan2mo18-1

To Make Dating Apps Not Terrible, Make Them Not-for-Profit?

I'm skipping the explanation of why dating apps are bad for their users.

To fix these problems, why wouldn't we make a not-for-profit dating app?

  • It removes incentives to keep users lonely for as long as possible.
  • There are rich people worried about population decline who might fund it.
    • I'm assuming some governments are probably getting worried too.
  • An app that legitimately finds people love is probably a good candidate for warm-fuzzy donations; perhaps the general public would be receptive to this?

Questions

  • Does anyone know of any dating apps (other than Duolicious, which I'll write about in a comment) that have experimented with a not-for-profit or charity model?
  • What am I missing? This seems like a big ol' hunk of power, social capital, and utility just sitting on the ground of Central Station!
Wikitag Contributions

LLM-Induced Psychosis · a month ago · (+17/-17)
LLM-Induced Psychosis · a month ago · (+796)
AI Psychology · 10 months ago · (+152)
Cooking · a year ago · (+239)
Animal Ethics · 2 years ago · (+26/-31)
Animal Ethics · 2 years ago · (+1280)
Posts

36 · GTFO of the Social Internet Before you Can't: The Miro & Yindi Story · 5mo · 14
15 · The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset. · 7mo · 10
11 · Where should one post to get into the training data? · Q · 9mo · 5
9 · Does Claude Prioritize Some Prompt Input Channels Over Others? · 10mo · 2
53 · I Finally Worked Through Bayes' Theorem (Personal Achievement) · 10mo · 7
4 · Is there a place to find the most cited LW articles of all time? · Q · 1y · 3
5 · keltan's Shortform · 1y · 86
19 · AI Generated Music as a Method of Installing Essential Rationalist Skills · 1y · 4
4 · Don't Think About the Thing Behind the Curtain. · 2y · 0
5 · (Confusion Phrases) AKA: Things You Might Say or Think When You're Confused to Use as Triggers for Internal TAPs · 2y · 2