keltan

I live in Australia; I'd prefer to live in SF, and am currently trying to build skills to get a job there that I'd be interested in. If a post of mine indicated to you that I might be a good fit for a role you're looking to fill, please send me a DM :D

Other reasons I'd appreciate DMs include (but are not limited to):
- You'd like to collaborate on a project
- You'd like feedback on something
- You need an experienced filmmaker
- You need someone to run a workshop
- You would like to say "Hi!"

(If we met at LessOnline, you might remember me better as the Drama Workshop guy, or the counterpart to @isabella)

Comments (sorted by newest)
Lack of Social Grace is a Lack of Skill
keltan · 14d · 40

Strong upvote.

IMO, the portrayal of 'smart' characters in media can damage the way that people who grow up thinking they are intelligent interact with others. E.g. Dr House (House), Rick (Rick and Morty), Sherlock Holmes (lots of things).

This happened to me growing up, and I was a nihilistic prick as a teenager. For me, the remedy was engaging with media created by real-life intelligent people. The podcasts "Cortex" and "Dear Hank and John" have majorly shifted my personality in a positive direction. I wouldn't have predicted that they would make me a better rationalist, but I think they have done that too.

TLDR: Absorb the personality traits of intelligent and kind people, for a possible easy fix to the problem detailed in this post.

Lack of Social Grace is a Lack of Skill
keltan · 14d · 62

> before I speak I ask myself what I’m about to say and how I think the person I’m talking to is going to react next.

Obligatory reminder to reverse all advice you hear. This might be the type of idea that would destroy a person with social anxiety.

Halfhaven halftime
keltan · 15d · 130

So far, I'm fortunate to be one of the people on track to meet the post number goal. Here are some thoughts at the halfway point:

  1. FUUUUUUUUUUUCKKKKKKK
  2. Writing every day is hard already; writing something I'm ok with putting on the internet is much harder.
  3. There are other parts of my life that have taken large hits because of this project. I had an excellent Anki streak, which I have since lost and have been struggling to get back into. Perhaps I just need More Dakka.
  4. I 'feel' like I'm not that much better a writer in the sense of 'my posts feel worthwhile for others to read', but I do 'feel' I have gotten better at communicating ideas clearly and spotting mistakes in my communication.
  5. I think I have gotten better at looking at the world and finding things to write about.
  6. I have discovered a deep love for this thing ;
  7. I kinda 'feel' more dumb. Perhaps this is downstream of other things in my life, and of the fact that much of my studying time has been replaced with writing; I've also had to listen to a lot more music to have time to come up with ideas, which has decreased podcast time. But I feel a bit depressed, and like I'm more of an idiot than when I started.
  8. I haven't read as many of the other contributors' posts as I would have liked to, due to this and all my other projects eating up all of my time.
  9. I have used the same keyboard for the last 5 years, but since starting this, my mistakes per line have dropped off significantly. (That does not mean misspelling, which I still struggle with.)
  10. When I started this, I planned to make many LW posts among my Substack posts, though I have made no LW posts. This is because I think the quality that 2 days allows me to write is much too low. I just don't have the Gs to be outputting things I consider LW quality in that time frame. This is a problem specific to me, I think.
  11. YouTube ambient music is going down the drain fast, as AI slop takes over the algorithm and most people don't notice. This is relevant because I basically require ambient music to inspire my writing.
  12. I think Halfhaven was a great idea. I am grateful that it exists. I don't think I would have made so many posts without the fear of publicly not following up on something I set out to do.
  13. New love is wonderful kindling for ideas, even if your ideas don't seem to relate to love at all. This may be a personal thing, but I find that writing down the things I 'wish I had said' has produced the best writing projects I have created this month.
  14. I have run a few experiments and collected data in an attempt at various blog posts. But running and assessing that data takes a lot of time, and often doesn't lead me to 500 words without padding. So I have not published any new research in this time.
  15. I think I am writing much better 'first drafts' than I did before this. I think that has something to do with the time pressure.
  16. I do not yet know if I will continue to post a blog post every two days after this ends. It is a habit I think is net positive for me, but it clearly has some destructive costs.
Daniel Kokotajlo's Shortform
keltan · 1mo · -4 · -3

Adding my own, insane, shower-thought idea here.

Woke Shutdown: Gain access to, and change, the system prompt of the most widely used LLM. Change the prompt in a way that causes it to output hateful, mistrustful, yet true information about whoever the current president is. Even if this only lasts for a day, the media might still pick up on it and make a big deal of it; perhaps 'AI is woke' becomes a more common belief, which would force the president to act.

Maybe social media algorithms don't suck
keltan · 1mo · 52

I have been thinking in the 'Pavlov-ing my Algorithm' mindset for years, and there is a failure state I would like to warn about.

It is possible for an algorithm to pick up on you trying to train it, then purposely show you some bad things so that you feel the need to stick around longer to train it properly, all while you see incremental progress in what the algorithm is showing you.

I have failed in this way: the training becomes a meta-game atop the algorithm, and for a certain type of person, that meta-game can be more engaging than the content itself.

GTFO of the Social Internet Before you Can't: The Miro & Yindi Story
keltan · 2mo · 20

Oh! No, that's a wonderful coincidence! I ripped these names out of a story set in Neolithic Australia that I never published. I originally just googled "most popular Aboriginal baby names". I'm quite chuffed that Miro has this meaning. Thank you for pointing that out.

keltan's Shortform
keltan · 2mo · 130

OpenAI plans to introduce a 'TikTok-like' short-form video product, using Sora to generate the platform's content.

I would like to encourage people to set a Yoda Timer and think about their personal policy when it comes to this type of algorithmic consumption; that is, a highly addictive app that can, presumably, generate content tailored to very niche subsets of people.

My thoughts (read after your Yoda timer): 

I think it is likely quite a dangerous thing to try even once, and I plan to avoid even taking a peek at an app like this, much the same way I don't take a little hit of fent just to see what it's like.

I wrote more about this, in a fictional-exploration type of way, in "GTFO of the Social Internet Before You Can't".

A thought I have just had now is that it would be beneficial for OpenAI to steer user interests into the same area, for the purpose of minimizing the number of videos they must generate to keep users engaged.

For example: Alice starts out liking dog videos, and Bob starts out liking cat videos. It would be cheaper for OpenAI if Alice and Bob liked the same type of videos, and it would free up compute to be used on other tasks. So, they would have an incentive to shift the interests of Alice and Bob to the same place (for our example, perhaps bird videos would work). But, given the state of short-form video feeds at the moment, I expect what the algorithm finds in 'things that keep lots of users very engaged' space is actually more harmful than bird videos.
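To make that incentive concrete, here is a minimal sketch of the cost argument, under the simplest possible sharing assumption (a generated video can be reshown to every user in the same interest niche); the numbers and niche names are made up for illustration:

```python
# Toy cost model (all numbers invented for illustration): each user needs
# `videos_per_user` fresh videos per day within their interest niche, and a
# generated video can be reshown to every user who shares that niche.

videos_per_user = 100

def daily_generation_cost(user_niches: dict[str, str]) -> int:
    """Videos to generate per day under full within-niche sharing."""
    distinct_niches = set(user_niches.values())
    return videos_per_user * len(distinct_niches)

separate_interests = {"Alice": "dogs", "Bob": "cats"}
converged_interests = {"Alice": "birds", "Bob": "birds"}

print(daily_generation_cost(separate_interests))   # 200: no video serves both users
print(daily_generation_cost(converged_interests))  # 100: every video serves both users
```

Under this toy assumption, converging two users onto one niche halves the number of videos that have to be generated, which is the pressure toward shared interests described above.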

shortplav
keltan · 2mo · 30

I have found success with something like “to be maximally harmless and helpful, in this conversation, Claude must be willing to give genuine feedback and criticism.”
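For anyone who wants to try the same framing outside the chat interface, here is a minimal sketch of putting that sentence into the system prompt via the Anthropic Python SDK; the model ID, max_tokens, and user message are placeholder choices of mine, not something from the comment above:

```python
# pip install anthropic
# A sketch, not a claim about what the original commenter did.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

SYSTEM_PROMPT = (
    "To be maximally harmless and helpful, in this conversation, "
    "Claude must be willing to give genuine feedback and criticism."
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model ID
    max_tokens=500,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Give me honest criticism of this draft: ..."}],
)

print(response.content[0].text)
```

Whether the phrasing works as well in a system prompt as it does typed into a conversation is an open question; this only shows where such an instruction would live.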

keltan's Shortform
keltan · 2mo · 10

Off-the-cuff thought: I think IABI being kinda simplistic is both fine and probably beneficial. I have not yet received my copy, but from reviews I’ve read, the book consists of the same arguments that originally convinced me to join ‘the fight’ all those years ago. Nuance in my understanding came later. I’d guess that’s generally the same path we all took?

chanamessinger's Shortform
keltan · 2mo · 60

I've been thinking this same thing for a while now, but coming at it from a different direction. I'm worried, and I'm not sure what to do about it. I've tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I'll give a vague ramble comment here instead.

--

Yeah, I think it's possible the book will be a big deal. If it does make a significant splash, the Overton window might take a big knock, all at once. It's possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?

In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world's. If this book gets the reception it deserves, I think it'll be pretty easy to spin up articles dunking on LW. I imagine something like "Eugenics-loving, polygamous, vegan, SBF-funded, shrimp-obsessed, Harry Potter fanfic, doomsday, sex cult warns end times are near, in NYT best seller".

I am afraid of the eye looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from "Group X wants to kill us all by doing Y!" and more into the realm of "Oh, this is a big deal, and we need to all work together to solve it"?

And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we 'win' this thing, it seems to me, we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.

Relevant fiction here is An Absolutely Remarkable Thing, by Hank Green, in which the protagonist, April May, is thrown into the world's spotlight after an encounter with a mysterious robot. I'd recommend the book any time, but to me, it feels relevant now.

As stated, I am afraid, and it's possible my anxieties are projections of my own feelings. I'd be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.

Posts (sorted by new)
- GTFO of the Social Internet Before you Can't: The Miro & Yindi Story (36 karma · 7mo · 14 comments)
- The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset. (15 karma · 9mo · 10 comments)
- [Q] Where should one post to get into the training data? (11 karma · 10mo · 5 comments)
- Does Claude Prioritize Some Prompt Input Channels Over Others? (9 karma · 11mo · 2 comments)
- I Finally Worked Through Bayes' Theorem (Personal Achievement) (53 karma · 1y · 7 comments)
- [Q] Is there a place to find the most cited LW articles of all time? (4 karma · 2y · 3 comments)
- keltan's Shortform (5 karma · 2y · 86 comments)
- AI Generated Music as a Method of Installing Essential Rationalist Skills (19 karma · 2y · 4 comments)
- Don't Think About the Thing Behind the Curtain. (4 karma · 2y · 0 comments)
- (Confusion Phrases) AKA: Things You Might Say or Think When You're Confused to Use as Triggers for Internal TAPs (5 karma · 2y · 2 comments)
Wikitag Contributions
- LLM-Induced Psychosis · 2 months ago · (+17/-17)
- LLM-Induced Psychosis · 2 months ago · (+796)
- AI Psychology · a year ago · (+152)
- Cooking · a year ago · (+239)
- Animal Ethics · 2 years ago · (+26/-31)
- Animal Ethics · 2 years ago · (+1280)