Background

It isn’t obvious what contributing to The Good Future from a non-standard background looks like.

Especially on a short timeline.

I am a first-time poster with a background in the arts. I became aware of AI safety as a cause area relatively recently, and despite it possibly being the 11th hour, I want to contribute. I would consider myself a baby rationalist - there are years’ worth of content to digest still. This feels like relevant information for this post, because I expect the results of my Babble session and reflection to be very different from those of (what I perceive to be) the average LessWrong poster.

Why even try?

Because humans are worth it, and if I can tip the scale even a little bit then I should.

I am motivated to post now for the first time because of the Good Hearts experiment. One of the desirable outcomes seems to be pulling us lurkers out of the woodwork to see what the results of the week look like, and as someone who fits that description, I feel that I can contribute to the experiment. Normally I feel like I should not post because of the very high standard of discourse here, but this experiment feels like permission to post and try for that high standard anyway (while being okay with being corrected on wrongness).

Process

My intention is to write somewhere in the ballpark of 50 ways I can contribute to obtaining The Good Future without auto-pruning, especially against my instincts of “reasonableness” and “feasibility.” I will then go through and analyze my thinking by categorizing the items, with the intention of learning which pathways are under-represented in my thoughts, or which I may be completely blind to. I may find other ways to evaluate the Babble that aren’t obvious to me beforehand.

I also hope to find something actionable.

Babble

50 Ways To Contribute 

  1. Optimize our household so that it is the most productive research environment possible for my researcher partner 
  2. Host a local meetup 
  3. Commit to spending the time to obtain the necessary background knowledge to become an expert 
  4. Provide low-skill task completion services for researchers to free up their time 
  5. Connect researchers who need some-skill-level tasks done with those who can do them to free up their time 
  6. Create art using AI as some sort of awareness project - use opportunities for discourse to talk about cause areas 
  7. Optimize my income earning in a non-related way in order to donate more to research orgs 
  8. Write/lobby policy makers regarding issues I think will be most relevant (AGI; cryonics)
  9. Create art with the intention of eliciting emotional reactions to the cause area to spur action 
  10. Mobilize an existing group that isn’t currently working on solving the problems 
  11. Fundraise directly through streaming or other awareness campaigns 
  12. Identify the individual who is most likely to solve the problem, learn what incentive or amount of money would cause them to dedicate themselves to the problem, and figure out how to connect them  
  13. Attempt to think about solutions with my current knowledge anyway 
  14. Destroy every computer to delay the future 
  15. Draw a webcomic to communicate thought experiments, biases, and other ideas I’m learning about to reinforce it for myself while teaching to others 
  16. Same, but TikTok 
  17. Same, but YouTube 
  18. Write/illustrate a children’s book to help gear up the next generation to fight the problems 
  19. Become a rationalist house mom to meet researcher needs 
  20. Create dialogues with tech journalists who can influence public opinions/policy 
  21. Design an AI safety card or board game 
  22. Collect information from communities who could be contributing but who reject the need for this type of research and figure out ways to bridge that gap 
  23. Address proxy existential questions (i.e., aging) 
  24. Concentrate on empathy-building projects as a way to get people to value human life more 
  25. Check in with the researchers already in my circle to be a supportive friend to them 
  26. Concentrate on life-extension issues, because more brains to solve problems 
  27. Host contests to solve pieces of the problem 
  28. Identify people who would already be working on this problem if they only had x, figure out how to connect them with x 
  29. Concentrate on math and science literacy issues as a proxy 
  30. Organize an art show with AI safety as the focus to involve a wider range of people in thinking about the problem (or at least how it will affect us) 
  31. Write posts on LW or elsewhere as I’m learning concepts to better communicate/engage with those at my current level of understanding 
  32. Ask the dumb-feeling questions in the comments section; it may benefit more than just me 
  33. Create printable resources for teachers re:growth mindset, science, math concepts 
  34. Create a plan for how humans can survive a paper-clip maximizer  
  35. Ratfic 
  36. Be a data point - test out rationalist practices (such as babble and prune) and relay my experience 
  37. Use high-risk investment strategies to generate money for alignment research 
  38. Learn a second language, especially the relevant language to translate 
  39. Transcribe audio for a researcher who thinks better by speaking than writing 
  40. Get involved with rationalist homeschool groups, both to free up the adults’ time and to be a positive influence in the lives of the next problem solvers. 
  41. Proselytize like a crazy person on the street about how we need to be working on this 
  42. Make fun rationalist-signaling and alignment research-signaling apparel designs to invite in person interaction 
  43. Figure out what would motivate the teenagers I know to be interested in the problem / devote time to the problem (an undervalued resource) 
  44. Host a researcher at our non-group house who may desire an environment change to think 
  45. Compile resources for those who wish to learn more about the field (same, productivity, etc) 
  46. Test out productivity methods and relay my experience 
  47. Actively work to challenge the ideas being presented 
  48. Show up somewhere (perhaps one of the orgs) and make myself useful. Figure it out on the fly. 
  49. Type handwritten research notes for someone who may find that service useful. 
  50. Offer virtual assistant type services (such as appointment making) for researchers. 

Reflection

Since time is of the essence, one obvious way to evaluate this challenge is to look at what is actionable now vs. in the future. When I reorder this list with that in mind, there are a few items I can do immediately (optimizing at home); a large group of items that are right outside of my scope of abilities and could be executed in the near future (learning the relevant math vocabulary for transcription); and a few things that are true shots in the dark and outside of my current abilities (becoming an expert). That middle section is what interests me most, because I feel like I’m juuuust lacking the right information to evaluate what would be most effective. I am unsure how to gain that information.

Around half of the items require heavy communication with others. This isn’t something I would have named as a skill I could utilize beforehand, so it feels like a surprising result! When I reflect on why this may be a common theme for me, I want to say it is because I have time to try and make connections; I identify lack of communication as an obvious gap; and my model of myself includes “great supporting character.” I have some evidence that this model is true, and I could lean into that skill.

I would evaluate my Babble items as 80% usual and 20% unusual (for me) thinking. The usual category includes things that my model of a typical LWer might say (write ratfic; be a data point in testing productivity methods). When I was able to push more toward unusual thinking, the resulting ideas were also ones I would evaluate as much less feasible (surviving a paperclip maximizer). I expect that pushing further into the unusual is where a truly good idea hides; I will need to do more work to get there, however. That work may include a longer Babble, because the end of the list feels like it pushed my thinking more than the beginning.

Free time is what I would name as my most valuable available resource. I can tackle many of these items by utilizing it. Perhaps this is an area to concentrate on.

It is more difficult to identify which categories of possibility I’m not seeing; perhaps that feels obvious in retrospect, but I did state at the beginning that I expected to gain some information about this. However, I still expect it to crop up at some point while reflecting in the future, since I will be on the lookout for ideas that don’t fit the common categories. Maybe that is naive, and I’m taking this single exercise as weak evidence that I shouldn’t expect better ideas through pure reflection (rather than action).

Did I find this exercise useful?

Yes! I am surprised at how many of these ideas seem both actually useful and doable. I feel capable of making actual, if tiny, progress - and when it comes to x-risk, I am completely biased to say it is worth adding my little bit of contribution to the pile. (Are our actions additive when we compile them? Gosh, so much to learn.)

Comments

Have you considered booking a call with AI Safety Support or applying to speak to 80,000 Hours?

You can also express interest for the next round of the AGI Safety Fundamentals course.

BTW, great job with the babble! Just have to make sure you do a good job with the prune.

One thing I lack is perspective, and the linked resources are offering just that! 

As a Babble this is excellent, and many of these (e.g. optimizing income streams, motivating/participating in groups) seem to be necessary prerequisites for being in a position to make progress on X-risk problems. 

But I think the nature of such problems (namely, that they have been attempted by many other individuals, with at least some centralized organizations where those individuals share their experiences to avoid duplication of effort) means that any undirected Babble will primarily encounter lines of inquiry that have already been addressed, as many of the more direct (non-resource-gathering) suggestions seem to have been.

As a point of methodology, I would suggest trying for much larger Babble lists when approaching these problems, perhaps on the scale of a few hundred ideas, or alternatively making multiple recursive layers of Babbles for each individual point at every recursive level (e.g. 100 points, each with 100 points, each with 100 points...), so that the process is more likely to produce unique [and thus useful] approaches.

“I became aware of AI safety as a cause area relatively recently, and despite it likely being the 11th hour, I want to contribute.”

PSA: Lots of people disagree with Eliezer about timelines, and Eliezer famously does not want you to adopt his positions without questioning.

Great context that I wasn’t aware of! I changed the language to reflect a level of uncertainty, since I’ve yet to form my own solid timeline - and also because “likely” has an actual meaning as a word that I didn’t consider.

Hi Edric. How is your quest evolving?