Tristan Williams

While I'm currently working as a Community Based Counselor and Tea Slinger, I would generally describe myself as a modern-day renaissance man: I've slaughtered pigs on my family farm and become a vegan, done HVAC work and academic research, and been a member of both the Republican and Democratic clubs at my university. I've sought to experience much in life, and on the horizon I wish to deepen my experience within EA, having gone from fellow to facilitator to prospective employee looking for work in the space, while simultaneously doing personal cause prioritization work. I'll include below a list of some of my deep interests, and I would be happy to connect over any of these areas, specifically as they may intersect with EA.

Philosophy, Psychology, Music (have deep interests in all genres, but especially electronic and indie), Politics (especially American), Drugs (mostly psychs), Gaming (mostly League these days), Cooking (have been a head chef), Photography, Meditation (specifically mindfulness).

How you can help me: I'm in the process right now of deciding which cause area to focus my future work on (nuclear, mental health, EA community building, politics, animal welfare, AI, and criminal justice reform), so any compelling reasons to go (or not to go) into any of these would be really helpful at this point.

How I can help others: While I can't really offer any expertise in EA-related things, I have deep knowledge of Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested.


Comments

How exactly do you get to "up to and including acts of war"? His writing here was concise because it ran in TIME, which meant he probably couldn't caveat things in the way that protects him against EAs/Rationalists picking apart his individual claims bit by bit. But from what I understand of Yudkowsky, he doesn't seem, in spirit, to necessarily support an act of war here, largely for reasons similar to those you mention below for individual violence: the negative effects of such an action may outweigh the positive, making it somewhat ineffective.

2. What is the Overton window here? Otherwise I think I probably agree, but one question is: once this non-x-risk campaign is underway, how do you keep it on track and prevent value drift? Or do you not see that as a pressing worry?

3. Cool, will have to check that out.

4. Completely agree, and I just wonder what the best way to promote less distancing is.

Yeah, I suppose I'm just trying to put myself in the shoes of the FLI people who coordinated this, and I feel like many comments here are more lacking in compassion than I'd like, especially the half-baked negative takes. I also agree that we want to pay attention to detail and timing, but there is also a world in which too much of that leads to nothing getting done, and it seems highly plausible to me that this had been an idea for long enough already to make that the case here.

Thanks for responding though! Much appreciated :)

The LessWrong comments here are generally quite brutal, and I think I disagree, which I'll try to outline very briefly below. But I think it may be more fruitful here to ask some questions I had, to break down the possible points of disagreement about the goodness of this letter.

I expected some negative reaction because I know that Elon is generally looked down upon by the EAs I know, with some solid backing for those claims when it comes to AI given that he cofounded OpenAI. But with the immediate press attention it's getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (co-founder, Apple), Andrew Yang, Jaan Tallinn (co-founder, Skype, CSER, FLI), Max Tegmark (president, FLI), and Tristan Harris (of The Social Dilemma), among many others), I can't really see the overall impact of this letter being net negative. At worst it seems mistimed and has technical issues, but at best it seems one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI's current presence in the news and in much of the world's psyche.

But I'm not super certain of anything, and generally came away with a lot of questions. Here are a few:

  1. How convergent is this specific call for a pause on developing strong language models with how AI x-risk people would go about crafting a verifiable, tangible metric for AI labs to follow to reduce risk? Is this to be seen as a good first step? Or something that might actually be close enough to what we want that we could rally around this metric, given its endorsement by this influential group?
    1. This helps clarify the "6 months isn't enough to develop the safety techniques they detail" objection, which was fairly well addressed here, as well as the "Should OpenAI be at the front?" objection.
  2. How should we view messages that are geared more towards non-x-risk AI worries than the community seems to be? They ask a lot of good questions here, but they are also still asking "Should we let machines flood our information channels with propaganda and untruth?", an important question, but one that to me seems to deviate from AI x-risk concerns.
    1. This is at least tangential to the "This letter felt rushed" objection, because even if you accept that it was rushed, the next question is "Well, what's our bar for how good something has to be before it is put out into the world?"
  3. Are open letters with influential signees impactful? This letter seems to me neutral at worst and quite impactful at best, but I have very little to back that up, and honestly I can't recall any specific case where an open letter caused significant change at the global/national level.
  4. Given the recent desire to distance ourselves from potentially fraught figures, would that mean shying away from a community-wide EA endorsement of such a letter because a wild card like Elon is part of it? I personally don't think he's at that level, but I know other EAs who would be apt to characterize him that way.
  5. Do I sign the letter? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at "Tristan Williams, Tea Brewer" and think, "Oh, what is he doing on this list?"