Tristan Williams

I hope you've smiled today :) 

I really want to experience and learn about as much of the world as I can, and I pride myself on working to become a sort of modern-day renaissance man, a bridge-builder between very different people, if you will. Some not-commonly-seen-in-the-same-person things: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university. 

Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works towards making the world a little better than it was. 

Below are some random interests of mine. I'm happy to connect over any of them, and over anything EA; please feel free to book a time whenever is open on my Calendly.

  • Philosophy (anything Plato is up my alley, but also most interested in ethical and political texts)
  • Psychology (not a big fan of psychotropic medication; also writing a paper on an interesting, niche brand of therapy called logotherapy, analyzing its overlap with religion and considering how religion, specifically Judaism, could itself be considered a therapeutic practice)
  • Music (Lastfm, Spotify, Rateyourmusic; have deep interests in all genres but especially electronic and indie, have been to Bonnaroo and have plans to attend more festivals)
  • Politics (especially American)
  • Drug Policy (currently reading Drugs Without the Hot Air by David Nutt)
  • Gaming (mostly League these days, but shamefully still Fortnite and COD from time to time)
  • Cooking (have been a head chef, have experience working with vegan food too and like to cook a lot)
  • Photography (recently completed a project on community with older people (just the text), arguing that the way we treat the elderly in the US is fairly alarming)
  • Meditation (specifically mindfulness, which I have both practiced and looked at in my RA work, which involved trying to set forth a categorization scheme for the meditative literature)
  • Home (writing a book on different conceptions of it and how relationships intertwine, with a fairly long side endeavor into what forms of relationships should be open to us)
  • Speaking Spanish (I'm going to Spain for a year to teach English, because I want to speak Spanish fluently)
  • Traveling (have hit a fair bit of Europe and the US, as well as some random other places like Morocco)
  • Reading (Goodreads; I think I currently have over 200 books to read, and I've been struggling to get through fantasy recently, finding myself continually pulled to non-fiction, largely due to EA reasoning I think)

How you can help me: I've done some RA work in AI Policy now, so I'd be eager to continue that moving forward in a more permanent position (or at least a longer funded period), and any help bettering myself (e.g. how can I do research better?) or finding a position like that would be much appreciated. Otherwise I'm on the lookout for any good opportunities in the EA Community Building or General Longtermism Research space, so again any help upskilling or breaking into those spaces would be wonderful. 

Of much lower importance, I'm still not sure what cause area I'd like to go into, so if you have any information on the following, especially as to a career in it, I'd love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.

How I can help others: I don't have domain expertise by any means, but I have thought a good bit about AI policy and next best steps that I'd be happy to share about (e.g. how bad is risk from AI misinformation really?). Beyond EA-related things, I have deep knowledge in Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization, and help others on that journey (as well as connect with those trying to find work).

Wiki Contributions


Garrett responded to the main thrust well, but I will say that watermarking synthetic media seems fairly good as a next step for combating misinformation from AI imo. It's certainly widely applicable (not really even sure what the thrust of this distinction was) because it is meant to apply to nearly all synthetic content. Why exactly do you think it won't be helpful?

Yeah, I think the reference class for me here is other things the executive branch might have done, which leads me to "wow, this was way more than I expected". 

Worth noting is that they at least are trying to address deception by including it in the full bill readout. The type of model they hope to regulate here include those that permit "the evasion of human control or oversight through means of deception or obfuscation". The director of the OMB also has to come up with tests and safeguards for "discriminatory, misleading, inflammatory, unsafe, or deceptive outputs".

(k)  The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

          (i)    substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

          (ii)   enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

          (iii)  permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities. 

Hmm, I get the idea that people value succinctness a lot with these sorts of things, because there's so much AI information to take in now, so I'm not so sure about the net effect. But I'm wondering if I could get at your concern here by mocking up a percentage (i.e. what percentage of the proposals were risk-oriented vs progress-oriented)?

It wouldn't tell you the type of stuff the Biden administration is pushing, but it would tell you the ratio, which is what you seem perhaps most concerned with.

[Edit] this is included now

What alternative would you propose? I don't really like "mundane risk" but agree that an alternative would be better. For now I'll just change it to "non-existential risk actions".

This is where I'd like to insert a meme with some text like "did you even read the post?" You:

  • Make a bunch of claims that you fail to support, like at all
  • Generally go in for being inflammatory by saying "it's not a priority in any meaningful value system" i.e. "if you value this then your system of meaning in the world is in fact shit and not meaningful"
  • Pull the classic "what I'm saying is THE truth and whatever comes (the downvotes) will be a product of people's denial of THE truth", which means you'll likely just point to anyone who responds and say something like "That's great that you care about karma, but I care about truth, and I've already revealed that divine truth in my comment, so no real need to engage further here"

If I were to grade comments on epistemic hygiene (or maybe hygiene more generally), this would get something around an "actively playing in the sewer water" rating. 

I don't think we can rush to judgement on your character so quickly. My ability to become a vegan, or rather to at least take this step in trying to be that sort of person, was heavily intertwined with some environmental factors. I grew up on a farm, so I experienced first hand some of what people talk about. Even though I didn't process it as something overall bad at the time, a part of me was unsettled, and I think I drew pretty heavily on that memory and on being there during my vegan transition period. 

I guess the point is something like you can't just become that person the day after you decide you want to be. Sometimes the best thing you can do is try to learn and engage more and see where that gets you. With this example that would mean going to a slaughterhouse yourself and participating, which maybe isn't a half bad idea (though I haven't thought this through at all, so I may be missing something). 

Also, giving up chicken is not a salve; it's a great first step, a trial period that can serve as a positive exemplar of what's possible for the version of yourself that might wish to fully revert back one day. I believe in you, and wish you the best of luck with your journey :)

I have no idea what it entails, but I enjoy conversing and learning more about the world, so I'd happily do a dialogue! Happy to keep it in the clouds too.

But yeah, you make a good point. I mean, I'm not convinced what the proper Schelling point is, and would eagerly eat up any research on this. Maybe what I think is that for a specific group of people like me (no idea what exactly defines that group) it makes sense, but that generally what's going to make sense for a person has to be quite tailored to their own situation and traits. 

I would push back on the no-animal-products-through-the-mouth bit. Sure, it happens to include lesser forms of suffering that might be less important than changing other things in the world (and if you assumed this was zero-sum that may be a problem, but I don't think it is). But generally it focuses on avoiding suffering that you are in control of, in a way that updates in light of new evidence. Vegetarianism in India is great because it leads to less meat consumption, but because it involves specific things to avoid instead of setting suffering as the basis, it becomes much harder to convincingly explain why they should update to avoid eggs, for example. So yeah, protesting rat poison factories may not be a mainstream vegan thing, but I'd be willing to bet vegans are less apt to use the poison. And sure, vegans may be divided on what to do about sugar, but I'd be surprised if any said "it doesn't involve an animal going in my mouth, so it's okay with me". I don't think it's arbitrary but find it rather intentional.

I could continue on here, but I'm also realizing some part of you wanted to avoid debates about vegan stuff, so I'll let this suffice and explicitly say that if you don't want to respond I fully understand (but I'd be happy to hear from you if you do!). 

Thanks for such an in-depth and wonderful response; I have a couple of questions.

On 1: Perhaps the biggest reason I've stayed away from Pomodoros is the question of how much break time you can take before you need to start logging it as a reduction in time worked. Where have you come out on that debate? E.g. maybe you've found the increased productivity makes the breaks totally worth it and this hasn't really been an issue for you.

On 3: How are you strict with your weekends? The vibe I get from the rest is that you normally make sure what you're doing is restful?

On 3.5: Adding to the anecdata, I keep a fairly sporadic schedule that often extends past normal hours, and I've found that it works pretty well for me. I do find that when I'm feeling a bit down, switching back to normal hours is better for me, though, because I'm apt to start playing video games in the middle of the day, thinking "ah, I'm remote and have a flexible schedule, so I can do what I want!" when in reality playing video games during the day is usually just me doing a poor job of dealing with something that then ends up unresolved and leaves me in a tricky spot to get work done later. 

On 4: I'd love to hear more about your targets: are they just more concrete than goals? Do you have some sort of accountability system that keeps you from overriding them? I think I'm coming to realize I work better with deadlines, but I'm still really unsure how to implement them in a way that forces me to stick to them while also allowing me to push something back in circumstances where I'd be better off doing so.
