Hi everyone,
I've read many of the posts here over the years, and a lot of the ideas I first met here are coming up again in my work now. I think the most important work in the world today is figuring out how to make sure AI remains something we control, yet most of the people I meet in SF still think AI safety means keeping a model from saying something in public that harms a corporate brand.
I'm here to learn and to bounce ideas off people who are comfortable with Bayesian reasoning and rational discussion, and who are interested in similar topics.
I'm a programmer by trade, and I got serious about understanding AI and ML while working on a semi-supervised data labeling product (similar to Snorkel). That led me back to linear algebra, probability theory, and all the rest.