User Profile

Research scientist at DeepMind working on AI safety, and cofounder of the Future of Life Institute. http://sites.google.com/site/victoriakrakovna

Recent Posts

Specification gaming examples in AI · 17d · 1 min read · 2
Using humility to counteract shame · 2y · 15
To contribute to AI safety, consider doing AI research · 2y · 39
[LINK] OpenAI doing an AMA today · 2y · 3
[LINK] The Top A.I. Breakthroughs of 2015 · 2y · 1
Future of Life Institute is hiring · 2y · 2
Negative visualization, radical acceptance and stoicism · 3y · 11
Future of Life Institute existential risk news site · 3y · 2
Open and closed mental states · 3y · 7

Recent Comments

I think the DeepMind founders care a lot about AI safety (e.g. Shane Legg is a coauthor of the paper). Regarding the overall culture, I would say that the average DeepMind researcher is somewhat more interested in safety than the average ML researcher in general.

(paper coauthor here) When you ask whether the paper indicates that DeepMind is paying attention to AI risk, are you referring to DeepMind's leadership, AI safety team, the overall company culture, or something else?

The distinction between papers and blog posts is getting weaker these days - e.g. distill.pub is an ML blog with the shining light of Ra that's intended to be well-written and accessible.

Yes. He runs AI safety meetups at MILA, and played a significant role in getting Yoshua Bengio more interested in safety.

Thanks for the link to your post. I also think we only disagree on definitions.

I agree that self-compassion is a crucial ingredient. This is the distinction I was pointing at with "while focusing on imperfections without compassion can lead to beating yourself up". Humility says "I am flawed and ...

I would recommend doing a CS PhD and taking statistics courses, rather than doing a statistics PhD.

For examples of promising research areas, I recommend taking a look at the [work of FLI grantees](http://futureoflife.org/first-ai-grant-recipients/). I'm personally working on the interpretability of...

The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people th...

There are a lot of good online resources on deep learning specifically, including deeplearning.net, deeplearningbook.org, etc. As a more general ML textbook, Pattern Recognition & Machine Learning does a good job. I also second the recommendation for Andrew Ng's course.