Wiki Contributions


I wonder how this played out? Specifically:

  • Did you get people showing interest in collaborating? People that let you know they're already working on these problems?
  • Did any collaboration take place? Why, why not?
  • Was there any progress on any of the projects? 

Book: "Statistical Rethinking" by Richard McElreath
Description: in-depth introduction to Bayesian statistical analysis

- Each chapter begins with an intuitive short story illustrating what the statistical concept means in real life
- Contains exercises at the end of each chapter
- Wide variety of example types, not constrained to just one field

- It is pretty much a textbook otherwise - it's the narrative-style stories at the beginning of each chapter that make me share this. For all intents and purposes you may ignore the parts where he tries to teach you Bayesian stats via R :)

This is a cool attempt to get some insight into what's going on in AI safety! It makes sense that we have had so few surveys directly about this, as they are seriously difficult to do correctly - especially in a field like AI safety, where there is a lot of vagueness in how people think about positive and negative outcomes.

Most of the comments are, and will be, about how the survey could be better and more elaborate in what it's asking - which would mean that the survey would probably ideally be several pages long and take at least 30 minutes to answer. But there's a bit of a cut-off: if a survey is too long, a large part of the community might not answer it at all (the more time you have to spend on something, the more sure you want to be that it's worth it), and with surveys it can be hard to know whether they're worth doing unless you specifically trust the person running them to do them well. You'd also need a good enough sample of the AI community answering the questions, which is difficult in itself, because you'll get more responses from people who have time than from those actively working on AI safety, who probably have less time for surveys (that being said - I think AI safety researchers would probably appreciate an AI survey as well?).

All this being said, depending on the response rates and how well we can trust their accuracy, it could be cool to see what we can get out of a survey like this - even if it's just the knowledge of which kinds of questions are more useful to ask than others. It could be that even a survey that just asks people to write down their AGI timeline estimate would still be several pages long, or at least include a lot of description of different types of AGI and the probabilities people put on those timelines. Points for the effort in tackling such a difficult problem!

Full disclosure: I am in a relationship with the author of the post and thus have my own biases, as well as additional knowledge. This is mainly why I'm leaving a comment in the first place: if it were someone else doing something like this and most of the comments were on the critical side (note that it's not bad to be on the critical side here, because often the criticism is necessary and important), I would be more likely to default to silence and hope that the criticism is received in a constructive way that ends up helping the project launch properly instead of the project dying out.
I think it's important to recognise people for trying to do something difficult, and to support them so that, if what they are doing could be net positive, they end up doing it properly - unless we have good reason to think that doing a survey like this is actually certainly bad, in which case the comments should make that clearer.

This is beautiful! It does come off as a utopia. There are student cities which are special because they have actually spaced out the university buildings across the entire town - this makes education more visible, but not necessarily more accessible (I'd be curious what metric could be used to compare a university town to a university that is more closed off, in a location outside of where people normally go).

Denmark has a way of opening up public libraries for everyone. They have made libraries where the books are almost hidden away - you can order them in order to borrow them, and a combination of man and machine will retrieve the books for you from the book vaults. The rest of the library is dedicated to space for kids to play (where they don't have to be quiet), study spaces, art and design spaces, a corner for PlayStation games, and offices for culture workers. Libraries are traditionally places for knowledge, and stepping into a library is a pretty big deal - you want to live up to the image of the library when you're there. Back when I was younger I was intimidated to go to the university library in my home town - so many smart books and smart people, I must not be ready yet to go in there! But the Danish library invites everyone - you can browse those quantum physics books from age 5 without feeling like you have to work on your knowledge first.

There are upsides and downsides to both types of libraries - I imagine a combination of age and sacredness together with sensible openness makes sense. Some feeling of having to earn the right to be somewhere makes you take being there seriously - and that's what universities and libraries do by making themselves more closed off and separated, asking you to have a reason to be there. It would be interesting to find ways of opening these places up that still make people take them seriously.

This is an interesting way to depict how different scenarios of AI takeover would look. There's a lot of uncertainty around how the scenarios will play out, but if they're written out like this (with the epic links as well), it'd be much easier for people to understand the different failure modes. Great work! ^^