Anon User


What about trying to use the existing infrastructure in Russia, e.g.

  • Donating to the school libraries of math magnet schools (starting with the "usual suspects": schools 57, 2, and 43 in Moscow, 239 in St Petersburg, etc., and then going down the list)?
  • Contacting competition organizers (e.g. for ТурГор, the Турнир городов / Tournament of Towns, which tends to have a higher diversity of participants compared to the Olympiad system) and coordinating to offer the books as prizes for finalists?

Besides not having to reinvent the wheel, kids might be more open to the ideas if the book comes from a local, more readily trusted party.

Ah, OK, then I would suggest adding it to both the title and the body to make it clear, and to avoid wasting the time of people who are not the audience for this.

Sorry, feedback on what? Where is your resume, etc.? What information do you expect the feedback to be based on?

But here is actionable feedback: when asking people to help you for free out of the goodness of their hearts (including this post!), you need to go out of your way to make it as easy and straightforward for them as possible. When asking for feedback, provide all the relevant information collected in an easy-to-navigate package, with TLDR summaries, etc. When asking for a recommendation, an introduction, etc., provide brief talking points, with more detailed information for context (and make it clear you do not expect them to need to review it, and that it is provided "just in case you would find it helpful").

Interesting - your 40/20/40 is a great toy example to think about, thanks! And it does show that a simple instant-runoff scheme for RCV would not necessarily help that much...
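
To make that concrete, here is a minimal sketch of single-winner instant-runoff voting (IRV). Since the exact ballots of the 40/20/40 example are not spelled out here, the profile below is one assumed reading of it: the candidate names and the 11/9 split of the Center voters' second choices are illustrative, not taken from the original.

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of (ranking, count) pairs; returns the IRV winner.

    Each round, every ballot counts toward its highest-ranked candidate
    still in the race; if nobody has a majority, the candidate with the
    fewest votes is eliminated and the round repeats.
    """
    remaining = {c for ranking, _ in ballots for c in ranking}
    while True:
        # Tally each ballot for its top-ranked remaining candidate.
        tally = Counter()
        for ranking, count in ballots:
            for candidate in ranking:
                if candidate in remaining:
                    tally[candidate] += count
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # strict majority -> done
            return leader
        # No majority yet: eliminate the lowest-tallied candidate.
        remaining.remove(min(tally, key=tally.get))

# Assumed 40/20/40 profile: 40% Left-first, 20% Center-first (seconds
# split 11/9), 40% Right-first.
ballots = [
    (("Left", "Center", "Right"), 40),
    (("Center", "Left", "Right"), 11),
    (("Center", "Right", "Left"), 9),
    (("Right", "Center", "Left"), 40),
]
print(instant_runoff(ballots))  # -> "Left"; "Center" never survives round 1
```

Under this assumed profile, Center is every voter's first or second choice and beats either rival head-to-head (60-40), yet IRV eliminates it in the very first round (the "center squeeze"), so the winner is still one of the two polarized candidates.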

Anon User

I am not sure about the median researcher. Many fields have a few "big names" whom everybody knows and whose opinions carry disproportionate weight.

  • Finally, we wouldn't get a second try - any bugs in your AIs, particularly the 2nd one, are very likely to be fatal. We do not know how to create your 2nd AI in such a way that the very first time we turn it on, all the bugs have already been found and fixed.
  • Also, human values, at least the ones we know how to consciously formulate, are pretty fragile - they are things we want weakly/softly optimized for, but that would go very badly if a superhuman AI hard-optimized them. We do not know how to capture human values in a way that would not go terribly wrong when the optimization is cranked to the max, and your Values AI is likely not to help enough, as we would not know what missing inputs we are failing to provide it (because they are aspects of our values that would only become important in some future circumstances we cannot even imagine today).
Answer by Anon User
  • We do not know how to create an AI that would not regularly hallucinate. The Values AI hallucinating would be a bad thing.
  • In fact, training an AI to follow human values more closely seems to just cause it to say what humans want to hear, while being objectively incorrect more often.
  • We do not know how to create an AI that reliably follows the programmed values outside of its training set. Your 2nd AI going off the rails outside of the training set would be bad.