Excited about this!
Points of feedback:
Overall, feeling optimistic though, and will probably use this.
I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?).
Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won't be able to eventually correct its errors in the same way humans do?
My hypothesis is that DL systems are doing a sort of fuzzy finite-depth symbolic reasoning -- they understand the productions at a surface level and can apply them (subject to contextual clues, in an error-prone way) step by step, but once you ask for sufficient depth they get confused and fail. Unlike humans, feedforward neural nets can't yet think for longer and churn step by step; but if someone figures out a way to build a looping option into the architecture, I won't be surprised to see DL systems that go a lot further on symbolic reasoning than they currently do.
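A toy sketch of the contrast I have in mind (my own construction, not anything from the post -- the rewrite rule, function names, and depth budget are all made up for illustration): a fixed-depth applier, like a feedforward net with k layers, can only chain a production k times, while a looping applier can churn until the computation terminates.

```python
# Toy illustration: one symbolic "production" applied repeatedly.
# fixed_depth() mimics a feedforward net with a fixed number of layers;
# looped() mimics an architecture that can iterate as long as needed.

def step(s: str) -> str:
    """One rewrite step: strip a matched pair of outer parens, if present."""
    if s.startswith("(") and s.endswith(")"):
        return s[1:-1]
    return s

def fixed_depth(s: str, depth: int) -> str:
    """Feedforward analogue: exactly `depth` chances to apply the production."""
    for _ in range(depth):
        s = step(s)
    return s

def looped(s: str) -> str:
    """Looping analogue: keep applying the production until it stops changing."""
    while (nxt := step(s)) != s:
        s = nxt
    return s

shallow = "((x))"                   # needs 2 steps
deep = "(" * 10 + "x" + ")" * 10    # needs 10 steps

print(fixed_depth(shallow, 3))      # succeeds: within the fixed budget
print(fixed_depth(deep, 3))         # still wrapped: depth budget exhausted
print(looped(deep))                 # succeeds: the loop runs as long as needed
```

The point of the sketch is just that the failure in the second call is a property of the fixed budget, not of the rewrite rule itself, which is why a looping option could change the picture.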
What is Pop Warner in this context? I have googled it and it sounds like he was one of the founders of modern American football, but I don't understand what it is in contrast to. Is there some other (presumably safer) ruleset?
(Inside-of-door-posted hotel room prices are called "rack rates" and nobody actually pays those. This is definitely a miscommunication.)
I am guilty of being a zero-to-one, rather than one-to-many, type person. It seems far easier and more interesting to me to create new forms of progress of any sort than to convince people to adopt better ideas.
I guess the project of convincing people seems hard? Like, if I come up with something awesome that's new, it seems easier to get it into people's hands, rather than taking an existing thing which people have already rejected and telling them "hey this is actually cool, let's look again".
All that said, I do find this idea-space intriguing, partly thanks to this post - it makes me want to think of ways of doing more one-to-many type stuff. I've recently been drawn to living in DC, and I think the DC effective altruism folks are much more on the one-to-many side of the world.
Upvoted for raising to conscious attention something I had never previously considered might be worth paying attention to.
(Slightly grumpy that I'm now going to have a new form of cognitive overhead probably 10+ times per day... these are the risks we take reading LW :P)
Look, I don’t know you at all. So please do ignore me if what I’m saying doesn’t seem right, or just if you want to, or whatever.
I’m a bit worried that you’re seeking approval, not advice? If this is so, know that I for one approve of your chosen path. You are allowed to spend a few years focusing on things that you are passionate about, which (if it works out) may result in you being happy and productive and possibly making the world better.
If you are in fact seeking advice, you should explain what your goal is. If your goal is to make the maximum impact possible -- it's worth spending at least hundreds of hours trying to see if you can learn more & motivate yourself along a path which seems like it combines high impact with personal resonance. I wouldn't discount philosophy along this angle, but (for example) it sounds like you may not know that much about the potential of policy careers; there are plenty that do not require particularly strong mathematical skills (... or even any particularly difficult skills beyond some basic extraversion, resistance to boredom, and willingness to spend literal decades grinding away within bureaucracies).
If your goal is to be happy, I think you will be happy doing philosophy, and I think you have the potential to make a huge impact that way. Certainly there are a decent number of full-time philosophers within effective altruism whom I have huge respect for (MacAskill, Ord, Bostrom, Greaves, and Trammell jump to mind). Plus, you can save a few hundred hours, which seems pretty important if you might already know the outcome of your experimentation!
Thanks! This is very helpful, and yes, I did mean to refer to grokking! Will update the post.
One of my fears is that the True List is super long, because most things-being-tracked are products of expertise in a particular field and there are just so many different fields.
Here's my attempt. I haven't read any of the other comments or the tag yet. I probably spent ~60-90m total on this, spread across a few days.
On kill switches
On the AI accurately knowing what it is doing, and pointing at things in the real world
On responding predictably
On epistemology and ontology