All Posts


Wednesday, January 8th 2020

Shortform [Beta]
George · 7 points · 17d

Having read more AI-alarmist literature recently, as someone who strongly disagrees with it, I think I've come up with a decent classification of alarmists based on the fallacies they commit.

There's the kind of alarmist who understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of "PAC learning basically proves that with enough computational resources AGI will take over the universe". (I actually wrote an article trying to argue against this position [https://www.lesswrong.com/posts/brYdjKffszjuyzb9c/artificial-general-intelligence-is-here-and-it-s-useless], the LW crosspost of which gave me the honor of having the most down-voted featured article in this forum's history.) But I think my disagreement with this first class of alarmist is not very fundamental; we can probably agree on a few things, such as:

1. In principle, the kind of intelligence needed for AGI is a solved problem; all we are doing now is optimizing for various cases.
2. The increase in computational resources alone is enough to get us closer and closer to AGI, even without any more research effort being allocated to the subject.

These types of alarmists would probably agree with me that if we found a way to magically multiply two arbitrary tensors 100x faster than we do now, for the same electricity consumption, that would constitute a great leap forward.

But the second kind are the ones that scare/annoy me most, because they don't seem to really understand machine learning. This results in them being surprised that machine learning models can do what it has been uncontroversially established for decades they could do. The not-so-caricatured representation of this position is: "Oh no, a 500,000,000-parameter model designed for {X} can outperform a 20KB de
ozziegooen · 2 points · 17d

Prediction evaluations may be best when minimally novel

Imagine a prediction pipeline that is resolved with a human/judgmental evaluation. For instance, a group today starts predicting what a trusted judge 10 years from now will say for the question, "How much counterfactual GDP benefit did policy X make, from 2020-2030?"

So there are two stages:

1. Prediction
2. Evaluation

One question for the organizer of such a system is how many resources to allocate to the prediction step vs. the evaluation step. It can be expensive to pay for both predictors and evaluators, so it's not clear how to weigh these steps against each other. I've been suspecting that there are ways to be stingy with regard to the evaluators, and I now have a better sense of why that is the case.

Imagine a model where the predictors gradually discover information I_predictors, part of I_total, the true ideal information needed to make this estimate. Imagine that they are well calibrated and use the comment sections to express their information when predicting.

Later the evaluator comes by. Because they can read everything so far, they start with I_predictors. They can use this to calculate Prediction(I_predictors), although this should already have been estimated by the previous predictors (a la the best aggregate). At this point the evaluator can choose to gather more information, I_evaluation > I_predictors. However, if they do, the resulting probability distribution is already forecast by Prediction(I_predictors): insofar as the predictors are concerned, the expected value of Prediction(I_evaluation) should be the same as that of Prediction(I_predictors), assuming Prediction(I_predictors) is calibrated, except that it will have more risk/randomness. Risk is generally not a desirable property.

I've written about similar topics in this post [https://www.lesswrong.com/posts/Df2uFGKtLWR7jDr5w/ozziegooen-s-shortform?commentId=qFNMQJTYzfTYJakbM]. Therefore, the p
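A toy illustration of the expected-value claim above (my sketch, not part of the original post): in a simple Gaussian model where the predictors see one noisy signal x1 and the evaluator gathers one extra signal x2, the evaluator's posterior mean averages out to exactly the predictors' posterior mean; from the predictors' vantage point the extra information only adds spread. The Gaussian setup and the names (m1, m2, s1_sq, s2_sq) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

s1_sq, s2_sq = 1.0, 0.5   # noise variances of the predictors' and evaluator's signals
x1 = 0.8                  # the signal the predictors observed (held fixed)

# Posterior for the unknown quantity theta after I_predictors, under a N(0, 1) prior:
prec1 = 1.0 + 1.0 / s1_sq
m1 = (x1 / s1_sq) / prec1          # Prediction(I_predictors), as a posterior mean
v1 = 1.0 / prec1

# The evaluator gathers one extra signal x2. Simulate the worlds that are
# consistent with what the predictors already know:
theta = rng.normal(m1, np.sqrt(v1), size=200_000)            # theta | I_predictors
x2 = theta + rng.normal(0.0, np.sqrt(s2_sq), size=theta.size)

# Prediction(I_evaluation): posterior mean after seeing both signals.
prec2 = 1.0 + 1.0 / s1_sq + 1.0 / s2_sq
m2 = (x1 / s1_sq + x2 / s2_sq) / prec2

print(f"Prediction(I_predictors):       {m1:.3f}")
print(f"E[Prediction(I_evaluation)]:    {m2.mean():.3f}")   # matches m1
print(f"std[Prediction(I_evaluation)]:  {m2.std():.3f}")    # extra spread = risk
```

Running this prints roughly 0.400 for both the predictors' estimate and the average of the evaluator's estimate, with a nonzero standard deviation for the latter: more information changes the realized evaluation but not its expectation, which is the sense in which a calibrated Prediction(I_predictors) already predicts Prediction(I_evaluation).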
Hysteria · 1 point · 17d

I'm still mulling over the importance of Aesthetics. Raemon's writing really set me on a path I should've explored much, much earlier. And since all good paths come with their fair share of coincidences, I found this essay [https://meltingasphalt.com/a-natural-history-of-beauty/] to mull over as well. Perhaps we can think of Aesthetics as the grouping of desires and things we find beautiful (and thus desire and work towards), in a spiritual/emotional/inner sense?