All Posts

Sorted by Top

Thursday, January 16th 2020

Shortform [Beta]
6 · MichaelA · 1d

WAYS OF DESCRIBING THE “TRUSTWORTHINESS” OF PROBABILITIES

While doing research for a post on the idea of a distinction between “risk” and “(Knightian) uncertainty [https://en.wikipedia.org/wiki/Knightian_uncertainty]”, I came across a surprisingly large number of different ways of describing the idea that some probabilities may be more or less “reliable”, “trustworthy”, “well-grounded”, etc. than others, or things along those lines. (Note that I’m referring to the idea of different degrees of trustworthiness-or-whatever, rather than two or more fundamentally different types of probability that vary in trustworthiness-or-whatever.)

I realised that it might be valuable to write a post collecting all of these terms/concepts/framings together, analysing the extent to which some may be identical to others, highlighting ways in which they may differ, suggesting ways or contexts in which some of the concepts may be superior to others, etc.[1] [#fn-wGnf2warekZDiMkWj-1] But there are already too many things I’m working on writing at the moment, so this is a low-effort version of that idea - basically just a collection of the concepts, relevant quotes, and links where readers can find more. Comments on this post will inform whether I take the time to write something more substantial/useful on this topic later (and, if so, precisely what and how).

Note that this post does not explicitly cover the “risk vs uncertainty” framing itself, as I’m already writing a separate, more thorough post on that.

EPISTEMIC CREDENTIALS

Dominic Roser [https://link.springer.com/article/10.1007%2Fs11948-017-9919-x] speaks of how “high” or “low” the epistemic credentials of our probabilities are. He writes:

He further explains what he means by this in a passage that also alludes to many other ways of describing or framing an idea along the lines of the trustworthiness of given probabilities:

RESILIENCE (OF CREDENCES)

Amanda Askell discusses the idea that we can have “more” or “less” res
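As a toy illustration of the “resilience” idea (my own example, assuming a standard Beta–Bernoulli setup, not taken from the quoted post): two agents can both assign probability 0.5 to heads while differing sharply in how much new evidence would move them.

```latex
% Agent A believes the coin is known to be fair; Agent B has an unknown bias
% p with a uniform prior, p ~ Beta(1, 1). Both start with credence 0.5 in heads.
% After observing 8 heads and 2 tails:
%   Agent A: P(H | data) = 0.5               (credence unmoved: resilient)
%   Agent B: p | data ~ Beta(1 + 8, 1 + 2)   (credence moves a lot: not resilient)
\[
  P_A(H \mid 8H, 2T) = 0.5,
  \qquad
  \mathbb{E}_B[p \mid 8H, 2T] = \frac{1 + 8}{(1 + 8) + (1 + 2)} = 0.75 .
\]
```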

Tuesday, January 14th 2020

Shortform [Beta]
12 · jacobjacob · 3d

I saw an ad for a new kind of pant: stylish as suit pants, but flexible as sweatpants. I didn't have time to order them right away. But I saved the link in a new tab in my clothes database -- an Airtable that tracks all the clothes I own.

This crystallised some thoughts about external systems that have been brewing at the back of my mind. In particular, about the gears-level principles that make some of them useful and powerful. When I say "external", I am pointing to things like spreadsheets, apps, databases, organisations, notebooks, institutions, room layouts... and distinguishing those from minds, thoughts and habits. (Though this distinction isn't exact, as will be clear below, and some of these ideas are at an early stage.)

Externalising systems allows the following benefits...

1. GATHERING ANSWERS TO UNSEARCHABLE QUERIES

There are often things I want lists of, which are very hard to Google or research. For example:

* List of groundbreaking discoveries that seem trivial in hindsight
* List of different kinds of confusion, identified by their phenomenological qualia
* List of good-faith arguments which are elaborate and rigorous, though uncertain, and which turned out to be wrong

etc.

Currently there is no search engine (other than the human mind) capable of finding many of these answers (if I am expecting a certain level of quality). But for that reason researching the lists is also very hard. The only way I can build these lists is by accumulating those nuggets of insight over time. And the way I make that happen is to make sure to have external systems which are ready to capture those insights as they appear.

2. SEIZING SERENDIPITY

Luck favours the prepared mind. Consider the following anecdote:

I think this is true far beyond intellectual discovery. In order for the most valuable companies to exist, there must be VCs ready to fund those companies when their founders are toying with the ideas. In order for the best jokes to exist, the
4 · ozziegooen · 3d

One question around the "Long Reflection" or around "What will AGI do?" is something like, "How bottlenecked will we be by scientific advances that we'll need to then spend significant resources on?"

I think some assumptions that this model typically holds are:

1. There will be decision-relevant unknowns.
2. Many decision-relevant unknowns will be EV-positive to work on.
3. Of the decision-relevant unknowns that are EV-positive to work on, these will take between 1% and 99% of our time.

(3) seems quite uncertain to me in the steady state. I believe it reflects an intuitive estimate spanning about 2 orders of magnitude, while the actual uncertainty is much higher than that. If this were the case, it would mean:

1. Almost all possible experiments are either trivial (<0.01% of resources, in total), or not cost-effective.
2. If some things are cost-effective and still expensive (they will take over 1% of the AGI lifespan), it's likely that they will take 100%+ of the time. Even if they would take 10^10% of the time, in expectation, they could still be EV-positive to pursue.

I wouldn't be surprised if there were one single optimal thing like this in the steady state. So this strategy would look something like, "Do all the easy things, then spend a huge amount of resources on one gigantic, but EV-high, challenge."

(This was inspired by a talk that Anders Sandberg gave)
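A rough way to see why (3) is such a narrow window (an illustrative assumption of mine, not from the original note): if the cost of a project, as a fraction of total resources, were spread log-uniformly over many orders of magnitude, the 1%-99% band would capture only a small slice of the possibilities.

```latex
% Illustrative assumption: cost fraction c is log-uniform over [10^{-10}, 10^{2}],
% i.e. 12 orders of magnitude. The band in assumption (3) spans roughly
% [10^{-2}, 10^{0}], about 2 orders of magnitude, so
\[
  P\!\left(10^{-2} \le c \le 10^{0}\right)
  \approx \frac{0 - (-2)}{2 - (-10)} = \frac{2}{12} \approx 17\% .
\]
% Under a spread like this, most projects land either far below the band (trivial)
% or far above it (consuming essentially all available resources).
```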

Monday, January 13th 2020

Shortform [Beta]
15 · TurnTrout · 5d

While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book".

I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

(Transcription of a deeper Focusing on this reasoning)

I'm afraid of being slow. Part of it is surely the psychological remnants of the RSI I developed in the summer of 2018. That is, slowing down is now emotionally associated with disability and frustration. There was a period of meteoric progress as I started reading textbooks and doing great research, and then there was pain. That pain struck even when I was just trying to take care of myself, sleep, open doors. That pain then left me on the floor of my apartment, staring at the ceiling, desperately willing my hands to just get better. They didn't (for a long while), so I just lay there and cried. That was slow, and it hurt. No reviews, no posts, no typing, no coding. No writing, slow reading. That was slow, and it hurt.

Part of it used to be a sense of "I need to catch up and learn these other subjects which [Eliezer / Paul / Luke / Nate] already know". Through internal double crux, I've nearly eradicated this line of thinking, which is neither helpful nor relevant nor conducive to excitedly learning the beautiful settled science of humanity.

Although my most recent post [https://www.lesswrong.com/posts/eX2aobNp5uCdcpsiK/on-being-robust] touched on impostor syndrome, that isn't really a thing for me. I feel reasonably secure in who I am, now (although part of me worries that others wrongly view me as an impostor?). However, I mostly just want to feel fast, efficient, and swift again. I sometimes feel like I'm in a race with Alex2018, and I feel like I'm losing.
4 · toonalfrink · 4d

So here are two extremes. One is that human beings are a complete lookup table. The other one is that human beings are perfect agents with just one goal. Most likely both are somewhat true. We have subagents that are more like the latter, and subsystems more like the former. But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency.

Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether you'll fall in and out of love with someone based on some criteria, like whether they're compatible with your self-narrative and whether their opinions and interests align with yours, etc.

The same is true for many intuitions that we often tend to dismiss as just "my brain" or "neurotransmitter xyz" or "some knee-jerk reaction". There tends to be a layer of agency in these things: a set of conditions that makes them fire off, or not fire off. If we want to influence them, we should be looking for the levers, instead of just accepting these things as a given.

So sure, we're godshatter, but the shards are larger than we give them credit for.
3 · Hazard · 5d

So a thing Galois theory does is explain:

Which makes me wonder: would there be a formula if you used more machinery than normal stuff and radicals? What does "more than radicals" look like?
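One standard answer, added here as context rather than part of the original shortform: yes, if you adjoin functions beyond radicals. A sketch:

```latex
% Via Tschirnhaus transformations (which only require solving equations of degree
% at most 4, i.e. radicals), any quintic can be reduced to Bring-Jerrard form
\[
  x^5 + x + a = 0 .
\]
% Define the Bring radical BR(a) as the unique real root of this equation (for real a).
% Then every quintic has a "formula" in terms of radicals plus BR(.), much as
% quadratics have one in radicals alone. Hermite (1858) instead expressed the roots
% using elliptic modular functions -- one concrete picture of "more than radicals".
```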
1 · rmoehn · 4d

Updated the Predicted AI alignment event/meeting calendar [https://www.lesswrong.com/posts/h8gypTEKcwqGsjjFT/predicted-ai-alignment-event-meeting-calendar].

* Application deadline for the AI Safety Camp Toronto extended. If you've missed it so far, you still have until the 19th to apply.
* Apparently no AI alignment workshop at ICLR, but another somewhat related one.

Friday, January 10th 2020

Personal Blogposts
4 · [Event] Halifax SSC Meetup -- Saturday 11/1/20 · OB6, 1451 South Park Street suite 103, Halifax · Jan 11th
Shortform [Beta]
15 · tragedyofthecomments · 7d

I often see people making statements that sound to me like . . . "The entity in charge of bay area rationality should enforce these norms." or "The entity in charge of bay area rationality is bad for allowing x to happen."

There is no entity in charge of bay area rationality. There's a bunch of small groups of people that interact with each other sometimes. They even have quite a bit of shared culture. But no one is in charge of this thing, there is no entity making the set of norms for rationalists, there is no one you can outsource the building of your desired group to.
2 · romeostevensit · 8d

You can't straightforwardly multiply uncertainty from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things, i.e. the difference between the mean of a normal, bimodal, and fat-tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.
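A minimal sketch of the point (my own toy example with made-up distributions): propagating the full distributions through even a one-step model behaves quite differently from multiplying point estimates, especially once a sign flip is possible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two hypothetical, independent model inputs with very different shapes.
a = rng.lognormal(mean=0.0, sigma=1.5, size=n)   # fat-tailed, strictly positive
b = np.where(rng.random(n) < 0.5,                # bimodal, with a potential sign flip
             rng.normal(-2.0, 0.3, size=n),
             rng.normal(3.0, 0.3, size=n))

product = a * b  # Monte Carlo propagation through the (toy) model

# "Point estimate" propagation: multiply single summaries of each input.
print("median(a) * median(b):", np.median(a) * np.median(b))
print("median of product    :", np.median(product))
print("mean of product      :", product.mean())
print("P(product < 0)       :", (product < 0).mean())
# Means of independent inputs do multiply, but the median of a bimodal input sits
# in a low-density gap between its modes, and no single point estimate conveys
# that the product is heavy-tailed and negative roughly half the time.
```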

Thursday, January 9th 2020

Shortform [Beta]
4 · George · 8d

I'm wondering if, in a competitive system with intelligent agents, regression to the mean is to be expected when one accumulates enough power.

Thinking about the business and investment strategies that a lot of rich people advocate, they seem kinda silly to me, in that they match the mental model of the economy that someone who never really bothered studying the market would have. It's stuff like "just invest in safe index funds", and other such strategies that will never get you rich (nowadays) if you start out poor. Indeed, you'd find more creativity and have better luck getting rich in the theories of a random shitposter on /r/wallstreetbets.

Or take a zero-sum-ish system, something like dating. I hear the wildest of ideas, models and plans, from unusual wardrobe choices to texting strategies, from people that are.... less than successful at attracting the other gender. But then when you talk to people that never in their life had an issue getting laid (i.e. pretty&charismatic people), they seem not to have spared a thought about how to be attractive to the other gender or how to "pick up" someone or anything along those lines. They operate on a very "standard" model that's basically "don't worry too much about it, you'll end up finding the right person".

I think you can find many such examples. To put a post-structuralist spin on it: "People with power in a given system will have a very standard view of said system". In a lot of systems, the more power you hold, the easier it is to make the system work for you. The easier it is to make the system work for you, the less sophisticated or counter-intuitive your model of the system has to be, since you're not looking for any "exploits"; you can just let things take their course and you will be well-positioned barring any very unusual events. Whereas the less power you have, the more complex and unique your model of the system will have to be, since you are actively looking for said exploit to gain power in the system.

Wednesday, January 8th 2020

Personal Blogposts
Shortform [Beta]
7 · George · 9d

Having read more AI alarmist literature recently, as someone who strongly disagrees with the subject, I think I've come up with a decent classification for them based on the fallacies they commit.

There's the kind of alarmist that understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of "PAC learning basically proves that with enough computational resources AGI will take over the universe". <I actually wrote an article trying to argue against this position [https://www.lesswrong.com/posts/brYdjKffszjuyzb9c/artificial-general-intelligence-is-here-and-it-s-useless], the LW crosspost of which gave me the honor of having the most down-voted featured article in this forum's history>

But I think that my disagreement with this first class of alarmist is not very fundamental; we can probably agree on a few things such as:

1. In principle, the kind of intelligence needed for AGI is a solved problem; all that we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.

These types of alarmists would probably agree with me that, if we found out a way to magically multiply two arbitrary tensors 100x faster than we do now, for the same electricity consumption, that would constitute a great leap forward.

But the second kind are the ones that scare/annoy me most, because they are the kind that don't seem to really understand machine learning. Which results in them being surprised that machine learning models are able to do what it has been uncontroversially established, for decades, that machine learning models could do. The not-so-caricatured representation of this position is: "Oh no, a 500,000,000-parameter model designed for {X} can outperform a 20KB de
2 · ozziegooen · 9d

Prediction evaluations may be best when minimally novel

Imagine a prediction pipeline is resolved with a human/judgemental evaluation. For instance, a group today starts predicting what a trusted judge 10 years from now will say for the question, "How much counterfactual GDP benefit did policy X make, from 2020-2030?" So, there are two stages:

1. Prediction
2. Evaluation

One question for the organizer of such a system is how many resources to delegate to the prediction step vs. the evaluation step. It could be expensive to pay for both predictors and evaluators, so it's not clear how to weigh these steps against each other. I've been suspecting that there are methods to be stingy with regard to the evaluators, and I have a better sense now why that is the case.

Imagine a model where the predictors gradually discover information I_predictors about I_total, the true ideal information needed to make this estimate. Imagine that they are well calibrated, and use the comment sections to express their information when predicting.

Later the evaluator comes by. Because they could read everything so far, they start with I_predictors. They can use this to calculate Prediction(I_predictors), although this should have already been estimated from the previous predictors (a la the best aggregate). At this point the evaluator can choose to attempt to get more information, I_evaluation > I_predictors. However, if they do, the resulting probability distribution would already be predicted by Prediction(I_predictors). Insofar as the predictors are concerned, the expected value of Prediction(I_evaluation) should be the same as that of Prediction(I_predictors), assuming that Prediction(I_predictors) is calibrated; except for the fact that it will have more risk/randomness. Risk is generally not a desirable property.

I've written about similar topics in this post [https://www.lesswrong.com/posts/Df2uFGKtLWR7jDr5w/ozziegooen-s-shortform?commentId=qFNMQJTYzfTYJakbM].

Therefore, the p
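A toy simulation of the calibration argument (my own construction, with assumed distributions, not from the original shortform): when the predictors' estimate is calibrated, the evaluator's better-informed estimate has the same expected value conditional on the predictors' information, just more spread.

```python
import numpy as np

# Toy model: the quantity being judged is a latent rate p ~ Uniform(0, 1).
# Predictors see 10 Bernoulli(p) observations; the evaluator later sees those
# 10 plus 90 more. Both report the posterior mean under a Beta(1, 1) prior.
rng = np.random.default_rng(0)
trials = 200_000

p = rng.random(trials)                 # latent truth
pred_obs = rng.binomial(10, p)         # evidence available to the predictors
eval_extra = rng.binomial(90, p)       # extra evidence the evaluator gathers

pred_estimate = (1 + pred_obs) / (2 + 10)
eval_estimate = (1 + pred_obs + eval_extra) / (2 + 100)

# Condition on one particular predictor state, e.g. 7 successes out of 10.
mask = pred_obs == 7
print("Prediction(I_predictors)            :", pred_estimate[mask].mean())
print("E[Prediction(I_evaluation)] given it:", eval_estimate[mask].mean())
print("std of Prediction(I_evaluation)     :", eval_estimate[mask].std())
# The two means agree (calibration / law of total expectation), but the
# evaluator's estimate is spread out: the extra information adds randomness,
# not a predictable shift, from the predictors' point of view.
```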
1 · Hysteria · 9d

I'm still mulling over the importance of Aesthetics. Raemon's writing really set me on a path I should've explored much much earlier. And since all good paths come with their fair share of coincidences, I found this essay [https://meltingasphalt.com/a-natural-history-of-beauty/] to also mull over.

Perhaps we can think of Aesthetics as the grouping of desires and things we find beautiful (and thus we desire and work towards), in a spiritual/emotional/inner sense?
