Shortform Content [Beta]

Toon Alfrink's sketchpad

So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true: we have subagents that are more like the latter, and subsystems more like the former.

But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency. Take, for example, romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether y

... (read more)

Well, it sounds to me like it's more of a heterarchy than a hierarchy, but yeah.

Mark_Friedenbach (16h): That's a fully generic response, though. Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa. Let me put it more simply: can achieving "self-determination" alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately from any "ultimate" goal of self-determination. That's the sense in which I considered it obvious we don't have singular goal systems.
mr-hire (15h): Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal. Maybe; it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe, for instance, that eating wouldn't actually lead to those higher goals, you would stop. I think this is pretty unlikely. But again, I'm not sure.
tragedyofthecomments's Shortform

I often see people making statements that sound to me like... "The entity in charge of bay area rationality should enforce these norms." or "The entity in charge of bay area rationality is bad for allowing x to happen."

There is no entity in charge of bay area rationality. There's a bunch of small groups of people that interact with each other sometimes. They even have quite a bit of shared culture. But no one is in charge of this thing, there is no entity making the set of norms for rationalists, there is no one you can outsource the building of your desired group to.

Raemon (1d): I assume tragedy is referring to roughly that sort of statement, and inferring something about how the statement comes across or what it sounds like the person is imagining. I think "the bay area should" is a somewhat confused statement, or one that comes from a mistaken sense of what's going on. And there's a particular flavor of frustration that comes from thinking that there's actually some entity that has the power to do stuff, which doesn't exist; I think if you properly understood that the entity doesn't exist, you'd do some combination of "redirecting your energy towards things that are more likely to fix the problem" or "realizing that being frustrated in the particular way that you are isn't actually helping." (Where I think "things that might actually work" are "refactor your social environment into something that has boundaries and goals, and figure out how to be a leader." The main problem is that the Bay Area is leadership-bottlenecked, and that generally competent people are rare and the world is big, with many problems competing for their attention.)
mr-hire (1d): I actually think it's quite useful to make a statement like "Man, it would be great if the community would..." I think it's a strawman to translate this to "I want the all-powerful entity that runs the community to..." And I think it stems from an attitude that "You shouldn't complain about problems if you don't have real solutions," which seems wrong to me. People pointing out problems even when they don't have solutions is useful. People pointing out better equilibria even if they don't have plans to get there is also useful. A lot of the time this complaint seems to be hiding a deeper complaint, which is "You pointing out problems without solutions makes me stressed and frustrated." That's OK to state, but I also get this sense of, "OK, but that's not really the problem of the person who pointed it out; learn to handle your own emotional reactions."

Raemon is correct in surmising the thing I was pointing to.

mr-hire, I think both kinds of statements exist and I agree it can sometimes be useful to imagine what things a community as a group can do.

I wasn't complaining about pointing-out-problems-without-solutions. Not everyone who makes "The community should..." statements is making a top-down argument, but I think some are, and I expect people thinking of entities in charge to become increasingly frustrated by the lack of top-down coordination.

Recognizing the lack of top-down coordination won't solve

... (read more)
MichaelA's Shortform

Ways of describing the “trustworthiness” of probabilities

While doing research for a post on the idea of a distinction between “risk” and “(Knightian) uncertainty”, I came across a surprisingly large number of different ways of describing the idea that some probabilities may be more or less “reliable”, “trustworthy”, “well-grounded”, etc. than others, or things along those lines. (Note that I’m referring to the idea of different degrees of trustworthiness-or-whatever, rather than two or more fundamentally different types of probability that vary in trustwo

... (read more)
jacobjacob's Shortform Feed

I saw an ad for a new kind of pant: as stylish as suit pants, but as flexible as sweatpants. I didn't have time to order them right then. But I saved the link in a new tab in my clothes database -- an Airtable that tracks all the clothes I own.

This crystallised some thoughts about external systems that have been brewing at the back of my mind. In particular, thoughts about the gears-level principles that make some of them useful and powerful.

When I say "external", I am pointing to things like spreadsheets, apps, databases, organisations, notebooks, institutions... (read more)

ozziegooen's Shortform

One question around the "Long Reflection" or around "What will AGI do?" is something like: "How bottlenecked will we be by scientific advances that we'll then need to spend significant resources on?"

I think some assumptions that this model typically holds are:

  1. There will be decision-relevant unknowns.
  2. Many decision-relevant unknowns will be EV-positive to work on.
  3. Of the decision-relevant unknowns that are EV-positive to work on, these will take between 1% and 99% of our time.

(3) seems quite uncertain to me in the steady state. I believe it makes an intuitiv

... (read more)
Hazard's Shortform Feed

So a thing Galois theory does is explain:

Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)?

Which makes me wonder: would there be a formula if you used more machinery than the normal stuff and radicals? What does "more than radicals" look like?

I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which is indeed more than radicals. There probably are more roundabout ways to do it, though.
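For what it's worth, there's a classical answer to what "more than radicals" can look like for the quintic specifically (a standard result, sketched here from memory rather than taken from the thread): adjoin a single new function, the Bring radical. Any quintic can be reduced, by Tschirnhaus transformations that use only radicals, to Bring-Jerrard form, which the Bring radical then solves:

```latex
% Bring radical: define BR(a) as the unique real root of
\[
x^5 + x + a = 0 ,
\]
% which exists and is unique since x^5 + x + a is strictly increasing.
% Radicals together with $\operatorname{BR}(\cdot)$ then solve every
% degree-five polynomial. Hermite (1858) alternatively solved the
% general quintic using elliptic modular functions.
```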

rmoehn's Shortform

Updated the Predicted AI alignment event/meeting calendar.

  • Application deadline for the AI Safety Camp Toronto extended. If you've missed it so far, you still have until the 19th to apply.
  • Apparently no AI alignment workshop at ICLR, but another somewhat related one.
ofer's Shortform

It seems likely that as technology progresses and we get better tools for finding, evaluating and comparing services and products, advertising becomes closer to a zero-sum game between advertisers and advertisees.

The rise of targeted advertising and machine learning might cause people who care less about their privacy (e.g. people who are less averse to giving arbitrary apps access to a lot of data) to be increasingly at a disadvantage in this zero-sum-ish game.

Also, the causal relationship between 'being a person who is likely to pay above-market prices' and 'being offered above-market prices' may gradually become stronger.

I crossed out the 'caring about privacy' bit after reasoning that the marginal impact of caring more about one's privacy might depend on potential implications of things like "quantum immortality" (that I currently feel pretty clueless about).

TurnTrout's shortform feed

While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book".

I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

If the book does have exercises, it'll take more time. That means I'm spending reading time on things that aren't math textbooks. That means I'm slowing d

... (read more)
Hazard's Shortform Feed

Looking at my calendar over the last 8 months, it looks like my attention span for a project is about 1-1.5 weeks. I'm musing on what it would look like to lean into that. Have multiple projects at once? Work extra hard to ensure I hit save points before the weekends? Only work on things in week-long bursts?


I'm noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what's loaded into my mind the next day. Smaller than the week level, I'm noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything right now, what would it be?"

This week on Tuesday I picked Wednesday as the day I was going to write a sketch. But because of something I was thinking before going to bed, on Wednesday... (read more)

Raemon (5mo): The target audience for Hazardous Guide is friends of yours, correct? (I vaguely recall that.) A thing that normally works for writing is that after each chunk, I get to publish a thing and get comments. One thing about Hazardous Guide is that it mostly isn't new material for LW veterans, so I could see it getting less feedback than average. You might be able to address that by actually showing parts to friends, if you haven't.
Hazard (5mo): Ooo, good point. I was getting a lot less feedback from them than from other things. There's one piece of feedback which is "am I on the right track?" and another that's just "yay, people are engaging!", both of which seem relevant to motivation.
eigen's Shortform

I've heard some critiques of the parts of the sequences concerning Quantum Mechanics and Consciousness; but I always considered those a demonstration of applied rationality, say, "How do we get to the correct answer by applying what we've learned?"

This is way more obvious and way more clear in Inadequate Equilibria. Take a problem, a question, and deconstruct it completely. It was concise and to the point; I think it's one of the best things Eliezer has written, and I cannot recommend it enough.

agai (24d): Comment removed for posterity.


[This comment is no longer endorsed by its author]
George's Shortform

Having read more AI alarmist literature recently, as someone who strongly disagrees with it, I think I've come up with a decent classification of alarmists based on the fallacies they commit.

There's the kind of alarmist that understands how machine learning works but commits the fallacy of assuming that data-gathering is easy and that intelligence is very valuable. The caricature of this position is something along the lines of "PAC learning basically proves that with enough computational resources AGI will take over the universe".... (read more)

TAG (6d): And the thing I said that isn't factually correct is...
derived from fairy tales.

(This is arguably testable.)

Mark_Friedenbach (6d): The only thing factually incorrect is your implied assumption that voting has anything to do with truth assessment here ;)
romeostevensit's Shortform

You can't straightforwardly multiply uncertainties from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things, e.g. the difference between the mean of a normal, a bimodal, and a fat-tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.

romeostevensit (8d): (I guess this is why Guesstimate exists.)
Pattern (6d): How does Guesstimate help?

Guesstimate propagates full distributions for you.
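To make the contrast concrete, here's a minimal Monte Carlo sketch (my own illustration, assuming numpy; the two inputs are made up) of what propagating full distributions buys you over multiplying point estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent inputs with very different shapes:
a = rng.normal(10.0, 1.0, n)           # thin-tailed (normal)
b = np.exp(rng.normal(0.0, 1.5, n))    # fat-tailed (lognormal)

cost = a * b  # propagate by multiplying samples, not point estimates

# With independent inputs the naive product of means matches the mean of
# the product, but it says nothing about the skew or spread of the result:
print(np.mean(a) * np.mean(b))          # naive point estimate (~31)
print(np.median(cost))                  # typical outcome (~10, much lower)
print(np.percentile(cost, [5, 95]))     # the 90% interval the point estimate hides
```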

ozziegooen's Shortform

Would anyone here disagree with the statement:

Utilitarians should generally be willing to accept losses of knowledge / epistemics for other resources, conditional on the expected value of the trade being positive.


Non-Bayesian utilitarians who are ambiguity-averse sometimes need to sacrifice "expected utility" to gain more certainty (in quotes because it need not be well defined).

AprilSR (8d): If you have epistemic terminal values, then it would not be a positive-expected-value trade, would it? Unless "expected value" is referring to the expected value of something other than your utility function, in which case it should've been specified.
ozziegooen (7d): Yep, I would generally think so. I was doing what may be a poor steelman of my assumptions of how others would disagree; I don't have a great sense of what people who would disagree would say at this point.
George's Shortform

I'm wondering if, in a competitive system with intelligent agents, regression to the mean is to be expected when one accumulates enough power.

Thinking about the business and investment strategies that a lot of rich people advocate, they seem kinda silly to me, in that they match the mental model of the economy that someone who never really bothered studying the market would have. It's stuff like "just invest in safe index funds" and other such strategies that will never get you rich (nowadays) if you start out poor. Indeed, you'd fi... (read more)

Regression to the mean is going to happen in any system with a large random (or anti-inductive and unpredictable) component. That doesn't seem to be what you're talking about, though. You seem to be talking about variance and declining marginal utility (or rather, exceptional cases where marginal utility is actually increasing).

Nobody got rich in retail investing. A lot of people stayed comfortable for longer than they otherwise would have, but to paraphrase the old saying, the best way to make a million in the stock market is to start with 50 million... (read more)

Hysteria's Shortform

I'm still mulling over the importance of Aesthetics. Raemon's writing really set me on a path I should've explored much, much earlier.

And since all good paths come with their fair share of coincidences, I found this essay to also mull over.


Perhaps we can think of Aesthetics as the grouping of desires and things we find beautiful (and thus desire and work towards), in a spiritual/emotional/inner sense?

Dagon (9d): Why limit it to a spiritual/emotional/inner realm? Except to the extent that all values are (a perfectly reasonable belief, IMO). Are aesthetic preferences any different from other desiderata? If I don't like people's suffering, is that not an aesthetic choice? What choices and terminal values would you call NOT aesthetic?

I prefer to think of Aesthetics as a less rational, more monkey-brain part of us. A lot of the things we find beautiful come from basic instincts about what is good or bad for our survival and reproduction: healthy food, safe places, good partners, etc.

I would rationalize that finding suffering ugly is in a similar vein to finding skin boils ugly; they're indicators of disease, unsafe land, unsafe conditions, bad things, and so on.

Going with the "people's suffering" take, perhaps wanting to act on immediate, in-your-eyes suffering is an aesthet... (read more)

ozziegooen's Shortform

Perhaps resolving forecasts with expert probabilities can be better than resolving them with the actual events.

The default in the literature on prediction markets and decision markets is to expect that resolutions should be real-world events rather than probabilistic estimates by experts. For instance, people would predict "What will the GDP of the US be in 2025?", and that would be scored using the future "GDP of the US." Let's call these empirical resolutions.

These resolutions have a few nice properties:

  1. We can expect them to be roughly calibrated. (
... (read more)
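To make the mechanics concrete, here is a small sketch of my own (not from the post): with an empirical resolution a forecast is scored against the 0/1 outcome, while with a judgmental resolution it can be scored against the experts' probability, e.g. via the expected Brier score under that probability.

```python
def brier_empirical(forecast: float, outcome: int) -> float:
    # Standard Brier score against a 0/1 outcome.
    return (forecast - outcome) ** 2

def brier_vs_experts(forecast: float, p_expert: float) -> float:
    # Expected Brier score if the "true" chance were the experts' p:
    # E[(f - X)^2] with X ~ Bernoulli(p) = (f - p)^2 + p(1 - p).
    return (forecast - p_expert) ** 2 + p_expert * (1 - p_expert)

print(brier_empirical(0.7, 1))     # ~0.09
print(brier_vs_experts(0.7, 0.8))  # ~0.17
```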

Here is another point by @jacobjacob, which I'm copying here in order for it not to be lost in the mists of time:

Though just realised this has some problems if you expected predictors to be better than the evaluators: e.g. they're like "once the event happens everyone will see I was right, but up until then no one will believe me, so I'll just lose points by predicting against the evaluators"

Maybe in that case you could eventually also score the evaluators based on the final outcome… or kind of re-compensate people who were wronged the first time…
ozziegooen (9d): Good points! Also, thanks for the link; that's pretty neat. I think that an efficient use of expert assessments would be for them to see the aggregate, and then basically adjust that as is necessary, but to try not to do much original research. I just wrote a more recent shortform post about this. I think that we can get calibration to be as good as experts can figure out, and that could be enough to be really useful.
ozziegooen (9d): I'm not sure. The reasons things happen at the tails typically fall into categories that could be organized into a small set. For instance:

  • The question wasn't understood correctly.
  • A significant exogenous event happened.

But as we do a bunch of estimates, we could get empirical data about these possibilities and estimate the potential for future tails. This is a bit different from what I was mentioning, which was more about known but small risks. For instance, "the amount of time I spend on my report next week" may be an outlier if I die. But the chance of serious accident or death can be estimated decently well. These are often repeated known knowns.
FactorialCode's Shortform

I've been thinking about arxiv-sanity lately and I think it would be cool to have a sort of "LW papers" where we share papers that are relevant to the topics discussed on this website. I know that's what link posts are supposed to be for, but I don't want to clutter up the main feed. Many of the papers I want to share are related to the topics we discuss, but I don't think they're worthy of their own posts.

I might start linking papers in my short-form feed.

I've also been thinking about this. I think link-posts are a good first step, and maybe we should make more link-posts for papers we find interesting. But one issue I have with LW is that it's pretty blog-like (similar to Reddit and Hacker News), so it could be difficult for old papers to accumulate reviews and comments over a long period of people reading them.

Raemon (11d): FWIW I think link-posts are just fine for this, although I'm not sure I understand exactly what your goal is.
ozziegooen's Shortform

Prediction evaluations may be best when minimally novel

Imagine a prediction pipeline is resolved with a human/judgemental evaluation. For instance, a group today starts predicting what a trusted judge 10 years from now will say for the question, "How much counterfactual GDP benefit did policy X make, from 2020-2030?"

So, there are two stages:

  1. Prediction
  2. Evaluation

One question for the organizer of such a system is how many resources to delegate to the prediction step vs. the evaluation step. It could be expensive to pay for both predictors and evaluators,

... (read more)