All of Ben Pace's Comments + Replies

Alex Flint on "A software engineer's perspective on logical induction"

I would say that, if-and-only-if it's still alive for Alex, I'd enjoy him just writing down the basic things he said in his talk in like a couple of paragraphs, both the preamble at the top and his 4 slides or so.

Against "Context-Free Integrity"

If we make strong claims driven by emotions, then we should make sure to also defend them in less emotionally loaded ways, in a way which makes them compelling to someone who doesn't share these particular emotions.

Restating this in the first person, this reads to me as "On the topics where we strongly disagree, you're not supposed to say how you feel emotionally about the topic if it's not compelling to me." This is a bid you get to make and it will be accepted/denied based on the local social contract and social norms, but it's not a "core skill of ratio... (read more)

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

At this point, my plan is to try to consolidate what I think are the main confusions in the comments of this post into one or more new concepts to form the topic of a new post.

Sounds great! I was thinking myself about setting aside some time to write a summary of this comment section (as I see it).

Against "Context-Free Integrity"

I don't know why you want 'dispassion'; emotions are central to how I think and act and reason, and this is true for most rationalists I know. I mean, you say it's mindkilling, and of course there's that risk, but you can't just cut off the domain of emotion, and I will not pander to readers who cannot deal with their own basic emotions.

When I say Facebook is evil, I straightforwardly mean that it is trying to hurt people. It is intentionally aiming to give millions of people an addiction that makes their lives worse and their communities worse. Zuckerber... (read more)

8 Richard_Ngo 3d: I should be clear here that I'm talking about a broader phenomenon, not specifically your writing. As I noted above, your post isn't actually a central example of the phenomenon. The "tribal loyalties" thing was primarily referring to people's reactions to the SSC/NYT thing. Apologies if it seemed like I was accusing you personally of all of these things. (The bits that were specific to your post were mentions of "evil" and "disgust".)

Nor am I saying that we should never talk about emotions; I do think that's important. But we should try to also provide argumentative content which isn't reliant on the emotional content. If we make strong claims driven by emotions, then we should make sure to also defend them in less emotionally loaded ways, in a way which makes them compelling to someone who doesn't share these particular emotions. For example, in the quotation you gave, what makes science's principles "fake" just because they failed in psychology? Is that person applying an isolated demand for rigour because they used to revere science? I can only evaluate this if they defend their claims more extensively elsewhere.

On the specific example of facebook, I disagree that you're using evil in a central way. I think the central examples of evil are probably mass-murdering dictators. My guess is that opinions would be pretty divided about whether to call drug dealers evil (versus, say, amoral); and the same for soldiers, even when they end up causing a lot of collateral damage. Your conclusion that facebook is evil seems particularly and unusually strong because your arguments are also applicable to many TV shows, game producers, fast food companies, and so on. Which doesn't make those arguments wrong, but it means that they need to meet a pretty high bar, since either facebook is significantly more evil than all these other groups, or else we'll need to expand the scope of words like "evil" until they refer to a significant chunk of society (which would be quite dif
Against "Context-Free Integrity"

I had an interesting conversation with Zvi about in which societies it was easiest to figure out whether the major societal narratives were false. It seemed like there were only a few major global narratives in earlier times, whereas today I feel like there are a lot more narratives flying around me.

Against "Context-Free Integrity"

"Constant vigilance, eh, lad?" said the man.

"It's not paranoia if they really are out to get you," Harry recited the proverb.

The man turned fully toward Harry; and insofar as Harry could read any expression on the scarred face, the man now looked interested.

Though, my point is that, just like Moody, a person who is (correctly) constantly looking out for power-plays and traps will end up seeing many that aren't there, because it's a genuinely hard problem to figure out whether specific people are plotting against you.

Against "Context-Free Integrity"

You should totally be less careful. On Twitter, if you say something that can be misinterpreted, sometimes over a million people see it and someone famous tells them you're an awful person. I say sometimes, but I more mean "this is the constant state and is happening thousands of times per day". Yes, if you're not with your friends and allies and community, if you're in a system designed to take the worst interpretation of what you say and amplify it in the broader culture with all morality set aside, be careful.

Here on LW, I don't exercise that care to anythi... (read more)

I think we're talking past each other a little, because we're using "careful" in two different senses. Let's say careful1 is being careful to avoid reputational damage or harassment. Careful2 is being careful not to phrase claims in ways that make it harder for you or your readers to be rational about the topic (even assuming a smart, good-faith audience).

It seems like you're mainly talking about careful1. In the current context, I am not worried about backlash or other consequences from failure to be careful1. I'm talking about careful2. When you "aim to ... (read more)

(I have some disagreements with this. I think there's a virtue Ben is pointing at (and which Zvi and others are pointing at), which is important, but I don't think we have the luxury of living in the world where you get to execute that virtue without also worrying about the failure modes Richard is worried about)

Against "Context-Free Integrity"

There are many forces and causes that lead the use of deontology and virtue ethics to be misunderstood and punished on Twitter, and this is part of the reason that I have not participated in Twitter these past 3-5 years. But don't confuse these with the standards for longer form discussions and essays. Trying to hold your discussions to Twitter standards is a recipe for great damage to one's ability to talk, and ability to think.

8 Richard_Ngo 4d: I'm saying we should strive to do better than Twitter on the metric of "being careful with strongly valenced terminology", i.e. being more careful. I'm not quite sure what point you're making - it seems like you think it'd be better to be less careful? In any case, the reference to Twitter was just a throwaway example; my main argument is that our standards for longer form discussions on LessWrong should involve being more careful with strongly valenced terminology than people currently are.
Against "Context-Free Integrity"

I wanted to convey (my feeling of) the standard use of the word.

(of a person or action) showing a lack of experience, wisdom, or judgment.

"the rather naive young man had been totally misled"

I actually can imagine a LWer making that same argument but not out of naivete, because LWers argue earnestly for all sorts of wacky ideas. But what I meant was that it also feels to me like the sort of thing I might've said in the past, when I had not truly seen the mazes in the world, nor had my hard work thrown in my face, or had some other experience like that where my standard tools had failed me.

4 Hazard 4d: Dope, it was nice to check and see that contrary to what I expect, it's not always being used that way :)

Some idle musings on using naive to convey specific content. Sometimes I might want to communicate that I think someone's wrong, and I also think they're wrong in a way that's only likely to happen if they lack experience X. Or similar, they are wrong because they haven't had experience X. That's something I can imagine being relevant and something I'd want to communicate. Though I'd specifically want to mention the experience that I think they're lacking. Otherwise it feels like I'm asserting "there just is this thing that is being generally privy to how things work" and you can be privy or not, which feels like it would pull me away from looking at specific things and understanding how they work, and instead towards trying to "figure out the secret". (This is less relevant to your post, because you are actually talking about things one can do)

There's another thing which is in between what I just mentioned, and "naive" as a pure intentional put-down. It's something like "You are wrong, you are wrong because you haven't had experience X, and everyone who has had experience X is able to tell that you are wrong and haven't had experience X." The extra piece here is the assertion that "there are many people who know you are wrong". Maybe those many people are "us", maybe not. I'm having a much harder time thinking of an example where that's something that's useful to communicate, and it is too close to asserting group pressure for my liking.
Against "Context-Free Integrity"

I hadn't thought of that. Not sure whether it's the same thing, but thanks for the comment. 

Against "Context-Free Integrity"

Just after posting this on "Context-Free Integrity", I checked Marginal Revolution and saw Tyler's latest post was on "Free-Floating Credibility". These two terms feel related...

The kabbles are strong tonight.

"Taking your environment as object" vs "Being subject to your environment"

I reflected on it some more, and decided to change the title.

Open and Welcome Thread - April 2021

Welcome Max :) I hope you find deeply worthwhile things to read.

1 Max Warburton 7d: Thank you! I already have :) Maybe some day, who knows, I'll be able to add something to the conversation that has been going on here for years.
My Current Take on Counterfactuals

I've felt like the problem of counterfactuals is "mostly settled" for about a year, but I don't think I've really communicated this online.

Wow that's exciting! Very interesting that you think that.

4 abramdemski 6d: Now I feel like I should have phrased it more modestly, since it's really "settled modulo math working out", even though I feel fairly confident some version of the math should work out.
I'm from a parallel Earth with much higher coordination: AMA

Yes, all societies are identical except insofar as what the officials pretend about them. People in very religious societies are having just as much sex as in modern secular societies; they just do it in a way that allows officials to pretend it doesn't exist.

I'm from a parallel Earth with much higher coordination: AMA

Welcome! (For my part. Eliezer can say “you’re welcome” for the blessing.)

Rationalism before the Sequences

I've curated this essay[1].

Getting a sense of one's own history can be really great for having perspective. The primary reason I've curated this is that the post really helped give me perspective on the history of this intellectual community, and I imagine it did the same for many other LWers.

I wouldn't have been able to split it into "General Semantics, analytic philosophy, science fiction, and Zen Buddhism" as directly as you did, nor would I know which details to pick out. (I would've been able to talk about sci-fi, but I wouldn't quite know how to relate the r... (read more)

9 Eric Raymond 11d: Eliezer was more influenced by probability theory, I by analytic philosophy, yes. These variations are to be expected.

I'm reading Jaynes now and finding him quite wonderful. I was a mathematician at one time, so that book is almost comfort food for me - part of the fun is running across old friends expressed in his slightly eccentric language.

I already had a pretty firm grasp on Feynman's "first-principles approach to reasoning" by the time I read his autobiographical stuff. So I enjoyed the books a lot, but more along the lines of "Great physicist and I think alike! Cool!" than being influenced by him. If I'd been able to read them 15 years earlier I probably would have been influenced.

One of the reasons I chose a personal, heavily narratized mode to write the essay in was exactly so I could use that to organize what would otherwise have been a dry and forbidding mass of detail. Glad to know that worked - and, from what you don't say, that I appear to have avoided the common "it's all about my feelings" failure mode of such writing.
I'm from a parallel Earth with much higher coordination: AMA

I feel like the first two are enforceable with culture. For example, I think many Muslim countries have a lot of success at preventing pornography (or at least, they did until the internet, which notably dath ilan seems to not quite have). I also have a sense that many people with severe mental/physical disabilities are implicitly treated as though they won't have children in our culture, and as a result often do not. But I agree it's hard to do it ethically, and neither of the aforementioned ways is done very ethically in our civilization IMO.

For the latt... (read more)

1 Olivier Faure 11d: Citation needed. My default assumption for any claims of that sort is "they had a lot of success at concealing the pornography that existed in such a way that officials can pretend it doesn't exist".
Rationalism before the Sequences

(Here are some of my thoughts, reading through.)

Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora.

It's strange; I don't feel the fog much in my life. I wonder if this is a problem. It doesn't seem like I should feel like "I and everyone around me basically know what's going on".

I can imagine certain people for whom talking to them would feel like a flash of light in the fog. I probably ... (read more)

Reflective Bayesianism

The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

—Eliezer Yudkowsky, Twitter

I'm from a parallel Earth with much higher coordination: AMA

"We have trained GPT-3 on all of reddit, and unleashed it for the population to use. Here are the freaking weird and beautiful and terrifying things that happened." vs "No we didn't do that because we're more careful and sensible."

I'm from a parallel Earth with much higher coordination: AMA

Yeah, I'm confused when I think about who has the weirder society. dath ilan has more global guardrails yet invests more into experiments. We've got fewer guardrails, but also a load of random "you can't sell that" rules. I think (?) that reddit doesn't exist in their world, or the free sex movement you mention, etc. So in some ways we're less ethically constrained, allowing us to find weird niches.

8 Rob Bensinger 12d: 'You're allowed to do what you want, provided you don't learn anything or produce wealth in the process', vs. 'You're allowed to do what you want, provided you learn something or produce wealth in the process'.
Dark Matters

Curated. This was an accessible yet technically precise overview of the evidence surrounding an open research area in physics / cosmology, and I'd like to see more of this sort of post on LW. I think that had almost anyone else tried this, they would have made a really long post with lots of hard technical math and it wouldn't have been understood by many, so thanks.

Why We Launched LessWrong.SubStack

LessWrong IPO... nice idea.

<puts it in a folder for safekeeping>

Why We Launched LessWrong.SubStack

(and brilliant point about cell phone bills)

Why We Launched LessWrong.SubStack

(absolutely great use of that link)

9 Ben Pace 17d: (and brilliant point about cell phone bills)
Why We Launched LessWrong.SubStack

Tut tut. It seems SubStack is in on the collusion too.

1 rasavatta 17d: haha, I guess that would be the most reasonable conclusion. :D
Why We Launched LessWrong.SubStack

Let’s see where we are in 24 hours.

It's April 2nd now. What was actually in the posts? (And did anyone actually pay the 1 BTC?)

3 rasavatta 18d: Thank you, Ben, for your swift reply. I noticed that you mentioned that a monthly subscription costs about 1 bitcoin or about 13 dollars, but according to Google [https://www.google.com/search?q=bitcoin+to+dollar&client=firefox-b-d&sxsrf=ALeKk01z0kqjSUU5ZDej80saqZOFUtJlew%3A1617280032250&ei=ILxlYJfEDq_1qwH3gYvwBQ&oq=bitcoin+to+dollar&gs_lcp=Cgdnd3Mtd2l6EAMyAggAMgIIADICCAAyBQgAEMkDMgIIADICCAAyAggAMgIIADICCAAyAggAOgcIABCwAxBDUOUhWI0nYJQpaAFwAngBgAHdDYgBxRiSAQszLTEuMS4xLjgtMZgBAKABAaoBB2d3cy13aXrIAQrAAQE&sclient=gws-wiz&ved=0ahUKEwiX1MvOhd3vAhWv-ioKHffAAl4Q4dUDCAw&uact=5] and Coindesk [https://www.coindesk.com/calculator] 1 bitcoin is about 58,000 dollars today. It is possible that I may have misunderstood your words or it is due to my lack of knowledge about cryptocurrency; I would appreciate it if you could dispel my doubts.
Why We Launched LessWrong.SubStack

I’m glad to hear that we’re such reliable executors :)

Why We Launched LessWrong.SubStack

Thank you. I would greatly enjoy more people sharing their takeaways from reading the posts.

I'm deeply confused by the cycle of references. What order were these written in?

In the HPMOR epilogue, Dobby (and Harry to a lesser extent) solve most of the world's problems using the 7-step method Scott Alexander outlines in "Killing Moloch" (ending, of course, with the "war to end all wars"). This strongly suggests that the HPMOR epilogue was written after "Killing Moloch".

However, "Killing Moloch" extensively quotes Muehlhauser's "Solution to the Hard Problem of Consciousness". (Very extensively. Yes Scott, you solved coordination problems, and des... (read more)

I only read the HPMOR epilogue because - let's be honest - HPMOR is what LessWrong is really for. 

(HPMOR spoilers ahead)

  • Honestly, although I liked the scene with Harry and Dumbledore, I would have preferred Headmaster Dobby not be present.
  • I now feel bad for thinking Ron was dumb for liking Quidditch so much. But with hindsight, you can see his benevolent influence guiding events in literally every single scene. Literally. It was like a lorry missed you and your friends and your entire planet by centimetres - simply because someone threw a Snitch at so
... (read more)
7 Viliam 17d: Just a public warning that the version of Scott's article that was leaked at SneerClub was modified to actually maximize human suffering. But I guess no one is surprised. Read the original version [https://lesswrong.substack.com/p/killing-moloch-much-more-than-you].

HPMOR -- obvious in hindsight; the rational Harry Potter has never [EDIT: removed spoiler, sorry]. Yet, most readers somehow missed the clues, myself included.

I laughed a lot at Gwern's "this rationality technique does not exist" examples.

On the negative side, most of the comment sections are derailed into discussing Bitcoin prices. Sigh. Seriously, could we please focus on the big picture for a moment? This is practically LessWrong 3.0, and you guys are nitpicking as usual.
Why We Launched LessWrong.SubStack

I had hoped the cheap price of bitcoin would allow everyone who wanted to to be a part of it, but I seem to have misjudged the situation!

Disentangling Corrigibility: 2015-2021

You're welcome. Yeah "invented the concept" and "named the concept" are different (and both important!).

Disentangling Corrigibility: 2015-2021

Here it is: https://www.facebook.com/yudkowsky/posts/10152443714699228?comment_id=10152445126604228

Rob Miles (May 2014):

Ok, I've given this some thought, and I'd call it:

"Corrigible Reasoning"

using the definition of corrigible as "capable of being corrected, rectified, or reformed". (And of course AIs that don't meet this criterion are "Incorrigible")

Thank you very much!  It seems worth distinguishing the concept invention from the name brainstorming, in a case like this one, but I now agree that Rob Miles invented the word itself.

The technical term corrigibility, coined by Robert Miles, was introduced to the AGI safety/alignment community in the 2015 MIRI/FHI paper titled Corrigibility.

Eg I'd suggest that to avoid confusion this kind of language should be something like "The technical term corrigibility, a name suggested by Robert Miles to denote concepts previously discussed at MIRI, was introduced..." &c.

Disentangling Corrigibility: 2015-2021

I'm 94% confident it came from a Facebook thread where you blegged for help naming the concept and Rob suggested it. I'll have a look now to find it and report back.

Edit: having a hard time finding it, though note that Paul repeats the claim at the top of his post on corrigibility in 2017.

5 Robert Miles 18d: Note that the way Paul phrases it in that post is much clearer and more accurate: > "I believe this concept was introduced in the context of AI by Eliezer and named by Robert Miles"


Rationalism before the Sequences

Hah, I was thinking of replying to say I was largely just repeating things you said in that post.

Nonetheless, thanks to both Kaj and Eric; I might turn it into a little post. It's not bad to have two posts saying the same thing (slightly differently).

Rationalism before the Sequences

The most important traits of the new humans are that... they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat

Interestingly, as a LessWronger, I don't think of myself in quite this way. I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments. Knowing your limits, and using that knowledge when making plans.

One that I... (read more)

7 Robert Miles 11d: It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation"
3 ryan_b 12d: I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:

While I agree, I also fully expect the list of environments in which we are able to think clearly should expand over time as the art advances. There are two areas where I think shaping the environment will fail as an alternative strategy: first is that we cannot advance the art's power over a new environment without testing ourselves in that environment; second is that there are tail risks to consider, which is to say we inevitably will have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.

I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident of the ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.

Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much sought after investor. In fact I feel like all of the discussion around entrepreneurship falls into this category - the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote it comes back as avoid environments with huge upside which clearly doesn't sca
4 Alex_Altair 16d: Similarly, for instrumental rationality, I've been trying to lean harder on putting myself in environments that induce me to be more productive, rather than working on strategies to stay productive when my environment is making that difficult.

Great point. A few (related) examples come to mind:

  • Paul Graham's essay The Top Idea in Your Mind. "I realized recently that what one thinks about in the shower in the morning is more important than I'd thought. I knew it was a good time to have ideas. Now I'd go further: now I'd say it's hard to do a really good job on anything you don't think about in the shower."
  • Trying to figure out dinner is the worst when I'm already hungry. I still haven't reached a level of success where I'm satisfied, but I've had some success with 1) planning out meals for the n
... (read more)
6 Kaj_Sotala 19d: I think this comment would make for a good top-level post almost as it is.
Rationalism before the Sequences

The way I have set this up for writers in the past has been to set up crossposting from an RSS feed under a tag (e.g. crossposting all posts tagged 'lesswrong').

I spent a minute trying to figure out how to make an RSS feed from your blog under a single category, and failed. But if you have such an RSS feed, and you make a category like 'lesswrong', then I'll set up a simple crosspost, and hopefully save you a little time in expectation. This will work if you add the category to old posts as well as new ones.

1 localdeity 19d: I recently learned of a free (donation-funded) service, siftrss.com, wherein you can take an RSS feed and do text-based filtering on any of its fields to produce a new RSS feed. (I've made a few feeds with it and it seems to work well.) I suspect you could filter based on the "category" field.
3 Eric Raymond 19d: There's a technical problem. My blog is currently frozen due to a stuck database server; I'm trying to rehost it. But I agree to your plan in principle and will discuss it with you when the blog is back up.
[REPOST] The Demiurge’s Older Brother

I'm pretty sure we back-dated it in a mass import at the start of LW 2.0, and that it never had its day on the frontpage (or its day on LW 1.0), and that's why it has low engagement. There are like 100 comments on the original.

"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

Oh woops, I realize I ended the call for everyone when I left. I'm sorry.

3 duck_master 21d: Don't worry, it was kind of a natural stopping point anyways, as the discussion was winding down.
"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

I understand that Infra-Bayesianism wants to be able to talk about hypotheses that do not describe the entire environment. (Like logical induction.) Something that just says “I think this particular variable is going to go up, but I don’t know how the rest of the world works.”

To do this, somehow magically using intervals over probabilities helps us. I understand it's trying to define a measure over multiple probability distributions, but I don't know quite how that maps to these convex sets, and would be interested in the basic relationship being outlined, or a link to the section that does it. (The 8 posts of math were scary and I didn't read them.)

4 DanielFilan 21d: A convex set is like a generalization of an interval.
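To make that interval picture concrete, here is a minimal toy sketch: treat an interval of probabilities for a single event as the convex set of all distributions consistent with it, and score a bet by its worst-case expectation over that set (i.e. let Murphy pick the distribution). This isn't taken from the sequence itself, which works with a-measures and richer machinery; it's just the simplest instance I can construct of "expectation becomes a minimum over a convex set".

```python
# Toy sketch, not from the infra-Bayesianism posts: an "interval" belief over one
# binary event, read as the convex set of all distributions consistent with it.
# A bet is scored by its worst-case expected utility over that set, i.e. Murphy
# gets to pick whichever distribution in the set is worst for you.

def worst_case_expected_utility(p_interval, u_if_true, u_if_false):
    """Minimum expected utility over all p(event) in [p_lo, p_hi]."""
    p_lo, p_hi = p_interval
    # Expected utility is linear in p, so the minimum over the convex set is
    # attained at an extreme point (here, an endpoint of the interval).
    return min(p * u_if_true + (1 - p) * u_if_false for p in (p_lo, p_hi))

# "I think this variable will go up, but I don't know how the rest of the world
# works": commit only to p(up) lying somewhere in [0.6, 0.9].
print(worst_case_expected_utility((0.6, 0.9), u_if_true=10, u_if_false=-5))
# -> 4.0, since Murphy picks p = 0.6, the worst distribution in the set for this bet.
```

As I understand it, the full theory replaces the interval with a convex set of (sa-)measures over whole environments, but the "evaluate against the worst member of your set" shape is the same.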
"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

I am quite interested to get a first-person sense of what it feels like from the inside to be an Infra-Bayesian. In particular, is there a cognitive thing I already do, or should try, that maps to this process for dealing with having measure over different probability distributions?

1 duck_master 21d: I think that if you imagine the deity Murphy trying to foil your plans whatever you do, that gives you a pretty decent approximation to true infraBayesianism.
"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

Here are some questions and confusions we had during the event.

1 f____ 21d: Confusion about what Solomonoff priors can't do:

  • "Even bits are all zero, odd bits are random": The Turing machine that writes zero to all even bits and writes some hardcoded string to all odd bits is simpler than the Turing machine that writes one long hardcoded string, so it seems to me that the Solomonoff prior should learn that the even bits are all zero.
    • The discussion there seemed to bleed into "what if the string of odd bits is uncomputable", which I think of as a separate field of confusion, so I'm still confused what intuition this example is supposed to be pumping exactly.
  • "Uncomputable priors": The simplest uncomputable prior I can think of would be "the nth bit is 1 iff the nth Turing machine halts". But the Turing machine that tries to run the nth Turing machine for 10^10 steps and writes 1 if it halts, and otherwise writes 0 unless n is in some hardcoded list, is reasonably simple, so it seems to me that the Solomonoff prior should learn this kind of thing to a reasonable degree.
    • This works finitely long, but eventually the Solomonoff prior won't be able to be confident in what the next bit is. But to me it's not obvious how we could do better than that, given that this is inherently computationally expensive.
  • Priors like "Omega predicts my action": I have no idea what a Solomonoff prior does, but I also have no idea what infra-Bayesianism does. Specifically, I'm not sure if there's some specific way that infra-Bayesianism learns this hypothesis (and whether it can infer it from observations or whether you have to listen to Omega telling you that they predict your action).
3 AlexMennen 21d: The Nirvana trick seems like a cheap hack, and I'm curious if there's a way to see it as good reasoning. One response to this was that predicting Nirvana in some circumstance is equivalent to predicting that there are no possible futures in that circumstance, which is a sensible thing to say as a prediction that that circumstance is impossible.
1 LawChan 21d: Confusion that was (partially, maybe?) resolved: how exactly does infra-Bayesianism help with realizability. Since hypotheses in infra-Bayesianism are convex sets over probability distributions, hypotheses can cover non-computable cases, even though each individual probability distribution is computable. For example, you might be able to have a computable hypothesis that all the even bits are 0s, which covers the case where the odd bits are uncomputable, even though no computable probability distribution in the hypotheses can predict the odd bits (and will thus eventually be falsified).
6 DanielFilan 21d: There are a few equivalent ways to view infra-distributions:

  • single infra-distribution
  • mixture of infra-distributions
  • concave functional

So far, only the 'mixture of infra-distributions' view really makes sense to me in my head. Like, I don't know how else I'd design/learn an infra-distribution. So that's a limitation of my understanding.
3 duck_master 21d: Google doc where we posted our confusions/thoughts earlier: https://docs.google.com/document/d/1lKG_y_Voe02OkRGG9yaxtMuGM_dQBUKjj9DXTA8rMxE/edit

My ongoing confusions/thoughts:

  • What if the super intelligent deity is less than maximally evil or maximally good? (E.g. the deity picking the median-performance world)
  • What about the dutch-bookability of infraBayesians? (the classical dutch-book arguments seem to suggest pretty strongly that non-classical-Bayesians can be arbitrarily exploited for resources)
  • Is there a meaningful metaphysical interpretation of infraBayesianism that does not involve Murphy? (similarly to how Bayesianism can be metaphysically viewed as "there's a real, static world out there, but I'm probabilistically unsure about it")
2 Charlie Steiner 21d: I'm still trying to wrap my head around how the update rule deals with hypotheses (a-measures) that have very low expected utility. In order for them to eventually stop dominating calculations, presumably their affine term has to get lifted as evidence goes against them?

Edit: I guess I'm real confused about the function called "g" in basic inframeasure theory. I think that compactness (mentioned... somewhere) forces different hypotheses to be different within some finite time. But I don't understand the motivations for different g.
1 Elisey Gretchko 21d: I'm not a mathematician, so it all remains very abstract for me. I'm curious if someone could explain it like I'm five. Is there some useful, concrete application to illustrate the theory?
1 LawChan 21d: I'm curious what other motivating examples there are for getting a grasp of this infra-Bayesianism stuff. The example with computable even bits and uncomputable/random odd bits was really helpful for intuition building, and so was the Nirvana example. But I still don't really get what's up with the whole convex set thing, or what sa-measures are, or why it's okay to include an off-history term into your belief update rule.
Toward A Bayesian Theory Of Willpower

(a quick experiment: wiggle your index finger for one second. Now wave your whole arm in the air for one second. Now jump up and down for one second. Now roll around on the floor for one second. If you're like me, you probably did the index finger one, maybe did the arm one, but the thought of getting up and jumping - let alone rolling on the floor - sounded like too much work, so you didn't. These didn't actually require different amounts of useful resources from you, like time or money or opportunity cost. But the last two required moving more and bigger

... (read more)
deluks917's Shortform

I’m sorry to hear that :(
