All of abramdemski's Comments + Replies

Dutch-Booking CDT: Revised Argument

Isn't your Dutch-book argument more recursive than standard ones? Your contract only pays out if you act, so the value of the Dutch book causally depends on the action you choose.

Sure, do you think that's a concern? I was noting the similarity in this particular respect (pretending that bets are independent of everything), not in all respects.

Note, in particular, that traditional Dutch book arguments make no explicit assumption one way or the other about whether the propositions have to do with actions under the agent's control. So I see two possible inter... (read more)

Dutch-Booking CDT: Revised Argument

I thought about these things in writing this, but I'll have to think about them again before making a full reply.

We could modify the epsilon exploration assumption so that the agent also chooses between  and  even while its top choice is . That is, there's a lower bound on the probability with which the agent takes an action in , but even if that bound is achieved, the agent still has some flexibility in distributing probability between  and .

Another similar scenario would be: we assume the probabi... (read more)

A New Center? [Politics] [Wishful Thinking]

1a. The proposal here is not to get rid of the two-party system, but rather, to reduce polarization. My view here is that polarization is harmful.

1b. The proposal attempts to work within the two-party system, rather than create a true third party.

1c. Why do you think a two-party system has to do with a strong executive? Mathematical arguments suggest that plurality voting eventually results in a two-party system, because you're usually wasting your vote if you vote for anyone other than the two candidates with the highest probability of winning. Similarly,... (read more)
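The wasted-vote argument can be made concrete with a toy expected-value calculation. A vote only changes the outcome when it breaks a tie, so its expected impact is roughly the tie probability times how much you care; all numbers below are illustrative assumptions, not taken from the comment:

```python
# Toy wasted-vote calculation under plurality voting. A vote only changes the
# outcome if it breaks a tie, so even a strongly preferred third-party
# candidate can lose on expected impact. All numbers are made up.
tie_prob = {"frontrunner_A": 0.030, "frontrunner_B": 0.025, "third_party": 0.0001}
utility_gain = {"frontrunner_A": 1.0, "frontrunner_B": 0.8, "third_party": 3.0}

expected_impact = {c: tie_prob[c] * utility_gain[c] for c in tie_prob}
best_vote = max(expected_impact, key=expected_impact.get)
print(best_vote)  # a frontrunner wins on expected impact
```

Even a 3x utility preference for the third party cannot overcome a tie probability that is orders of magnitude smaller, which is the mechanism pushing plurality systems toward two parties.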

2 Pattern 6h: What do you do if both defect?
Dutch-Booking CDT: Revised Argument

I agree with this, but I was assuming the CDT agent doesn't think buying B will influence the later decision. This, again, seems plausible if the payoff is made sufficiently small. I believe that there are some other points in my proof which make similar assumptions, which would ideally be made clearer in a more formal write-up.

However, I think CDT advocates will not generally take this to be a sticking point. The structure of my argument is to take a pre-existing scenario, and then add bets. For my argument to work, the bets need to be "independent" of cr... (read more)

1 tailcalled 6h: How do you make the payoff small? Isn't your Dutch-book argument more recursive than standard ones? Your contract only pays out if you act, so the value of the Dutch book causally depends on the action you choose.
My Current Take on Counterfactuals

Now I feel like I should have phrased it more modestly, since it's really "settled modulo math working out", even though I feel fairly confident some version of the math should work out.

My Current Take on Counterfactuals

I'm not really sure what you're getting at.

Causal interventions are supposed to be interventions that "affect nothing but what's explicitly said to be affected".

This seems like a really bad description to me. For example, suppose we have the causal graph x → y → z. We intervene on y. We don't want to "affect nothing but y" -- we affect z, too. But we don't get to pick and choose; we couldn't choose to affect x and y without affecting z.

So I'd rather say that we "affect nothing but what we intervene on and what's downstream of what we intervened on".
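A minimal sketch of the chain example, assuming linear mechanisms purely for concreteness: intervening on y cuts the incoming arrow from x, so the intervention reaches z (downstream) but not x (upstream).

```python
import random

def sample(do_y=None):
    """Sample (x, y, z) from the chain x -> y -> z; optionally intervene on y."""
    x = random.gauss(0, 1)
    y = 2 * x if do_y is None else do_y   # do(y) severs the incoming arrow
    z = 3 * y                             # z stays downstream of y either way
    return x, y, z

random.seed(0)
runs = [sample(do_y=5.0) for _ in range(2000)]
mean_x = sum(r[0] for r in runs) / len(runs)
mean_z = sum(r[2] for r in runs) / len(runs)
# x keeps its natural distribution (mean near 0), while z is forced to 15.
print(round(mean_x, 1), mean_z)
```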

N... (read more)

1 TekhneMakre 2d: A fair clarification. My point is very tangential to your post: you're talking about decision theory as top-level naturalized ways of making decisions, and I'm talking about some non-top-level intuitions that could be called CDT-like. (This maybe should've been a comment on your Dutch book post.) I'm trying to contrast the aspirational spirit of CDT, understood as "make it so that there's such a thing as 'all of what's downstream of what we intervened on' and we know about it", with descriptive CDT, "there's such a thing as 'all of what's downstream of what we intervened on' and we can know about it". Descriptive CDT is only sort of right in some contexts, and can't be right in some contexts; there's no fully general Archimedean point from which we intervene. We can make some things more CDT-ish though, if that's useful. E.g. we could think more about how our decisions have effects, so that we have in view more of what's downstream of decisions. Or e.g. we could make our decisions have fewer effects, for example by promising to later reevaluate some algorithm for making judgements, instead of hiding within our decision to do X also our decision to always use the piece-of-algorithm that (within some larger mental context) decided to do X. That is, we try to hold off on decisions that have downstream effects we don't understand well yet.
My Current Take on Counterfactuals

Is there a way to operationalize "respecting logic"? For example, a specific toy scenario where an infra-Bayesian agent would fail due to not respecting logic?

"Respect logic" means either (a) assigning probability one to tautologies (at least, to those which can be proved in some bounded proof-length, or something along those lines), or, (b) assigning probability zero to contradictions (again, modulo boundedness). These two properties should be basically equivalent (ie, imply each other) provided the proof system is consistent. If it's inconsistent, they i... (read more)
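The bounded, recognizable case of "respecting logic" can be illustrated by brute force in a tiny two-atom propositional language (a toy sketch, not anything from infra-Bayesianism itself): any coherent distribution over truth-value assignments automatically gives probability 1 to tautologies and 0 to contradictions.

```python
from itertools import product

# Toy illustration of "respecting logic" for a two-atom propositional
# language: any coherent probability distribution over truth-value
# assignments ("worlds") assigns 1 to tautologies and 0 to contradictions.
worlds = [dict(zip("ab", bits)) for bits in product([False, True], repeat=2)]
dist = {i: 0.25 for i in range(len(worlds))}  # any coherent distribution works

def prob(formula):
    return sum(dist[i] for i, w in enumerate(worlds) if formula(w))

tautology = lambda w: w["a"] or not w["a"]        # a or not-a
contradiction = lambda w: w["a"] and not w["a"]   # a and not-a
print(prob(tautology), prob(contradiction))
```

The interesting (and hard) part is exactly what this sketch leaves out: proof-length bounds, and what happens when the agent cannot enumerate the worlds.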

4 Vanessa Kosoy 2d: I guess we can try studying Troll Bridge using infra-Bayesian modal logic, but atm I don't know what would result. Ah, but there is a sense in which it doesn't. The radical update rule is equivalent to updating on "secret evidence". And in TRL we have such secret evidence. Namely, if we only look at the agent's beliefs about "physics" (the environment), then they would be updated radically, because of secret evidence from "mathematics" (computations).
Reflective Bayesianism

This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit.

Agreed. Simple Bayes is the hero of the story in this post, but that's more because the simple Bayesian can recognize that there's something beyond.

Phylactery Decision Theory

I'm using talk about control sometimes to describe what the agent is doing from the outside, but the hypotheses it believes all have a form like "The variables such and such will be as if they were set by BDT given such and such inputs".

Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation. Another possible answer is "try all hypotheses about how things are influenced."

1 Bunthut 3d: I'm not sure why you think there would be a decision theory in that as well. Obviously when BDT decides its output, it will have some theory about how its output nodes propagate. But the hypothesis as a whole doesn't think about influence. It's just a total probability distribution, and it includes that some things inside it are distributed according to BDT. It doesn't have beliefs about "if the output of BDT were different". If BDT implements a mixed strategy, it will have beliefs about what each option being enacted correlates with, but I don't see a problem if this doesn't track "real influence" (indeed, in the situations where this stuff is relevant it almost certainly won't) - it's not used in this role.
Phylactery Decision Theory

One problem with this is that it doesn't actually rank hypotheses by which is best (in expected utility terms), just how much control is implied. So it won't actually converge to the best self-fulfilling prophecy (which might involve less control).

Another problem with this is that it isn't clear how to form the hypothesis "I have control over X".

1 Bunthut 4d: You don't. I'm using talk about control sometimes to describe what the agent is doing from the outside, but the hypotheses it believes all have a form like "The variables such and such will be as if they were set by BDT given such and such inputs". For the first setup, where it's trying to learn what it has control over, that's true. But you can use any ordering of hypotheses for the descent, so we can just take "how good that world is" as our ordering. This is very fragile of course. If there's uncountably many great but unachievable worlds, we fail, and in any case we are paying for all this with performance on "ordinary learning". If this were running in a non-episodic environment, we would have to find a balance between having the probability of hypotheses decline according to goodness, and avoiding the "optimistic Humean troll" hypothesis by considering complexity as well. It really seems like I ought to take "the active ingredient" of this method out, if I knew how.
Reflective Bayesianism

I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically. 

From my perspective, Radical Probabilism is a gateway drug. Explaining logical induction intuitively is hard. Radical Probabilism is easier to explain and motivate. It gives reason to believe that there's something interesting in the direction. But, as I've stated before, I have trouble comprehending how Jeffrey correctly predicted that there's something interesting here, without logical uncertainty as a motivation. In hindsight, I feel hi... (read more)

3 Bunthut 5d: This post seemed to be praising the virtue of returning to the lower-assumption state. So I argued that in the example given, it took more than knocking out assumptions to get the benefit. It wasn't meant to be. I agree that logical inductors seem to de facto implement a Virtuous Epistemic Process, with attendant properties, whether or not they understand that. I just tend to bring up any interesting-seeming thoughts that are triggered during conversation and could perhaps do better at indicating that. Whether it's fine to set it aside provisionally depends on where you want to go from here.

Fixed, sorta, but now this tag needs to be merged with "humility". (I've named it "epistemic humility" in the meantime, but I think it should just be called "humility" -- no one says "epistemic humility" I think.)

Reflective Bayesianism

So, let's suppose for a moment that ZFC set theory is the one true foundation of mathematics, and it has a "standard model" that we can meaningfully point at, and the question is whether our universe is somewhere in the standard model (or, rather, "perfectly described" by some element of the standard model, whatever that means).

In this case it's easy to imagine that the universe is actually some structure not in the standard model (such as the standard model itself, or the truth predicate for ZFC; something along those lines).

Now, granted, the whole point ... (read more)

Reflective Bayesianism

What is actually left of Bayesianism after Radical Probabilism? Your original post on it was partially explaining logical induction, and introduced assumptions from that in much the same way as you describe here. But without that, there doesn't seem to be a whole lot there. The idea is that all that matters is resistance to dutch books, and for a dutch book to be fair the bookie must not have an epistemic advantage over the agent. Said that way, it depends on some notion of "what the agent could have known at the time", and giving a coherent account of thi

... (read more)
1 Bunthut 5d: Ok. I suppose my point could then be made as "#2 type approaches aren't very useful, because they assume something that's no easier than what they provide". Well, you certainly know more about that than me. Where did the criterion come from in your view? Quite possibly. I wanted to separate what work is done by radicalizing probabilism in general, vs logical induction specifically. That said, I'm not sure logical inductors properly have beliefs about their own (in the de dicto sense) future beliefs. It doesn't know "its" source code (though it knows that such code is a possible program) or even that it is being run with the full intuitive meaning of that, so it has no way of doing that. Rather, it would at some point think about the source code that we know is its, and come to believe that that program gives reliable results - but only in the same way in which it comes to trust other logical inductors. It seems like a version of this [] in the logical setting. By "knowing where they are", I mean strategies that avoid getting Dutch-booked without doing anything that looks like "looking for Dutch books against me". One example of that would be The Process That Believes Everything Is Independent And Therefore Never Updates, but that's a trivial stupidity.
Predictive Coding has been Unified with Backpropagation

--I wouldn't characterize my own position as "we know a lot about the brain." I think we should taboo "a lot."

To give my position somewhat more detail:

  • I think the methods of neuroscience are mostly not up to the task. This is based on the paper which applied neuroscience methods to try to reverse-engineer the CPU.
  • I think what we have are essentially a bunch of guesses about functionality based on correlations and fairly blunt interventional methods (lesioning), combined with the ideas we've come up with about what kinds of algorithms the brain might be run
... (read more)
Reflective Bayesianism

Truth aside, there are issues with the implication part. Will people reach the conclusion? There are a lot of math problems where the answer is a consequence of the properties of numbers. Does that mean you'll know the answer some time before you die? You might be able to pick out a given one where you will find out before you die if you take the time to solve it. Ethics though, doesn't seem to have the same guarantees, especially not around the correctness of general theories.

This is part of why there could be a lot of different formalizations of the simple/re... (read more)

2 Pattern 17h: The second one I think. The epiphany is sometimes characterized by frustration: 'why didn't I think of that sooner?' The optimal chess game (assuming it's unique) might proceed from the rules, but we might never know it. Even if I have the algorithm (say in pseudocode):

  • If I don't have it in code, I might not run it.
  • If I have it in code, but don't have the compute (or sufficiently efficient techniques), I might not find out what happens when I run it for long enough.
  • If I have the code, and the compute, then it's just a matter of running it.* But do I get around to it?

Understanding implication isn't usually as simple as I made it out to be above. People can work hard on a problem, and not find the answer for a lot of reasons - even if they have everything they need to know to solve it. Because they also have a lot of other information, and before they have the answer, they don't know what is, and what isn't, relevant. In other words, where implication is trivial and fast, reflection may be trivial and fast. If not... The proof I never find does not move me.

*After getting the right version of the programming language downloaded, and working properly, just to do this one thing.
Reflective Bayesianism

I had it in mind as a possible topic when writing, but it didn't make it into the post. I think I might be able to put together a model that makes more sense than the original version, but I haven't done it yet.

Reflective Bayesianism

I don't know what it would look like, but that isn't an argument that the universe is mathematical.

Frankly, I think there's something confused about the way I'm/we're talking about this, so I don't fully endorse what I'm saying here. But I'm going to carry on.

I guess I'm confused because in my head "mathematical" means "describable by a formal system", and I don't know how a thing could fail to be so describable.

So, the kind of thing I have in mind is the claim that reality is precisely and completely described by some particular mathematical object. 

4 DanielFilan 5d: In my head, the argument goes roughly like this, with 'surely' to be read as 'c'mon I would be so confused if not': 1. Surely there's some precise way the universe is. 2. If there's some precise way the universe is, surely one could describe that way using a precise system that supports logical inference. I guess it could fail if the system isn't 'mathematical', or something? Like I just realized that I needed to add 'supports logical inference' to make the argument support the conclusion.
Predictive Coding has been Unified with Backpropagation

You are objecting to the "brains use predictive coding" step? Or are you objecting that only one particular version of predictive coding is basically backprop?

Yeah, somewhere along that spectrum. Generally speaking, I'm skeptical of claims that we know a lot about the brain.

Are you referring to Solomonoff Induction and the like?

I was more thinking of genetic programming.

I think the "brains use more data-efficient algorithms" is an obvious hypothesis but not an obvious conclusion--there are several competing hypotheses, outlined above.

I agree with this. ... (read more)

2 Daniel Kokotajlo 5d: --I wouldn't characterize my own position as "we know a lot about the brain." I think we should taboo "a lot." --We are at an impasse here I guess--I think there's mounting evidence that brains use predictive coding and mounting evidence that predictive coding is like backprop. I agree it's not conclusive but this paper seems to be pushing in that direction and there are others like it IIRC. I'm guessing you just are significantly more skeptical of both predictive coding and the predictive coding --> backprop link than I am... perhaps because the other hypotheses on my list are less plausible to you?
Reflective Bayesianism

Thanks! I assume you're referring primarily to the way I made sure footnotes appear in the outline by using subheadings, and perhaps secondarily to aesthetics.

2 DanielFilan 6d: Just referring to the primary thing.
Predictive Coding has been Unified with Backpropagation

How does it "rule out" the last one??

It does provide a small amount of evidence against it, because it has shown that one specific algorithm is "basically backprop". Maybe you're saying this is significant evidence, because we have some evidence that predictive coding is also the algorithm the brain actually uses.

But we also know there are algorithms which are way more data-efficient than NNs (while being more processing-power intensive). So wouldn't the obvious conclusion from our observations be: humans don't use backprop, but rather, use more data-efficient algo... (read more)
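The data-efficiency comparison at issue rests on back-of-the-envelope numbers (one weight-update per subjective second over a human lifetime, versus a scaling-law demand roughly four orders of magnitude larger; the 10^13 figure here is my reconstruction of the "4 OOMs" claim in this thread, not a sourced number):

```python
import math

# One update per subjective second over ~30 years of experience, as assumed
# in the thread; the scaling-law demand is a rough reconstruction of the
# "4 OOMs more" claim, not a sourced figure.
seconds_per_year = 60 * 60 * 24 * 365
human_data_points = 30 * seconds_per_year       # on the order of 1e9
scaling_law_demand = 1e13                       # hypothetical reconstruction
gap_ooms = math.log10(scaling_law_demand / human_data_points)
print(f"{human_data_points:.1e} data points, gap of ~{gap_ooms:.1f} OOMs")
```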

5 Daniel Kokotajlo 6d: I guess I was thinking: Brains use predictive coding, and predictive coding is basically backprop, so brains can't be using something dramatically better than backprop. You are objecting to the "brains use predictive coding" step? Or are you objecting that only one particular version of predictive coding is basically backprop? Are you referring to Solomonoff Induction and the like? I think the "brains use more data-efficient algorithms" is an obvious hypothesis but not an obvious conclusion--there are several competing hypotheses, outlined above. (And I think the evidence against it is mounting, this being one of the key pieces.) In terms of bits/pixels/etc., humans see plenty of data in their lifetime, a bit more than the scaling laws would predict IIRC. But the scaling laws (as interpreted by Ajeya, Rohin, etc.) are about the amount of subjective time the model needs to run before you can evaluate the result. If we assume for humans it's something like 1 second on average (because our brains are evaluating-and-updating weights etc. on about that timescale) then we have a mere 10^9 data points, which is something like 4 OOMs less than the scaling laws would predict. If instead we think it's longer, then the gap in data-efficiency grows. Some issues though. One, the scaling laws might not be the same for all architectures. Maybe if your context window is bigger, or you use recurrency, or whatever, the laws are different. Too early to tell, at least for me (maybe others have more confident opinions, I'd love to hear them!) Two, some data is higher-quality than other data, and plausibly human data is higher-quality than the stuff GPT-3 was fed--e.g. humans deliberately seek out data that teaches them stuff they want to know, instead of just dully staring at a firehose of random stuff. Three, it's not clear how to apply this to humans anyway. Maybe our neurons are updating a hundred times a second or something.
I'd be pretty surprised if a human-brain-sized Transf
Predictive Coding has been Unified with Backpropagation

I have not dug into the math in the paper yet, but the surprising thing from my current perspective is: backprop is basically for supervised learning, while Hebbian learning is basically for unsupervised learning. In particular, Hebbian learning has been touted as an (inefficient but biologically plausible) algorithm for PCA. How can you chain a bunch of PCAs together and get gradient descent?

Aside from that, here's what I understood from the paper so far.

  • By predictive coding, they basically mean: take the structure of the computation graph (eg, the struct
... (read more)
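On the "Hebbian learning as PCA" point: the classic result is Oja's rule, a Hebbian update with a decay term whose weight vector converges to the top principal component. A self-contained sketch on synthetic data (not anything from the paper under discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data with a dominant variance direction along (1, 1)/sqrt(2).
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
data = (rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])) @ basis

w = rng.normal(size=2)
lr = 0.01
for x in data:
    y = w @ x                      # Hebbian pre/post activity product...
    w += lr * y * (x - y * w)      # ...plus Oja's decay term, y^2 * w

top_pc = basis[0]                  # true first principal component
alignment = abs(w @ top_pc) / np.linalg.norm(w)
print(round(alignment, 3))         # close to 1: w converges to the top PC
```

This is purely unsupervised, which is why it is surprising that chaining predictive-coding-style local updates can recover supervised gradient descent.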

Thinking about this a bit more, I think a natural usage, which is pretty compatible with what I've experienced, would be:

  • A path affords walking.
  • A person has an affordance for walking down the path.

My understanding is that the concept is supposed to be a fully relational one, indicating something about the interaction of a subject and object. So I would say a door "has an affordance" (for a person to open it) and a person "has an affordance" (to open the door), much like I would say both people in a romantic relationship "have a romantic relationship" (with each other). 

In the original usage of the term, an affordance is something that the object has which both signals to the agent that an action is possible and makes that action easy to carry

... (read more)
3 jaspax 8d: The concept is definitely relational; no disagreement there. My objection is more narrowly linguistic: the syntactic structure used to describe the "affordance" relationship is Object affords Action to Agent. All of your quotes from Wikipedia follow this example, e.g. the "set of steps... does not afford climbing to the crawling infant" (emphasis mine). I find no examples of this syntactic structure being inverted to allow Agent affords Action. Consequently, it seems that the noun "affordance" is best applied to the Object's side of this relationship, and not the Agent side, since the Object is the syntactic subject. Conceptually, this does matter because the affordance relationship is non-symmetric: what the Object does ("affords") is very different from what the Agent does! Aside from the syntactic objection, I think that it obscures the topic to have the same word used for both sides of a non-symmetrical relation. Your suggestion of using "have an affordance" is possibly usable, though I still think that it invites confusion. I do like the phrase "behavioural repertoire", mentioned in another comment, but it does not lend itself to being verbed very well. Another suggestion might be "reciprocate" or "engage": an Agent engages the affordance by carrying out the Action in the manner intended. (Does the existing literature have a verb that slots into this construction?) I don't know. Words are hard. I still think that it's important to have different words for the Object's and the Agent's respective contributions to the activity.

Wikipedia gives a pretty different history, according to which the term comes originally from psychology, not design.

2 Raemon 8d: Welp, today I learned. ("It originated in psychology" feels consistent with my previous beliefs, but I didn't know about all this history of it)

Even if so, I miiight contend that there's an important distinction between (a) affordance as a concept which is itself relational, vs (b) affordance as a predicate on objects, where objects are understood to be subjective. In the first case, it's possible for agents to have a shared model in which an object has different affordances for different people. In the second case, if agents try to have a shared model but end up disagreeing about affordances, it's not clear what they should do.


I don't think this is quite the same. An affordance is relational and subjective. From Wikipedia:

For instance, a set of steps which rises four feet high does not afford climbing to the crawling infant, yet might provide rest to a tired adult or the opportunity to move to another floor for an adult who wished to reach an alternative destination.

Hence an affordance depends both on the subject and the object. A disposition seems like a concept which is supposed to apply to the object in itself, independent of subject.

2 G Gordon Worley III 10d: I wouldn't say there's any existence of an object (as an object) independent of a subject; there's instead just stuff that's not differentiated from other stuff because something had to tell the difference. Hence I don't see a real difference here, although the theory of dispositions is jumbled up with lots of philosophy that supposes some kind of essentialism, so it's reasonable that there might seem to be some difference from affordances under certain assumptions.
Voting-like mechanisms which address size of preferences?

Hm, I just noticed that I didn't really get your whole proposal in the first place -- I latched onto "full refund for losing positions", but ignored the rest.

[...] only the winners of a quadratic vote actually pay an average of the tokens, and everyone else gets a refund - sort of like a blind Dutch auction of the decision.

For example, a quadratic vote is taken between two binary options A and B. A receives 400 votes, B receives 500. B wins the vote, so an average of 450 is taken from the voting token pool of B and 50 tokens are redistributed equally among

... (read more)
1 sxae 13d: Ah yes, you're right that redistributing the 50 tokens when refunding the winners in the same proportion is tricky. Probably necessitates being able to have fractional tokens so you can refund someone 0.1 tokens or something like that. I imagine it will be very simple for the losing choices. Also, I don't mean a regular Dutch auction, I mean a blind one where all bidders submit their bids at once (like an election). My understanding of a blind Dutch auction is that it resolves this "people don't bid because they don't think they could win" result in general auctions. This was absolutely an intuitive suggestion from reading about voting theory and auctions; you've got a much deeper understanding of the VT maths than I do. I do think that thinking about elections like an auction for a decision can be a useful way of thinking about it, but I don't have professional experience with this beyond helping to design some videogame economies. Don't take this as any kind of standard suggestion - just mine :)
Voting-like mechanisms which address size of preferences?

Interesting, thanks!

A big part of the motivation for this question was that I've had a longstanding anti-two-party stance, due to the apparent dysfunction of two-party politics in America. But I was talking with some people about it recently, who were of the opinion that many-party systems in other countries were not much more sane/effective. This got me thinking about ways in which my ideal could be compromised. Although my question mainly talked about a two-party scenario, the real motivation was to "avoid shenanigans" more generally.

The time-traveler ex... (read more)

2 Kaj_Sotala 13d: Mostly, I think, voting systems designed to ensure that parties get a share of seats that's proportional to their number of votes ("party-list proportional representation []" is what Wikipedia calls it). E.g. the D'Hondt method [] seems pretty popular (and is used in Finland as well as several other countries). As for whether it's actually better overall - well, I grew up with it and am used to it, so I prefer it over something that would produce a two-party system. ;) But I don't have any very strong facts to present over which system is actually best.
Voting-like mechanisms which address size of preferences?

I originally wrote this with an example of farmers vs fishers, where the two groups had some different legislative preferences, but the example just didn't have strong internal logic (I didn't come up with very plausible differences of opinion for the two groups).

The important thing is the payoff matrix. Clearly the two groups have a mutually beneficial agreement which they could reach, if they would look past their animosity.

Troll Bridge

This sounds like PA is not actually the logic you're using.

Maybe this is the confusion. I'm not using PA. I'm assuming (well, provisionally assuming) PA is consistent.

If PA is consistent, then an agent using PA believes the world is consistent -- in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions.

(At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.)

Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whe... (read more)

1 Bunthut 11d: There's two ways to express "PA is consistent". The first is ∀A¬(A∧¬A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of "the world is consistent" (indeed, this "world" is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provability logic, which, if I understand correctly, has the Gödelization in it.
The best frequently don't rise to the top

This seems to assume roughly linear relationship between quality and reception. As I mentioned in my other comment, this seems far from necessary. We can have something like an exponential relationship between the two, in which case a small difference in quality can create a massive difference in reception.

I just don't get why you think moderately bad camera work should a priori best be thought of as some percentage of views lost (taking something from 800k to 700k or 600k) rather than in orders of magnitude (taking something from 800k to 80k or 8k).
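The two models can be contrasted directly with the numbers above. The functional forms and constants below are illustrative only, chosen to reproduce the 800k/700k/600k versus 800k/80k/8k contrast:

```python
# Two toy models of how a drop in quality maps to views. Constants are chosen
# purely to reproduce the numbers in the surrounding discussion.
def views_linear(quality):          # each quality point costs a fixed chunk
    return 100_000 * quality

def views_exponential(quality):     # each quality point costs a factor of 10
    return 800_000 // 10 ** (8 - quality)   # valid for quality <= 8 here

for q in (8, 7, 6):
    print(q, views_linear(q), views_exponential(q))
```

Under the exponential model, a one-point drop in quality costs an order of magnitude of views rather than a fixed percentage.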

2 adamzerner 18d: Good point, exponential does seem like it could make sense. I'm not sure.
The best frequently don't rise to the top

Funny enough, I've recently been contemplating asking a question here on LW like "what actually-quite-good YouTube channels are out there?" precisely because I suspected there were a lot of hidden gems like you mention! I didn't get around to it, though, perhaps because watching YouTube feels low status, so I felt like it would be embarrassing to visibly put effort into optimizing my YouTube-watching.

Anyway, there are a few reasons why I thought this was a priori plausible:

  1. First and foremost, I see this as an example of the tails come apart (ie Goodhart). Ev
... (read more)
2 adamzerner 18d: Kenji []! Also I think you might really like Eric Normand []. In particular the videos/podcasts where he spends 60-120 minutes exploring, distilling and discussing famous papers in computer science. It makes me sad that these videos only get a couple hundred views. And if you happen to be into basketball/are curious about what the smart basketball people sound like, I have a special place in my heart for Thinking Basketball []. If you end up watching any videos, I'd be interested to hear what you think of them! Especially if you aren't a sports person, and especially if you didn't like them :) Overall, I get the impression that we don't disagree about anything, and that I was a little misleading in my OP. I don't think that's what it is. Let me try to clarify. I agree that the "noise" factors you mention all matter in addition to "raw quality". Things like click-baitiness, production quality, personality, etc. Let's operationally define "quality" to be some metric that encapsulates "noise factors" + "raw quality". My judgement is that... let's say DHH's videos are a 7/10 in this metric. He's getting 30k views when others who I judge to be significantly worse in this "quality" metric, maybe a 3/10, are getting 300k views or even 3M views. 7/10 quality → 3/10 reception, and 3/10 quality → 7/10 reception. I think that what you propose here is a very plausible explanation for this: I regret using the YouTube examples. I think a much better example is what I've seen watching Chef's Table, where one lucky break such as being ranked in a magazine produces a snowball effect that takes someone like Massimo from obscurity to international acclaim shockingly quickly. I suspect that similar things happen all the time in fields like music, writing, arts, and business. Ba
Troll Bridge

Sure, thats always true. But sometimes its also true that A ∧ ¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident whats right, ordinary imagination is probably just misleading here.

I think you're still just confusing levels here. If you're reasoning using PA, you'll hold open the possibility that PA is inconsistent, but you won't hold open the... (read more)

Bunthut (+1, 18d): Do you? This sounds like PA is not actually the logic you're using. Which is realistic for a human. But if PA is indeed inconsistent, and you don't have some further-out system to think in, then what is the difference to you between "PA is inconsistent" and "the world is inconsistent"? In both cases you just believe everything and its negation. This also goes with what I said about thought and perception not being separated in this model, which stand in for "logic" and "the world". So I suppose that is where you would look when trying to fix this. You do fully believe in PA. But it might be that you also believe its negation. Obviously this doesn't go well with probabilistic approaches.
Voting-like mechanisms which address size of preferences?

Wait, so, my previous analysis doesn't make that much sense. I now think your claim is pretty plausible.

Expected value if you don't buy one more vote: Σ_i P_i U_i

Expected value if you do: Σ_i P_i U_i + t_x (U_x - (1/n) Σ_i U_i)

Here, t_x is specifically the probability that candidate x ends up in a tie without our one additional vote. I'm assuming the vote uses the (rather dumb) tie-break procedure of choosing randomly from all n candidates, for simplicity. Hence, breaking the tie steals probability equally from each candidate (including... (read more)
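If it helps, the calculation can be sanity-checked numerically. Here's a Monte Carlo sketch; the vote totals, the ±2 noise model, and the utilities are all hypothetical, and the tie-break is the same deliberately dumb choose-uniformly-from-all-candidates rule:

```python
import random

def expected_outcome(candidate_utils, base_votes, extra_vote=None,
                     trials=20000, seed=0):
    """Monte Carlo estimate of the expected utility of an election outcome.

    Other voters' tallies get uniform noise in [-2, 2]; ties are broken by
    picking uniformly from ALL candidates (so breaking a tie steals
    probability equally from every candidate, as in the comment above).
    """
    rng = random.Random(seed)
    names = list(candidate_utils)
    total = 0.0
    for _ in range(trials):
        tallies = {c: base_votes[c] + rng.randint(-2, 2) for c in names}
        if extra_vote is not None:
            tallies[extra_vote] += 1  # our one additional vote
        best = max(tallies.values())
        leaders = [c for c in names if tallies[c] == best]
        winner = leaders[0] if len(leaders) == 1 else rng.choice(names)
        total += candidate_utils[winner]
    return total / trials
```

With a close two-candidate race, buying the extra vote for the higher-utility candidate should raise the expectation by roughly the tie probability times the gap between U_x and the average utility.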

sxae (+1, 18d): I'd be lying if I claimed to fully grok the maths, but I'm glad it was a useful suggestion!
Voting-like mechanisms which address size of preferences?

Interesting. But then how do you argue that it gives approximately correct results? As I understand it, Weyl sees the argument as just: votes end up being roughly proportional to utility (under a lot of different scenarios/assumptions). When this condition holds, the quadratic vote is a good representation of the utilitarian value of the different options.

So, the reason I think we're looking for P_x to disappear entirely is because P_x is a function of …! It's fine if … or whatever, so long as none of those ex... (read more)
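For what it's worth, here's the toy one-decision model behind the proportionality claim. The fixed pivot probability is an assumption (and exactly the thing in question here); the names `optimal_votes` and `token_value` are mine:

```python
def optimal_votes(utility, pivot_prob, token_value=1.0):
    """One-shot quadratic voting, toy model.

    Buying v votes costs token_value * v**2; each vote swings the outcome
    with probability pivot_prob, so expected benefit is pivot_prob*utility*v.
    Maximizing pivot_prob*utility*v - token_value*v**2 gives the first-order
    condition pivot_prob*utility = 2*token_value*v, i.e.:
    """
    return pivot_prob * utility / (2 * token_value)
```

Within a single decision, the pivot probability is a common factor across voters, so the vote counts come out proportional to utilities; the worry above is that this factor is not actually a constant.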

sxae (+1, 18d): Interesting, you make some great points here and I don't think I have any good refutations to any of them. Perhaps if we play around with the auction structure by which we take away and refund these tokens?
Voting-like mechanisms which address size of preferences?

Oooh, is it really that simple?

Expected state of affairs if I don't buy one more vote (ignoring what else I could have done with the money):

Σ_i P_i U_i, where P_i is the probability of candidate i winning if I do nothing, and U_i is the utility of candidate i winning.

Expectation if I do buy one more vote:

Σ_i P_i U_i + t_x Δ_x, where x is the candidate under consideration, t_x is the probability there would have been a tie w/o this one extra vote, and Δ_x is the utility adjustment for losin... (read more)

sxae (+1, 20d): I'm surprised that we are looking for P_x to disappear entirely, I'm not sure I understand that. Quadratic voting shines when you have lots of votes with the same voting token pool, because you force people to allocate resources to decisions they really care about. It's absolutely not meant to decide one decision - it's meant to force people to allocate limited resources over a long period, and by doing so reveal their true valuation of those decisions. I would therefore fully expect P_x to play a part in every agent's considerations, as they must consider the probability of success in each vote in order to plan allocation of voting tokens for every other vote.
Troll Bridge

Now you might think this is dumb, because its impossible to see that. But why do you think its impossible? Only because its inconsistent. But if you're using PA, you must believe PA really might be inconsistent, so you can't believe its impossible.

This part, at least, I disagree with. If I'm using PA, I can prove that ¬(A ∧ ¬A). So I don't need to believe PA is consistent to believe that the ball won't stop rolling and also continue rolling.
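To spell the two levels out (standard facts; Con(PA) abbreviates PA's consistency statement):

```latex
% PA proves each instance of non-contradiction, a propositional tautology:
\mathrm{PA} \vdash \neg(A \land \neg A) \quad \text{for every sentence } A
% ...but by Goedel's second incompleteness theorem, PA cannot prove its
% own consistency (assuming PA is in fact consistent):
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
```

So refusing to assert Con(PA) doesn't stop a PA-reasoner from ruling out any particular contradiction about the ball.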

On the other hand, I have no direct objection to believing you can control the consistency of PA by doing some... (read more)

Bunthut (+1, 20d): Sure, thats always true. But sometimes its also true that A ∧ ¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident whats right, ordinary imagination is probably just misleading here. The facts about what you think are theorems of PA. Judging from the outside: clearly if an agent with this source code crosses the bridge, then PA is inconsistent. So, I think the agent is reasoning correctly about the kind of agent it is. I agree that the outcome looks bad - but its not clear if the agent is "doing something wrong". For comparison, if we built an agent that would only act if it could be sure its logic is consistent, it wouldn't do anything - but its not doing anything wrong. Its looking for logical certainty, and there isn't any, but thats not its fault.
Four Motivations for Learning Normativity

I'd be interested to hear this elaborated further. It seems to me to be technically challenging but not very;

  1. I agree; I'm not claiming this is a multi-year obstacle even. Mainly I included this line because I thought "add a meta-level" would be what some readers would think, so, I wanted to emphasize that that's not a solution.
  2. To elaborate on the difficulty: this is challenging because of the recursive nature of the request. Roughly, you need hypotheses which not only claim things at the object level but also hypothesize a method of hypothesis evaluation i
... (read more)
Daniel Kokotajlo (+2, 21d): Thanks! Well, I for one am feeling myself get nerd-sniped by this agenda. I'm resisting so far (so much else to do! Besides, this isn't my comparative advantage) but I'll definitely be reading your posts going forward and if you ever want to bounce ideas off me in a call I'd be down. :)
Voting-like mechanisms which address size of preferences?

This post also worried about issues still existing, while not performing calculations, which might have revealed whether quadratic voting made things worse, better, etc.

Agreed, but, calculations are difficult. Also, the issues seem severe. I think all the options I've mentioned here are probably significantly worse than business as usual.

Voting-like mechanisms which address size of preferences?

I theorize that the actual reason democracy 'works' is because the actual information to run a government is coming from elites, just that instead of a single unaccountable elite (a dictatorship), voters can express preferences to stop the absolute worst behaviors by the elite.  (except, uh, all the times they fail to do this)

This was my background assumption. That's why I kept the examples to legislators instead of postulating a voting public. I'm assuming you want to elect legislators rather than solve everything with direct democracy.

And I agree th... (read more)

Voting-like mechanisms which address size of preferences?

Agreed. I guess I'm trying to illustrate a couple of things:

  1. How things are sequenced as bills matters a whole lot. The bills presented individually would fail (unless people engage in tit-for-tat), but the combined bill would succeed. It would be much better if we had a guarantee that any sequence of bills would be voted on just as if it were the best combined bill (where "best" means something like utilitarian-best).
  2. Tit-for-tat can save the day in this example, but the "dark side" of tit-for-tat is when we get in defection spirals. In that case, a combined bill might be politically infeasible. This seems like a realistic model. The two parties can be angry enough with each other that they can't cooperate.
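A minimal numerical version of point 1, with hypothetical payoffs (each party votes its own interest, and a bill needs both parties' support to pass):

```python
# Two parties, each with one pet bill that the other party mildly dislikes.
payoffs = {                # bill -> (party_A_utility, party_B_utility)
    "A_pet_bill": (3, -1),
    "B_pet_bill": (-1, 3),
}

def passes_alone(bill):
    """Without tit-for-tat, a party votes yes iff its own payoff is
    positive, and a bill needs unanimous (both-party) support."""
    return all(u > 0 for u in payoffs[bill])

separate_results = {bill: passes_alone(bill) for bill in payoffs}

# The combined bill simply sums the payoffs, giving (+2, +2):
combined = tuple(map(sum, zip(*payoffs.values())))
combined_passes = all(u > 0 for u in combined)
```

Sequenced separately, both bills fail; bundled, the bill passes and both parties are better off - which is the sense in which sequencing matters.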
Voting-like mechanisms which address size of preferences?

It would seem better if, at least, the agenda were set by the most moderate person.

For example, the lawmaking body could elect the agenda-setter via 3-2-1 voting or STAR voting or some other sensible many-choice single-winner election method, with all lawmakers being candidates on the ballot. The winner of this process would probably be more moderate than the typical winning-coalition leader.

The theory is that this one person is most representative of the governed, and should cleverly optimize the agenda to minimize distortion.
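For concreteness, here's a simplified sketch of STAR (one of the methods mentioned above); real STAR specifies tie-break rules that this glosses over:

```python
def star_winner(ballots):
    """STAR voting sketch: Score Then Automatic Runoff.

    ballots: list of dicts mapping candidate -> score in 0..5.
    The two highest-scoring candidates enter a runoff decided by how many
    ballots prefer each finalist (score leader wins runoff ties here,
    a simplification of the official tie-break rules).
    """
    totals = {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    first, second = sorted(totals, key=totals.get, reverse=True)[:2]
    prefer_first = sum(1 for b in ballots if b.get(first, 0) > b.get(second, 0))
    prefer_second = sum(1 for b in ballots if b.get(second, 0) > b.get(first, 0))
    return first if prefer_first >= prefer_second else second
```

The scoring round rewards broad appeal, which is why a method like this plausibly elects a more moderate agenda-setter than a winning-coalition leader would be.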

Voting-like mechanisms which address size of preferences?

You've made foobles legal? We'll require fooble licenses costing two years' training and a million dollars.
You've banned smarbling? We'll switch all resources from anti-smarbling enforcement to crack down on unlicensed foobles.

Agreed. But it's not a totally meaningless rule. The fooble-legalization bill could include language preventing any restrictions on fooble use and possession, such as licensing. As for Smarbling, the budget for anti-smarbling enforcement will have itself passed with some margin, which you'll have to surpass to repeal the funding... (read more)

Voting-like mechanisms which address size of preferences?

Using Futarchy is just cheating ;3

But you're right, this does negate all my issues. I was just looking for something closer to existing governments.

Voting-like mechanisms which address size of preferences?

This seems to just be rejecting my hypothetical.

I can construct a similar dilemma where there's three single-issue parties, and each one really wants their pet bill passed, and slightly dislikes the other two bills. Would you have them pass none of the bills (the worst outcome in my view)?

The best things are often free or cheap

Importantly, almost all of your examples are only true very recently. With the exception of nature and comedy, all of these require the internet to access. Many of them require comparatively recent developments on the internet.

adamzerner (+2, 24d): Fantastic point! That didn't hit me until you pointed it out. We live in an amazing time :)
The best things are often free or cheap

Cars, phones, laptops, doctors, shoes, clothing, watches, jewelry, haircuts, makeup (both the physical makeup you might buy, and the attention of a professional to apply it really well), pillows, mattresses, chairs, yards/parks, gold, silver, bitcoin, ...

OK, out of that list, the only thing that's free/cheap is parks. (Of course my list is very biased, since I was primed with the idea of expensive things. But I don't think anyone would seriously contest the idea that difficulty of reproduction is an important factor which drives price. The question is more whether there are "many" top-in-class experiences which you can only have via difficult-to-reproduce things.)

adamzerner (+2, 24d): Agreed. Most of the examples you mention seem like they're conspicuous consumption/about status signaling. However, I think the category of comfort (pillows, mattresses, chairs) is a good one. As for phones and laptops, I personally think that a big part of that is status signaling, and unless you're doing something intense like video editing, I don't think that using the best computer in the world would be much of a better experience than a Macbook.