
Comments

Forget where I read it, but this idea seems similar: when responding to a request, being upfront about your boundaries or constraints feels intense but can be helpful for both parties. If Bob asks Alice to help him move and Alice responds "sure thing," that leaves the interaction open to miscommunication. But if instead Alice says, "Yeah! I am available 1pm to 5pm, and my neck has been bothering me, so no heavy lifting for me!" that seems like a less kind response, but Bob now doesn't have to guess at Alice's constraints and can comfortably move forward without feeling the need to tiptoe around how long and to what degree Alice can help.

Maybe if you have very "standard" tastes, this is good advice for finding movies that appeal to large audiences.

If you're the type of person who ONLY likes movies where the guns are super realistic, you aren't going to love Titanic and other top-rated movies.

I have certain people I categorize as "well-aligned movie watchers," like the brother I grew up with. We have similar tastes. I find that gets me further than aggregate rating systems.

I think when you say "I don't think this is relevant," you mean... I agree with your premise (that user stories are related to the assumed-intent bias), but I don't think we should upend user stories yet, because they do what they are supposed to.

To which I agree.

Development is complex, and realistically, even with user stories, developers are considering other users (not in the narrative). If you were to take away user stories and focus on tasks, developers would still imagine user intent. By using user stories we are just shifting the focus onto intent, which I think is usually a net positive. This post helped me see where it might not be.

I may not have been clear. I am agreeing with the entire post. I agree with your comment too that "user stories" most likely arose for the same reason as this bias.

I also agree with you that figuring out intention is an important part of development. A majority of users will use my software with the same intent.

I just meant to say that I immediately thought of "user stories" while reading this post. My initial thought was that user stories focus too much on intent. For example, if you are hyper-focused on the user trying to reset their password, you may neglect the user who accidentally clicked the reset-password button and just wants to navigate back to the log-in page. Would there be a benefit to dropping the user story as the goal and making the goal simply "create a reset-password page"? I agree with you, though: user stories serve their purpose and are probably a net good. This post just helped me recognize a potential pitfall of them.

With this in mind, it seems odd that a lot of agile developers build software around "user stories." That seems to lead us right into the trap of imagining a user's intentions.

Luckily, I think the industry is moving away from user stories.

I wanted to test log-normal but couldn't include pictures here, so I made it a brief post.

Spoiler alert: it works.


https://www.lesswrong.com/posts/GkEW4vH6M6pMdowdN/averaging-samples-from-a-population-with-log-normal
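Roughly, the experiment looks like this. This is a minimal sketch with illustrative parameters, not the exact code from the post:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# A heavily right-skewed log-normal population.
population = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Average groups of n samples; the central limit theorem predicts the
# sampling distribution of the mean loses its skew as n grows.
for n in (1, 10, 100):
    means = population.reshape(-1, n).mean(axis=1)
    print(f"n={n:>3}: mean of means={means.mean():.3f}, skew={skew(means):.3f}")
```

The skew shrinks toward zero as the group size grows, which is the "it works" above.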

This is an interesting post. Your linked blog is also very personal and kind, and I appreciate that.

My experience is that the things I consider myself good at are the things that interest me... They are the things I am actively reading about and looking for others who share my interest in. So it seems the things I say I am good at are the things I am actively working toward being good at. Quite the opposite of the last portion of this post.

I think certainty isn't necessarily more fraught with error. In fact, certainty SHOULD be less error-prone.

I see your points, though. There are some things so valuable to us that we can't admit we are uncertain about them. Instead of doing the calculation for how certain we are, we just say, "I NEED to be right here, so I am certain." I agree it's important to recognize this bias, but I don't think certainty is the marker for it. I think certainty is 99% great and 1% bad (as in the cases you warn about).

A way to detect the bias you are concerned about would be to identify the character traits that are most important to you. Ask which would hurt the most to find out weren't true. Then evaluate yourself on those traits and recognize that you may be biased there. I don't mean to say this is easy or that I have mastered it, just that it might be more dependable than certainty.

I am not sure that's the best way to identify the bias, but those are my thoughts.

Agreed, great post. But I think you are trying to push Bayesian statistics past what it SHOULD be used for.


Bayesian statistics is only useful because we approach the correct answer as we gain all the information possible. Only in this limit (of infinite information) is the Bayesian approach useful. Priors based on no information are, well, useless.

Scenario 1: You flip a fair coin and have a 50/50 chance of it landing heads.

Scenario 2 (to steal xepo's example): Are bloxors greeblic? You have NO IDEA, so your priors are 50/50.

Even though in both scenarios the chances are 50/50, I would feel much more confident betting money on scenario 1 than on scenario 2. Therefore my model of choices contains something MORE than probabilities. As far as I know, Bayesian statistics just doesn't convey this NEEDED information. You can't use Bayesian probabilities here in a useful way. It's the wrong tool for the job.


Even frequentist statistics is useless here.


A lot of day-to-day decisions are based on very limited information. I am not able to lay out a TRUE model of how we intuitively make those decisions, but "how much information I have to work with" is definitely an aspect of my mental model that is not entirely captured by Bayes' theorem.

One option would be to have another percentage, a meta-percentage: e.g., "What credence do I give to 'this is an accurate model of the world'?" For coin flips, you're 99.999% sure that 50% is a good model. For bloxors, you're ~0% sure that 50% is a good model.
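One way to make that meta-percentage concrete (a sketch of one standard formalization; the Beta-distribution framing is my own choice, not anything from the thread) is to keep a whole distribution over the rate rather than a single number. Both beliefs below have a mean of 0.5, but wildly different spreads:

```python
from scipy.stats import beta

# Belief about a fair coin's heads rate: tightly concentrated at 0.5.
coin = beta(500, 500)

# Belief about "are bloxors greeblic?": total ignorance, uniform on [0, 1].
bloxors = beta(1, 1)

for name, belief in [("coin", coin), ("bloxors", bloxors)]:
    lo, hi = belief.interval(0.95)  # 95% credible interval for the true rate
    print(f"{name}: mean={belief.mean():.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

The point estimates match, but the interval widths are what tell you how hard to bet.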


This is a model I always tend to fall back on, but I can never find a name for it, so I find it hard to look into. I have always figured I was misunderstanding Bayesian statistics and that credence was factored in somehow. That doesn't really seem to be the case, though.

Does the Scott Alexander post lay this out? I am having difficulty finding it. 

The closest term I have been able to find is the Kelly criterion, which measures how much of your "wealth" you should rationally stake on a probabilistic outcome. Replace "wealth" with credence and maybe it could be useful for decisions, but even this misses the point!
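For reference, the Kelly fraction for a simple binary bet looks like this (a sketch; the probabilities and odds are illustrative). Notice it only sees the point probability, so the coin and the bloxors question get the exact same stake, which is exactly the problem:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet won with probability p
    that pays b-to-1 net odds; never stake when the edge is negative."""
    return max(0.0, p - (1 - p) / b)

# An even-money bet (b=1) at exactly 50/50: Kelly says stake nothing,
# whether the 50% comes from a fair coin or from total ignorance.
print(kelly_fraction(0.50, 1.0))  # 0.0
print(kelly_fraction(0.55, 1.0))  # ~0.10: a slight edge justifies a 10% stake
```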
