Also confirming. When I started, the sum was $14.78. I started with $10 and ended with ~$101. It took about 35 minutes of mostly-but-not-entirely-focused clicking.
This is interesting, and I'm (a little) surprised that I hadn't heard about it yet, but I don't think the parallel with Milgram is quite there. Yes, he's asking them to do something against their conscience, but as romeo points out, they more or less have a gun to their heads. And despite that, one of them (presumably) was brave enough (and quick-thinking enough) to surreptitiously record the hilariously blatant election fraud.
I wouldn't say it's bad advice; it depends heavily on the context of the work. In an environment where you have some combination of:
1) a tight feedback loop with the relevant stakeholder (ideally the individual(s) who are going to be using the end product),
2) the product itself is amenable to quick iteration (i.e. composed of many smaller features, ideally with a focus on the presentation),
3) the requirements aren't clear (for example, the client has a mostly intuitive sense of how certain features should work; perhaps there are many implicit business rules that aren't formally written down anywhere but will come up as obviously "oh, it's missing the ability to do [x]" as the product gains capabilities)
...then avoiding significant investment in upfront design and adopting an iterative approach will very often save you spending a bunch of time designing something that doesn't fit your stakeholder's needs.
On the other hand, suppose you're operating in an environment where those conditions don't hold: you're mostly working on features or products that aren't easily broken down into smaller components that can be individually released or demoed to a stakeholder, and you have fairly clear requirements upfront that don't change often (or you have access to a product manager you can work with to iterate on the requirements until they're sufficiently detailed). There, doing upfront design can save you a lot of headache wandering down dark alleys of "oops, we totally didn't account for how we'd incorporate this niche but relatively predictable use-case, so we optimized our design in ways that make it very difficult to add without redoing a lot of work".
Having some experience with both, I'll say that the second seems better: there are fewer meetings and interruptions, and the work is both faster and more pleasant since there's less context-switching, conditional on the planning and product design being competent enough to come up with requirements that won't change too often. The downsides when it goes wrong do seem larger (throwing away three months of work feels a lot worse than throwing away two weeks), but ultimately that degenerates into a question of mitigating tail risk vs optimizing for upside, and I have yet to lose three months of work (though I did manage to lose almost two consecutive months at an agile shop prior to this, which was part of a broader pattern that motivated my departure). I would recommend side-stepping the question by finding a place that does the "planning" thing well; at that point, whether the team you're on is shipping small features every week or two or working on larger projects that span months is more a question of domain than of effective strategy.
There are a few things to keep in mind:
1) The claim that 40 million Americans "deal with hunger" is, um, questionable. Their citation leads to feedingamerica.org, which cites USDA's Household Food Security in the United States report (https://www.ers.usda.gov/webdocs/publications/94849/err-270.pdf?v=963.1). The methodology used is an 11-question survey (18 for households with children), where answering 3 questions in the affirmative marks your household as having low food security. The questions asked are (naturally) subjective. Even better, the first question is this: “We worried whether our food would run out before we got money to buy more.” Was that often, sometimes, or never true for you in the last 12 months? That's a real concern to have, but it is not what people are talking about when they say "dealing with hunger". You can be running on a shoestring budget and often worry about whether you'll have enough money for food without ever actually not having enough money for food.
2) A significant percentage of the population has non-trivial issues with executive function. Also, most of the population isn't familiar with "best practices" (in terms of effective life strategies, basic finances, etc). Most people simply don't think about things like this systematically, which is how you get the phenomenon of ~50% of the population not being able to cover a $400 emergency (or whatever the exact numbers are; they're in that neighborhood). This would be less of an issue if those cultural norms were inherited, but you can't teach something you don't know, and apparently we don't teach Home Economics anymore (not that it'd be sufficient, but it would be better than nothing). This is a subject that deserves a much more in-depth treatment, but I think as a high-level claim this is both close enough to true and sufficient as a cause for what we might observe here. Making an infographic with a rotating course of 10 cheap, easy-to-prepare, relatively healthy, and relatively tasty meals is a great idea, but it'll only be useful to the sorts of people who already know what "meal prep" means. You might catch some stragglers on the margin, but not a lot.
3) The upfront costs are less trivial than they appear if you don't inherit any of the larger items, and remember, 50% of the population can't cover a mid-3-figure emergency. "Basic kitchen equipment" can be had for under $100, but "basic kitchen equipment" doesn't necessarily set you up to prepare food in a "meal prep" kind of way.
That's fine, thanks!
Twitter: Seattle approaching Lombardy levels
The claims in that Twitter thread (now deleted) have been retracted: https://mobile.twitter.com/CT_Bergstrom/status/1239348331186249728
Kai Faust (not sure if he has an account here) has already developed a prototype desktop application (cross-platform via Electron) for this.
To reiterate, I don't explicitly use anything like the procedures I described in my posts to do any sort of interpretation. I came up with them as levers to try to bridge the inferential distance between Said and me; I agree that in practice trying to use those models explicitly would be extremely error-prone (probably better than a random walk, but maybe not by much).
More salient to the point at hand: you understood (to a sufficient degree) the models I was describing, and your criticisms contain information about your understanding of those models. If for whatever reason I wanted to continue discussing those models, those two things being true would make it possible for me to respond further (with clarifications, questions about your interpretations, etc).
I was not describing the process I use to interpret novel linguistic compositions such as "authentic relationship" - my brain does that under the hood, automatically, in a process that is fairly opaque to me; despite that, the results are sufficiently accurate that I don't spend hours trying to resolve minutiae, even in highly complex technical domains.
I was attempting to use an analogy with word embeddings in multi-dimensional space to explain why the way you approach information-gathering has asymmetrical costs. I can't come up with another analogy, because your response is totally non-informative with respect to how/why/where my first analogy failed to land. Did you notice that you didn't even tell me whether you're familiar with the concepts used? I have literally zero bytes of information with which to attempt to generate a more targeted analogy.
Would it not be easy for him simply to say that?
This doesn't really seem material to the point I was trying to discuss, but (I imagine) it's because there can be a trade-off between density and precision when trying to convey information. (And, also, how is he supposed to know which parts of his post are going to be incomprehensible to which people? Again, one could put an unbounded amount of effort into specifying with ever more clarity and precision exactly what they mean by every word.)
Your response to Habryka also seems to not materially respond to his main points (the grossly asymmetrical effort involved, and the fact that the time spent is not free; it is traded off against other pursuits).
You list certain outcomes you consider beneficial, but "things are not easy to explain and have hidden complexities" is true for literally everything given a sufficient level of desired precision. It is a fully general argument in favor of asking arbitrarily vague questions.
EDIT: I did want to thank you for your straightforward answer here:
I don’t know how you generated that guess, so my answer can only be the former.
That, at least, would let me move the conversation forward with a tentative conclusion for that question, but unfortunately that answer seems to imply sufficiently different mental machinery that I'm a bit stuck regardless. I'll come back to this if I come up with something exceptionally clever to try to solve that problem, I suppose.
If I read "authentic relationship", "a relationship which is built on honest premises and communication (i.e. neither party has lied or misled the other about their background, motivations, or relevant personality characteristics)" is my first guess as to what that would mean. My question is: are you incapable of performing this sort of "decryption work" (as in, the examples you generated are your best effort), or is your chief complaint that it's effortful and error-prone (as in, you could have extrapolated something similar to what I did, but you believe that doing so is epistemically unjustified)?
I am advocating for this because, in practice, this seems to minimize the amount of time and communication necessary to make sure both parties are on the same page w.r.t. the definitions of terms used and the intent behind what is being communicated. The way you ask questions reveals almost nothing about the state of your mental map of the subject of discussion (what you think the boundaries are, how you think it corresponds to the surrounding context, etc). This increases the amount of communication required to answer your question much more than linearly - you know "where" you are confused much better than the author does. The author can guess, but the author is dealing with the entire possibility space of things you could be confused about, and the amount of work that can go into resolving that confusion is unbounded. However, if you put forth your interpretation, then ask for clarification/correction, the author has a much more constrained space to explore when diagnosing where your map is insufficiently well-specified, pointing at the wrong thing, or otherwise in conflict with the author's map. The total comes out to roughly linear time for you to produce the most straightforward possible interpretation (contingent on your actually being able to do so - it's still not clear to what degree this is a disagreement about the allowable degree of inference), plus roughly linear time for the author to identify mistakes - versus zero time for you plus unbounded time for the author.
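The cost asymmetry above can be sketched as a toy model (all names and numbers here are invented for illustration, not a claim about real conversational dynamics): with a bare question, the author must probe candidate interpretations one by one; with a proposed reading in hand, the author only has to confirm or correct it.

```python
import random

# Toy model of the asymmetry: the author must locate the reader's actual
# confusion among N candidate interpretations of an ambiguous phrase.
# N and the cost units are hypothetical.
N = 1000  # size of the space of plausible interpretations

def cost_bare_question(true_interpretation):
    """Reader reveals nothing: the author probes candidates one by one
    until hitting the right one (on average ~N/2 probes)."""
    probes = 0
    for candidate in range(N):
        probes += 1
        if candidate == true_interpretation:
            break
    return probes

def cost_with_proposed_reading(true_interpretation, reader_guess):
    """Reader proposes a best-guess reading: the author checks it (one
    probe) and issues at most one targeted correction if it's wrong."""
    return 1 if reader_guess == true_interpretation else 2

random.seed(0)
true_reading = random.randrange(N)
guess = random.randrange(N)  # even a wrong guess keeps the cost bounded

print(cost_bare_question(true_reading))
print(cost_with_proposed_reading(true_reading, guess))
```

The second function's cost is bounded regardless of how wrong the guess is, while the first grows with the size of the interpretation space - which is the "linear for you, unbounded for the author" trade described above, in miniature.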
The problem I'm having with trying to respond to the rest of your post (and the previous one in the thread) is that I don't feel like I have a better sense of your position on the more critical underlying issues now than when I first replied.
I will try to be more specific still, though I will be leaning on concepts similar to those in ML, such as embeddings, vectors, dimensionality, etc. I can try to find another set of concepts if this doesn't translate well enough. (I already tried to come up with an analogy with interfaces & generics in the software engineering sense, but couldn't actually come up with a coherent model without bringing in intersection types, at which point I gave up. Maybe that gives you some idea of what I was going for anyways.) When you performed the substitutions for "authentic", it looks like you traveled the smallest possible distance away from the "authentic" node, and not in the direction of any cluster of nodes that would be closer to (or have higher connective weight with, if you prefer) "relationship" (or "expression", or "reaction"). Naturally, the node you landed on fit the surrounding context about as well as a square peg in a round hole.
Now, to be absolutely clear, when you say that "authentic" has no standard meaning, are you claiming that "authentic" is equidistant from every other node in your graph (of all possible concepts)? I feel like we've ruled that out, but I'm not 100% sure; if that is the case then the direction I'm going in with the rest of this is probably fruitless.
If not - if you do indeed have a graph with concepts that are much closer to "authentic" than other concepts - then some of the concepts in the "authentic"-adjacent cluster will likewise be much closer to the "relationship" node along many dimensions than most of the others. What are those dimensions? Relationships have many properties and embedded concepts: participants, duration, style, etc. The dimensions relevant for linking together "authentic" and "relationship" would be more granular, likely describing the terms on which the participants engaged in the relationship and the style of communication they use. If you refuse to traverse the graph to any appreciable degree (and make public where you landed, and ideally the path you followed), it's much harder for anybody else to help you. It's not clear at which level of linguistic abstraction the disconnect lies:
1) you could be missing the "authentic" node altogether (solved by a dictionary),
2) you could be missing connections from "authentic" to "honest" to "honesty about self" (I don't think this is the problem; it's not clear how to solve it if it is),
3) you could be asserting that those connections in your graph have weights equal to, say, the connections from "authentic" to "tangerine" to "random number generator", so there's literally no way for you to privilege the first set when trying to trace a path from "authentic" to "relationship", because you have no idea which direction to go looking in (I don't think this is the problem either), or
4) you could be asserting that the first set of connections do indeed have heavier weights, but not to a sufficient degree (if there is any such degree) that you would feel justified in traversing those nodes.
EDIT: I want to note that I started writing this comment well before Habryka posted his response. It strikes me that he hit on some very similar things (at one point I edited out a sentence that called your initial question "underspecified"; it's not that it wasn't an accurate description of my feelings on the subject, but I decided to taboo that word because I thought of a better way to explain what I thought the problem was).