# Wiki Contributions

**quanticle** · 24d

The last answer is especially gross:

> He chose to be a super-popular blogger and to have this influence as a psychiatrist. His name—when I sat down to figure out his name, it took me less than five minutes. It’s just obvious what his name is.

Can we apply the same logic to doors? "It took me less than five minutes to pick the lock so..."

Or people's dress choices? "She chose to wear a tight top and a miniskirt so..."

Metz persistently fails to state why it was necessary to publish Scott Alexander's real name in order to critique his ideas.

The second wheelbarrow example has a protagonist who knows the true value of the wheelbarrow, but still loses out:

> At the town fair, a wheelbarrow is up for auction. You think the fair price of the wheelbarrow is around \$120 (with some uncertainty), so you submit a bid for \$108. You find out that you didn’t win—the winning bidder ends up being some schmuck who bid \$180. You don’t exchange any money or wheelbarrows. When you get home, you check online out of curiosity, and indeed the item retails for \$120. Your estimate was great, your bid was reasonable, and you exchanged nothing as a result, reaping a profit of zero dollars and zero cents.

But, in my example, Burry wasn't outbid by "some schmuck" who thought that Avant! was worth vastly more than it ended up being worth. Burry was able to guess not just the true value of Avant!, but also the value that other market participants placed on Avant!, enabling him to buy up shares at a discount compared to what the company ended up selling for.

The implied question in my post was, "How do you know if you're Michael Burry, or the trader selling Avant! shares for \$2?"

**quanticle** · 1mo

That point is contradicted by the wheelbarrow examples in the OP, which seem to imply that either you'll be the greater fool or you'll be outbid by the greater fool. Why wasn't Burry outbid by a fool who thought that Avant! was worth \$40 a share?

This is why I disagree with the OP; like you, I believe that it's possible to gain from informed trading, even in a market filled with adverse selection.
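To make the disagreement concrete, here's a toy simulation of the Avant!-style situation (my own sketch, with invented numbers, not a market model): a stock gets dumped to \$12 either because of scary-but-survivable publicity or because of a genuinely fatal problem. A trader who can tell the two cases apart profits; one who only sees "it looks cheap" is eaten by adverse selection.

```python
import random

random.seed(0)

def average_pnl(knows_reason, trials=100_000):
    """A stock has been dumped to $12. Half the time the sellers are
    panicking over bad publicity (true value $22); half the time they
    know about a fatal problem (true value $0). Returns the average
    profit per purchase."""
    total, bought = 0.0, 0
    for _ in range(trials):
        scary_but_fine = random.random() < 0.5
        value = 22 if scary_but_fine else 0
        # The informed trader buys only the mispriced case; the
        # uninformed trader buys whenever the stock "looks cheap".
        if scary_but_fine or not knows_reason:
            total += value - 12
            bought += 1
    return total / bought

print(average_pnl(True))   # 10.0: informed trading beats adverse selection
print(average_pnl(False))  # ≈ -1: the "cheap" stock is cheap for a reason
```

The uninformed trader loses on average even though the stock is sometimes genuinely underpriced, because the cases where it isn't are exactly the cases the sellers understand better.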

**quanticle** · 1mo

I don't think the Widgets Inc. example is a good one. Michael Lewis has a good counterpoint in *The Big Short*, which I will quote at length:

> The alarmingly named Avant! Corporation was a good example. He [Michael Burry] had found it searching for the word "accepted" in news stories. He knew, standing at the edge of the playing field, he needed to find unorthodox ways to tilt it to his advantage, and that usually meant finding situations the world might not be fully aware of. "I wasn't looking for a news report of a scam or fraud per se," he said. "That would have been too backward-looking, and I was looking to get in front of something. I was looking for something happening in the courts that might lead to an investment thesis." A court had accepted a plea from a software company called the Avant! Corporation. Avant! had been accused of stealing from a competitor the software code that was the whole foundation of Avant!'s business. The company had \$100 million cash in the bank, was still generating \$100 million a year of free cash flow -- and had a market value of only \$250 million! Michael Burry started digging; by the time he was done, he knew more about the Avant! Corporation than any man on earth. He was able to see that even if the executives went to jail (as they did) and the fines were paid (as they were), Avant! would be worth a lot more than the market then assumed. Most of its engineers were Chinese nationals on work visas, thus trapped -- there was no risk that anyone would quit before the lights were out. To make money on Avant!'s stock, however, he'd probably have to stomach short-term losses, as investors puked up shares in horrified response to negative publicity.
>
> Burry bought his first shares of Avant! in June 2001 at \$12 a share. Avant!'s management then appeared on the cover of Business Week, under the headline, "Does Crime Pay?" The stock plunged; Burry bought more. Avant!'s management went to jail. The stock fell some more. Mike Burry kept on buying it -- all the way down to \$2 a share. He became Avant!'s single largest shareholder; he pressed management for changes. "With [the former CEO's] criminal aura no longer a part of operating management," he wrote to the new bosses, "Avant! has a chance to demonstrate its concern for shareholders." In August, in another e-mail, he wrote, "Avant! still makes me feel I'm sleeping with the village slut. No matter how well my needs are met, I doubt I'll ever brag about it. The 'creep' factor is off the charts. I half think that if I pushed Avant! too hard I'd end up being terrorized by the Chinese mafia." Four months later, Avant! got taken over for \$22 a share.

Why should Michael Burry have assumed that he had more insight about Avant! Corporation than the people trading with him? When all of those other traders exited Avant!, driving its share price to \$2, Burry stayed in. Would you have? Or would you have thought, "I wonder what that trader selling Avant! for \$2 knows that I don't?"

> Why isn’t there a standardized test given by a third party for job relevant skills?

That's what Triplebyte was trying to do for programming jobs. It didn't seem to work out very well for them. Last I heard, they'd been acquired by Karat after running out of funding.

**quanticle** · 3mo

> My intuition here is “actually fairly good.” Firms typically spend a decent amount on hiring processes—they run screening tests, conduct interviews, look at CVs, and ask for references. It’s fair to say that companies have a reasonable amount of data collected when they make hiring decisions, and generally, the people involved are incentivized to hire well.

Every part of this is false. Companies don't collect much useful data during the hiring process, and the data they do collect is often irrelevant or biased. How much do you really learn about a candidate by checking whether they've memorized the tricks for solving programming puzzles on a whiteboard?

The people involved are not incentivized to hire well, either. They're often engineers or managers dragged away from the tasks they are incentivized to perform in order to check a box showing that they participated in the minimum number of interviews necessary to not get in trouble with their managers. If they take hiring seriously, it's out of altruistic motivation, not because it benefits their own careers.

Furthermore, no company actually goes back and determines whether its hires worked out. If a new hire doesn't work out, and is let go after a year's time, does anyone actually go back through their hiring packet and determine if there were any red flags that were missed? No, of course not. And yet, I would argue that that is the minimum necessary to ensure improvement in hiring practices.

The point of a prediction market in hiring is to enforce that last practice. The existence of fixed-term contracts with definite criteria, and payouts tied to those criteria, forces people to go back, look at their interview feedback, and ask themselves, "Was I actually correct in my decision that this person would or would not be a good fit at this company?"
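As a sketch of what "definite criteria and payouts" could look like (this is my own illustration, with made-up interviewer names, not an existing system): score each interviewer's stated probability that the hire works out by the end of the fixed term, using a proper scoring rule so that honest forecasts maximize expected payout.

```python
def settle(predictions, hire_worked_out):
    """predictions maps each interviewer to the probability they gave
    that the hire would work out by the end of the fixed term. Payouts
    use the quadratic (Brier) proper scoring rule, under which reporting
    your true belief maximizes your expected payout."""
    outcome = 1.0 if hire_worked_out else 0.0
    return {name: round(1.0 - (p - outcome) ** 2, 3)
            for name, p in predictions.items()}

# The hire was let go after a year: confident optimism gets penalized.
print(settle({"alice": 0.9, "bob": 0.4}, hire_worked_out=False))
# {'alice': 0.19, 'bob': 0.84}
```

The specific scoring rule matters less than the fact that settlement happens at all: someone has to go back and compare the interview-time predictions against what actually happened.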

> Do you know a healthy kid who will do nothing?

Yes. Many. In fact, I'd go so far as to say that most people in this community who claim they're self-motivated learners stunted by school would have been worse off without the structure of a formal education. One need only go through the archives and look at all the posts about akrasia to find evidence of this.

What does "lowercase 'p' political advocacy" mean, in this context? I'm familiar with similar formulations for "democratic" ("lowercase 'd' democratic") to distinguish matters relating to the system of government from the eponymous American political party. I'm also familiar with "lowercase 'c' conservative" to distinguish a general reluctance to embrace change from any particular program of traditionalist values. But what does "lowercase 'p' politics" mean? How is it different from "uppercase 'P' Politics"?

A great example of a product actually changing for the worse is Microsoft Office. Up until 2003, Microsoft Office had the standard "File, Edit, ..." menu system that was characteristic of desktop applications in the '90s and early 2000s. With Office 2007, though, Microsoft radically changed the menu system: they introduced the Ribbon. I was in school at the time, and a representative from Microsoft came and gave a presentation on this bold new UI. He pointed out how, in focus-group studies, new users found it easier to discover functionality with the Ribbon than they did with the old menu system. He pointed out how the Ribbon made commonly used functions more visible, and how, over time, it would adapt to the user's preferences, hiding functionality that was little used and surfacing functionality that the user had interacted with more often.

Thus, when Microsoft shipped Office 2007 with the Ribbon, it was a great success, and Office gained a reputation for having the gold standard in intuitive UI, right?

Wrong. What Microsoft forgot is that the average user of Office wasn't some neophyte sitting in a carefully controlled room with a one-way mirror. The average user of Office was upgrading from Office 2003. The average user of Office had web links, books, and hand-written notes detailing how to accomplish the tasks they needed to do. By radically changing the UI like that, Microsoft made all of that tacit knowledge obsolete. Furthermore, by making the Ribbon "adaptive", they actively prevented new tacit knowledge from being formed.

I was working helpdesk for my university around that time, and I remember just how difficult it was to show people how to do tasks in Office 2007. Instead of writing down (or showing with screenshots) the specific menus they had to click through to access functionality like line or paragraph spacing, and disseminating that, I had to sit with each user, ascertain the current state of their unique special snowflake Ribbon, and then show them how to find the tools that would let them do whatever it was they wanted to do. And then I had to do it all over again a few weeks later, when the Ribbon adapted to their new behavior and changed again.

This task was further complicated by the fact that Microsoft moved away from having standardized UI controls to making custom UI controls for each separate task.

For example, here is the Office 2003 menu bar:

Note how it's two rows. The top row is text menus. The bottom row is a set of legible buttons and drop-downs which give the user access to commonly used tasks. The important thing to note is that everything in the bottom row of buttons also exists as a menu entry in the top row. If the user is ever unsure of which button to press, they can always fall back to the menus. Furthermore, documents can refer to the fixed menu structure, allowing for simple text instructions telling the user how to access obscure controls.

By comparison, this is the Ribbon:

(Source: https://kb.iu.edu/d/auqi)

Note how the Ribbon is multiple rows of differently shaped buttons and dropdowns, without clear labels. The top row is now a set of tabs, and switching tabs now just brings up different panels of equally arcane buttons. Microsoft replaced text with hieroglyphs. Hieroglyphs that don't even have the decency to stand still over time so you can learn their meaning. It's impossible to create text instructions to show users how to use this UI; instructions have to include screenshots. Worse, the screenshots may not match what the user sees, because of how items may move around or be hidden.

I suspect that many instances of UIs getting worse are due to the same sort of focus-group induced blindness that caused Microsoft to ship the ribbon. Companies get hung up on how new inexperienced users interact with their software in a tightly controlled lab setting, completely isolated from outside resources, and blind themselves to the vast amount of tacit knowledge they are destroying by revamping their UI to make it more "intuitive". I think the Ribbon is an especially good example of this, because it avoids the confounding effect of mobile devices. Both Office 2003 and 2007 were strictly desktop products, so one can ignore the further corrosive effect of having to revamp the UI to be legible on a smartphone or tablet.

Websites and applications can definitely become worse after updates, but the company shipping the update will think that things are getting better, because the cost of rebuilding tacit knowledge is borne by the user, not the corporation.

3-year-later follow-up: I bought a Pilot Hi-Tec-C Coleto pen for my brother, who is in a profession where he has to write a lot, color-code forms, etc. He likes it a lot. Thanks for the recommendation.