Vaniver

Comments

Weighted Voting Delenda Est

IMO the thing voting is mostly useful for is sorting content, not users. You might imagine me writing twenty different things, and then only some of them making it in front of the eyes of most users, and this is done primarily through people upvoting and downvoting to say "I want to see more/less content like this", and then more/less people being shown that content.

Yes, this has first-mover problems and various other things, but so do things like 'recent discussion' (where the number of comments that are spawned by something determines its 'effective karma').

Now, in situations where all the users see all the things, I don't think you need this sort of thing--but I'm assuming LW-ish things are hoping to be larger than that scale.

Weighted Voting Delenda Est

I could talk about the time Jameson Quinn spent a month or two writing up an open research question in voting theory and a commenter came in and solved it.

I do think this oversells it a little, given that the Shapley value already existed. [Like, 'open research question' feels to me like "the field didn't know how to do this", when it was more like "Jameson Quinn had discounted the solution to his problem despite knowing about it, and reading a LW comment changed his mind about that."]

Mentorship, Management, and Mysterious Old Wizards

IMO the Mysterious Old Wizards are spending time, but it's in tracking people and understanding them, and tracking problems and understanding them. It's said of Erdos that he was exceptionally good at matching people with problems that were just at the edge of their ability--a less skilled Erdos would have given too many people quests that didn't cause them to grow, or quests that they failed at.

Now, maybe your response is that I'm focusing on someone who wasn't really a MOW, and was more of a manager. There's a form of wizardry that involves giving quests, and there's another form of wizardry that focuses on making heroes, and while they're related you're interested in the second one.

I guess I'm... less convinced that the second one works through these sorts of interactions, or independently of the management aspect, or so on. The various times in my life where I've been a MOW to people (maybe?) I think they all involved actually being familiar with the people involved, and having a specific vision of a strength they could develop to meet a challenge that I could see.

“PR” is corrosive; “reputation” is not.

My dictionary has "dishonor" in it, as both a noun and a verb.

Lean Startup Reading Comprehension Quiz

When I was in graduate school, they let you take the qualifying exams twice, so I didn't study at all the first time, confident that if I failed I could study and pass the second time. In that spirit, here's me answering the questions without having read the book, and without Googling anything.

1. Learning is a change in your mental model; validated learning is a change in your model that is either tied to a core metric or that you then confirm through tests.

2. I'm going to guess True. That is, both of those metrics (revenue and number of customers) are pretty hard to fake and so feel validated, and exponential growth means 'something real is happening' in startup land. It feels a bit like it's a trick question, because the metrics are between clearly real/good metrics (like profit) and more fake metrics (like number of users, as opposed to number of active users), but my guess is these are real according to the author.

3. Hmmm. "by forcing you to iterate quickly, and by keeping you in contact with users / learning from reality."

4. Innovation accounting presumably has something more like "net present value" as calculated from projections; I'm going to guess this means stuff like tracking growth rate instead of current level (since the number of users will still look explosive even if your growth is dropping from 5% to 4% to 3%).
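To make that parenthetical concrete, here's a toy illustration (all numbers invented): the cumulative user count keeps climbing impressively even while the growth rate quietly decays.

```python
# Toy illustration (invented numbers): cumulative users keep climbing
# even while week-over-week growth decays from 5% toward 3%.
users = 10_000
growth = 0.05
for week in range(1, 11):
    users *= 1 + growth
    print(f"week {week:2d}: users={users:,.0f}, growth={growth:.1%}")
    growth = max(0.03, growth - 0.002)  # the growth rate quietly eroding
```

The vanity metric (users) is up and to the right the whole time; the rate it's computed from is the thing flashing a warning.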

5. Push has stations doing work whenever they get the inputs and then sending it on; so maybe a supplier makes screws and just sends a hundred boxes of screws a week or whatever. The engine station makes engines and then sends them to assembly, assembly starts at the start and pushes things down the line, etc.; basically you have lots of work-in-progress (WIP), fungible workers working on the wrong thing (someone who could assemble doors and windows is just doing whatever they're assigned, which might not be the important thing), and potentially imbalanced flows / stations starved of necessary inputs.
Pull instead flows backwards from the outputs. I'm going to describe a particular way to implement this to make it concrete, but there are lots of different ways to do this / this is only sometimes applicable. You have a number of cars you want to get out the door, and so you look at the final station (polishing, say), and have a sense of the rate at which it can polish, which determines how many cars needing polishing should be sitting in the inbox for that station. Whenever the inbox drops below that level, the previous station (upholstery, say) gets an order, which it then tries to fulfill, which maybe drops some of its own inboxes below their levels, which then makes earlier stations generate more (screws, seats, etc.).
A thing that's nice about pull is that you've put a control system on the WIP, hoping to make sure that everyone is able to do the work that's most useful at the moment. If you don't actually need any more screws, you don't make any more screws. If you have a thousand different parts, the WIP control system is less good, and instead you just want to send orders directly to all the stations, though prioritization is then more of a challenge.
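To make those pull mechanics concrete, here's a minimal sketch of the inbox-threshold scheme; the station names, thresholds, and one-part-per-station simplification are all invented for illustration.

```python
# Minimal sketch of an inbox-threshold pull line (station names,
# thresholds, and the one-part-per-station simplification are invented).
LINE = ["screws", "upholstery", "polishing"]   # upstream -> downstream
THRESHOLD = {"screws": 4, "upholstery": 3, "polishing": 2}
inbox = dict(THRESHOLD)                        # start with every inbox full

def refill(i: int) -> None:
    """Top station i's inbox back up, pulling recursively from upstream."""
    station = LINE[i]
    while inbox[station] < THRESHOLD[station]:
        if i == 0:
            inbox[station] += 1      # head of the line makes parts from scratch
        else:
            inbox[LINE[i - 1]] -= 1  # pull one unit from the previous station
            inbox[station] += 1
            refill(i - 1)            # the pull opened a hole upstream

def ship_car() -> None:
    """A finished car leaves; the hole it leaves propagates back up the line."""
    inbox["polishing"] -= 1
    refill(len(LINE) - 1)

for _ in range(5):
    ship_car()
print(inbox)  # back at the thresholds: nothing overproduced, nothing starved
```

Shipping five cars pulls exactly five units through every station, which is the control-system point: WIP never grows past the thresholds.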

6. Presumably the pull is something like "growth"; like, you have whatever core metric you care about (like % increase in revenue week-on-week), and then you try to reason backwards from that to everything else the company is doing. You don't have an engineer who just comes in and cleans up the code every day and then goes home (a more push-based model), you have a story that goes from % increase in revenue to shipping new features to what the engineer needs to be doing this week (which might be cleaning up the codebase so that you can make new features).

7. True in that you're letting the system plan for you, instead of needing your human planners to be forecasting demand correctly. But obviously the WIP cost is a function of the underlying uncertainty.

8. False; lean often points you towards flexibility instead of rigidity, and rigidity is baked into a lot of 'economies of scale' things. Instead of getting a deal on 10,000 screws, you buy them in boxes of 100 whenever you open one and only have five boxes on hand. This helps when you suddenly stop needing as many screws, and also if you suddenly need lots of screws (since you can easily buy more boxes, whereas it may be difficult to shift the delivery date on your huge crate of screws).

9. First, the dad is able to ship his first letter sooner. Second, the dad learns things from going through the whole cycle; if, for example, the fold of the first letter doesn't quite fit in the envelope, he can improve the fold for next time, whereas the kids will discover that none of their letters quite fit in the envelope.

10. True. Employees will spend more time switching between tasks, and maybe even waiting, both of which drop productivity. This is the cost of flexibility, and it ends up paying for itself because the increased prioritization means they're getting less output out the door, but the output is more valuable.

11. Hmmm, I don't think I've heard this phrase before, but I assume it means something like trying to do lots of things at once (like the kids doing the letters in an assembly-line way without feedback), such that the product is late and low-quality, and in particular such that you have to abandon lots of WIP when the market changes underneath you. "Well, we didn't send all of the letters in time for Christmas, and now we have to start our Valentine's letters, which really can't use much of the WIP we have lying around from Christmas."

12. It's a negative feedback system for faults, errors, and incidents. Something goes wrong, and you try to get information from that event fed back into five generating systems (as defined by levels of abstraction). This then drives down (you hope) the number of future errors, eventually controlling it to 0 (in the absence of changes that then introduce new faults).

13. Hmm, I can see two meanings here. The first one, that I'm more confident in, is the "any worker can halt the line at any time" system, where giving anyone the power to identify problems immediately means that you are always either 1) producing good product or 2) fixing the line such that it produces good product. "Production" consists of 1 and 2 together, and not of 3) producing bad product, since the outputs of 3 will just have to be thrown away.
The other meaning is that if your station doesn't have any needed output, you shouldn't do something just to not be idle; this is so that if a needed order does come in, you can immediately start on it so that it's done as soon as possible. 

The GameStop Situation: Simplified

In particular, my understanding is that most people who shorted in the early days are now out (including, for some, giving up on shorting entirely) and have realized billion-dollar losses, but short interest remains approximately the same, because new funds have taken their place. It was quite risky to think a stock at $4 would decline to $0, but it's not very risky to think a stock at $350 will decline to $40. It remains to be seen where the price will stabilize (and, perhaps more importantly, when), but I think the main story is going to be "early shorts lost money, late shorts gained money, retail investors mostly lost money."
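To put rough numbers on that asymmetry (prices from the paragraph above; the share count is invented):

```python
# Rough payoff arithmetic for the short asymmetry (share count invented).
shares = 100
early_best   = (4 - 0) * shares    # short at $4, stock goes to $0:     +$400
early_actual = (4 - 350) * shares  # short at $4, stock goes to $350:   -$34,600
late_thesis  = (350 - 40) * shares # short at $350, stock falls to $40: +$31,000
print(early_best, early_actual, late_thesis)
```

The early short's maximum upside was $4 a share against an effectively unbounded downside; the late short's thesis pays $310 a share.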

Vaniver's Shortform

I am confused about how to invest in 2021. 

I remember in college, talking with a friend who was in a class on technical investing; he mentioned that the class covered momentum investing on 7-day and 30-day timescales. I said "wait, those numbers are obviously suspicious; can't we figure out what they should actually be from the past?", downloaded a dataset of historical S&P 500 returns, and measured the performance of simple momentum trading algorithms on that data. I discovered that basically all of the returns came from before 1980; there was a period where momentum investing worked, and then it stopped, but until I drilled down into the dataset (rather than just looking at the overall optimization results), it looked like momentum investing worked on net.
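The shape of the test was something like the sketch below; the file name, column names, and the exact momentum rule are stand-ins, not the original script.

```python
import pandas as pd

# Sketch of a simple momentum backtest (file name, column names, and the
# exact rule are stand-ins, not the original script).
prices = pd.read_csv("sp500_daily.csv", parse_dates=["date"],
                     index_col="date")["close"]
daily = prices.pct_change()

def momentum_returns(lookback: int) -> pd.Series:
    """Hold the index only when the trailing `lookback`-day return is positive."""
    signal = (prices > prices.shift(lookback)).astype(float)
    return daily * signal.shift(1)  # act on yesterday's signal: no lookahead

for lookback in (7, 30, 90):
    strat = momentum_returns(lookback)
    pre, post = strat[:"1979"], strat["1980":]  # the split that mattered
    print(lookback,
          f"pre-1980: {(1 + pre.fillna(0)).prod() - 1:+.0%}",
          f"post-1980: {(1 + post.fillna(0)).prod() - 1:+.0%}")
```

Run over the whole dataset at once, the strategies look fine; the pre/post-1980 split is what showed the returns were concentrated in the earlier regime.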

Part of my suspicion had also been an 'efficient markets' sense; if my friend was learning in his freshman classes about patterns in the market, presumably Wall Street also knew about those patterns, and was getting rid of them? I believed in the dynamic form of efficient markets: you could get rich by finding mispricings, but mostly by putting in the calories, and I thought I had better places to put calories. But this made it clear to me that there were shifts in how the market worked; if you were more sophisticated than the market, you could make money, but then at some point the market would reach your level of sophistication, and the opportunity would disappear.

I learned how to invest about 15 years ago (and a few years before the above anecdote). At the time, I was a smart high-schooler; my parents had followed a lifelong strategy of "earn lots of money, save most of it, and buy and hold", and in particular had invested in a college fund for me; they told me (roughly) "this money is yours to do what you want with, and if you want to pay more for college, you need to take out loans." I, armed with a study that suggested colleges were mostly selection effect instead of treatment effect, chose the state school (with top programs in the things I was interested in) that offered me a full ride instead of the fancier school that would have charged me, and had high five figures to invest.

I did a mixture of active investing and buying index funds; overall, they performed about as well, and I came to believe more strongly that active investing was a mistake whereas opportunity investing wasn't. That is, looking at the market and trying to figure out which companies were most promising at the moment took more effort than I was going to put into it, whereas every five years or so a big opportunity would come along that was worth betting big on. I was more optimistic about Netflix than the other companies in my portfolio, but instead of saying "I will be long Netflix and long S&P and that's it", I said "I will be long these ten stocks and long S&P", and so Netflix's massive outperformance over that period only put me slightly in the black relative to the S&P, instead of far ahead of it.

It feels like the stock market is entering a new era, and I don't know what strategy is good for that era. There are a few components I'll try to separate:

First, I'm not actually sure I believe the medium-term forward trend for US stocks is generically good in the way it has been for much of the past. As another historical example, my boyfriend, who previously worked at Google, has a bunch of GOOG that he's never diversified out of, mostly out of laziness. About 2.5 years ago (when we were housemates but before we were dating), I offered to help him just go through the chore of diversification to make it happen. Since then GOOG has significantly outperformed the S&P 500, and I find myself glad we never got around to it. On the one hand, it didn't have to be that way, and variance seems bad--but on the other hand, I'm more optimistic about Alphabet than I am about the US as a whole.

[Similarly, there's some standard advice that tech workers should buy fewer tech stocks, since this correlates their income and assets in a way that's undesirable. But this feels sort of nuts to me--one of the reasons I think it makes sense to work in tech is because software is eating the world, and it wouldn't surprise me if in fact the markets are undervaluing the growth prospects of tech stocks.]

So this sense that tech is eating the world / is turning more markets into winner-takes-all situations means that I should be buying winners, because underlying structural factors that aren't priced into the stocks will keep them winning. This is the sense that if I would seriously consider working for a company, I should be buying their stock, because my seriously considering working for them isn't fully priced in. [Similarly, this suggests real estate only in areas that I would seriously consider living in: as crazy as the SFBA prices are, it seems more likely to me that they will become more crazy rather than more sane. Places like Atlanta, on the other hand, I should just ignore rather than trying to include in an index.]

Second, I think the amount of 'dumb money' has increased dramatically, and has become much more correlated through memes and other sorts of internet coordination. I've previously become more 'realist' about my ability to pick opportunities better than the market, but have avoided thinking about meme investments because of a general allergy to 'greater fool theory'. But this is making me wonder if I should be more of a realist about where I fall on the fool spectrum. [This one feels pretty poisonous to attention, because the opportunities are more time-sensitive. While I think I have a scheme for selling in ways that would be attention-free, I don't think I have a scheme for seeing new opportunities and buying in that's attention-free.]

[There's a related point here about passive investors, which I think is less important for how I should invest but is somewhat important for thinking about what's going on. A huge component of TSLA's recent jump is being part of the S&P 500, for example.]

Third, I think the world as a whole is going to get crazier before it gets saner, which sort of just adds variance to everything. A thing I realized at the start of the pandemic is that I didn't have a brokerage setup where I could sell my index fund shares and immediately turn them into options. To the extent that 'opportunity investing' is the way to go / there might be more opportunities as the world gets crazier, the less value I get out of "this will probably be worth 5% more next year": the odds that I see a 2x or 5x time-sensitive opportunity really don't have to be very high for it to be worthwhile to hold cash instead of being locked into a 5% increase.
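The break-even arithmetic behind that last clause is simple, under two loud simplifications (cash earns nothing, and the opportunity pays off in full when it appears):

```python
# Break-even odds for holding cash instead of a ~5% index return
# (assumes cash earns nothing and the opportunity pays off in full).
index_return = 0.05
for multiple in (2, 5):
    opportunity_return = multiple - 1  # a 2x is +100%, a 5x is +400%
    breakeven = index_return / opportunity_return
    print(f"{multiple}x opportunity: cash wins if p > {breakeven:.2%}")
```

A one-in-twenty chance of a 2x, or a one-in-eighty chance of a 5x, already justifies sitting in cash.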

Lessons I've Learned from Self-Teaching

Interestingly, this comment made me more excited about using Anki again (my one great success with it was memorizing student names, which it's well-suited for, and I found it pretty useless for other things), because this comment has a great idea with a citation that I probably won't be able to find again unless I remember some ancillary keywords (searching "blurry to sharp" on Google won't help very much). But if I have it in an Anki deck, not only will it be more likely to be remembered, but I'll also have the citation recorded somewhere easy to search.

Alex Ray's Shortform

When choosing between policies that have the same actions, I prefer the policies that are simpler.

Could you elaborate on this? I feel like there's a tension between "which policy is computationally simpler for me to execute in the moment?" and "which policy is more easily predicted by the agents around me?", and it's not obvious which one you should be optimizing for. [Like, predictions about other diachronic people seem more durable / easier to make, and so are easier to calculate and plan around.] Or maybe the 'simple' approaches for one metric are generally simple on the other metric.

Deutsch and Yudkowsky on scientific explanation

I'm responding to claims that SI can solve long standing philosophical puzzles such as the existence of God or the correct interpretation of quantum mechanics.

Ah, I see. I'm not sure I would describe SI as 'solving' those puzzles, rather than recasting them in a clearer light.

Like, a program which contains Zeus and Hera will give rather different observations than a program which doesn't. On the other hand, when we look at programs that give the same observations, one of which also simulates a causally disconnected God and the other of which doesn't, then it should be clear that those programs look the same from our stream of observations (by definition!) and so we can't learn anything about them through empirical investigation (like with p-zombies).

So in my mind, the interesting "theism vs. atheism?" question is the question of whether there are activist gods out there; if Ares actually exists, then you (probably) profit by not displeasing him. Beliefs should pay rent in anticipated experiences, which feels like a very SI position to have. 

Of course, it's possible to have a causally disconnected afterlife downstream of us, where things that we do now can affect it and nothing we do there can affect us now. [This relationship should already be familiar from the relationship between the past and the present.] SI doesn't rule that out--it can't until you get relevant observations!--but the underlying intuition notes that the causal disconnection makes it pretty hard to figure out which afterlife. [This is the response to Pascal's Wager where you say "well, but what about anti-God, who sends you to hell for being a Christian and sends you to heaven for not being one?", and then you get into how likely it is that you have an activist God that then steps back, and arguments between Christians as to whether or not miracles happen in the present day.]

But I think the actual productive path, once you're moderately confident Zeus isn't on Olympus, is not trying to figure out if invisi-Zeus is in causally-disconnected-Olympus, but looking at humans to figure out why they would have thought Zeus was intuitively likely in the first place; this is the dissolving the question approach.

With regard to QM, when I read through this post, it relies pretty heavily on Occam's Razor, which (for Eliezer at least) I assume is backed by SI. But it's in the normal way where, if you want to postulate something other than the simplest hypothesis, you have to make additional choices, and each choice that could have been different loses you points in the game of Follow-The-Improbability. But a thing that I hadn't noticed before this conversation, which seems pretty interesting to me, is that whether you prefer MWI might depend on whether you use the simplicity prior or the speed prior; and then I think the real argument for MWI rests more on the arguments here than on Occam's Razor grounds (except for the way in which you think a physics that follows the same principles everywhere is more likely because of Occam's Razor on principles, which might be people's justification for that?).
