Ben Thompson (https://stratechery.com/), an American industry analyst currently living in Taiwan, has a bunch of analyses on this on his blog. In a nutshell, the US has a critical infrastructure dependency on Taiwan in high-performance chip manufacturing; specifically, TSMC has a 90% share of 7nm and 5nm chips. This is critical infrastructure for which the US does not have good (or even close-enough) substitutes. Based on both these economic incentives and Biden's own statements, the US is extremely likely to reply to Chinese aggression against Taiwan with military force.
Cross-posting some thoughts:
Facebook's metaverse strategy is focusing heavily on capability / platform, not content / single-awe-of-a-moment. To them it's possibly okay if VRChat wins at the expense of Horizon Worlds, _just as long as the majority of people access it via Quest_, which they do: https://metrics.vrchat.community/?orgId=1&refresh=30s <- Quest users now outnumber PC ones 2:1.
Consider the Apple App Store fiasco, whereby Apple can basically, in one OS update, kill retargeting by introducing privacy popups into apps at the OS level, kneecapping the entire ad industry (the single major reason for FB's _decline of revenues this quarter, for the first time ever_); and unilaterally decide that everyone who takes payments for digital services and has an app on iOS (read: the entire B2C SaaS market) now has to pay 30% to them, _and make it a reality_ on pain of removal from the App Store. _And it works_.
Basically, FB wants to position itself in the same capability / platform play - if they control the device, they can dictate terms for everyone building on top of it.
Oh darn, you're right. Thank you!
I'm running simulations to get a feel for what "betting Kelly" would mean in specific contexts. See code here: https://jsfiddle.net/se56Luva/ . I observe that, given a uniform distribution of probabilities 0-1, if the maximum odds ratio is less than 40/1, this algo has a high chance of going bankrupt within 50-100 bets. Any thoughts on why that should be?
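For reference, here is a minimal Python sketch of what "betting full Kelly" means under the setup described above (the function names `kelly_fraction` and `simulate`, and the uniform-odds assumption, are mine - I haven't inspected the fiddle). One thing worth noting: with perfectly calibrated probability estimates, full Kelly never stakes the entire bankroll, so the bankroll can shrink drastically but can't hit exactly zero; an apparent bankruptcy in a simulation usually comes from a minimum-stake or rounding rule, or from the estimated win probability differing from the true one.

```python
import random

def kelly_fraction(p, b):
    """Kelly fraction for win probability p and net odds b:1
    (win b per unit staked): f* = p - (1 - p) / b, floored at 0."""
    return max(0.0, p - (1.0 - p) / b)

def simulate(n_bets, max_odds, rng, bankroll=1.0):
    """Bet full Kelly n_bets times. Odds are drawn uniformly in (1, max_odds],
    win probabilities uniformly in (0, 1), and outcomes are drawn at the same
    probability - i.e. the bettor's estimates are perfectly calibrated."""
    for _ in range(n_bets):
        p = rng.random()
        b = 1.0 + rng.random() * (max_odds - 1.0)
        stake = kelly_fraction(p, b) * bankroll
        if rng.random() < p:
            bankroll += stake * b   # win: collect b per unit staked
        else:
            bankroll -= stake       # lose: forfeit the stake
    return bankroll

# The stake fraction is always < 1, so the bankroll stays strictly positive.
print(simulate(100, 40.0, random.Random(0)) > 0)
```

If the fiddle instead rounds stakes to whole units, enforces a table minimum, or feeds the bettor noisy probability estimates, bankruptcy within 50-100 bets becomes entirely possible.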
In the context of customer development for product research, yes. For good questions on that, see e.g. the book "The Mom Test" by Rob Fitzpatrick, and the lean customer development field in general. This was solving for the general question "will developing X be paid for"; being wrong on this particular question is expensive.
In the name of supporting people actually doing stuff:
Not the grandparent, but browsing through my private notebook for potentially breaking links, e.g.:
http://lesswrong.com/r/discussion/lw/deg/less_wrong_product_service_recommendations/6yry <- which is one specific piece of advice (and a good one at that) vs https://www.lesserwrong.com/r/discussion/lw/deg/less_wrong_product_service_recommendations/6yry <- which is a 404. This actually does have a high impact both on other sites linking specific comment threads, and by extension, on SEO in general (a linked page with its content changed to empty).
(Relatedly, https://www.lesserwrong.com/non-existing-page returns HTTP 200 instead of 404, which is more wrong than http://lesswrong.com/not-existing-page)
Hello, my values of a decade ago - it's so nice to see you publicly documented! In retrospect & in particular, the level of paranoia imbued here will serve you well against incentive hijacking, and will serve as a foundation stone of goal stability.
There is one particular policy here where my thinking has changed significantly since then; and I'd love to check against Time whether it makes sense, or whether my values have shifted:
| Reject invest-y power. Some kinds of power increase your freedom. Some other kinds require an ongoing investment of your time and energy, and explode if you fail to provide it. The second kind binds you, and ultimately forces you to give up your values. The second kind is also easier, and you'll be tempted all the time.
| Optimization never stops. Avoid one-time effort if at all possible. Aim for long-term stability of the process that generates improvements. There is no room for the psychological comfort of certainty.
So, the operative word above is "freedom" (personally, I've used "possibility space maximization"), and it's super useful to run a conceptually exhaustive search across surface-level options. But.
You probably have goals of interest that you wish to achieve (e.g. "the long-term future of humanity"). Some of these might require banging at stuff for an extended period of time. You have behaviours (e.g. your meta-policies) which you keep up for an extended period of time. Whether you recognize it as such or not, you are also vesting into these; and by way of the forgetting curve and blog readership, they also require ongoing maintenance. And yes, there might come future technological change which will make them obsolete, and put you into a decision between "your values" & "rolling with the changes".
So, my counter to this is: _Anything which does not take into consideration the passage of time gets eaten by it._ Your Time is a super scarce resource - probably the scarcest of them all. One way to turn this liability into an asset is by vesting into stuff (projects, startups, skills, people, ideas, what have you) and riding the compounding interest across time. This is, to my knowledge, the only way one can scale scarce resources into epic levels of task-specific utility.
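As a toy illustration of the compounding point (numbers and model are my own, not from the original post): compare a one-off effort paying a fixed unit per period against a vested asset compounding a few percent per period. The compounding asset lags for a long while, then overtakes linear accumulation permanently:

```python
def crossover(rate):
    """First period (>= 2) at which 1 unit compounding at `rate` per period
    overtakes a cumulative linear payoff of 1 unit per period.
    A toy model of 'riding the compounding interest across time'."""
    n = 2
    total = (1.0 + rate) ** 2
    while total <= n:          # linear total after n periods is simply n
        n += 1
        total *= 1.0 + rate    # compound total grows geometrically
    return n

print(crossover(0.05))  # at 5%/period, compounding overtakes around period 93
print(crossover(0.10))  # at 10%/period, already by period 39
```

The point being that whether vesting beats one-off effort depends entirely on the horizon and the growth rate: shorten the horizon and the linear payoff wins instead - which is exactly the tension with obsolescence raised above.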
(Relatedly, it seems to me that there is a sliding scale between the need for change in the face of future changes and vesting into things, which most people tend to shift through as they age. The obvious problem here is simulated annealing being susceptible to fixation on phantom (local) maxima by way of a changing environment.)
So, unpacking the desiderata from above, the model I'd offer for consideration is the Affordable Loss Principle, with a side dish of Avoiding Infinite Optimizers:
* The affordable-loss principle prescribes committing in advance to what one is willing to lose, rather than investing in calculations about expected returns to the project. Key to affordable-loss policies is the generation of next-best alternatives, so that when it comes time to move, there is something to seamlessly move forward to.
Or, in the wise words of Zvi: https://www.lesserwrong.com/posts/ENBzEkoyvdakz4w5d/out-to-get-you
In conclusion, I'd suggest that yes, run a freedom-maximizing circle, because it eliminates conceptual blind spots, and there is a lot of low-hanging fruit you can pick up on your way. But additionally, be on the lookout for opportunities that are compact, low-hanging, and compounding across time, so that linear investments today lead to incremental & compounding utility tomorrow.
Thank you for posting this. I agree that growing negotiation skills is hard under the best of circumstances; and I agree that certain types of newbies might self-identify with the post above.
There is a qualitative difference between people who are negotiating (but lack the proper skill), and the parasites described above:
Beginner negotiators state their request, and ask explicitly (or expect implicitly) for a price / counter
More advanced negotiators start with needs/wants discovery to figure out where a mutually beneficial deal can be made, and they adjust as the discussion proceeds
These parasites, in comparison, attempt to justify their request with explicitly stated but nebulous things (or nothing at all): "Would you like to do free translation for me?" - "Cause X is very important, and therefore you, specifically, should do something about it" - "Would you like to build my full website for me in exchange for 1% of shares?"
For the record:
I have attempted education in some cases (1-on-1, no social standing on the line for either party, being discreet, etc.), to no effect, and only resentment from the other party.
I observe that this parasitic strategy works some of the time, which incentivizes existing parasitic behavior to grow until saturation. These are the reasons why I brought this up here in the first place.
Kindly note that while there was a lot more evidence going into this than described above, I am hesitant to disclose more specifics about any of these cases, because the Bay is small (-> personal identification), and the discussion isn't reflective-complete (parasites read this, too; the more I disclose here, the more they can shift their strategies).