Keep in mind you are not getting the same terms as the investors. And the valuation is based on the terms the recent investors got. See more details: https://www.benkuhn.net/terms/
I feel like most of the value I got from this post is from the first section on management. I wish you had talked about your experiences in more detail. A serious problem with thinking about this stuff is that there are serious issues on both sides. For example, I have found that asking for advice from people who don't actually share your goals/values is often, at best, a waste of time. They give you advice that optimizes for their goals, not yours, without being explicit about what they are doing. In the case of the management you did, I think this problem is less severe because your goals overlapped reasonably well. You just had to convince people that your goals overlapped. In many similar-ish cases the actual incentive is to avoid honesty.
Retrospectively, I essentially always regret accepting Chesterton's Fence type arguments. Overall I think the meme has been quite harmful to me. At the very least it caused me to lose a lot of time.
I loudly promote a large number of rather contentious ideas. In particular, I am an animal rights hardliner (an active member of Direct Action Everywhere) and a socialist, on top of the big rationalist stereotypes (singularity is near, poly, etc). I certainly annoy a lot of people, but socially I am doing well. I have many friends, an amazing long-term relationship, and am doing well financially. You can read my blog to see the sort of beliefs I promote.
It is unclear why this works out for me. I look rather average, which might help? Plausibly I have some sort of social skills that help me smooth things over if they get too hot. I handle conflict fairly well. It seems empirically true that many people are socially successful despite holding extremely controversial views. In some cases it even seems to help them?
Most people, including most lesswrong readers, are not top AI experts. Nor will they be able to become one quickly.
I wound up doing something similar to this:
ARKQ - 27%
BOTZ - 9%
Microsoft - 9%
Amazon - 9%
Alphabet - 8% (ARKQ is ~4% Alphabet)
Facebook - 7%
Tencent - 6%
Baidu - 6%
Apple - 5%
IBM - 4%
Tesla - 0% (ARKQ is 10% Tesla)
Nvidia - 2% (both BOTZ and ARKQ hold Nvidia)
Intel - 3%
Salesforce - 2%
Twilio - 1.5%
Alteryx - 1.5%
BOTZ and ARKQ are ETFs, and they have pretty high expense ratios. You can replicate their holdings yourself if you want to save the 68-75 basis points. BOTZ is pretty easy to replicate with only ~$10K.
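To see why ~$10K is enough to roughly replicate an ETF, you can work out whole-share counts from the published target weights and check how far the result drifts from the targets. A minimal sketch; the tickers, weights, and prices below are hypothetical placeholders, not BOTZ's actual holdings:

```python
# Sketch of ETF replication on a fixed budget.
# Weights and prices are hypothetical placeholders, not real fund data.

def replicate(budget, holdings):
    """holdings: {ticker: (target_weight, share_price)} -> {ticker: whole shares}."""
    plan = {}
    for ticker, (weight, price) in holdings.items():
        # Round down to whole shares; leftover cash stays uninvested.
        plan[ticker] = int(budget * weight // price)
    return plan

def tracking_gap(budget, holdings, plan):
    """Sum of absolute deviations between target and achieved weights."""
    return sum(
        abs(weight - plan[t] * price / budget)
        for t, (weight, price) in holdings.items()
    )

holdings = {  # hypothetical example portfolio
    "AAA": (0.40, 250.0),
    "BBB": (0.35, 120.0),
    "CCC": (0.25, 80.0),
}
plan = replicate(10_000, holdings)
print(plan)                                         # {'AAA': 16, 'BBB': 29, 'CCC': 31}
print(round(tracking_gap(10_000, holdings, plan), 4))  # 0.004
```

With a larger budget the rounding error shrinks, which is why a fund with only ~10 large-cap holdings is practical to copy at ~$10K while a 50-holding fund with small positions is not.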
I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of knowers I would trust. Also, some similar ideas have been tried and went horribly wrong in at least some cases (e.g. the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
It is commonly claimed that if you make it to 'level N' in your mathematics education, you will only retain level N-1 or N-2 long-term. Obviously there is no canonical way to split knowledge into levels. But one could imagine a chain like:
2) Calculus (think AP calc in the USA)
3) Linear Algebra and Multi-variable Calculus
4) Basic 'Analysis' (roughly proofs of things in Calculus)
5) Measure Theory
6) Advanced Analysis topic X (e.g. Evans' Partial Differential Equations)
This theory roughly fits my experience.
I don't think Qiaochu's comment is particularly low effort. He has been in Berkeley for a long time and spoke about his experiences. Given that he shared his Google doc with some people, the comment was probably constructive on net, though I don't think it was constructive to the conversation on lesswrong.
If someone posts a detailed thread describing how they want to do X, maybe people should hold off on posting 'actually trying to do X is a bad idea'. Sometimes the negative comments are right. But lesswrong seems to have gone way too far in the direction of naysaying. As you point out, the top comments are often negative on even high effort posts by highly regarded community members. This is a big problem.
I would post much more on lesswrong if there was a 'no nitpicking' norm available.
(re-posted as a top level comment at Ray's request)