"You must be new around here."

Guilty as charged. But the financial model is usually one of my first areas of meta-interest when I start looking (or relooking) at a website. What are the motivations? "Follow the money," said the detective. And in the worst cases, sites with unsustainable financial models usually disappear.

It always seems to me that the money matters and that systems (including websites and companies) adjust their various behaviors to reflect where the money is coming from and how it is flowing through the system. LessWrong clearly has ongoing costs for servers and support (and I hope the helpful person in the Intercom chat room was duly compensated for the time). I also read about the big karma project in the last quarter of 2019. Nothing there about the development and evaluation costs, but it sure sounded like a lot of work was done. Somewhere in the FAQ it said that LessWrong doesn't make money, which is fine, but it did mention donations. (My observations indicate that big donors usually like to call the shots and small donors generally don't get to (which bothers me (but that might be simple projection since I'm strictly a small donor)).)

My own preference would be cost recovery, but mostly based on benefits received. Would you believe "Basically anything that people are willing to pay for should be allowed to happen"? My fantasy funding mechanism usually flies under the handle of CSB (for Charity Share Brokerage), but before speculating further I'd like to understand more about how things work now on LessWrong. (Even more than this financial question, my primary confusion right now is how to detect the current flow of activity. But maybe I should be most focused on figuring out which parts of the old activity are most worth reading? That side seems overwhelming.)

I'd better clarify that I don't think everything should be reduced to monetary values, but money is a helpful metric. Perhaps even a sustainable one. I actually think economics is mostly bogus because time is not equal to money, even approximately. The proper relationship is time >> money. (But ekronomics is another of my favorite cans of worms.)

2 Answers



LessWrong has a team of six (which includes site development and support, the Alignment Forum in collaboration with MIRI and the EA Forum in collaboration with CEA, plus some assorted smaller projects). We get some funding from small donors, but the majority of funding is from a few large donations. We chose which donations to pursue in part based on which donors would best preserve our independence, and don't talk to them about site directions and decisions very much. We are currently adequately funded and not actively seeking more donations.

Yep, for more context, most of our funding comes from the Survival and Flourishing Fund and the Open Philanthropy Project.

Again, thanks for your replies, though I'm still not sure what to make of them. On the one hand, I agree that independence is a good thing (even though I may sometimes disagree with some people's independent decisions). On the other hand, I have deep reservations about charities that, in a sense, allow governments to evade their appropriate responsibilities to the citizens of their nations. Especially in the case of serious problems, it shouldn't be a matter of luck (whether the victim stumbles across a helpful charity) or of willingness and ability to actively beg for help. (Food is an obvious example. Some people would rather starve to death than beg.) On the third hand, I think there are multiple constituencies here (within LW), and each person and each group of people has different priorities and objectives, etc.

Several more hands, but let me try a few exploratory questions instead. Which "constituency" do I belong to (from your LW team perspective)? How should I properly express support for or concern about "developments" (on LW)? BTW, I think I like the leisurely atmosphere of LW, though I may be projecting due to my recent externally forced shifts of priorities (which are also obliging me to give LW a relatively low priority). On the fourth hand, I am also having trouble figuring out what material on LW is still relevant even though it is old. LW feels rather like a virtual book in the process of formation, with various chapters in various states of completion... (The longest chunk of my career was technical editing for a TLC, but the research lab didn't publish many books. Some chapters and dissertations came across my desk from time to time, but mostly just conference papers and HR stuff.)
Charity vs. government: both have big disadvantages. Charity depends on luck, and on the victim being "popular" in some sense. Government depends on politics, and dealing with the bureaucracy is sometimes almost as humiliating as begging. That said, projects like "new Less Wrong website" in my opinion should not be paid for by government. It is something that serves a specific group, which can pay its own expenses.

Just to avoid possible confusion: the team is paid for developing and maintaining the technical infrastructure, not for writing the articles. The articles are all written by volunteers. So if you were worried about independence of content from sponsors, I hope this helps.

Making the corpus of old articles easier to navigate is a known problem, and there are several attempts to solve it: wiki, tags, books. The wiki could in theory be as organized as you want it to be. In practice, it seems to be ignored, as the main attention is on the articles, and the wiki is almost a separate project. (But recently it was integrated with tags.) Tags provide an overview of topics, catalog articles per topic, and let you find articles similar to the one you are reading. The best articles from 2018 were published as a book, and the same is planned for the following years. So if you joined recently and want a quick overview of the "best of Less Wrong", I would recommend reading the Sequences (web; PDF/epub/mobi) and the 2018 book.
Again, thank you for your thoughtful reply. I feel like I'm using a depth-first response strategy, and it's making it harder for me to see what is really going on. I think the most interesting problem raised in your response is the integration problem. If people are just contributing their thoughts because they want to, then they don't have much incentive to do the hard work of integrating their thoughts with those of other people. If Wikipedia is able to accomplish that kind of integration to a fairly high degree, I think it is due to their guiding principles, and right now I don't understand the principles of LW. I can definitely say (based on many years of professional work) that it's hard work; I was well paid for my efforts at making technical papers (up to dissertations) more cohesive and better integrated with previous research results. My newer view is that LW is almost a form of performance art, with the contributors in the role of artists.

What could LW do to encourage more integration of the content? I see it largely as a search and editing problem. Projecting again? At least I think it would be nice if LW were looking at what I am writing and searching for related content, perhaps showing candidates over in that empty space on the right side (of my biggest display). Then the editing problem would be supporting me in integrating my new content into the older content so that I could help extend or clarify that material.

But why would I make the effort? Obviously, when I was paid to do that kind of thing, the answer was easy: I was doing it for money. Trickier to justify now. I think I'm mostly writing because it helps me clarify my own thinking. I also feel a sort of "teacher's satisfaction" when I feel I've "enlightened" someone. It would be nice if the system (LW in this example) offered me some way to track my contributions. I might even feel like I'd accomplished something if I found I had written 30,00

Wikipedia generally works fine, but occasionally problems happen. Sometimes obsessive editors are rewarded with power, which they sometimes abuse to win debates on their pet topics. As long as other similarly powerful editors don't care, they are allowed to rule their little fiefdoms.

As an example, David Gerard, the admin of RationalWiki, is currently camping at the Wikipedia article on Less Wrong; most of his effort goes towards reducing the section on effective altruism and expanding the section on "Roko's basilisk"... which itself is known mostly because he previously popularized it on RationalWiki. (Also notice other subtle manipulation, like the fact that the page mentions the political opinion of 0.92% of 2016 survey participants, but the remaining 99.08% is not worth mentioning.) I mean, just make your own opinion on how much the content of Less Wrong as you see it here actually resembles the thing that is described at Wikipedia. -- One guy, with a strong grudge, willing to spend more time fighting wiki wars than all his opponents together. ¯\_(ツ)_/¯

The principles of LW... well, originally it was a shared blog by Robin Hanson and Eliezer Yudkowsky, later (circa 10 ye... (read more)

The problem here is that the goal of Wikipedia isn't to describe LessWrong as it's seen by someone who goes to LessWrong, but as it's seen by reliable secondary sources.
Just rereading the entire "question" to try to assess it, and I almost overlooked your [Viliam's] helpful numbered list. I think I have replied as appropriate (if replying was appropriate?) and hope that the notification system will let me know if I should come back. On the basis of your encouragement, I'm going to try to write something for the literacy software topic. I'm not sure on what basis you think it might be "great", but I could not find much that seemed related in my search efforts on LW. The obvious searches did produce some results, but how they are ranked is still unclear. For example, I remember a "literacy" search with four primary results, but two of them were for narrow senses of literacy such as "financial literacy". Before starting to write, I'm going to try searching from the list of tags. (It would be helpful if there were an option to sort by the numbers there... That way I could spot the more relevant tags more easily. (I'm guessing that the numbers are the authors' usage counts for the tags, but there should be a way to link to the readers' counts to capture the other side of interest? (What people want to read about, in contrast to what people want to write about. (Yet another symmetry thing?))))

Backing up to the top level, I haven't obtained much insight into the original question. I guess my summary of my understanding now would be "We're sort of above worrying about money, so go have fun with the LW tools we are creating." I think that summary reflects input from at least two of the creators of the tools. The users' side seems to be "We're having fun, and that's why we do it."
Your summary seems correct. Here is a part of LW history that may be relevant to the question of money and sponsors: the Less Wrong website you see is, from a technical perspective, already a third version.

The first version was Overcoming Bias, a shared blog of Robin Hanson and Eliezer Yudkowsky, which started in 2006. Being just two guys' personal WordPress blog, I assume the costs were negligible. The second version was Less Wrong, implemented in 2009 with a clone of the Reddit code, which started by importing Eliezer's existing articles. The initial software was free but required some maintenance and extra functionality, which was provided by TrikeApps, a company owned by Less Wrong user matt. The third version that you see now, a complete rewrite of the code, was made only a few years ago. I couldn't quickly find the exact year, but no earlier than 2017. This was the first version that was actually quite expensive to develop.

In other words, before Less Wrong started needing serious money to exist as a website, it already had more than 10 years of history. So there is strong momentum. The people who donated money are presumably the people who liked the existing LW, and therefore their wish is probably to keep it roughly as it was, only more awesome. (The people who didn't like the historical Less Wrong would probably not donate money to keep it alive.) The fans of Less Wrong, as a whole, are sufficiently rich to keep the website alive.

PS: You are taking this too seriously; probably more seriously than most users here. There is no need to overthink it. If you have an idea for a nice article, write it. If you don't, just reading and commenting is perfectly okay.
I think a better historical perspective would be that they liked what LessWrong was in its first years of existence, felt that LessWrong had declined, and saw a potential to bring it back to its old glory and make it even better.
I feel like this branch of the discussion might be related to Dunbar's Number? Either for total members or for active participants. Is there any data for number of participants over time and system versions? However I also feel like Dunbar's Number is probably different for different people. Social hubs have large numbers of personal friends, whereas I feel overwhelmed by any group of 150. My personal Dunbar's Number might be around 15?
I don't think the history here is about Dunbar's number.
Thank you for another deep and thoughtful response. But what response should I make? [Note that the second-person "you" here refers to Viliam, but there is risk of confusion if I say something to the broader (but unknown) audience. I'll try to be careful... But in this discussion I am sure that I have already used "you" with reference to someone else. [I find myself wishing that English had a mechanism to avoid confusing "you" references without ponderous third-person descriptions such as "Viliam in his comment of <timestamp> said..."]]

The easy part is to pick a couple of nits, but I'm trying to get deeper than that... When I back up (and look at the context), the volume becomes overwhelming and I'm having trouble unraveling the topics. Part of the problem is my poor and unclear writing, but it is also true that I don't understand how to use the system well. So I'm going to focus on two nits here: one that reflects my lack of understanding of the system, and one that reflects the lack of clarity in my writing. Then I'll try to get back up to a higher perspective, which seems to be the karma thing... (But that topic is more related to my earlier reply on the karma "research" from the end of 2019.)

What you described at the end of your comment is an interesting example of my lack of understanding of the LW system. Or maybe an example of my failing eyesight? I definitely knew that it worked exactly the way you described for "top-level" content, but for several days I was apparently unable to see the fifth icon on the context menu when I was working on a reply (such as this one). But this is just part of a more general lack of familiarity with the system. Another example: a few minutes ago I spent several minutes figuring out that a "5m" notation meant 5 minutes ago, not 5 months ago, even though the article had an "11y" notation for the 11 years since 2010. The section heading of "Recent Discussion" should have made it more obvious to me, but n
Hey, if you're new here, it's perfectly natural that there are some website functions you are not familiar with. I have been here for years, and there are still things I don't know. Keep reading; you will gradually get more familiar with how this all works.

Good catch! I never noticed this one. (If you move the mouse over the abbreviation, the full date and time will be displayed.)

The UI you imagine probably does not exist. What you can get is (a) the list of all articles you posted, in chronological order; and (b) the list of all comments you made, in chronological order, with links to context. Both of them are on the same page, when you click on your name. For me, this is quite enough, because the number of my posts will most likely never exceed three digits, probably not even two (though I wish the meetup announcements were displayed separately from the actual articles), and given the huge number of comments I have written over the years, I don't believe I would ever want to see them all.

Maybe read articles with the Site Meta tag? Not all of them are related to what you want, but probably most of what you want is covered somewhere there.
Thanks for the lead to the "Site Meta" tag. I have it open in another tab and will explore it next. However, my general response to your reply is that part of the problem is that I would like to see different kinds of "tracking summaries" depending on what I am trying to understand at a particular time. You introduced a new example with your mention of "meetup announcements": if you are trying to track your LW activity in terms of such meetings, then you want to see things from that perspective. What I have done in today's experiment is to open all the "recent" notifications in tabs, because it is not clear which ones are actually new... It would be helpful if the notifications pulldown list also showed the notification times (though the mouseover trick for date expansion does work on the relative dates in the floating summary that appears to the left of a notification when you hover over it). Overall I'm still having a difficult time grasping the status of this question.
LessWrong doesn't focus on AI in general but on AI safety and AGI. Saying that the answers are intuitively obvious sounds to me like not understanding the questions, or why people consider the open questions to be open and interesting. Without understanding the questions well enough, I doubt writing about them would lead to articles that are useful to anyone and thus well received.
LessWrong shows how much karma you have on your profile. That seems to me like a better metric than how many words are written.
Thank you for the reply; I am also somewhat aware of karma. It does seem useful, but not in a searchable way. Per my suggestion for extended karma (one of my first efforts on LW), I wish that karma (in a multidimensional form) were usable for self-improvement, for filtering and prioritizing, and even for searching for people who are likely to write things worth reading. I guess one helpful step would be if karma were included in the flyover display. Right now the "ChristianKI" flyover only reveals four dimensions of your identity: its age (joined date), # of sequences, # of posts, and # of comments. That gives me some idea of your activities, but isn't as helpful (in my imagination) as a radar icon showing that you are above average on consistency and accuracy and perhaps below average in some other dimensions.
Consistency and accuracy are both dimensions that are hard to measure. I don't see where you would get numbers for that.
Accuracy is relatively easy to assess. If you think someone is saying something false and you are reacting to the comment on that basis, then you should be able to cite appropriate evidence to that effect. (But the other person should be able to object to your evidence as part of a 'proper' MEPR system.) I actually think most dimensions of the reputation system should be normalized around zero, so that if people tend to give more negative reactions, then the system should be adjusted to make it more difficult to give a negative reaction, such as saying a comment is inaccurate. (However, I also think that should be weighted by the MEPR of the person making the rating. If someone has established a long track record of catching inaccuracies, then the likelihood is higher that that person is right.)

I agree that consistency is much trickier. Even in the case where I know a person has changed his mind on a topic, I would not regard it as inconsistent if there was good reason for that change. I think I might like computer support for something like that. How about a triggered search? "Show me this person's comments about <target keyword>", and I could then look over the results to see whether they are unchanged, evolving over time, or jumping back and forth. But actually that is something I would like to apply to my own comments over time. I think I am fairly consistent, but perhaps I am deluding myself?
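The triggered search described here could be sketched as a simple chronological filter over a user's comments. This is a hypothetical illustration only; the `(timestamp, text)` representation is an assumption for the sketch, not LessWrong's actual data model.

```python
# Hypothetical sketch of the "triggered search" idea: list one person's
# comments that mention a keyword, oldest first, so a reader can judge
# whether the person's position is unchanged, evolving, or oscillating.
# The (timestamp, text) pairs are an illustrative assumption, not LW's API.

def comments_about(comments, keyword):
    """Return (timestamp, text) pairs mentioning `keyword`, in time order."""
    kw = keyword.lower()
    return [(ts, text) for ts, text in sorted(comments) if kw in text.lower()]
```

A reviewer (or the author) could then scan the filtered, time-ordered results for changes of position, which is the manual half of the consistency check being imagined here.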



Between people like Jaan Tallinn and OpenPhil, there are billions of philanthropic dollars floating around. There are also a few EA individuals who don't have access to 9/10-figure sums but are still willing to fund important projects.

This both provides the actual funding of the LessWrong team and gives individuals without any traditional credentials a chance to prove themselves capable of making important intellectual contributions to rationality or AI safety, and plausibly get grants.

Providing billionaires a way to buy increased rationality in the world and decreased AI risk seems to be the strongest monetary consideration. Both are otherwise hard to buy with money.

That said, currently most people who contribute on LessWrong don't do it with financial motivations.  

12 comments

Meta: I think "Where does LessWrong stand financially?" is a very good question, and I never knew I wanted a clear answer to it until now (my model was always something like "It gets money from CFAR & MIRI, not sure how that is organized otoh"). However, the way you phrased the question is confusing to me, and you go into several tangents along the way, which causes me to only understand part of what you're asking about.

I can easily apologize for my tangents. I do tend to wander. However I can also easily blame my zen collapse from some years back. It used to be a 6-degrees-of-Kevin-Bacon world, but now things too often seem to me to be only one or even zero degrees separated. It's the same thing when you look at it that way?

Not easy to figure out how to fix my question. And even if I figured out how to improve it, I'm not sure whether I should fix it or just stay here in the comments, though I see it is possible to edit the original question.

So... Another way to word the question along your lines could be "How are the (visible) conversational flows affected by the (less visible) money flows?" (If I do modify the original, does it preserve old versions to clarify replies that will then seem out of context?)

Take the google as an example for the main topic? The google started with one set of goals and even had the motto "Don't be evil." Then the money started flowing and the business mutated. I actually think the google's de facto motto these days is "All your attention are belong to us [so we can sell your eyeballs to the paying advertisers]." But there is a fundamental inconsistency there. Advertisers do not want to pay for the most critical thinkers reasoning from the best data. Advertisers want obedient consumers who will obey the ads, whether they are selling deodorant or stinky politicians. (In another (still simplified) perspective on that subtopic of advertising (bridging back to money): the costs are extremely high for the final increments of quality needed to produce the best products, which can then be advertised as the best. In contrast, the costs are much lower for advertising that portrays merely legally adequate products as the best.)

This looks kind of like a question, but actually it's you stating your opinions about what you'd prefer as funding for LW, and then randomly speculating about LW funding specifically, with almost no tie to anything that matters about LW.  I think there are some valuable things to explore here, but you might want to start with smaller, distinctly contained concepts introduced in shortform or posts.  

Also, it's not quite identical to https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer, but the recommendations apply to anything with personal impact on the admins or posters of LW: be very careful to separate theoretical arguments from specific examples on sensitive topics.

Thank you for your reply. I looked at your link, but I am not clear about the relation of "politics" to my question as currently constrained. (Right now I see no reason to extend it in that direction unless the financial model is related to politics. I have so far seen no evidence to that effect. Maybe you could clarify how you see the relationship?)

I was trying to avoid expressing my opinions or suggestions, though if I didn't see the world (or some aspect of the world) as potentially different, maybe even better, then I would deny that there is any problem to be considered. A problem without a solution is not really a problem, but just part of the way things are and we have to live with it. To pretend that I have no opinion or perspective would be quite misleading. 

Or I could remap it to the word "question" itself? If no answer exists, then where (or why) was the question?

Perhaps you could clarify what you mean by "question" in the context of a question that is suitable input for the "New Question" prompt? Would that be a better way to approach it?

Looking (yet again) at the "Default comment guidelines": the explanation for my phrasing of the question is that my initial reading of LW seemed to indicate that money is not supposed to influence the discussions, and I am skeptical of that. I am asking for clarification, but that may amount to a request to be persuaded that LW has a viable financial model. My previous reply included a more concrete example. As a prediction? Hmm... I guess there must be some topics which are not suitable for discussion on LW, and therefore I could predict that some of them may be unsuitable for reasons related to the financial models. I still don't see anything that I disagree with, and I am already curious about what y'all are thinking (but that is part of my general theory of communication as a two-way process).

Say oops? Not yet, but it happens all the time. I hope I change my mind frequently as I learn new things, but I also try to minimize logical contradictions. I am usually trying to extend my mental frameworks so that apparent contradictions can be resolved or diverted. (I've gotten old enough that I think most of my positions have substantial data underlying them.)

It's not exactly politics, but has some of the same characteristics, in that many participants will have strong emotional reactions that interfere with exploring rationality or general lessons.  It's not that it should never be discussed, but it's more important than usual to be careful to distinguish between when you're theorizing about general concepts and when you're identifying near-mode personal beliefs and actions.

For that reason, you should keep your posts/questions small and self-contained. Asking "Is LW influenced by the financing mechanisms of the site?" is very reasonable, and you should point out any site features or behaviors that make it look like financial influence is happening. Personally, I don't see it. Asking "How is LW financed?" or "How much does LW cost to run?" is maybe reasonable, depending on your reasons for asking. Exploring the general fact that financing can affect the behavior or operation of message boards is ALSO very reasonable, but should be a separate post (it's not a question), and should use examples other than LW itself.

For myself, I frequent a WHOLE lot of groups that are minimally influenced by their funding mechanisms. Pre-internet, I ran a BBS and frequented a number of others, which were entirely hobbies, influenced only by our parents' rules for use of their paid phone lines. I see no reason to believe that LW is very much different; it's much more professional, and has way better operators than I ever was or could hope to be. But I don't think they're motivated by getting rich. I know that at least one frequent poster doesn't derive any income from participating.

strong emotional reactions

I expect being part of one's identity is key, and doesn't require notable emotional reactions.

Only processing this now, and I'd like to understand your model more deeply.  I think that beliefs being part of one's identity is highly correlated with strong emotions for things that challenge or support those beliefs.  

There are other things which engender strong emotional reactions, as well, so I think of "part of identity" as a subset of things which are difficult contexts in which to discuss rationality.  For instance, one's livelihood (even when it's not particularly part of identity) is likely a difficult topic to explore well in this forum.

Identity colors the status quo in how the world is perceived, but the process of changing it is not aligned with learning (it masks the absence of attempting to substantiate its claims), thus a systematic bias resistant to observations that should change one's mind. There are emotions involved in the tribal psychological drives responsible for maintaining identity, but they are not significant for expressing identity in everything it has a stance on, subtly (or less so) warping all cognition.

There's some clarification of what I'm talking about in this comment and references therein.

Is your [Dogon's] reference to "your model" a reference to 'my [shanen's] preferred financial model' (obliquely referenced in the original question) or a reference to Vladimir_Nesov's comment?

In the first case, my "preferred financial model" would involve cost recovery for services shared. An interesting example came up earlier in this discussion in relation to recognizing consistency in comments. One solution approach could involve sentiment analysis. In brief, if your sentiment toward some topic flips back and forth, that would indicate negative "consistency", whereas if your sentiment toward the topic is unchanged, it indicates positive consistency. (If your sentiment changes rarely, it indicates learning?) So in the context of my preferred (or fantasy) financial model, the question becomes "Are enough people willing to pay for that feature?"

Now things get more complicated and interesting in this case, because there are several ways to implement the feature in question. My hypothesis is that the solution would use a deep neural network trained to recognize sentiments. The tricky part is whether we yet know how to create such a neural network that can take a specific topic as an input. As far as I know, right now such a neural network needs to be trained for a specific domain, and the domain has to be narrowly defined. But for the sake of linking it to my financial model, I'm going to risk extending the hypothesis that way.

Now we get to an interesting branch point in the implementation of this feature for measuring consistency. Where do we do the calculations? As my financial model works, it would depend on which approach the users of the feature wanted to donate money for. I'm going to split it into three projects that could be funded:

  1. Developing the deep neural network to analyze sentiments towards input topics. This is basically a prerequisite project and unless enough people are willing to fund this project the feature is DOA.
  2. Analyzing the data with the neural network on the LW (LessWrong) side. In this version of consistency measurement there would be a lot of calculation on the LW side testing sentiments against topics, so there would be both a development project and a significant ongoing cost project. Both parts of this double project would need sufficient donor support to use this approach.
  3. Analyzing the data with the neural network on the users' side. In this version of consistency measurement, the tedious calculations could be removed from LW's servers. The trained neural network would be downloaded and each person would calculate (and optionally share) the consistency metric using the data of that person's own comments. The cost of the development project should be similar, but there wouldn't need to be donors for a major ongoing cost project. (I would actually favor this version and be more likely to donate money for this version due to privacy considerations.)

(If there are enough donors, then both 2 and 3 could be supported. However, deciding which one to implement first could be determined by which project proposal attracts enough donors first.)
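To make option 3 a bit more concrete, here is a minimal sketch of what the client-side calculation might look like. Everything here is hypothetical: a crude word list stands in for the trained sentiment network that would actually be downloaded, and "consistency" is defined (as one possible choice) as one minus the spread of a user's sentiment scores on a topic.

```python
# Sketch of the client-side consistency metric from option 3.
# The word lists below are a toy stand-in for the downloaded
# neural network; all names and formulas are illustrative only.
from statistics import pstdev
from typing import Optional

POSITIVE = {"good", "great", "useful", "sustainable"}
NEGATIVE = {"bad", "bogus", "broken", "wasteful"}

def sentiment(comment: str, topic: str) -> Optional[float]:
    """Return a crude sentiment score in [-1, 1] for `topic`,
    or None if the comment does not mention the topic."""
    words = comment.lower().split()
    if topic.lower() not in words:
        return None
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def consistency(comments: list, topic: str) -> float:
    """One possible metric: 1 - population std. dev. of the
    user's sentiment scores on the topic (1.0 = perfectly stable)."""
    scores = [s for c in comments
              if (s := sentiment(c, topic)) is not None]
    if len(scores) < 2:
        return 1.0  # too little data to demonstrate inconsistency
    return 1.0 - pstdev(scores)

comments = [
    "I think economics is mostly bogus",
    "economics is a useful lens sometimes",
    "money is a good metric, and economics helps",
]
print(round(consistency(comments, "economics"), 3))  # → 0.057
```

The point of the sketch is only that the heavy lifting (the `sentiment` call, which in the real feature would be a neural-network inference) runs entirely on the user's machine over that user's own comments, so LW's servers never see the raw data and carry no ongoing computation cost.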

In the second case, I'm afraid I don't understand what part of Vladimir_Nesov's comment was about a "model". Then again, you weren't talking to me anyway, so perhaps I should also apologize for my longish and misdirected response?

Oh, I meant the mental model behind:

"I expect being part of one's identity is key, and doesn't require notable emotional reactions."

This is the opposite of my intuition (for me, the emotional reaction is the salient feature, and being part of one's identity is just one way to generate that emotional reaction).


I do appreciate the further exposition of your financing model, though.  

Okay, and you're welcome, though I wish I had understood that part of the discussion more clearly. Can I blame it on the ambiguity of second-person references when many people are involved? (An advantage of the Japanese language in minimizing pronoun usage?)

Interesting reply, and again I thank you for your thoughts. Still not seeing how "politics" figures in. I'm not trying to provoke any emotional reactions. (Nor do I perceive myself as having any strong emotional reactions to anything I've seen on LW so far.)

The part about your BBS especially hits a nerve. I created and operated a BBS in my youth. I did include a financial model in its design, but my primary motivation at the time was to create a real cost for abuse of the BBS, and secondarily to recover some of the costs. (Dedicated hardware and an extra phone line (I think).) I did not count my programming time as a cost because I mostly regarded it as a learning experience that was also improving my own market value as a programmer. Looking back, I actually think the deficiencies in my financial model greatly limited the success of the system. If I were doing it again, I would change the priorities so that the funding model put the users' main objectives first. I can even see how I could have arranged the model to align my personal philosophy more closely with the users' objectives. (But I don't have a time machine to go back and fix it now, and I got busy with other stuff for many years after that...)

I also sympathize (?) or partially concur with the idea of keeping things small and self-contained. However, I also see that as part of the financial model. I think the Diaspora fiasco on Kickstarter is a good example of how such things can go wrong. If they had just gotten the first increment of money and started by implementing the kernel server, then maybe the project could have succeeded step by step. Instead, the project hit the jackpot, they tried to refactor and redesign around the grand new budget, and things mostly went downhill from there.

Another relevant example would be Slashdot, though I don't know how many of the people on LW are familiar with it. My perception is that its rolling ownership indicates a kind of portable nuisance status, though the nuisance may be some form of non-pressing debt rather than anything that threatens the existence of the website. Whatever the cause, it seems that Slashdot lacks the resources to fix even the oldest and best-known limitations of the system. (In particular, Slashdot's moderation system would seem to need some adjustments.)

Hmm... I feel like my use of examples is diverging from the guidelines' intended meaning for "concrete models".