Viliam


One issue that arises with starting socialist firms is acquiring initial investment. This is probably because co-ops want to maximize income (wages), not profits. They pursue the interests of their members rather than investors and may sometimes opt to increase wages instead of profits. Capitalist firms, on the other hand, are explicitly investor-owned, so investor interests take priority.

This does not explain, for example, why we do not have more software development co-ops. The costs of starting a new software company are not that high, and a group of experienced software developers should have decent savings. They already own the means of production, i.e. their brains and laptops, and they can rent the other necessary resources from Amazon. Thanks to remote work, they do not even have to live in the same city.

By the way, you seem to suggest that a capitalist investor is able to prioritize long-term wealth over immediate consumption, but co-op employees are not. Not even when all the benefits of working at a co-op are at stake, and they know that if the co-op fails, they will have to return to their previous open-plan offices and agile meetings. Why is that?

Then we probably should not use the same word.

Startups sometimes offer shares to their early employees, but it seems to me this is usually some kind of scam. Or maybe "scam" is an unnecessarily harsh word, but it is definitely a form of "it does not mean what you assume it means, and I strategically do not correct your misconceptions, which are obvious to me".

Only a small fraction of startups sell for millions, but when they do, only a small fraction of employees who own shares actually get rich. Most often they find out that their shares are some special kind of shares (different from the ones owned by the CEO), and therefore... blah blah... they are practically worthless. Or they find out that the shares were so diluted that although the company grew 1000×, the value of the shares did not. Etc.; I am sure new tricks to make employee shares worthless are being invented every day.

So, I guess a campaign to make co-ops more attractive should include an explanation of why current employee shares are, for all practical purposes, not a form of worker ownership.

Oh, come on, the afterword ruined an otherwise great article!

I would love to learn more about co-ops. As far as I know, Mondragon Corporation is probably the biggest one, but there is frustratingly little information about them online.

What I have heard about co-ops (no idea how generally this applies) is that if they grow, they often change into something else. Like, you have a worker-owned company with 100 workers, and one day they have an opportunity to grow 10× in size. And suddenly the original 100 workers are like: "wait, we spent decades working for this moment to happen, and now the new guys should be our equals? no way!", and the company finds a way to change itself in such a way that the new 900 workers are mere employees, not co-owners. (Perhaps the Mondragon Corporation is exceptional precisely for not doing this. They encourage new people to join their system; they even lend them money to start new branches within the system.)

The difficult part about starting a co-op is, in my opinion, simply the fact that people who have the skills necessary to start a co-op can usually use the same skills to start a company that they fully own.

Calling co-ops "socialist" means needlessly introducing a culture war. I grew up under socialism, and we had lots of companies that were definitely not owned by their workers, where the workers did not put in much effort, and the only reason they didn't fail was the government endlessly bailing them out. On the other hand, there are co-ops in capitalist countries. So please do not use "socialist" and "co-op" as synonyms!

Given the recent discussion surrounding the structuring and transparency of EA organizations, perhaps the community could consider turning their EA organizations into socialist firms.

Not sure if you are aware of the irony, but notice that you are proposing to convert a "non-socialist" firm into a "socialist" one, instead of creating a new "socialist" firm from scratch. Which shows that even you do not see creating a "socialist" firm from scratch as a realistic goal. Perhaps it is much easier for your tribe to take over an existing successful organization than to create a new one, but shouldn't your opponents be the ones making this point?

Try figuring out a way to start co-ops without taking over existing firms, and you may find much less resistance. (Yes, it is possible, the Mondragon Corporation did it.) You might find that very difficult, but then you will be addressing the real problem.

Notice that if you cannot solve the problem of starting new co-ops, then even if co-ops took over the world thanks to some revolution, it wouldn't work in the long term, because the existing ones would sometimes deteriorate and there would be no new ones to replace them.

Not a result of in-our-universe evolution, or evolution in general? That is, could it be an intelligent species that runs a simulation containing us, where that species itself evolved in its own universe? Or does it have to be some kind of just-magically-appearing intelligence?

Assuming a Tegmark multiverse, there must be a universe somewhere where the laws of physics themselves just happen to encode an intelligent being. It's just very, very unlikely. Us being in a simulation is probably more likely.

we can do this collectively - e.g. my clickbait is probably clickbaity for you too

This assumes good faith. As soon as enough people learn about the Guardian AI, I expect Twitter threads coordinating people: "let's flag all outgroup content as 'clickbait'".

Just like people are abusing current systems by falsely labeling the content they want removed as "spam" or "porn" or "original research" or whichever label effectively means "this will be hidden from the audience".

I don't know of such a source, but if I tried to create one, it would probably be a list of computer achievements, with years. Such as "defeats humans in chess", "defeats humans in go", "writes poetry", "draws pictures"... also some older ones, like "finds a travel route between cities", "makes a naive person believe they are talking to another human"... and ones that have not been achieved yet, like "drives a car in traffic", "composes a popular song", "writes a popular book", "defeats a human army".

The achievements would be listed chronologically, the future ones only with a question mark (ordered by their estimated complexity). The idea would be to see that the distances between years have been getting shorter recently. (That of course depends on which specific events you choose, which is why this isn't an argument, more like an intuition pump.) This could be further emphasized by drawing a timeline below the list, with big dots corresponding to the listed items (so you see the distances getting smaller) and a question mark beyond 2022.
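To make the timeline idea concrete, here is a minimal Python sketch. The events and years are illustrative picks of mine (roughly right, but not carefully sourced); a real version would need a curated list.

```python
# Sketch of the milestone timeline described above.
# Events and years are illustrative/approximate, not a sourced dataset.
import matplotlib.pyplot as plt

milestones = [
    (1956, "finds a travel route between cities"),               # shortest-path algorithms
    (1966, "makes a person believe they talk to a human"),       # ELIZA-style chatbots
    (1997, "defeats humans in chess"),                           # Deep Blue
    (2016, "defeats humans in go"),                              # AlphaGo
    (2022, "writes poetry, draws pictures"),                     # large generative models
]
not_yet = ["drives a car in traffic?", "composes a popular song?",
           "writes a popular book?", "defeats a human army?"]

fig, ax = plt.subplots(figsize=(10, 3))
years = [year for year, _ in milestones]
ax.scatter(years, [0] * len(years), s=80)          # big dots on the timeline
for year, label in milestones:
    ax.annotate(f"{year}: {label}", (year, 0), rotation=45,
                textcoords="offset points", xytext=(0, 10), ha="left")
ax.annotate("?", (2030, 0), fontsize=20, ha="center")   # the open question beyond 2022
ax.set_yticks([])
ax.set_xlim(1950, 2040)
ax.set_title("Computer achievements over time (gaps shrinking)")
plt.tight_layout()
plt.show()

print("Not yet achieved:", ", ".join(not_yet))
```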

I agree with the idea in general, but not with the implementations I see.

If delivering things reliably on time is so important, you could simply hire more people. (Not at the last minute when you already see that you will miss the deadline; by then it's usually too late.)

In my experience, many software projects are late because the teams are chronically understaffed. If you are meeting deadlines reliably, the managers feel that you have too many people on your team, so they remove one or two. (Maybe in other countries this works differently, I don't know.) Then there is no slack, which means that things like refactoring almost never happen, and when something unexpected happens, either the deadline is missed or everyone is at least under great stress.

The usual response to this is that hiring more people costs more money. Yes, obviously. But the alternative, sacrificing lots of productivity to achieve greater reliability, also costs money.

Now that I think about it, maybe this is about different levels of management having different incentives. Like, maybe the upper management makes the strategic decision to sacrifice productivity for predictability, but then the lower management threatens predictability by keeping the teams too small and barely meeting the deadlines, because that is what their bonuses come from? I am just guessing here.

It seems like multiplication isn't the right model, because different ideas have different "work : profit" curves.

For example, the idea of "getting a job" will give you a relatively safe income with mediocre execution, but with additional effort the profit scales only logarithmically. On the other hand, if you sell something online, a good marketing campaign can increase your profit dramatically. Some jobs are about persistence: for example, if people pay for your lectures, then the fact that you have already been giving lectures for 20 years and have thousands of satisfied customers becomes a sufficient reason for more people to pay you. Some are about luck: you produce Angry Birds and maybe you make millions, but most likely no one will care.

So "choosing a better dish" could be realizing that in your current situation (your preferences, skills, resources), a different idea has a better curve than your current one.
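As a toy illustration of the different curve shapes (the specific functions below are arbitrary choices of mine, not empirical claims), something like this captures the contrast:

```python
# Toy comparison of the "work : profit" curve shapes mentioned above.
# The exact functions are illustrative placeholders, not measured data.
import numpy as np

effort = np.linspace(1, 100, 100)

job = 50 + 20 * np.log(effort)                    # safe baseline, slow logarithmic growth
online_sales = 0.05 * effort ** 2                  # marketing effort compounds quickly
hit_or_miss_app = np.where(effort >= 99, 10_000, 1)  # mostly nothing, rare huge payoff

for name, curve in [("job", job), ("online sales", online_sales),
                    ("hit-or-miss app", hit_or_miss_app)]:
    print(f"{name:>15}: effort 10 -> {curve[9]:8.1f}, effort 100 -> {curve[99]:8.1f}")
```

The point is only that the same extra effort buys very different returns depending on which curve you are on, which is why "multiply idea by execution" hides too much.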

I guess I can be happy if there is any data on this at all, and then see what parameters are available.

Yeah.

Well, taking your question completely literally (a group of N people doing an IQ test together), there are essentially two ways to fail an IQ test. Either you could solve each individual problem given enough time, but you run out of time before the entire test is finished; or there is a problem that you cannot solve (better than guessing randomly) regardless of how much time you have.

The first case should scale linearly, because N people can simply split the test and each do their own part. The second would probably scale much more slowly, roughly logarithmically, because it requires a different approach, and many people will keep trying the same thing.
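A toy model of the two regimes (my own simplifying assumptions: 100 problems; one person finishes 10 of them in the time-limited case; in the ability-limited case each person independently cracks a given hard problem with probability 0.1, which ignores the "everyone tries the same thing" effect):

```python
# Toy model of how "number of solved problems" scales with group size N.
# Assumptions (mine): 100 problems; time-limited regime: one person finishes 10,
# so N people splitting the test finish ~10*N (capped at 100); ability-limited
# regime: each person independently solves a hard problem with probability p,
# and the group gets it if anyone does.
p = 0.1  # chance that a single person solves a given hard problem

for n in [1, 2, 5, 10, 50, 100]:
    time_limited = min(10 * n, 100)              # linear until the test runs out
    ability_limited = 100 * (1 - (1 - p) ** n)   # per-problem chance 1 - (1-p)^N
    print(f"N={n:3d}: time-limited ~{time_limited:3d} solved, "
          f"ability-limited ~{ability_limited:5.1f} solved")
```

The first column grows linearly until the test runs out; the second saturates quickly, which matches the much-slower-than-linear intuition, even if the exact shape is debatable.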

...but this is still about how "the number of solved problems" scales, and we need to convert that value to IQ. The standard way is "what fraction of the population would do worse than you". But this depends on the nature of the test. If the test is "a zillion simple questions, not enough time", then a dozen random students together will do better than Einstein. But if the test is "a few very hard questions", then perhaps Einstein could do better than a team of a million people, if some wrong answer seems more convincing than the right one to most people.

This reminds me of chess, where great players sometimes play against groups of people, sometimes against the entire world. Not the same thing you want, but you might be able to get more data here: the records of such games and the ratings of the players involved.
