I generally agree, but I also find that people accuse others of nitpicking or excessive nuance when they're trying to defend ideas that feel true but are logically weak.

I find the key distinction between rigor and nitpicking to be whether the detail being argued over is foundational to any of the argument's premises.

This is a very well-written piece that asks a lot of interesting questions. I probably won't be able to go through all of it right now, but I wanted to respond to a few initial points, and hopefully, my response is at least half as coherent as your original post.

I agree that the metric for 'progress' is mostly amorphous, but if we accept the simplified version of what's been described as 19th-century progress, I think we're mostly doing a good job. Some of the things called mistakes here seem to me to have been generally successful.

19th-century Mistakes?

That technology would lead to world peace

  • That technology would lead to world peace. Enlightenment thinkers such as Condorcet had forecast progress in morality and society just as much as in science, technology and industry. By the late 1800s, this seemed plausible. The previous century had seen monarchy and aristocracy replaced by democratic republics, and the end of slavery in the West. Economic growth was making everyone richer, and free trade was binding nations together, giving them opportunities for win-win collaboration rather than destructive, zero-sum competition. The telegraph in particular was hailed as an invention that would unite humanity by allowing us to better communicate. Everything seemed to be going relatively well, especially after 1871 (end of the Franco-Prussian War), for over 40 years…

Technology and economic growth seem to have generally made the world more peaceful, but they've also led to advances in coordination and weapons technology. The world is generally more peaceful in terms of the proportion of its population at war at any given moment. Most countries also seem to find war less acceptable and start to lose public support for a given conflict as casualties mount; total casualties per conflict seem to be trending downward.

However, in the rare cases where society actually approves of a conflict, we now have the ability to coordinate more advanced weapons technology and create more havoc in a shorter time. I think the world is more peaceful for the average person, but the average person now has more destructive power, so bad apples are more apparent.

That "improvements on nature" would avoid unintended consequences

  • That “improvements on nature” would avoid unintended consequences. (This one may have been implicit.) It’s good to try to improve on nature; it’s bad to go about it blithely and heedless of risk. One striking example is the popularity of “acclimatization societies”, “based upon the concept that native fauna and flora were inherently deficient and that nature could be greatly improved upon by the addition of more species…. the American Acclimatization Society was founded in New York City in 1871, dedicated to introducing European flora and fauna into North America for both economic and aesthetic purposes. Much of the effort made by the society focused on birds, and in the late 1870’s, New York pharmacist Eugene Schieffelin led the society in a program to introduce every bird species mentioned in the works of Shakespeare.” (Emphasis added.) These importations led to invasive pests that threatened crops, and were ultimately placed under strict controls.

Even just sticking to plants and animals, we've learned that a great many introduced species damage local ecosystems, but we've also discovered that many plants can be safely transplanted to new areas to provide new foods or plant products.

Examples

  • Potatoes and corn came from the Americas and have become common foods around the world, especially in poor areas
  • Semi-dwarf wheat bred to grow in harsh conditions has doubled to quintupled wheat production in many countries and provides a significant share of total calories
  • Many plant species not native to an area, like mint, dill, or lavender, can be planted alongside local plants and repel the pests that harm them.

There are a lot of ways to mess up delicate systems, but just because there are ways to break a system doesn't mean there aren't also ways to optimize it.

That progress was inevitable

  • That progress was inevitable. The most optimistic thinkers believed not only that continued progress was possible, but that it was being driven by some grand historical force. Historian Carl Becker, writing about this period soon after it had passed, spoke of the conviction that “the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards,“ adding that “the doctrine was in essence an emotional conviction, a species of religion.”

This gets down to semantics about what exactly progress is, but society and civilization seem to keep a good record of past failures and successes, so it does seem generally inevitable that people will try to fix problems (personal or societal) that they think can be fixed. Others can see what did and didn't work and continue to make further small improvements. Humans as a group seem to like solving problems and puzzles, so as long as no solution wipes us all out, we're probably going to keep progressing. The only real question is which environments allow slower or quicker progress.

20th-century challenges

I think many of these things may have changed society, but I don't think they changed, or logically should have changed, our ideal of progress.

The world wars

The world wars. With World War I, it became clear that technology had not led to an end to war; it had made war all the more horrible and destructive. Progress was not inevitable, certainly not moral and social progress. By the end of World War 2, the atomic bomb in particular made it clear that science, technology and industry had unleashed a new and very deadly threat on the world.

The wars, I think, were the main catalyst for the change. But they were not the only challenge to the idea of progress. There were other concerns that had existed at least since the 19th century:

Humans will always have some conflict, and greater technology and coordination between groups will allow those conflicts to be more destructive, but humans as individuals are less prone to violence and less approving of the deaths of others. A world government that could collectively exclude or punish state aggressors, for example, could reasonably mitigate the world-war problem.

The atomic bomb could end civilization at any moment, so I tend to agree that despite the large possible benefits of nuclear technology, it was probably a bad thing.

Poverty and inequality

Poverty and inequality. Many people were still living in dilapidated conditions, without even toilets or clean water, at the same time as others were getting rich from new industrial ventures.

Technological (food, power, healthcare) and economic progress are generally improving conditions around the world, but they also improve conditions enough that poor countries tend to have more children, and those children are more likely to survive, so a great many people remain in poverty. However, past a certain point of education and economic prosperity, people tend to have fewer children. We may just have to wait for progress to complete that cycle in some areas.

Job loss and economic upheaval

Job loss and economic upheaval. As technology wrought its “creative destruction” in a capitalist economy, entire professions from blacksmiths to longshoremen became obsolete. As early as the 1700s, groups led by “Ned Ludd” and “Captain Swing” smashed and burned textile machinery in protest.

New technology destroys old professions and creates new ones. Machine learning engineers, social media managers, and professional e-sports gamers didn't exist a few decades ago. We generally seem to create new niche needs once technology fills one of our basic needs. If AI actually does end up destroying old jobs without creating new ones to replace them, that seems more like an argument for modifying current forms of government than for needing a different measure of progress.

Harms, risks, and accountability in a complex economy

Harms, risks, and accountability in a complex economy. As the economy grew more complex and people were living more interconnected lives, increasingly in dense urban spaces, they had the ability to affect each other—and harm each other—in many more ways, many of which were subtle and hard to detect. To take one example, households that once were largely self-sufficient farms began buying more and more of their food as commercial products, from increasingly farther distances via rail. Meat packing plants were filthy; milk was transported warm in open containers; many foods became contaminated. In the US, these concerns led in 1906 to the Pure Food & Drug Act and ultimately to the creation of the FDA.

Most new developments seem to have early problems, and further developments allow us to identify and correct them. There are certainly problems in areas like industrial agriculture, but we have to choose between accepting problems that we try to identify and mitigate, and dropping total production by an order of magnitude by switching back to more local systems. Which trade-off is better at least partially depends on whether you think more people are good, bad, or neutral. I generally think that larger populations create a more interesting environment and produce more of the things I enjoy in the world.

Concentration of wealth and power

Concentration of wealth and power. The new industrial economy was creating a new elite: Rockefeller, Morgan, Carnegie. Their wealth came from business, not inheritance, and their power was more economic than political, but to many people they looked like a new aristocracy, little different than the old. In America especially, the people—who just a few generations ago had fought a war to throw off monarchical rule—were suspicious of this new elite, even as they celebrated rags-to-riches stories and praised the “self-made man.” It was a deep conflict that persists to this day.

Concentrations of wealth and power seem to be endemic to any system where strict coordination of a large group can produce more than a smaller, less coordinated group. This is generally why people go through the effort of forming larger groups at all, and why military structures have extremely concentrated, hierarchical power structures.

I'm skeptical that people distrust the inherent power of business elites as such; rather, they correctly recognize that the interests of business elites don't necessarily align with their own. Continually improving regulation seems to be mitigating this problem over time, though we appear to be at another high point in power concentration, and we likely need to strengthen antitrust regulation to dissipate the individual power of some corporations and business leaders.

Resource consumption

Resource consumption. Long before Peak Oil, William Stanley Jevons was warning of Peak Coal. Others predicted the end of silver or other precious metals. Sir William Crookes (more accurately) sounded the alarm that the world was running out of fertilizer. Even as people celebrated growth, they worried that the bounty of nature would not last forever.

Further technological progress in energy will likely solve the kinds of resource consumption problems we have now. We will probably consume more energy as we become able to produce more, but critically, we're increasingly focused on energy production that doesn't carry as many negative environmental trade-offs. Most other resource consumption problems just require different strategies: precious metals for aesthetic purposes are dumb, and most precious metals in tech can be substituted with other elements that yield slightly worse technology or can likely be replaced as new technology is developed.

Pollution

Pollution. Coal use was blackening not only the skies but the houses, streets, and lungs of cities such as London or Pittsburgh, both of which were likened to hell on Earth because of the clouds of smoke. Raw sewage dumped into the Thames in London led to the Great Stink and to cholera epidemics. Pesticides based on toxic substances such as arsenic, dumped in copious quantities over crops, sickened people and animals and poisoned the soil

Pollution was bad and has been a consistent problem, but the most visible forms of pollution have peaked and are now trending downward. Our normal ideal of progress created a new problem but also created a better overall solution to it. Newer forms of energy are likely to decrease pollution further.

The environment

The environment, as such. The 19th century may have worried about pollution and resources, but in the 20th century these concerns were united into a larger concept of “the environment” considered as a systematic whole, which led to new fears of large-scale, long-term unintended consequences of industrial activity.

Environmental problems seem to be following a similar trend to many of the other problems here: we identify a problem, we create a solution, that solution creates other problems, we create solutions to those, and the process continues, recursively refining itself as necessary.

Regrouping in the 21st century

  • Is material progress actually good for humanity? Does it promote human well-being? Or is it an unhealthy “addiction?”

Material progress is close to objectively good with respect to increased access to food, shelter, and medical care. It's also very likely good with respect to most technologies that entertain us or make our lives more comfortable, and possibly bad with respect to technologies that ultimately make us less connected to friends and family, like computers or smart devices, where many shallow relationships seem to displace fewer but more meaningful ones.

  • Is progress “unsustainable?” How do we make it “sustainable?” And what exactly do we want to sustain?

This depends on how one defines progress. If progress is recursively solving problems and then solving the problems that those solutions create, then progress is sustainable almost by definition. If we define progress as just making more things and using resources until we run out, then it is unsustainable by definition.

  • Does progress benefit everyone? Does it do so in a fair and just way?

Capitalism generally solves this over time, if it can be solved at all. Most extremely expensive goods of material progress (e.g., better housing technology, better medical care) are usually made available to everyone eventually.

  • How can we have both progress and safety? How do we avoid destroying ourselves?

We usually solve problems after the fact, so we should apply as much caution and regulation as possible in questionable areas (e.g., nuclear power, AI, genetic modification), but there will always be a significant risk that some new, unpredictable development will destroy humanity, or at least the current human civilization.

  • What are the appropriate legal frameworks for existing technologies and for emerging ones?

Most countries already have legal frameworks to guard against the most obvious existential risks to humanity (e.g., extreme genetic manipulation, nuclear power); the bigger issue is incentivizing countries that have a large potential upside from experimenting with these technologies not to do so, in the interest of the general human good. This would be much easier if a more central world government made the power and prestige of individual countries less relevant, but any sufficiently large carrot or stick would do.

  • How do we address environmental issues such as climate change and pollution?

We just have to convince people to recognize that the issue is real and to trade a little bit of current progress for better long-term expected progress. I think this is primarily an education and technology problem.

  • How do we deal with the fact that technology makes war more destructive?

This seems like a game theory question. The only solution to less frequent but more deadly wars seems to be some sufficiently strong central body that can moderate conflicts between nations and punish nations that don't comply with its rulings.
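To sketch the game-theory intuition (in Python, with payoff numbers I've invented purely for illustration, not anything from the original post): without an enforcer, attacking can be the individually rational move for each nation, but a central body that imposes a large enough penalty on aggressors flips the equilibrium to mutual peace.

```python
# Toy 2-nation "attack vs. stay peaceful" game. All payoffs are invented for
# illustration; the only point is how an external penalty changes the equilibrium.

PAYOFFS = {
    # (my move, their move) -> my payoff
    ("peace", "peace"): 3,
    ("attack", "peace"): 4,   # successful aggression pays off...
    ("peace", "attack"): 0,   # ...and being the victim is the worst outcome
    ("attack", "attack"): 1,
}

def best_response(their_move: str, penalty: float) -> str:
    """My best move given the other nation's move, when a central body
    subtracts `penalty` from any nation that attacks."""
    def value(my_move: str) -> float:
        return PAYOFFS[(my_move, their_move)] - (penalty if my_move == "attack" else 0)
    return max(("peace", "attack"), key=value)

# Without enforcement, attacking is the best response to everything.
print(best_response("peace", penalty=0), best_response("attack", penalty=0))   # attack attack

# With a big enough penalty, staying peaceful becomes the best response.
print(best_response("peace", penalty=2), best_response("attack", penalty=2))   # peace peace
```

With these made-up numbers, a penalty of 2 is enough to make peace the dominant strategy; the real question is whether any central body can credibly impose something analogous.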

  • How can we make sure technology is used for good? How do we avoid enabling oppression and authoritarianism?

As with the previous question, some central body needs to be able to punish bad actors. There is unquestionably a huge risk that this central body might abuse its power and treat some people unfairly, but weighing that risk against the possibility of dictators and authoritarians, I think a strong central body would generally produce better outcomes.

  • How can we make moral and social progress at least as fast as we make scientific, technological and industrial progress? How do we prevent our capabilities from outrunning our wisdom?

I don't think humans are able to make that much individual moral progress, but we respond strongly to better external structures. For example, I don't think humans are instinctually less violent than our hunter-gatherer ancestors, but once we have structures like laws and the enforcement of those laws, violence is less necessary to live your life, and I think humans have a general dislike of unnecessary violence. Similarly, since we have more information about and connection to faraway humans, it's harder for us to imagine that they are an "other" in the sense that they aren't humans in the same way that we are. All of this is to say that structures which inhibit violence and increase empathy are functionally equivalent to moral and social progress. On the technological side, we often don't understand how a new technology fits into larger societal structures, so there will always be a learning period in which we adapt to it and suffer many unpredicted secondary harms; greater caution around dangerous technology therefore needs to be enforced globally.

This response ran very long. Thank you for your excellent post.

I think perhaps, as humans, we want morality and happiness to overlap when this is rarely the case. Self-sacrifice is definitely a limited resource, but if most people believed it to be a moral duty, the human race would likely be better off. The problem with the self-sacrificial strategy is the problem of defection in any game.

If we could convince a sufficient number of people to sacrifice their personal resources and time, the average cost of self-sacrifice could go down enough that more people would be willing to do it, and we would all be better off. But there will always be those who defect for personal gain. In the modern world, we have little incentive to give more than a small amount.
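To make the defection dynamic concrete, here is a minimal sketch of a one-shot public goods game in Python. The group size, endowment, and multiplier are purely illustrative assumptions of mine; the point is only that the group does best when everyone contributes, while each individual does better by defecting no matter what the others do.

```python
# A minimal public-goods sketch of the defection problem described above.
# All numbers (endowment, multiplier, group size) are illustrative assumptions.

def payoff(contributes: bool, num_contributors: int, group_size: int = 10,
           endowment: float = 10.0, multiplier: float = 3.0) -> float:
    """Payoff for one player in a one-shot public goods game.

    Every contribution is multiplied and shared equally, so the group is
    best off when everyone contributes, but each individual keeps more by
    defecting no matter what the others do.
    """
    pot = num_contributors * endowment * multiplier
    share = pot / group_size
    return share if contributes else share + endowment

# Everyone cooperates: each player gets 30.
print(payoff(True, num_contributors=10))   # 30.0

# One player defects while 9 cooperate: the defector gets 37, the rest get 27.
print(payoff(False, num_contributors=9))   # 37.0
print(payoff(True, num_contributors=9))    # 27.0

# Everyone defects: each player keeps only the endowment of 10.
print(payoff(False, num_contributors=0))   # 10.0
```

With these numbers, universal cooperation pays each person 30, a lone defector gets 37 while the cooperators drop to 27, and universal defection leaves everyone with 10, which is roughly the "defection for personal gain" problem described above.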

Even if we're good people, we have to choose whether to maximize happiness and optimize goodness, or the reverse. I think the key advantage of maximizing happiness while optimizing morality is that we can still do good, though less than we might have otherwise, while having an attractive enough life for others to want to do the same.

I think that the most effective strategies in altruism are those that can coerce systems into rewarding those that would have otherwise defected—like somehow making good people cooler, richer, or happier. So, perhaps it is those strategies that make you happy while helping others at the same time that are the most likely to do the most good in the long run.

So, essentially, I'm agreeing with you, but from a slightly different perspective.