Coafos

conspi-rationalist

Comments

Coafos1y2-1

Taking a bad option away might be worse for one person, but it can be much better for people in aggregate. These regulations (no selling organs or sex) exist because in a free market there would be a race to the bottom that would not serve human values.

Suppose we allow selling sex for rent. The number of rentable apartments stays the same; however, demand for them rises, because some people can now pay by non-monetary means. Rent prices would therefore increase, which would only accelerate the rent problem.

While exchanging kidneys for medical treatment is OK by me, it should not be mixed with the standard money market. Money markets usually optimize for dollar value, which can decouple from human wellbeing. The result would be a worse state for everyone.

Also: if rents are so high, why can't a developer build a new complex? They could rent it out and it would pay for itself very quickly. That would increase the number of flats and lower rents. These bad options try to solve a supply issue from the demand side.

Coafos1y40

I searched for PsyNet on Google, and I think PsyNet refers to the netcode for Rocket League, a popular game. Maybe they pulled text message logs from somewhere; based on the "ForgeModLoader" token, that's plausible.

An alternative guess is this, a Python library for online behavioural experiments. It connects to Dallinger and Mechanical Turk.

On Google, the string "PsyNetMessage" also appeared in this paper and in a few GPT-2 vocabulary lists, but I got no other results.

On Bing/DuckDuckGo the search returned many more Reddit threads containing Rocket League crash logs. The crash logs are full of messages like [0187.84] PsyNet: PsyNetRequestQue_X_1 SendRequest ID=PsyNetMessage_X_57 Message=PsyNetMessage_X_57, so I guess it's an RL (as in Rocket League) thing. The string was also found in some (clearly) GPT-generated texts.

The world is a complicated and chaotic place. Anything could interact with everything, and some of these interactions are good. This post describes how general paralysis of the insane can be cured with malaria, at least if the patient does not die during the treatment.

If late-stage syphilis (general paralysis) isn't treated, patients die within 3-5 years, with progressively worse symptoms each year. So even though 5-20% of patients died immediately when the treatment started, the treated still had better survival rates at one and five years. A morbid example of an expected-value choice: waiting for a certain slow death versus taking a chance at either a quick death or a longer life.
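The gamble above can be turned into a rough expected-value sketch. The untreated lifespan and the immediate-death rate come from the numbers in the post; the cure probability and the post-cure lifespan are my own illustrative assumptions, not figures from the source:

```python
# Illustrative expected-value comparison for the malaria-therapy gamble.
untreated_years = 4          # untreated patients die within ~3-5 years (post)
p_die_immediately = 0.15     # ~5-20% died when treatment started (post)
p_cure = 0.5                 # ASSUMED chance the fever therapy works
cured_years = 20             # ASSUMED remaining lifespan if cured

# Expected remaining years if treated: immediate death contributes 0 years;
# survivors either get cured or fall back to the untreated course.
expected_treated = (
    p_die_immediately * 0
    + (1 - p_die_immediately)
    * (p_cure * cured_years + (1 - p_cure) * untreated_years)
)
print(f"untreated: {untreated_years} years, treated: {expected_treated:.1f} years")
```

Under these made-up numbers the treatment comes out well ahead in expectation, which is the morbid logic the post describes.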

If they were allowed to choose at all, where "they" means the patients. The post mentions that Wagner-Jauregg may not have asked for consent in his experiments. This was par for the course at the time; the early 20th century did not consider mentally ill patients human. Still, at this point I disagree with the tone of the post, which seems to condone human experimentation without consent. The guy just tried a bunch of diseases on terminally ill patients because of a fight-fire-with-fire theory and happened to find one that somewhat works.

He got a Nobel Prize for this discovery, and a few years later he supported eugenics and anti-Semitism. Nowadays we don't use malaria therapy, because somebody else discovered penicillin and half of medicine was solved. We know a bit more about malaria now, but we still don't know why this therapy worked while other high-temperature methods didn't. He has a few places named after him in Austria.

The article is well-researched. Does it carve reality at its joints? I don't feel like it describes a reliable and ethical scientific process. But maybe sometimes you just can't, because the world is a complicated and chaotic place.

What does this post add to the conversation?

Two pictures of elephant seals.

How did this post affect you, your thinking, and your actions?

I am, if not deeply, then certainly affected by this post. I felt some kind of joy looking at these animals. It calmed my anger and made my thoughts somewhat happier. I started to believe the world can become a better place, and I would like to make it happen. This post made me a better person.

Does it make accurate claims? Does it carve reality at the joints? How do you know?

The title says elephant seals 2 and the post contains 2 pictures of elephant seals, which is accurate. However, I do not think it carves reality at the joints, because these animals don't have joints. I know this from experimental evidence: I once interacted with a toy model of a seal, and it was soft and fluffy and without bones.

Is there a subclaim of this post that you can test?

no

What followup work would you like to see building on this post?

You wouldn't guess it, but I have an idea...

I was always surprised that small changes in public perception, consumption, or political opinion can have large effects. This post introduced me to the concept of social behaviour curves, and it feels like it explains quite a lot. The writer presents some example behaviours and movements (like why revolutions start slowly or why societal changes are sticky) and then provides clear explanations for them using this model, which both shows how to use social behaviour curves and verifies some of the model's predictions at the same time.

The second half of the post builds a theory of what an ideal society would look like and how you should act on a radical-conformist axis. "Be a radical, except if only radicals are around you" is a cool slogan for a punk band, but even the author writes that he's gonna do a little trolling. I feel like there are some missing assumptions behind why he chooses these curves.

In the addendum there are references to other uses in the literature, which can serve as a jumping-off point for further understanding. What I'm missing from this post is a discussion of large networks. Everyone knows everyone in a small group, but in large networks changes propagate over time, and it also matters whether someone has few or many connections. There is some kind of criticality in large networks too, but it's a bit different, and the math gets much more complicated; graph criticality results are few and hard, and most work uses computer simulations instead of closed-form equations. All in all, I think social behaviour curves are a simple, good tool for understanding an aspect of social reality.
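The basic dynamic behind these curves can be sketched as a Granovetter-style threshold model: each person adopts a behaviour once the adopted fraction of the population reaches their personal threshold. The population size and threshold values below are made up for illustration:

```python
def step(adopted, thresholds):
    """One synchronous update: adopt iff the adopted fraction meets your threshold."""
    frac = sum(adopted) / len(adopted)
    return [a or frac >= t for a, t in zip(adopted, thresholds)]

def run(thresholds, seeds=1, max_steps=500):
    """Seed a few adopters, iterate to a fixed point, return the final adopted fraction."""
    adopted = [i < seeds for i in range(len(thresholds))]
    for _ in range(max_steps):
        new = step(adopted, thresholds)
        if new == adopted:  # fixed point reached
            break
        adopted = new
    return sum(adopted) / len(adopted)

n = 100
uniform = [i / n for i in range(n)]  # thresholds 0.00, 0.01, ..., 0.99
print(run(uniform))                  # a single seed cascades to everyone: 1.0

gapped = list(uniform)
gapped[1] = 0.02                     # one person becomes slightly more stubborn
print(run(gapped))                   # the cascade stalls at 0.01
```

The two runs show the small-change-large-effect property the post emphasizes: nudging one person's threshold by 0.01 is the difference between a full cascade and no movement at all.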

There are lots of anecdotes about taking the road less travelled and being the disruptor, but I feel this post explains the idea more clearly, with better analogies and boundaries.

To achieve a goal you have to build a lot of skills (deliberate practice) and apply them when it really matters (maximum performance). Less is said about searching for the best strategy and combination of skills. I think "deliberate play" is a good concept for this, because it shows that strategy research is a small but important part of playing well.

I think this post points towards something important, which is a bit more than what the title suggests, but I have a problem describing it succinctly. :)

Computer programming is about creating abstractions, and leaky abstractions are a common enough occurrence to have their own wiki page. Most systems are hard to comprehend as a whole, so a human has to break them into parts that can be understood individually. But these are not perfect cuts: the boundaries are wobbly, and the parts "leak" into each other.

Most commonly these leaks happen because of a technical or physical simplification, like forgetting that a byte overflows at 255 or that electrons have travel time. However, leaks can happen due to social simplifications too: getTodayPosts might mean "the things that get put on top of the feed" to one person and "the things with the most engagement today" to another. Social errors are often downplayed in technical circles, which is why I think this post has an important message.
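The byte-overflow leak is easy to demonstrate. Here is a minimal sketch with a hypothetical counter: the "it's just an integer" abstraction leaks the moment an 8-bit value wraps past 255 (ctypes integer types do no overflow checking):

```python
import ctypes

# A leaky abstraction in miniature: an 8-bit counter looks like a plain
# integer until it silently wraps modulo 256.
counter = ctypes.c_uint8(250)
counter.value += 10   # 260 does not fit in a byte
print(counter.value)  # → 4, not 260
```

Nothing raises, nothing warns; the simplification just quietly stops matching reality, which is exactly the failure mode of the social leaks above.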

I think this post describes an important idea for political situations.

While online politics is a mind-killer, the post (mostly) manages to avoid "controversial" topics and stay on a meta-level. The examples show that in group decisions the main factor is not the truth of statements but the early focus of attention. This dynamic can be used for good or bad, and it feels like it really happens a lot and accurately describes an aspect of social reality.

Coafos1y175

If you have heard "good is the enemy of the great", then also consider "perfect is the enemy of good".

Have you factored in the cost of task switching and meta-strategy research? A lot of economic theory trivializes the energy required for thinking, which might be correct for larger entities, but it's an important factor for individual humans.

Switching to a different worldview and re-evaluating your options takes a non-negligible amount of mental energy. Learning about new opportunities and hearing out new strategies also takes time. If you're optimizing for doing good instead of collecting opportunities, there could be a point where sticking with your current best beats the expected value of new ideas once you add in the processing costs.

Coafos1y20

Thanks for the post, it's an important update on the state of information warfare.

Privacy can be thought of as a shield. If you build a wall against small-arms spam, that's fine; but if you try to build an underground bunker, that looks weird, because only Certified Good Guys have access to advanced weapons. Why are you trying to protect yourself against the Certified Good Guys?

What changed is that, thanks to AI advancements in the last few years, it became possible to create homemade heat-seeking infomissiles. Suddenly there are other arguments for building bunkers.
