Comments

Another argument that you will let the AI out of the box

The problem is that normal people very often give up collective resources to look good. They just don't give up their personal resources. For the AI, the former is sufficient.

Grandpa Has Different Rules

People attributing their own shortcomings to others is rather weak evidence.

Consume fiction wisely

It is also pretty unbelievable. (Spoilers ahead.)

The security around keeping the whole secret is way off. Keeping the secret is their biggest priority, and they know it; yet the children can simply walk into places they are not supposed to go and discover it.

The technological measures do not match the stakes; they could absolutely deploy sensors that would make conspiring and/or escaping much harder.

The children are too competent. We could forgive this on its own, but it goes too far; e.g., one child learns to build a device from scraps of other devices that disables their GPS trackers without sounding the alarm. Seriously?

The children are way too selfless. This gets worse and worse, ultimately ruining the second season. It still would have been okay if the characters bore the consequences of their selfless choices, but no, they get to have their cake and eat it, too. (I guess the average viewer loves to see self-sacrifice while hating "losers".)

The escape would ultimately have failed in (anime) canon if not for some blatant authorial interventions: they encountered a rescuer at exactly the right time, and they magically found a pen loaded with all kinds of information on abandoned bases and the like.

All in all, the first season is a good show, but it will do more to harm your priors than to help them.

Consume fiction wisely

Surely the chance of damage per encounter is higher with sharks than with cows?
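To make the distinction concrete, here is a toy Python calculation; all the counts are invented for illustration, not real statistics:

```python
# Toy numbers, purely illustrative -- not real statistics.
# The point: annual death counts alone say nothing about per-encounter risk.

cow_deaths_per_year = 20                 # hypothetical
cow_encounters_per_year = 50_000_000     # hypothetical

shark_deaths_per_year = 5                # hypothetical
shark_encounters_per_year = 10_000       # hypothetical

cow_risk = cow_deaths_per_year / cow_encounters_per_year
shark_risk = shark_deaths_per_year / shark_encounters_per_year

print(f"per-encounter risk, cow:   {cow_risk:.2e}")    # 4.00e-07
print(f"per-encounter risk, shark: {shark_risk:.2e}")  # 5.00e-04
```

Even though the cows cause more deaths in total in this toy setup, each shark encounter is orders of magnitude riskier.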

Consume fiction wisely

The claim here is definitely 'audiobooks would generally be more relaxing than the written word.'

I personally find it somewhat true; I need to listen to fiction very attentively to not lose the plot, but I can jump back into a nonfiction podcast/audiobook after not listening for 10 minutes just fine (most of the time).

Consume fiction wisely

No adult updates their probability that dragons are real after reading Game of Thrones

Without fiction, the hypothesis "dragon" would not even exist in our minds. We are wasting cultural bandwidth on the concept, and our probability estimate for it is orders of magnitude higher than it would be if it were not plastered all over fiction.

such that you update on them.

This is a valid point, and I think an extreme case of it can be seen in fundamentalist religions. But my prior is that anyone who understands the argument the OP has presented is smart enough to curate the nonfiction they consume such that they end up vastly better informed.

Even outdated, dumbed-down popsci books usually leave one better informed than the default cultural memes do. The important themes are usually correct; e.g., you're more or less guaranteed to come away seeing spaced repetition as an effective tactic if you read popsci books on learning. The failure mode is garbage like The Secret, which is easy enough to filter out.

Consume fiction wisely

Can you write a post about things you learned via video games? I am highly skeptical that they can teach STEM-adjacent adults anything transferable to the real world. (Programming games like https://store.steampowered.com/app/375820/Human_Resource_Machine/ can teach some programming, but they are more like gamified Leetcode than strategy/puzzle games. Most non-programmers I have introduced these games to could not even beat the starting levels.)
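For a sense of what these games exercise, here is a rough Python sketch of a Human-Resource-Machine-style interpreter; the opcodes and semantics are my loose approximation, not the game's actual spec:

```python
# A rough sketch of a Human-Resource-Machine-style interpreter.
# Opcodes and semantics are my loose approximation of the game's,
# just to show it is essentially assembly programming in disguise.

def run(program, inbox):
    inbox = list(inbox)
    outbox, floor, hand, pc = [], {}, None, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "INBOX":
            if not inbox:
                break                    # level ends when input runs out
            hand = inbox.pop(0)
        elif op == "OUTBOX":
            outbox.append(hand)
        elif op == "COPYTO":
            floor[args[0]] = hand        # put held item on a floor tile
        elif op == "COPYFROM":
            hand = floor[args[0]]
        elif op == "ADD":
            hand += floor[args[0]]
        elif op == "JUMP":
            pc = args[0]
    return outbox

# A typical early level: output the sum of each pair of inputs.
pairwise_sum = [
    ("INBOX",),
    ("COPYTO", 0),
    ("INBOX",),
    ("ADD", 0),
    ("OUTBOX",),
    ("JUMP", 0),
]
print(run(pairwise_sum, [1, 2, 10, 20]))  # [3, 30]
```

Writing that program list by hand under instruction-count constraints is what I mean by "gamified Leetcode".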

How to think about and deal with OpenAI

Epistemic status: I am not an expert on this debate, I have not thought very deeply about it, etc.

  1. I am fairly certain that as long as we don’t fail catastrophically (i.e., a loose misaligned super AI that collapses our civilization), FOSS AI is vastly preferable to proprietary AI. The reasons are the same as for other software projects, though the usefulness and black-box nature of AGI make them particularly important.
  2. I am skeptical of “conspiracies.” I think a publicly auditable, transparent process with frequent peer feedback on a global scale is much more likely to result in trustable results with fewer unforeseen consequences/edge cases.
  3. I am extremely skeptical of the human incentives that a monopoly on AGI encourages. E.g., when was the single time atomic bombs were used? Exactly when there was a monopoly on them.
  4. I don’t see the current DL approaches as anywhere near achieving efficient AGI that would be dangerous. AI alignment probably needs more concrete capability research, IMO. (At least, more capability research is likely to contribute to safety research as well.) I would like the world to enjoy better narrow AI sooner, and I am not convinced that delaying things buys all that much. (Full disclosure: I weigh the lives of my social bubble and contemporaries more than random future lives. Though if I were not to do this, then it’s likely that intelligence would evolve again in the universe anyhow, and so humanity failing is not that big of a deal? None of our civilization is built on that kind of long-termism either, so it’s pretty out of distribution for me to think about. Related point: I have an unverified impression that the people who advocate slowing capability research are already well off and healthy, so they don’t particularly need technological progress. Perhaps this is an unfair/false intuition, but I do have it, and disabusing me of it would change my opinion a bit.)
  5. In a slow takeoff scenario, my intuition is that multiple competing superintelligences will leave us more leverage. (I assume that in a fast takeoff scenario the first such intelligence will crush the others in their infancy.)
  6. Safety research seems to be more aligned with academic incentives than with business incentives. Proprietary research is less suited to academia, though.

Blood Is Thicker Than Water 🐬

My point is more about prioritization. English, math, programming and computer literacy, economics, basic home skills (cooking, trivial repairs, etc.), and possibly rationality (though the existence of “The Dark Valley of Rationality” makes me a bit hesitant on this one) are much better subjects for a “general info” curriculum.

PS: Knowing about elementary particles (without a mathematical model of them) is trivial; you can fit all such facts into a single year's science curriculum. What takes time to learn are the calculations, e.g., finding the mass of some reagent after a chemical reaction.
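As a minimal sketch of the kind of calculation I mean (the reaction and amounts are picked arbitrarily):

```python
# Stoichiometry sketch: mass of oxygen consumed when a given mass of
# hydrogen burns, via 2 H2 + O2 -> 2 H2O. Numbers are illustrative.

M_H2, M_O2 = 2.016, 31.998  # molar masses in g/mol

def oxygen_mass_consumed(grams_h2: float) -> float:
    moles_h2 = grams_h2 / M_H2
    moles_o2 = moles_h2 / 2     # 2:1 mole ratio from the balanced equation
    return moles_o2 * M_O2

print(round(oxygen_mass_consumed(10.0), 1))  # 79.4 g of O2 used up
```

The facts fit in a sentence; it is setting up and executing conversions like this, across many reactions, that eats up curriculum time.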

How to think about and deal with OpenAI

Ironically, I am a believer in FOSS AI models, and I find OpenAI’s influence anything but encouraging in this regard. The only thing they release publicly nowadays is marketing.
