clone of saturn

Comments

This post is currently tagged "security mindset" but the advice seems close to the opposite of security mindset; it amounts to just trying to be extra careful, and if that doesn't work, hoping the damage isn't too bad. Security mindset would require strategies to make a leak impossible or at least extremely unlikely.

Remember when Google Shopping used to be an actual search index of pretty much every online store? You could effortlessly find even the most obscure products and comparison shop between literally thousands of sellers. Then one day they decided to make it pay-to-play and put advertisers in control of what appears on there. Now it's pretty much useless to me. I think a similar process has happened with Search, just more gradually. Your experience with it probably has a lot to do with how well your tastes and preferences happen to align with what advertisers want to steer people toward.

It's absurd to equate the shaky and informal coalition of Russia, China, Iran, and Syria with the 750+ extraterritorial bases, worldwide naval dominance, and global surveillance network of the US Military.

Language models seem to do a pretty good job at judging text "quality" in a way that agrees with humans. And of course, they're good at generating new text. Could it be useful for a model to generate a bunch of output, filter it for quality by its own judgment, and then continue training on its own output? If so, would it be possible to "bootstrap" arbitrary amounts of extra training data?
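A minimal sketch of what that loop might look like, assuming a Hugging Face causal LM; the model name, prompt, score threshold, and hyperparameters are all illustrative placeholders, not a tested recipe:

```python
# Sketch of a self-bootstrapping loop: generate, self-filter, train on survivors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def self_score(text: str) -> float:
    """Use the model's own per-token loss as a stand-in quality judgment."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item()  # higher = judged "better" by the model itself

for step in range(100):  # bootstrap rounds
    # 1. Generate candidate text from the current model.
    prompt = tokenizer("Once upon a time", return_tensors="pt").input_ids
    samples = model.generate(prompt, do_sample=True, max_new_tokens=64,
                             num_return_sequences=8,
                             pad_token_id=tokenizer.eos_token_id)
    texts = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

    # 2. Filter by the model's own quality judgment.
    kept = [t for t in texts if self_score(t) > -3.0]  # threshold is arbitrary

    # 3. Continue training on the surviving outputs.
    for text in kept:
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

One immediate wrinkle: using the model's own likelihood as the filter rewards text it already finds probable, so without some independent quality signal the loop risks collapsing toward its existing preferences rather than bootstrapping new capability.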

But you’d have to be one really stupid correctional officer to get an order to disable the cameras around Epstein’s cell the night he was murdered and not know who killed him after he died. Even if you were that dumb, it seems like something you would mention unless you were threatened, in which case you are now obviously a potential defecting member of the plot.

If I were a prison guard who had just seen a well-connected group of conspirators murder someone who had become inconvenient to them and easily get away with it, it seems to me that one of the stupidest things I could possibly do would be to tell anyone about it. Why would they need to explicitly threaten me? We both understand there's no one I could "defect" to who could stop them or protect me.

That said, it took the software industry a long time to learn all the ways to NOT solve XSS before people really understood what a correct fix looked like. It often takes many, many examples in the reference class before a clear, fundamental solution can be seen.

This is true about the average software developer, but unlike in AI alignment, the correct fix was at least known to a few people from the beginning.
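For reference, the fix that eventually became standard is context-aware output encoding (escaping user data at the point it's inserted into HTML) rather than trying to filter "dangerous" input. A minimal Python illustration, with function and variable names of my own invention:

```python
import html

def render_comment(user_text: str) -> str:
    # Wrong approach: blacklist-filtering input (e.g. stripping "<script>"),
    # which attackers can bypass with endless encoding variants.
    # Correct approach: escape ALL user data for the HTML context at output time.
    return f"<p>{html.escape(user_text)}</p>"

print(render_comment('<script>alert("xss")</script>'))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```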

As someone who has watched "Century of the Self," I'd guess it's more along the lines of:

  • What people want is not what they need. People don't need much help to self-improve in ways which are already consonant with their natural desires and self-image. So any safe and effective self-improvement program would be a nonstarter in the free market because it would immediately repel the very people who could benefit from it.

There is an icon in the lower right that toggles previews on or off. Do they come back if you click on it?
