This sounds like there's a niche for technical writers who format/edit AF posts into proper ML articles, for credit, points and glory.

There are extensions like AdNauseam which try to poison your data trail, though it's dubious whether they help much. You could have some kind of crawler which pretends to be ~100 normal users so you get lost in the noise. But even that could probably be filtered out if someone really wanted to - it would be hard to accurately simulate a human (I also dimly recall reading an article about this?). Maybe something that records other people's sessions and plays them back? Or an LLM doing it (hehe)? But even that wouldn't help when you're logged in to various services, and I'm guessing most people don't log out of Gmail every time they change tabs.
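Such a crawler could be sketched roughly as follows. This is a toy sketch only: the decoy URLs and the log-normal dwell-time model are my own assumptions, and a real tracker could likely tell it apart from a human anyway.

```python
import random

# Hypothetical decoy destinations; a real version would need a large,
# plausible mix of sites.
DECOY_SITES = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def generate_decoy_visits(n_visits, rng=None):
    """Plan n_visits fake page views with crudely human-ish dwell times."""
    rng = rng or random.Random()
    visits = []
    for _ in range(n_visits):
        url = rng.choice(DECOY_SITES)
        # Assumption: dwell times are roughly log-normal, in seconds.
        dwell_s = rng.lognormvariate(2.5, 1.0)
        visits.append((url, dwell_s))
        # A real crawler would now fetch `url` (e.g. with requests.get)
        # and sleep for dwell_s before the next visit.
    return visits
```

Even this much structure (a fixed site list, independent draws) is probably fingerprintable, which is exactly the problem described above.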

One source of hope is that data gets stale quickly. People can change their minds (even if they usually don't), so knowing what I thought a year ago doesn't mean you know what I think now. Then again, most people don't care either way, and it would be pretty simple to flag the small number of outliers who suddenly go dark. One possible way of cleaning up would be to spend a couple of months posting increasingly radical strangeness (e.g. going all in on flat-earthism) before going private, in order to mislead any analysis. This is hard, though, and requires passing an ITT.

Tor + clearing cookies + logging out of everything after use + separate user profiles go a long way. But it's very inconvenient.

The chatbot should get better over time, but for now the website works quite well as a collection of questions and answers, where the answers go through an editing process. New questions can be easily added, and if you can't find an answer, but there is a similar question waiting to be answered, you can bump its priority a bit, which will hopefully result in someone working on it sooner. 

One thing worth mentioning is that proto-eukaryotic cells originally got the energy to focus on larger-scale things by engulfing other prokaryotic cells. Mitochondria and chloroplasts look a lot like simplified bacteria that specialise in producing ATP and glucose + oxygen, respectively, both of which give the host cell more energy. They even retain their own separate DNA.

To get really speculative, this is quite similar to how agriculture and the resulting centralisation + specialisation allowed for the explosion of civilization: the last ~10k years contain overwhelmingly more complex stuff than the preceding 100k.

You should probably differentiate more between curing and preventing. Most of the gains against infections came from preventive measures - both avoiding infections and boosting the body's defences. And this is where massive anticancer gains have already been possible (e.g. banning asbestos and discouraging smoking).

The low-hanging gains in actual curing generally came from mechanisms that exploit differences between kingdoms, e.g. antibiotics targeting bacterial cell walls, which eukaryotic cells either lack or build very differently. The problem with cancers is that they're your own cells going wild, for a whole host of reasons. This means that any successful anti-cancer treatment either has to specifically target cancer cells (which is generally hard) or be an area-of-effect spell that also kills off healthy cells. Pretty much what DirectedEvolution wrote.

It can be rephrased as a variation of the off-button problem, where rather than turning itself off, the AI runs NOPs, and rather than the button being pushed manually, it's triggered by escaping (however that could be defined). A lot of the problems raised in the original paper should also apply to honeypots.

It could be somewhat helpful, for sure. And certainly better than nothing (unless it creates a false sense of security). Though I doubt it would be adopted, because of how much it would slow things down.

It would depend a lot on the implementation details, and also on whether it's possible at all. The basic wireheader seems simpler, in that it doesn't need to factor in humanity's stupid ideas.

Adding in extra details, like requiring it to pretty much build a virtual heaven for people, makes the task a lot harder than just pumping them full of morphine.

Maximizing pleasure is very close in idea space to maximizing suffering, because both do pretty much the same thing, just with the opposite sign (yes, this is a travesty of an oversimplification, but you get the idea). The most commonly raised problems are mainly Goodhart issues: how would you specify that the AI maximise pleasure?
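The sign-flip point can be shown in a toy sketch (all names here are made up, and the scalar "hedonic score" is exactly the kind of proxy metric that invites Goodhart problems):

```python
def hedonic_score(world):
    # Stand-in proxy: total wellbeing across the people in a world state.
    return sum(world)

def best_world(candidates, sign=+1):
    # sign=+1: "maximize pleasure"; sign=-1: "maximize suffering".
    # The optimizer is identical either way; only the sign differs.
    return max(candidates, key=lambda w: sign * hedonic_score(w))

worlds = [[5, 5], [9, -2], [-8, -8]]
print(best_world(worlds, +1))  # [5, 5]
print(best_world(worlds, -1))  # [-8, -8]
```

A single flipped sign turns one objective into the other, which is why the two sit so close together in idea space.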

You seem to have a slightly idiosyncratic interpretation of wireheading, in that it's usually described as blissing out on drugs, rather than being simulated in an enriching and fun environment. A lot more people would be fine with the second option than would agree to the first one.

As to the simple, conscious blissers: they seem empty to me. I expect this is a difference in basic morality/worldview, though, in that I don't find hedonism attractive. And boredom is very useful.
