lcmgcd

Comments

Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems)

I think crypto prediction markets can't be regulated except by moderators' ad-hoc filtering of bets and by bettors' choices of where to put their money. It seems someone could put a million dollars against a terrorist attack happening on a certain date, hoping someone takes the other side and then carries out the attack to collect. So a betting market allows hiring for certain tasks (not most tasks) with reliable verification and payout, and you get your money back if the thing doesn't happen. I have some faith in moderators' filters, though. I hope they would have the wisdom to forbid bets on terrorist attacks, assassinations, etc. Insider trading cannot be prevented (as far as I can tell) if betting is anonymous…

Splitting Debate up into Two Subsystems

What about the info helper by itself? Is it much more useful in this two-subsystem setup than it would be on its own? Rewarding based on human prediction skill seems best to me. I think Bostrom may have mentioned this problem (educating someone on a topic) somewhere.

Building up to an Internal Family Systems model

As with real and fake memories, I think that if you're careful you can mainly deal with the real ones.

Owen Another Thing

Is the top one meant to be read first or last?

lcmgcd's Shortform

Zettelkasten in five seconds with no tooling

Have one big text file containing every thought you ever have. Number the thoughts and keep each one short. Reference other thoughts with a pound sign (e.g. #456) for easy search.
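
A minimal sketch of the workflow, assuming Python; the file name (thoughts.txt) and the two helper functions are made up for illustration, since the scheme itself needs no tooling at all:

```python
# Minimal sketch of the single-file zettelkasten above. The file name and
# helpers are hypothetical; the point is just numbered thoughts plus "#N"
# references that are trivial to search for.
import re
from pathlib import Path

FILE = Path("thoughts.txt")

def add_thought(text: str) -> int:
    """Append a new numbered thought and return its number."""
    lines = FILE.read_text().splitlines() if FILE.exists() else []
    number = len(lines) + 1
    with FILE.open("a") as f:
        f.write(f"{number}. {text}\n")
    return number

def references_to(number: int) -> list[str]:
    """Return every thought that mentions #<number>."""
    if not FILE.exists():
        return []
    pattern = re.compile(rf"#{number}\b")
    return [line for line in FILE.read_text().splitlines() if pattern.search(line)]

# Example:
# n = add_thought("Prediction markets leak incentives into the world (see #12)")
# print(references_to(12))
```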

Reframing Superintelligence: Comprehensive AI Services as General Intelligence

One way to test the "tasks don't overlap" idea is to have two nets do two different tasks, but connect their internal layers. Then see how large the weights on those cross-connections get. Like, is the internal processing done by a Mario-playing AI useful for Greek translation at all? If it is, then backprop should discover that.
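
A rough sketch of that experiment in PyTorch, with all layer sizes and module names invented for illustration. The idea is just that a trainable cross-connection from net A's hidden layer into net B's forward pass should end up with near-zero weights if the tasks share no useful internal processing:

```python
# Sketch: two networks for unrelated tasks, joined by a trainable
# cross-connection from net A's hidden layer into net B's computation.
# After joint training, inspect the cross-connection weights; small weights
# suggest task B found nothing useful in task A's internal processing.
import torch
import torch.nn as nn

class CrossConnectedPair(nn.Module):
    def __init__(self, in_a=32, in_b=64, hidden=128, out_a=10, out_b=50):
        super().__init__()
        self.encoder_a = nn.Linear(in_a, hidden)   # task A (e.g. game playing)
        self.encoder_b = nn.Linear(in_b, hidden)   # task B (e.g. translation)
        # Cross-connection: feeds A's hidden activations into B's forward pass.
        self.cross_a_to_b = nn.Linear(hidden, hidden, bias=False)
        self.head_a = nn.Linear(hidden, out_a)
        self.head_b = nn.Linear(hidden, out_b)

    def forward(self, x_a, x_b):
        h_a = torch.relu(self.encoder_a(x_a))
        h_b = torch.relu(self.encoder_b(x_b) + self.cross_a_to_b(h_a))
        return self.head_a(h_a), self.head_b(h_b)

# After training both tasks jointly:
# model = CrossConnectedPair()
# print(model.cross_a_to_b.weight.abs().mean())  # how much does B use A's features?
```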

Creating Environments to Design and Test Embedded Agents

Or, something simpler: the agent's money counter lives in the environment and is unmodifiable except by collecting tokens, and the agent's goal is to maximize this quantity. It feels kind of fake, maybe, because the money gives the agent no power or intelligence, but it's a valid object in the world to have a preference over the state of.

Yet another option is to have the agent maximize energy tokens, which its actions consume.
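
A minimal sketch of both variants, assuming a toy one-dimensional gridworld; the class name, dynamics, and parameters are all illustrative rather than anything from the post:

```python
# Sketch: the agent's "money" counter lives in the environment and can only
# change when the agent picks up a token. The energy variant (actions consume
# tokens) is a flag. Toy dynamics, purely for illustration.
import random

class TokenWorld:
    def __init__(self, size=10, n_tokens=5, initial_money=0, actions_cost_energy=False):
        self.size = size
        self.agent_pos = 0
        self.tokens = set(random.sample(range(1, size), n_tokens))
        self.money = initial_money          # environment-side counter
        self.actions_cost_energy = actions_cost_energy

    def step(self, action):                 # action in {-1, +1}
        if self.actions_cost_energy:
            if self.money == 0:
                return self.observe(), 0    # out of energy: nothing happens
            self.money -= 1                 # acting consumes a token
        self.agent_pos = max(0, min(self.size - 1, self.agent_pos + action))
        if self.agent_pos in self.tokens:   # only token pickups add money
            self.tokens.remove(self.agent_pos)
            self.money += 1
        return self.observe(), self.money   # reward = current counter value

    def observe(self):
        return (self.agent_pos, self.money)
```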

Creating Environments to Design and Test Embedded Agents

Yes, I agree it feels fishy. The problem with maximizing rubes is that the dilemmas might get lost in the details of preventing rube hacking. Perhaps agents can "paint" existing money their own color, money can only be painted once, and agents want to paint as much money as possible. Then the details remain in the environment.
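
A sketch of the painting variant under the same toy assumptions (names and dynamics are illustrative): each coin can be painted exactly once, and an agent's score is just a count over environment state:

```python
# Sketch: coins already exist in the environment, each can be painted exactly
# once, and an agent's score is the number of coins in its color. Multi-agent
# interaction is left out; this only shows the irreversible-paint mechanic.
class PaintWorld:
    def __init__(self, n_coins=10):
        self.paint = {i: None for i in range(n_coins)}   # coin id -> owner or None

    def paint_coin(self, coin_id, agent_id):
        """Paint a coin if it is still unpainted; painting is irreversible."""
        if self.paint.get(coin_id) is None:
            self.paint[coin_id] = agent_id
            return True
        return False

    def score(self, agent_id):
        return sum(1 for owner in self.paint.values() if owner == agent_id)
```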

Embedded Agency (full-text version)

You may want to add MIRI's Botworld 1.0 project to the bibliography, so that people looking into this don't duplicate the idea.