ProgramCrafter

Comments

The system might develop several search parts, some of which would be epistemic - for instance, "where is my friend Bob? Volunteering at a camp? Eating out at a cafe? Watching a movie?" - and an attempt to retarget one of those to select options based on the alignment target instead of truth would make the AI underperform or act on an invalid world model.

Are there better ways to fix this issue than retargeting just the last search (the one nearest to the output)?

A problem is that

  • we don't know the specific goal representation (the actual string in place of "A"),
  • we don't know how to evaluate LLM output (in particular, how to check whether a suggested plan works for a goal),
  • we have a large (presumably infinite, non-enumerable) set of behaviors B we want to avoid,
  • we have explicit representations for some items in B, mentally understand a few more, and don't understand or know about other unwanted things.

The MVP version is that everyone buys (obviously transferable) credits, and communications have a credit amount attached. Each person can set a minimum below which communications get filtered out entirely, and the target can see the credit bid when deciding whether to open the message. Once they open the message, they can keep the credits, do nothing, or tip the sender back, depending on whether they found the interaction helpful.
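The mechanism above can be sketched as a few lines of code. This is a toy illustration only; the class and field names (`Message`, `Inbox`, `min_bid`, and so on) are my own invention, not part of any proposed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str
    bid: int  # credits the sender attaches to the message

@dataclass
class Inbox:
    min_bid: int                # bids below this are filtered out entirely
    pending: list = field(default_factory=list)
    balance: int = 0            # credits the recipient has kept

    def receive(self, msg: Message) -> bool:
        """Accept the message only if its bid clears the recipient's threshold."""
        if msg.bid < self.min_bid:
            return False        # filtered out before the recipient ever sees it
        self.pending.append(msg)
        return True

    def open(self, msg: Message, helpful: bool, tip: int = 0) -> int:
        """On opening: keep the credits, or tip part back if the interaction helped."""
        self.pending.remove(msg)
        kept = msg.bid - tip if helpful else msg.bid
        self.balance += kept
        return kept

inbox = Inbox(min_bid=5)
inbox.receive(Message("spammer", "buy now", bid=1))   # rejected: bid too low
msg = Message("alice", "quick question", bid=10)
inbox.receive(msg)                                    # accepted
inbox.open(msg, helpful=True, tip=4)                  # recipient keeps 6, tips 4 back
```

One design choice left open here is what happens to a filtered-out sender's credits; the sketch simply never collects them.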

By the way, such technology already exists and is called "blockchain": it allows sending public or semi-public (encrypted) messages to anyone but requires payment for that, and it allows authenticating the sender (in particular, for forwarded messages).

I assume we all agree that the system can understand the human ontology, though?

This, however likely, is not certain. A possible way for this assumption to fail is if the system allocates minimal cognitive capacity to its internal ontology and the remaining power to selecting the best actions; this may be a viable strategy if the system's world model is still descriptive enough but does not have extra space to represent the human ontology fully.

Just make it in John's self-interest.

That's the first step; the second is to make it more beneficial than the alternatives, preferably by a large margin, so that adversaries can't outbid the norm-following way (as is the case with peer pressure).

I'm unsure whether that point should be in the condition, actually; it feels to me like very few chains of thought would then be considered for optimization, so the advice would be useful only for people who are already self-improving. I would try to rephrase that point so that it doesn't trigger too often in the same area of life, maybe.

What I didn't know is how immediately thought-provoking it would be to learn even the most basic things about playing music. Maybe it's like learning to program, if you used a computer all the time but you never had one thought about how it might work.

That comparison is also thought-provoking) Thinking for a minute yielded that programming may be considered quite similar to playing music, but differs in that in programming you do not need to do most things in any specific order. For example, if I have a dataset of competition participants, it doesn't matter whether I deduplicate names or remove disqualified entries first.
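The order-independence claim can be checked on a toy dataset (the data and field names below are made up). Note the claim holds cleanly when duplicate rows are identical records; deduplication keyed only on names could disagree with filtering if duplicates differed in disqualification status.

```python
# Toy dataset: duplicate rows are identical, so dedup-by-name and
# filtering commute.
entries = [
    {"name": "Ann",  "disqualified": False},
    {"name": "Ann",  "disqualified": False},   # exact duplicate
    {"name": "Bob",  "disqualified": True},
    {"name": "Cleo", "disqualified": False},
]

def dedup(rows):
    """Keep the first row seen for each name."""
    seen, out = set(), []
    for r in rows:
        if r["name"] not in seen:
            seen.add(r["name"])
            out.append(r)
    return out

def drop_disqualified(rows):
    return [r for r in rows if not r["disqualified"]]

# Either order yields the same cleaned dataset: Ann and Cleo.
assert dedup(drop_disqualified(entries)) == drop_disqualified(dedup(entries))
```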

Reuters: BREAKING: Reuters reports that TikTok’s owner ByteDance would prefer to ‘shut down’ its app in the US rather than sell it if all legal options are exhausted

Eigenrobot: Why would you say this?

It’s odd that a profit maximizing firm would actually pursue this strategy.


I'd like to mention the explanation that ByteDance does not consider US dollars to have enough value. Given that China can't use them to lobby for cancelling sanctions, for instance, US dollars aren't equivalent to unspecialized optimization power for them, and might have little value.

Answer by ProgramCrafter

I would guess this is somewhat similar to having a network of friends; if anything, a polycule is bound to be smaller. And I can totally imagine being emotionally, romantically, and sexually attached to one set of partners while sharing opinions with a slightly different set.

I believe the Focus Your Uncertainty essay from the Sequences touches on this topic: at the very least, math is useful for splitting a limited amount of resources.
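As a small worked example of splitting limited resources (my own illustration, not from the essay): if preparing for each mutually exclusive outcome has logarithmic returns, maximizing expected utility sum(p_i * log(x_i)) under a fixed budget gives the proportional split x_i = p_i * budget.

```python
def proportional_split(probs, budget):
    """Split a fixed budget across outcomes in proportion to their probabilities."""
    total = sum(probs)  # normalize in case the probabilities don't sum to 1
    return [p / total * budget for p in probs]

# e.g. 10 hours of prep split over outcomes with probabilities 0.6, 0.3, 0.1
hours = proportional_split([0.6, 0.3, 0.1], budget=10)
```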
