This represents an attempt by the parent to impose their will on the child by proxy of AI. Thus the AI would refuse.
I like it. But I am afraid the obvious next step is that the parent will ban the child from using the AI.
What even is human self-determination?
our cultural aversions to tactics that rob people of self-determination, like brainwashing, torture or coercion.
And yet, religion remains legal, although to a large degree it amounts to brainwashing people from childhood into being scared of disobeying the religious authorities.
Should human self-determination respecting AI be like: "I will let you follow your religion etc., but if you ask me whether god exists, I will truthfully say no, and I will give the same truthful answer to your children, if they ask"?
Should it allow or prevent killing heretics? What about heretics who have formerly stated explicitly "if I ever deviate from our religion, I want you to kill me publicly, and I want my current wish to override my future heretical wishes". Would it make a difference if the future heretic at the moment of asking for this is a scared child who believes that god will put him in hell to be tortured for eternity if he does not make this request to the AI?
A wonderful vision of a world where you don't need a job because you can make money by full-time arguing with people online!
However, any objections to various karma systems (e.g. you can get upvotes by posting clickbait) would apply the same here, only more strongly, because there would be a financial incentive now.
I think Reddit tried something like that; you could award people "Reddit gold", not sure how it worked.
Prediction markets in forums, and systems that support them, naturally giving rise to (or effectively being) refutation bounties.
You need to have a way to evaluate the outcome. For example, you couldn't use a prediction market to ask whether people have free will, or what the meaning of life is. Probably not even whether Trump won the 2020 election, unless you specify how exactly the answer will be determined -- because simply asking people won't work.
A subscription model with fees being distributed to artists depending on post-watch user evaluations, allowing outsized rewards for media that's initially hard for the consumer to appreciate the value of, but turns out to have immense value after they've fully understood it. (media economics by default are terminally punishing to works like that)
The details matter, because they determine how people will try to game this. I could imagine a system where you e.g. upvote the articles you liked, and then one year later it shows you the articles you liked, and you can now specify whether you still like them on reflection. And, uhm, maybe 10% of your subscription is distributed to the articles you liked immediately, and 90% to those you liked on reflection? -- I just made this up, not sure what the weakness is, other than the authors having to wait for a year until the rewards for meaningful content start coming.
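To make the split concrete, here is a minimal sketch in Python of what I have in mind; the 10%/90% split and all the names are placeholders I just made up for illustration, not a worked-out design:

```python
from collections import Counter

IMMEDIATE_SHARE = 0.10   # made-up split: 10% for likes given right after reading
REFLECTIVE_SHARE = 0.90  # 90% for likes that survive a year of reflection

def distribute_subscription(fee, immediate_likes, reflective_likes):
    """Split one subscriber's fee among the articles they liked.

    immediate_likes: article ids the user upvoted right after reading
    reflective_likes: article ids the user still endorses a year later
    Returns a dict mapping article id -> payout from this user's fee.
    (What happens to the unclaimed share if a list is empty is left open.)
    """
    payouts = Counter()
    if immediate_likes:
        share = fee * IMMEDIATE_SHARE / len(immediate_likes)
        for article in immediate_likes:
            payouts[article] += share
    if reflective_likes:
        share = fee * REFLECTIVE_SHARE / len(reflective_likes)
        for article in reflective_likes:
            payouts[article] += share
    return dict(payouts)

# Example: a $10/month subscriber liked A and B immediately,
# but a year later only still endorses B.
print(distribute_subscription(10.0, ["A", "B"], ["B"]))
# {'A': 0.5, 'B': 9.5}
```

The point of weighting the reflective votes so heavily is that an author of clickbait gets almost nothing once the initial enthusiasm wears off, while an author of something that only pays off after the reader has digested it still gets rewarded, just a year late.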
Take a notebook, and before reading lesswrong make notes of all your values and opinions, so that you can backtrack if necessary. :D
Coordination is hard. "Assigning Molochian elements a lower value" is a kind of coordination. Making rules, and punishing people when they break them is another. Even if attack is stronger than defense, the punishment could be stronger yet (because it is a kind of attack). I agree that it is difficult, not sure if impossible.
I'd say "things that are good in moderation and harmful in excess... and most people (in our community) do them in excess".
Even better, we should have two different words for "doing it in moderation" and "doing it in excess", but that would predictably end up with people saying that they are doing the former while they are doing the latter, or insisting that both words actually mean the same only you use the former for the people you like and the latter for the people you dislike.
I am not even sure whether "contrarianism" refers to the former or the latter (to a systematically independent honest thinker, or to an annoying edgy clickbait poser -- many people probably don't even have separate mental buckets for these).
It doesn't seem like knowing your enemy and knowing yourself should actually make you invincible in war. Besides, what if your enemy also knows themselves and knows you?
It makes more sense if you consider that another option is to avoid the war. So I would interpret it like this:
If you know that you are strong and that the enemy is weak, you will win the war. (And if you know otherwise, you will avoid the war -- by keeping peace, paying tribute, or surrendering.)
If you know that you are strong, but you don't know your enemy... sometimes you will win, sometimes you will be surprised by finding that your enemy is strong, too.
If you have no idea, and just attack randomly... expect to get destroyed soon.
In this light, the next quote would be interpreted like this: before you start the war, make sure to build a strong army, so that you don't have to improvise desperately after the war has started.
Thank you! I probably wouldn't read the book, but this description is fascinating.
Not sure if this might be helpful -- I asked an AI how to tell the difference between "smart, autistic, and ADHD" and "smart, autistic, but no ADHD", and it gave me the following:
There are similarities between the two, because both autism and ADHD involve some executive dysfunction; social avoidance/exhaustion looks similar to ADHD avoidance; autistic burnout looks similar to ADHD inattention; being tired from masking looks similar to ADHD lack of focus; and high intelligence can mask both through compensation.
The differences:
Suppose that you need to read a boring technical book to understand something that is very important for you. Could you read it? (Autism only: if it is perfectly clear why the book is important, and you have a lot of time, and a quiet room all to yourself: yes. ADHD: sorry, after 10 minutes you will drop the book and go research something else.)
Do you lose hours of time without noticing? (Autism only: only when engaged with something interesting. ADHD: yes, all the time.)
If you have a clear task, proper environment, and interest; can you start doing the task? (Autism only: usually yes. ADHD: probably no.)
Do you make major decisions on impulse -- such as buy something expensive, quit your job, start a new project, start driving too fast -- and then wonder "why did I do this"? (Autism only: no. ADHD: often.)
...I found this interesting, because I was operating under the assumption that I have both autism and ADHD, but now it seems more like autism only. (Then again, this is an AI; they like to hallucinate.)
The obvious problem with this question is that people can be wrong in estimating how talented they are, how important a problem is, how capable they are of contributing to it, and how much time it would take.
From my perspective, my problem seems to be that I am bad at communicating my ideas convincingly. A typical pattern is that I describe my vision to others, others say "that's stupid" (sometimes they provide a more sophisticated argument, such as "if this was actually a good idea, someone else would have already done it long ago"), and then... I mostly don't do anything about it, either because I do not have the necessary skills to do it alone, or because I am busy doing other activities that pay my bills. Sometimes, a few years later, someone else does it, and it is a great success. Very rarely, I do it myself, and it is a success (but not sufficiently large for people to trust me the next time, or to make enough money that I no longer need a day job). This is further complicated by my difficulty figuring out how to monetize the solution; e.g. if the goal is public education, putting the project behind a paywall would destroy most of its potential value. Some of my ideas are illegal, e.g. they involve violating copyright.
From the perspective of the Less Wrong community, my ideas are probably meh, because I have no experience with LLMs (other than as a user), many ideas are related to education, and some involve translating stuff into Slovak. Here are some that come quickly to my mind:
I admit that I didn't systematically try to get funding for my ideas or something like that. Unfortunately, I am not good at things like navigating bureaucracy, which would almost certainly be required. (Even using something like Kickstarter would require figuring out how to report foreign income in my taxes, which sounds like a nightmare. The last time I tried to find a local accountant who would understand that, I couldn't.) So all I have is my free time, but after my day job I am generally too exhausted to do anything meaningful. Plus I have small kids.
I am not specifically looking for the most important problems. I am noticing problems that annoy me, and sometimes I think there are probably many others in a similar situation.
For me the problem is money. If someone gave me some kind of unconditional basic income, I would probably start working on something from the list above. Until then, I need to do the stupid things that bring food to my family.