
I agree that deep canvassing would be interesting. I am also curious about the famous experiments in which forcing people to smile (by making them hold a pen in their mouth) makes them more likely to appreciate something, though I don't know whether there are already many replication studies for those.

But maybe as an economist you would consider this to be too far out of your field?

No, I mean the price at which that party is indifferent between making the deal and not making the deal.

I think that's the same thing? By "highest price" I meant "the highest price the buyer is willing to pay". That's the turning point after which the buyer dislikes the deal and before which the buyer likes the deal.

Yeah, and I'm trying to make that difficult for humans to do.

I understand, but I fail to see how this attempt succeeds. It seems to me that in many or most real cases (those for which I have a reasonable estimate of the other's best price), it is in my interest to lie if I know that the other is filling the form honestly. If that is correct, then the "honest meta" is unstable.

Just in case: I assume that by "best price" you mean "highest price" rather than "estimated fair price".

If so, I only need some information about it to be incentivized to lie. In the example above, I only use the information that the buyer is willing to pay two units above the fair price. The kind of example I use doesn't work if I have no information at all about the other's best price, but that is rare. Realistically, I always have some estimate of what the other is willing to pay.

If we take a general Bayesian framework, I have a distribution over the buyer's best and fair prices. It seems to me that most (if not all) nontrivial distributions will incentivize me to lie.
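
To make that concrete, here is a toy model; the midpoint rule is my own assumption, not the website's actual mechanism. Even a simple uniform prior over the buyer's highest price makes overstating my limit the best policy:

```python
# Toy model (my assumption, not the website's actual rule): the deal happens
# at the midpoint of the two reported limit prices whenever they overlap.
import numpy as np

SELLER_TRUE_MIN = 10.0                           # seller's real limit price
buyer_max = np.random.uniform(10, 20, 100_000)   # prior over the buyer's limit

def expected_profit(reported_min: float) -> float:
    """Seller's expected profit from reporting `reported_min` against an
    honest buyer, under the assumed midpoint rule."""
    deal = buyer_max >= reported_min             # limits overlap -> deal
    price = (reported_min + buyer_max) / 2       # midpoint of reported limits
    return float(np.where(deal, price - SELLER_TRUE_MIN, 0.0).mean())

reports = np.linspace(10, 20, 201)
best = max(reports, key=expected_profit)
print(f"honest report (10.00): {expected_profit(10.0):.2f}")  # ~2.50
print(f"best report ({best:.2f}): {expected_profit(best):.2f}")  # ~13.33 -> ~3.33
# Even this simple uniform prior rewards overstating the true limit of 10
# by about a third of the prior's range.
```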

I am not so sure the incentives are properly aligned here. Let's assume I am the seller. In the extreme case in which I know the highest price accepted by the buyer, I am obviously incentivized to take it as my own limit price.

And I think this generalizes. If:

  • there is a universally agreed true fair price FAIR_PRICE, like an official market value
  • the buyer is still filling the form honestly
  • I know the buyer's highest price to be above FAIR_PRICE + 2

then I can easily get FAIR_PRICE + 2 (worked numerically below).

Of course this requires some information on the other negotiator, but I do not see this as unreasonable.
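
Here is the worked version of the FAIR_PRICE + 2 example, again under my assumed midpoint rule rather than the website's actual mechanism:

```python
# Worked version of the FAIR_PRICE + 2 example, under the same assumed
# midpoint rule. The buyer fills the form honestly; the seller's true
# limit is FAIR_PRICE.
FAIR_PRICE = 100
buyer_max = FAIR_PRICE + 4   # known/estimated to exceed FAIR_PRICE + 2

def deal_price(seller_reported_min: int, buyer_reported_max: int) -> float:
    assert seller_reported_min <= buyer_reported_max, "no overlap, no deal"
    return (seller_reported_min + buyer_reported_max) / 2

print(deal_price(FAIR_PRICE, buyer_max))      # honest seller -> 102.0
print(deal_price(FAIR_PRICE + 2, buyer_max))  # lying seller  -> 103.0
# Any rule that picks a price inside the reported overlap pays the lying
# seller at least FAIR_PRICE + 2, so the conclusion does not hinge on the
# midpoint choice.
```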

Could you clarify in which situation this is meant to incentivize people to fill the form honestly?

I guess. But I don't know of any real-world transactions where it's expected that people keep their word on something like this.

I think there are two points worth raising here:

  1. If someone agrees to precommit to the result of this negotiation and then, when the website outputs a price, refuses to honor it, then I probably do not want to trade with them anymore. At the least, I would count it as though they had agreed to a price and refused to honor it the next day.
  2. You only need to keep a solid precommitment yourself to avoid falling prey to the strategy above.

The obvious issue in my eyes is that few people would agree to use this kind of tool at all, especially for a nontrivial transaction. But in a society (or simply a subculture) that normalizes such tools, people would probably come to treat their precommitment to the tool as binding.

The obvious exploit is to lie and then negotiate “normally” if the tool fails to make a deal in your favor.

The website says:

In order for both participants to have the correct incentives, you must both commit to abide by that result and not try to negotiate further afterwards.

So this strategy fails against people who keep their word.
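
A toy payoff comparison makes the role of that commitment explicit (the `fallback` value of an ordinary follow-up negotiation is an illustrative assumption of mine):

```python
# Toy payoff comparison for the "lie, then renegotiate" exploit.
def value_of_lying(tool_deal_value, fallback, counterparty_keeps_word):
    """Payoff of an aggressive (dishonest) report to the tool."""
    if tool_deal_value is not None:  # the aggressive lie still produced a deal
        return tool_deal_value
    # No deal from the tool: a word-keeper walks away, an exploitable
    # counterparty renegotiates.
    return 0 if counterparty_keeps_word else fallback

print(value_of_lying(None, fallback=5, counterparty_keeps_word=True))   # 0
print(value_of_lying(None, fallback=5, counterparty_keeps_word=False))  # 5
# Against word-keepers, a failed aggressive report is a dead loss; against
# renegotiators, lying is a free option.
```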

Maybe you would accept this paper, which was discussed quite a bit at the time: Emergent Tool Use From Multi-Agent Autocurricula

The AI learns to use a physics-engine glitch in order to win a game. I am thinking of the behavior at 2:36 in this video. The code is available on GitHub here. I didn't try to run it myself, so I do not know how easy it is to run or how complete it is.

As to whether the article matches your other criteria:

  • The goal of the article was to get the AI to find new behaviors, so it might not count as purely natural. But it seems the physics glitch itself was not planned, so it did come as a surprise.
  • Maybe glitching the physics to win at hide and seek is not a sufficiently general behavior to count as a case of instrumental convergence.

I won't blame you if you think this doesn't count.

lack of built-in machinery for inviolable contracts which makes non-defection hard to enforce

Off topic: if you change nothing else about the universe, an easy-to-use "magical" mechanism for inviolable contracts would be a dreadful thing. As soon as you have power of life or death over someone, you can pretty much force them into irrevocable slavery. I suppose we could imagine a "good" working society using that mechanism. But more probably almost all humans would be slaves, serving perhaps a single small group of aristocrats.

You might want to add a "free of influence" condition to the contract system, but in a society that normalizes absolute power (such as many ancient monarchies), that becomes difficult to define.

Ok, I think I can clarify what people generally mean when they say that the "logic" Church-Turing thesis is correct.

There is an intuitive notion of computation that is fairly consensual. It does not account for limits on time (beyond the fact that everything must be finitely long), nor for limits on space / memory. It is roughly equivalent to "what a rigorous idiot could be made to do, given immortality and infinite paper and pens". Many people, and most computer scientists, share this intuitive idea, and at some point people decided to make rigorous what exactly it is.

Whenever people tried to come up with formal processes that only allow "obviously acceptable" operations with regard to this intuitive notion, they produced frameworks that are either weaker than or equivalent to the Turing machine.

The Church-Turing thesis is that the Turing machine formalism fully captures this intuitive notion of computation. It seems to be true.
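
For reference, here is a minimal sketch of the formalism in question. The transition-table encoding is the standard textbook one, but the specific machine (a unary successor) is just a toy example of mine:

```python
# A minimal Turing machine sketch. Toy machine: unary successor, i.e.
# append one more '1' to the input.
# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
TRANSITIONS = {
    ("scan", "1"): ("scan", "1", +1),   # walk right over the input
    ("scan", "_"): ("done", "1", 0),    # first blank: write a 1 and halt
}

def run(tape: str, state: str = "scan") -> str:
    cells = dict(enumerate(tape))       # sparse tape; "_" is the blank symbol
    head = 0
    while state != "done":              # "done" is the halting state
        symbol = cells.get(head, "_")
        state, write, move = TRANSITIONS[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

print(run("111"))  # -> "1111"
```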

With time, the word "computable" itself has come to be defined on the basis of this idea. So when we read or hear the word in a theoretical computer-science context, it now refers to the Turing machine formalism.

Beyond this, it is true that the word "computation" has also come to be applied to other formalisms that resemble the initial notion to some degree, generally with an adjective in front to distinguish them from plain "computation". Think for example of "quantum computing", which does not match the initial intuition for "computation" (though it is not necessarily stronger). These other uses of the word "computation" are not what the Church-Turing thesis is about, so they are not counterexamples. Also, all of this concerns the "logic" Church-Turing thesis, for which what can be built in real life is irrelevant.

PS: I take it for granted that computer scientists, including those knowledgeable on the notions at hand, usually consider the thesis correct. That's my experience, but maybe you disagree.

I didn't check the article yet, but if I understood your comment correctly then a simpler example would have been "Turing machines with a halting oracle", which is indeed stronger than normal Turing machines. (Per my understanding) the Church-Turing thesis is about having a good formal definition of "the" intuitive notion of algorithm. An important property of this intuitive notion is that a "perfectly rigorous idiot" could run any algorithm with just paper and a pen. So I would say it is wrong to take something that goes beyond that as a counterexample.
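
For concreteness, here is the standard diagonalization argument for why no ordinary program can implement such an oracle, phrased as a Python sketch; `halts` is the hypothetical oracle, so the code is not meant to produce a result when executed:

```python
# Classic diagonalization sketch: why `halts` cannot be an ordinary
# program, hence why a halting oracle adds real power.
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical halting oracle; the diagonalization below shows no
    Turing machine can implement it."""
    raise NotImplementedError("not computable by any ordinary program")

def paradox(source: str) -> None:
    # Feeding `paradox` its own source to any purported implementation of
    # `halts` is contradictory: `paradox` halts iff the oracle says it
    # doesn't. So the oracle lies strictly beyond normal Turing machines.
    if halts(source, source):
        while True:
            pass
```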

Maybe we should clarify this concept of "the intuitive notion of algorithm".

PS: We are running dangerously close to just arguing semantics, but insofar as "the Church-Turing thesis" is a generally consensual notion, I do not think the debate is entirely pointless.
