Morality is Temporary, Wisdom is Permanent.

-- Hunter S. Thompson

No, the article specifically warns against using a single trait. It gives specific examples of how a single trait can mean very different things. It takes a cluster of traits to establish something useful.

If you want to pursue getting the data, though, you could try to derive something like a table of probabilities from a self-scored 'Big Five' test, like the one in the appendix of this review paper. From that same review paper you can also find the papers and data sets that gave rise to five-factor personality analysis.

edit: fixed the link.

You might find something like this in market research. Certainly the sort of analysis that predicts which advertisements are relevant to a user on sites like Facebook would be similar to this. Trying to answer a question like "Which advertisement will the user be most receptive to given this cluster of traits?", where the traits are your likes / dislikes / music / etc.

This isn't exactly what you're asking for, but I doubt there is a P(personality type | trait) table anywhere. You're talking about a high-dimensional space and a single trait does not have much predictive power in isolation.
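To make the point concrete, here is a minimal sketch of why a single trait is weakly informative while a cluster of traits is not. The data, trait names, and labels below are entirely invented for illustration; conditional probabilities are just estimated by counting.

```python
from collections import Counter

# Hypothetical toy data: each person is (set of traits, personality label).
# Every trait and label here is made up purely for illustration.
people = [
    ({"likes_jazz", "night_owl"}, "open"),
    ({"likes_jazz", "early_riser"}, "conscientious"),
    ({"likes_metal", "night_owl"}, "open"),
    ({"likes_jazz", "night_owl"}, "open"),
    ({"likes_jazz", "early_riser"}, "conscientious"),
    ({"likes_metal", "early_riser"}, "conscientious"),
]

def p_label_given(traits):
    """Estimate P(label | person has all of `traits`) by counting."""
    matches = [label for t, label in people if traits <= t]
    counts = Counter(matches)
    total = len(matches)
    return {label: n / total for label, n in counts.items()}

# A single trait leaves the label ambiguous:
single = p_label_given({"likes_jazz"})       # 50/50 split in this toy data
# A cluster of traits pins the estimate down:
cluster = p_label_given({"likes_jazz", "night_owl"})
```

In this toy data, conditioning on "likes_jazz" alone gives an even split between labels, while conditioning on the two-trait cluster is decisive. With real data the effect is the same in kind, though the table of counts grows exponentially with the number of traits, which is the high-dimensionality problem mentioned above.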

I'm currently reading Thomas Schelling's Strategy of Conflict, and it sounds like what you're looking for here. From this Google Books link to the table of contents you can sample some chapters.

That agrees with my intuitions. I had been developing a series of ideas around the notion that exploiting biases is sometimes necessary, and then I found:

Eliezer on Informers and Persuaders

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less. It's a pity that this wonderful excuse exists, but in the real world, well...

It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would an FAI?

Videotaping may not be the preferred way to go about it, but there is something to be said for reflection. While you are unlikely to get better without practice, merely sinking time into conversation won't necessarily help, and may even harm you. Without analyzing your attempts, even if only as a brief list of what went well and what didn't, you may be practicing and reinforcing bad habits. A hundred ungraded math problems won't make you better at math, and a hundred uncoached squats may injure you.

Take a few moments after conversations to assess at least what went well and what didn't. If you have access to an honest friend, you can do even better: converse with a third party (your friend can participate or merely be near enough to observe) and run a sort of post-conversation analysis afterward. Treat it like any other skill you're serious about learning. I've seen this help more than one struggling introvert.