a postdoctoral researcher at MIT, "complexity" enthusiast, digital nomad. http://pchvykov.mit.edu/
Wow, wonderful analysis! I'm mostly on board - except maybe I'd leave some room for doubt about some of the claims you're making. And your last paragraph seems to suggest that a "sufficiently good and developed" algorithm could produce large cultural change? Also, you say "as human mediators (plus the problem of people framing it as 'objective'), just cheaper and more scalable" - to me that would be quite a huge win! And I sort of thought that "people framing it as objective" is a good thing - why do you think it's a problem? I could even go as far as saying that even if it were totally inaccurate, but unbiased - like a coin-flip - and people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who's right. Random = no way to game it.
Cool that you find this method so powerful! To me it's a question of scaling: do you think personal mindfulness practices like Gendlin's Focusing are as easy to scale to a population as a gadget that tells you some truth about yourself? I guess each of these faces very different challenges - but so far experience seems to show that we're better at building fancy tech than we are at learning to change ourselves. What do you think is the most effective way to create such a culture-shift?
Thanks for such a thoughtful reply - I think I'm really on-board with most of what you're saying.
I agree that analysis is the hard part of this tech - and I'm hoping that this is what is just now becoming possible to do well with AI, like check out https://www.chipbrain.com/
Another point I think is important: you say "Emotions aren't exactly impossible to notice and introspect honestly on." - having been doing some emotional-intelligence practice for the last few years, I'm very aware of how difficult it is to honestly introspect on my own emotions. It's sort of like trying to objectively gauge my own attractiveness in photos - really tough to be objective! And I think this is one place an AI could really help (they're actually building one for attractiveness now too).
I see your point that the impact will likely be marginal compared to what we already have now - and I'm wondering if there is some way we could imagine applying such technology to have a revolutionary impact, without falling into Orwellian dystopia. Something about creating inevitable self-awareness, emotion-based success metrics, or conscious governance.
Any ideas how this could be used to save the world? Or do you think there isn't any real edge it could give us?
yeah, I can try to clarify some of my assumptions, which probably won't be fully satisfactory to you, but a bit:
[of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad - but in a consequentialist sense purely, with a sufficiently long time-horizon of consequences, perhaps it's not as big as it first seems?]
Yeah, I'm quite curious to understand this point too - certainly not sure how far this reasoning can be applied (and whether Ferdinand is too much of a stretch). I was thinking of this assassination as the "perturbation in a super-cooled liquid" - where it's really the overall geopolitical tension that was the dominant cause, and anything could have set off the global phase transition. Though this gets back to the limitations of counter-factual causality in the real world...
cool - and I appreciate that you think my posts are promising! I'm never sure if my posts have any meaningful 'delta' - seems like everything's been said before.
But this community is really fun to post for, with meaningful engagement and discussion =)
hmm, so what I was thinking is whether we could give an improved definition of causality based on something like "A causes B iff the model [A causes B] performs superior to other models in some (all?) games / environments" - which may have a funny dependence on the game or environment we choose.
Though as hard as the counterfactual definition is to work with in practice, this may be even harder...
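To make the idea concrete, here's a minimal toy sketch of the "model-comparison" definition above (all names and the toy environment are my own illustration, not anything from the discussion): we fit a model that encodes "A causes B" and a null model that doesn't, then score both in a prediction "game" where we intervene on A ourselves. The causal model should win.

```python
import random

random.seed(0)

def environment(a):
    # Toy ground truth (an assumption for illustration): A causes B,
    # B = 2*A + Gaussian noise.
    return 2 * a + random.gauss(0, 0.5)

# Training data: pairs (A, B) drawn from the environment.
train = [(a, environment(a)) for a in (random.uniform(-1, 1) for _ in range(200))]

# Model M1 encodes "A causes B": a least-squares slope through the origin.
slope = sum(a * b for a, b in train) / sum(a * a for a, _ in train)
# Model M0 encodes "no link": it just predicts the mean of B.
mean_b = sum(b for _, b in train) / len(train)

# The "game": we set A by intervention (do(A=a)) and score each model's
# mean squared prediction error for B.
test = [(a, environment(a)) for a in (random.uniform(-1, 1) for _ in range(200))]
mse_causal = sum((slope * a - b) ** 2 for a, b in test) / len(test)
mse_null = sum((mean_b - b) ** 2 for a, b in test) / len(test)

# Under this proposed definition, "A causes B" because the model that
# encodes that link performs better in the game.
print(mse_causal < mse_null)
```

Note the funny environment-dependence mentioned above shows up immediately: with purely observational data a correlated-but-non-causal A could also win this game, so the choice of game (here, one with interventions) does real work in the definition.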
Your post may be related to this, though not the same, I think. I guess what I'm suggesting isn't directly about decision theory.
whoa, some Bayesian updating there - impressive! :)
I'm not sure why this was crossed out - seems quite civil to me... And I appreciate your thoughts on this!
I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl's counter-factual analysis, you're supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it's not about "holding everything else fixed," but rather about "what are all the possible other things that could have happened."
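A minimal numerical sketch of that "ensemble of possible other interventions" comparison (my own toy illustration of the reading above, not necessarily Pearl's formal definition): we measure the effect of one specific intervention do(A=1) on B, against the average outcome over a whole distribution of alternative interventions that could have happened instead.

```python
import random

random.seed(1)

def scm_b(a):
    # Toy structural model (an assumption): B = A + background noise U.
    return a + random.gauss(0, 1)

N = 10_000

# Outcome under the one specific intervention we care about: do(A = 1).
b_do_1 = sum(scm_b(1.0) for _ in range(N)) / N

# Baseline: average over an ensemble of all the other interventions
# that could have happened (here taken uniform on [-1, 1]).
alternatives = [random.uniform(-1, 1) for _ in range(N)]
b_ensemble = sum(scm_b(a) for a in alternatives) / N

# The "causal effect" of do(A=1) relative to the ensemble, about 1.0 here,
# rather than relative to a single everything-else-held-fixed world.
effect = b_do_1 - b_ensemble
print(round(effect, 2))
```

The choice of the ensemble (which alternative interventions count as "things that could have happened", and with what weights) clearly matters here, which seems like exactly where the Ferdinand-style ambiguity lives.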