Ariel Cheng

Comments (sorted by newest)

The Way You Go Depends A Good Deal On Where You Want To Get: FEP minimizes surprise about actions using preferences about the future as *evidence*
Ariel Cheng · 4mo · 80

See https://arxiv.org/abs/2006.12964:

CAI augments the ‘natural’ probabilistic graphical model with exogenous optimality variables. In contrast, AIF leaves the structure of the graphical model unaltered and instead encodes value into the generative model directly. These two approaches lead to significant differences between their respective functionals. AIF, by contaminating the veridical generative model with value-imbuing biases, loses a degree of freedom compared to CAI which maintains a strict separation between the veridical generative model of the environment and its goals. In POMDPs, this approach results in CAI being sensitive to an ‘observation-ambiguity’ term which is absent in the AIF formulation. Secondly, the different methods for encoding the probability of goals – likelihoods in CAI and priors in AIF – lead to different exploratory terms in the objective functionals. Specifically, AIF is endowed with an expected information gain that CAI lacks. AIF approaches thus lend themselves naturally to goal-directed exploration whereas CAI mandates only random, entropy-maximizing exploration.

These different ways of encoding goals into probabilistic models also lend themselves to more philosophical interpretations. CAI, by viewing goals as an additional exogenous factor in an otherwise unbiased inference process, maintains a clean separation between veridical perception and control, thus maintaining the modularity thesis of separate perception and action modules (Baltieri & Buckley, 2018). This makes CAI approaches consonant with mainstream views in machine learning that see the goal of perception as recovering veridical representations of the world, and control as using this world-model to plan actions. In contrast, AIF elides these clean boundaries between unbiased perception and action by instead positing that biased perception (Tschantz, Seth, & Buckley, 2020) is crucial to adaptive action. Rather than maintaining an unbiased world model that predicts likely consequences, AIF instead maintains a biased generative model which preferentially predicts our preferences being fulfilled. Active-inference thus aligns closely with enactive and embodied approaches (Baltieri & Buckley, 2019; Clark, 2015) to cognition, which view the action-perception loop as a continual flow rather than a sequence of distinct stages.
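To make the technical contrast above concrete, here's a rough sketch of how goals enter each formulation. This is my own notation, not the paper's exact functionals: a generic POMDP with states s_t, observations o_t, actions a_t, policy \pi, where r is a reward function and \tilde{p}(o) the biased "preference" prior.

```latex
% Sketch (my notation, simplified; see the paper for the exact objectives).

% CAI: the generative model stays unbiased; goals enter as exogenous binary
% "optimality" variables O_t attached via an extra likelihood, and control
% becomes posterior inference conditioned on optimality.
\[
p(O_t = 1 \mid s_t, a_t) \propto \exp\!\big(r(s_t, a_t)\big),
\qquad
q^*(a_{1:T}) \approx p\big(a_{1:T} \mid O_{1:T} = 1\big).
\]

% AIF: no extra variables; preferences are baked into a biased prior
% \tilde{p}(o) over observations, and policies are scored by expected free energy.
\[
G(\pi) = \mathbb{E}_{q(o, s \mid \pi)}\!\big[\log q(s \mid \pi) - \log \tilde{p}(o, s \mid \pi)\big].
\]
% The decomposition of G(\pi) is where the expected-information-gain
% (epistemic) term appears, per the quote above.
```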

faul_sname's Shortform
Ariel Cheng · 6mo · 20

This is kinda related: 'Theories of Values' and 'Theories of Agents': confusions, musings and desiderata

Cole Wyeth's Shortform
Ariel Cheng · 7mo · 10

I think it would be persuasive to the left, but I'm worried that comparing AI x-risk to climate change would turn it into a left-wing issue, which would lead right-wingers to automatically oppose it (upon hearing "it's like climate change").

Generally, it seems difficult to make comparisons/analogies to issues that (1) people are familiar with and consider very important, and (2) are not already politicized.

Decomposing Agency — capabilities without desires
Ariel Cheng · 1y · 30

You might want to look here or here.
