shminux's Comments

An Orthodox Case Against Utility Functions

First, I really like this shift in thinking, partly because it moves the needle toward an anti-realist position, where you don't even need to postulate an external world (you probably don't see it that way, despite saying "Everything is a subjective preference evaluation").

Second, I wonder if you need an even stronger restriction: not just computable, but efficiently computable, given that it's the agent doing the computation, not some theoretical AIXI. This would probably also change "too easily" in "those expectations aren't (too easily) exploitable to Dutch-book" to "efficiently". Maybe it should be even more restrictive than that, to avoid diminishing returns from spending a lot of compute to squeeze out every last bit of utility.

Life as metaphor for everything else.

It makes sense to me that our views on consciousness are akin to Pasteur's views on vitalism, because the gap between what we observe (apparently conscious beings) and what we can explain (some simple interactions between neurons) is just so vast and murky.

What is the subjective experience of free will for agents?
How do I talk about how someone makes a decision, and what a decision is, from outside the subjective uncertainty of being the agent in the time prior to when the decision is made?

I am confused... Are you asking how Omega would describe someone's decision-making process? That would be like watching an open-source program execute. For example, if you know that the optimization algorithm is steepest descent, and you know the landscape it is run on, you can see every step it makes, including picking one of several possible paths.
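To make the open-source analogy concrete, here is a minimal sketch (my own illustration, not from the post) of watching steepest descent run on a known landscape: given the algorithm, the landscape, and the starting point, every step of the run is fully determined, so an outside observer can replay the whole "decision" path.

```python
# Watching a known optimizer on a known landscape: every step is
# determined, so an outside observer (Omega) can replay the whole run.
def grad(x, y):
    # Gradient of an example landscape f(x, y) = x**2 + 2 * y**2
    return (2 * x, 4 * y)

def steepest_descent(start, lr=0.1, steps=5):
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
        path.append((x, y))
    return path

# Two observers "running the source" see the identical path:
run1 = steepest_descent((1.0, 1.0))
run2 = steepest_descent((1.0, 1.0))
assert run1 == run2
```

The landscape, learning rate, and step count here are arbitrary; the point is only that nothing in the trace is left to the agent's "choice".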

What is the subjective experience of free will for agents?

Feel free to let me know either way, even if you find that the posts seem totally wrong or missing the point.

Implications of the Doomsday Argument for x-risk reduction

Right. Either way, it's not a good argument to base one's decisions on.

Implications of the Doomsday Argument for x-risk reduction
What are your stances on the Doomsday Argument?

The doomsday argument strikes me as complete and utter misguided bullshit, notwithstanding the fact that smart and careful physicists have worked on it, including J. Richard Gott and Brandon Carter, whose work in actual physics I had used extensively in my research. There are plenty of good reasons for x-risk work, no need to invoke lousy ones. The main issue with the argument is the misuse of probability.

First, the argument assumes a specific distribution (usually uniform) a priori, without any justification. Indeed, one needs a probability distribution to meaningfully talk about probabilities, but there is no reason to pick one specific distribution over another until you have a useful reference class.

Second, the potential infinite expectation value makes any conclusions from the argument moot.

Basically, the Doomsday argument has zero predictive power. Consider a set of civilizations with a fixed number of humans at any given time, each existing for a finite time T, randomly distributed with a distribution function f(T), which does not necessarily have a finite expectation value, standard deviation, or any other moments. Now, given a random person from a random civilization at time t, the Doomsday argument tells them that their civilization will exist for about as long as it has so far. It gives you no clue at all about the shape of f(T) beyond it being non-zero (though possibly only on a set of measure zero) at t.
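A minimal simulation (my own sketch, with arbitrarily chosen distributions) makes the zero-predictive-power point concrete: for an observer at a uniformly random moment of their civilization's lifetime, Gott-style 50%-confidence intervals come out right about half the time regardless of f(T), so the argument "working" tells you nothing about the shape of f.

```python
import random

def hit_rate(sample_T, n=100_000, seed=0):
    """For civilization lifetime T ~ f and observer time t ~ Uniform(0, T),
    measure how often the Gott-style 50%-confidence claim (t/T in
    [0.25, 0.75]) turns out to be true."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        T = sample_T(rng)
        t = rng.uniform(0, T)
        if 0.25 <= t / T <= 0.75:
            hits += 1
    return hits / n

# Two wildly different lifetime distributions:
exp_rate = hit_rate(lambda rng: rng.expovariate(1.0))       # finite mean
pareto_rate = hit_rate(lambda rng: rng.paretovariate(0.5))  # infinite mean
# Both hit rates come out near 0.5: the interval "works" identically
# for both, so it cannot distinguish one f(T) from the other.
```

The trick is that t/T is uniform on (0, 1) whatever f is, which is exactly why the argument is vacuous as a probe of f.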

Now, shall we lay this nonsense to rest and focus on something productive?

What is the subjective experience of free will for agents?

It's a great post, just doesn't quite go far enough...

What is the subjective experience of free will for agents?
Answer by shminux, Apr 02, 2020

My answer is a rather standard compatibilist one: the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.

There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren't really any objective intentional "decisions", only our perception of them. Therefore, decision theories are just byproducts of all these algorithms executing. It doesn't matter though, because you have no choice but to feel that decision theories are important.

So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.

I wrote about this over the last few years:

https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty

https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about

https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res

Two Alternatives to Logical Counterfactuals

I'm trying to understand where exactly in your approach you sneak in free will...

Necessity and Warrant

Another great post. I hope you elaborate on these Principles in your subsequent posts.
