I dunno, the arrows and set stuff makes even less sense to me, and if I can't understand it I can't write it down. And as far as I'm concerned, I did give what inputs each function takes and what outputs they produce.
In other words, one of the major points of this post is that you can't reason across or order world states; nobody does or can. You only ever reason across symbolic representations of world states, which are created via some particular process.
Recall that the neural net in function 1 is a classifier; it isn't making predictions about the relationships between the variables. All function 1 does is take a large set of input signals as its domain and correlate them to a smaller range of internal variables. Technically you don't need a neural net to do this, you can hard-code it in various ways, but I like picturing the original Perceptron diagram for this.
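Something like this toy perceptron is what I have in mind (the names, weights, and sizes here are purely illustrative, not anything from the post):

```python
import numpy as np

# Function 1 as I picture it: a perceptron-style classifier that takes a
# large vector of raw input signals and collapses it into a handful of
# discrete internal variables (the classified outputs). The weights could
# be hard-coded or trained; here they're random just for illustration.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 100))       # 3 internal variables, 100 raw signals

def function_1(signals):
    """Map a continuum of input signals onto discrete internal variables."""
    activations = weights @ signals        # weighted sums, as in the Perceptron
    return (activations > 0).astype(int)   # threshold into discrete chunks

psi = function_1(rng.normal(size=100))     # e.g. array([1, 0, 1])
```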
The point of using this toy model, rather than just assuming an ordering over world states, is to show that any modeling of world state is produced by particular functions using real world data. This encoding itself is what generates the epistemic problems, because encoding a semantic meaning into a particular signifier always creates some uncertainty when using that encoding as a reference point. In the toy model, X can be arbitrarily complex phenomenal experience; it encompasses every observable state of the world, so even for phantom values of X all we're doing is extrapolating to the experience we'd expect in a given situation. By creating a function which gives us the relationship between different values of X, we can make a plan for how to achieve a specific value of X that we want: if you take the current and expected conditions of X as the input, together with a function from Ω, you have a plan to achieve a goal. The ordering function then orders these possible plans from best to worst based on whatever arbitrary criteria it uses.
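As a rough sketch of that last step (the type names are mine, purely illustrative): a plan is just the pairing of current and expected conditions of X with one of the functions from Ω, and the ordering function sorts those plans by whatever criterion you hand it.

```python
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]                              # a compressed description of X
Plan = Tuple[State, State, Callable[[State], State]]  # (current X, expected/goal X, function from Omega)

def order_plans(plans: List[Plan],
                criterion: Callable[[Plan], float]) -> List[Plan]:
    """The ordering function: rank possible plans from best to worst
    according to whatever arbitrary criterion is supplied."""
    return sorted(plans, key=criterion, reverse=True)
```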
Undecidability is definitely a related but distinct problem. I felt that there were some objections to undecidability arguments based on bounded rationality solutions, so I decided to focus on the question of uncertainty about the inputs rather than uncertainty about halting (Gödel assumed that we knew the inputs and functions perfectly), which I think applies to both unbounded and bounded rationality. You could definitely make an argument to the effect that the open-ended nature of superintelligence specifically opens it up to undecidability problems for ranking its preferences, though.
I haven't seen those things you cited at the end but will check them out, thanks.
Whether the neural net in function 1 is already trained is open to interpretation; it could include the process of training. All that matters is that there are classified outputs at the end of the process.
Ω is the set of functions, whereas function 2 is the function which makes those functions.
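A bare-bones sketch of that distinction (illustrative names only; the actual construction is left abstract):

```python
from typing import Callable, List

def function_2(data) -> List[Callable[[float], float]]:
    """Function 2 is the function-maker: its output is Omega, a set of
    candidate functions relating the internal variables. How the
    candidates get produced is left abstract here."""
    return [lambda x: x, lambda x: x ** 2]   # stand-in members of Omega

omega = function_2(data=None)   # Omega itself is just this collection of functions
```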
"Input functions" isn't a thing, function 3 is ordering input and function pairs.
Sorry if these questions seem like obsessing over details, but this is how I go about understanding any piece of obscure mathematics: going through it symbol by symbol asking "what exactly does this mean?"
No worries, I'm sorry if my writing isn't as intelligible as I'd like. I'm glad someone is taking the time to understand it at all.
Sorry about mixing up the f and g notation; I'm not particularly used to it.
Ω, by creating some functions over A...Z, means that in principle you can input arbitrary values for B, C...Z and get an expected value for A, so it is not necessarily any specific value. Ψ aren't the weights of a neural net; they're the classified outputs, detecting certain patterns in specific signals, like how the original Perceptron could identify a square from a picture of a square vs. a circle, etc. So all function 1 is doing is cutting up the continuum of signals it's receiving into discrete chunks.
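To put that in sketch form (the names and the linear form are only illustrative stand-ins):

```python
# One illustrative member of Omega: given arbitrary values for the other
# internal variables B...Z, it returns an expected value for A. The linear
# form here is a stand-in; the actual relationship could be anything.
def expected_A(B: float, C: float, Z: float) -> float:
    return 0.5 * B - 1.2 * C + 0.1 * Z

# Psi, by contrast, are classified outputs like these, not network weights:
psi = {"square_detected": 1, "circle_detected": 0}
```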
Function 3 is some arbitrary sorting function.
I don't believe that the points you make about internal context, and the lack of that context, necessarily mean that "all meaning is subjective", nor that contextual information is inherently meaningless. In order to impart information that modifies the meaning of a word, the context must have meaning. So too, the fact that you can elaborate on your point of view, and that those utterances can have a meaning which allows your interlocutor to reach a greater understanding of your meaning by approximation, tells me that the meaning is not purely subjective or totally inaccessible, only hidden.
I suggest you check out some work on semiotics, such as Umberto Eco's A Theory of Semiotics, which goes into how words and utterances have meaning. I've previously argued that it's impossible to speak of a semantically invalid statement. https://nicolasdvillarreal.substack.com/p/higher-order-signs-hallucination
Thank you for the substantive response. I do think there are a few misunderstandings here about what I'm saying.
There need not be one best world state, and a world state need not be distinguishable from all others - merely from some of them. (In fact, a utility function yielding a real value compresses the world into a characteristic of the things we care about in just such a way.)
I'm not talking about world states which exist "out there" in the territory (it's debatable whether those exist at all anyway); I'm talking about world states that exist within the agent, compressed however the agent likes. Within the agent, each world state considered as a possible goal is distinguished from the others in order for it to be meaningful in some way. The distinguishing characteristics can be decided by the agent in an arbitrary way.
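A quick sketch of what I mean, with made-up features (purely illustrative): the agent's world states are already compressed representations, so two situations that differ "out there" can be identical inside the agent if the characteristics it chose don't capture the difference.

```python
# The agent's compression: it keeps only the characteristics it has decided
# to care about. Everything else is thrown away before any ordering happens.
def compress(raw_observation: dict) -> tuple:
    return (raw_observation.get("temperature"), raw_observation.get("food"))

state_a = compress({"temperature": 20, "food": 5, "cloud_shape": "wispy"})
state_b = compress({"temperature": 20, "food": 5, "cloud_shape": "puffy"})
assert state_a == state_b   # distinct "out there", indistinguishable within the agent
```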
Your series of posts also assumes that signs have a fixed order. This is false. For instance, different fields of mathematics treat real numbers as either first order signs (atomic objects) or higher order ones, defined as relations on rational numbers.
It is no coincidence that those definitions are identical; you cannot assume that if something is expressible using higher order signs, it is not also expressible using lower order ones.
So when I'm talking about signs I'm talking about signifier/signified pairs. When we're talking about real numbers, for example, we're talking about a signifier with two different signifieds, and therefore two different signs. I talk about exactly this issue in my last post:
When your goal is to create something new, something novel, your goal is necessarily a higher order sign. Things which do not yet exist cannot be directly represented as a first order sign. And how do we know that this thing which doesn't yet exist is the thing we seek? The only way is through reference to other signs, hence making it a higher order sign. For example, when we speak of a theory of quantum gravity, we are not speaking the name of an actual theory, but of the theory which fulfills a role within the existing scientific framework of physics. This is different from known signs that are the output of an operation, for example a specific number that is the answer to a math question; in these cases sign function collapse is possible (we can think of 4 either as the proper name of a concept, or merely as the consequence of a certain logical rule).
As I say, most signifiers do have both associated first order and higher order signs! But these are /not/ the same thing; they are not equivalent from an information perspective, as you say they are. If you know the first order sign, there's no reason you would automatically know the corresponding higher order sign, and vice versa, as I show in the excerpt from my most recent blog post.
My argument specifically hinges on whether it's possible for an agent to have final goals without higher order signs: it's not, precisely because first order and higher order signs do not contain the same information.
Engaging with the perspective of the orthogonality thesis itself: rejecting it means that a change in intelligence will lead, in expectation, to a change in final goals. Could you name the expected direction of such a change, like "more intelligent agents will act with less kindness"?
I couldn't name a specific direction, but what I would say is that agents of similar intelligence and environment will tend towards similar final goals. Otherwise, I generally agree with this post on the topic. https://unstableontology.com/2024/09/19/the-obliqueness-thesis/
I don't think your dialectical reversion back to randomista logic makes much sense, considering we can't exactly do randomized controlled trials to figure out any of the major questions of the social sciences. If you want to promote social science research, I think the best thing you could do is collect consistent statistics over long periods of time. You can learn a lot about modern societies just by learning how national accounts work and looking back at them in many different ways. Alternatively, building agent-based simulations allows you to test, in flexible ways, how different types of behavior, both heterogeneous and homogeneous, might affect macroscopic social outcomes. These are the techniques that I use and they've proven very helpful.
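To give a flavor of the agent-based side (a generic toy, not one of my actual models): even a few lines in which agents differ only in their propensity to consume produce a different macroscopic wealth distribution than the homogeneous case.

```python
import random

# Toy agent-based simulation: agents earn a fixed income each period and
# consume a fraction of their wealth. Making that fraction heterogeneous
# instead of uniform changes the macroscopic wealth distribution.
random.seed(0)

def simulate(propensities, periods=200, income=1.0):
    wealth = [0.0] * len(propensities)
    for _ in range(periods):
        for i, c in enumerate(propensities):
            wealth[i] += income
            wealth[i] -= c * wealth[i]     # consume a fraction of current wealth
    return wealth

homogeneous   = simulate([0.5] * 100)
heterogeneous = simulate([random.uniform(0.1, 0.9) for _ in range(100)])

print(max(homogeneous) / min(homogeneous))      # = 1: everyone ends up the same
print(max(heterogeneous) / min(heterogeneous))  # >> 1: dispersion emerges from behavior
```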
If there's one other thing you're missing, it's this: epistemology isn't something you can rely on others for, not even by trying to triangulate between different viewpoints. You always have to do your own epistemology, because every way of knowing you encounter in society is part of someone's ideological framework trying to adversarially draw you into it.
When you're budgeting resources, conflicts with adversaries are a little different from other categories of expense. Those other categories are largely determined by your own consumption habits, or, if put at risk by unexpected changes in nature or the economy, are more or less random; they don't change in a way that actively thwarts you. When in a conflict, you're always going to want to be conservative in estimating the resources you need, which is obvious in any book on military logistics, and being conservative requires overestimating what your opponent can do and underestimating how far your current resources will actually go. If you weren't conservative, you could put more resources towards other things (the guns vs. butter debate), but being conservative is probably more evolutionarily fit than being more accurate in that estimation, as the conservative planner will be better prepared in unexpected situations.
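A toy way to see the intuition (the numbers are arbitrary, not a model of any real conflict): a planner who pads its estimate of adversary demands gets caught short far less often, at the price of resources that can't go to butter.

```python
import random

# Adversary demands are noisy. The "accurate" planner budgets the expected
# demand; the "conservative" planner pads its estimate well above it.
random.seed(1)

def shortfall_rate(budget, trials=10_000, mean_demand=100, spread=30):
    shortfalls = sum(1 for _ in range(trials)
                     if random.gauss(mean_demand, spread) > budget)
    return shortfalls / trials

print(shortfall_rate(budget=100))  # accurate planner: caught short ~50% of the time
print(shortfall_rate(budget=160))  # conservative planner: caught short ~2% of the time,
                                   # but 60 units are withheld from "butter"
```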