Some language to untangle a few of the places where the debate got stuck.
Analyzing how to preserve or act on preferences is a coherent thing to do, and it's possible to do so without assuming a one true universal morality. Assume a preference ordering, and now you're in the land of is, not ought, where there can be a correct answer (highest expected value).
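As a toy sketch of what I mean (the actions, probabilities, and utilities here are made up purely for illustration): once a preference ordering is fixed as a utility function, "which option is correct" becomes an ordinary is-question, answered by computing expected values.

```python
# Toy sketch: once preferences are fixed as utilities, "the correct answer"
# is just the option with the highest expected value. All numbers are made up.

actions = {
    "keep_promise":  [(0.9, 10), (0.1, -2)],   # (probability, utility) pairs
    "break_promise": [(0.5, 12), (0.5, -20)],
}

def expected_value(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(action, expected_value(outcomes))

best = max(actions, key=lambda a: expected_value(actions[a]))
print("correct answer given these preferences:", best)
```

Nothing in this calculation presumes a universal morality; swap in a different preference ordering and the same machinery returns a different, equally well-defined answer.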
Let existence be defined to mean everything, all the math, all the indexical facts. "Ah, but you left out-" Nope, throw that in too. Everything. Existence is a pretty handy word for that; let's reserve it for that purpose. As for any points about... (read more)
I can't really see why one would need to know which configuration gave rise to our universe.
This was with respect to the feasibility of locating our specific universe for simulation at full fidelity. It's unclear whether that's feasible, but if it were, it could entail a way to get at an entire future state of our universe.
I can't see why we would need to "distinguish our world from others"
This was only a point about useful macroscopic predictions any significant distance into the future; prediction relies on information that distinguishes which world we're in.
... (read more)
For now I'm not sure I see where you're going after that, I'm sorry! Maybe I'll think about it again.
reevaluate how you're defining all the terms that you're using
Always a good idea. As for why I'm pointing to EV: epistemic justification and expected value both entail scoring rules for ways of adopting beliefs. Combining both into the same model makes it easier to discuss epistemic justification in situations involving reasoners with arbitrary utility functions and states of awareness.
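To make "scoring rules for ways of adopting beliefs" concrete, here's a minimal sketch of my own (not anything from the original exchange) using the logarithmic score, a proper scoring rule: a reasoner maximizes its expected score exactly by reporting its true credence, so honest belief adoption and EV maximization line up.

```python
import math

def log_score(credence, outcome):
    # Logarithmic scoring rule: reward is the log of the probability
    # assigned to whatever actually happened (outcome is True/False).
    p = credence if outcome else 1.0 - credence
    return math.log(p)

def expected_score(reported, true_credence):
    # Expected score under the reasoner's true credence if it reports `reported`.
    return (true_credence * log_score(reported, True)
            + (1 - true_credence) * log_score(reported, False))

true_credence = 0.7
for reported in (0.5, 0.7, 0.9):
    print(reported, expected_score(reported, true_credence))
# The expected score peaks at reported == true_credence, which is what makes
# the rule "proper": reporting your actual belief is also the EV-maximizing policy.
```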
Knowledge as mutual information between two models, induced by some unspecified causal pathway, lets me talk about knowledge in situations where beliefs could follow from arbitrary causal pathways. I would exclude from my definition of knowledge false beliefs, instilled by an agent, that still happen to produce correct predictions, and I'd ensure my definition... (read more)
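A rough sketch of "knowledge as mutual information" (toy joint distribution, numbers of my own invention): mutual information is high when the model's state co-varies with the world's state, regardless of which causal pathway produced that correlation.

```python
import math

# Toy joint distribution over (world_state, model_state).
# High mutual information = the model's state tells you a lot about the world's.
joint = {
    ("light_on",  "believes_on"):  0.45,
    ("light_on",  "believes_off"): 0.05,
    ("light_off", "believes_on"):  0.05,
    ("light_off", "believes_off"): 0.45,
}

def marginal(joint, index):
    out = {}
    for pair, p in joint.items():
        out[pair[index]] = out.get(pair[index], 0.0) + p
    return out

def mutual_information(joint):
    p_world, p_model = marginal(joint, 0), marginal(joint, 1)
    return sum(p * math.log2(p / (p_world[w] * p_model[m]))
               for (w, m), p in joint.items() if p > 0)

print(mutual_information(joint))  # ~0.53 bits of "knowledge" in this toy case
```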
Serious thinkers argue both for trying to slow down (PauseAI) and for defensive acceleration (Buterin, Aschenbrenner, etc.).
Yeah, I'm in both camps. We should do our absolute best to slow down how quickly we approach building agents, and one way is leveraging AI that doesn't rely on being agentic. That offers a way to do something like global compute monitoring, and it could also alleviate the short-term incentives that building agents would otherwise satisfy, by offering a safer avenue. Insofar as a global moratorium stopping all large model research is feasible, we should probably just do that.
then people fret about what they'd do with their time if they didn't have to work
It feels like there's... (read more)
The problem with that and many arguments for caution is that people usually barely care about possibilities even twenty years out.
It seems better to ask what people would do if they had more tangible options, such that they could reach a reflective equilibrium which explicitly endorses particular tradeoffs. People mostly end up not caring about possibilities twenty years out because they don't see how their current options constrain what happens in twenty years. This points to not treating their surface preferences as central insofar as those preferences don't follow from a reflective equilibrium informed by all their available options. If one knows their principal can't get that opportunity, one has a responsibility to... (read more)
He was talking about academic philosophers.
This was a joke about academic philosophers rarely being motivated to settle on satisfying answers by any particular deadline.
Are you saying that the mechanism of correspondence is an "isomorphism"? Can you please describe what the isomorphism is?
An isomorphism between two systems indicates that those two systems implement a common mathematical structure: a light switch and one's mental model of the light switch are both constrained by implementing that structure, such that their central behavior ends up the same. Even if your mental model of a light switch has two separate on and off buttons, and the real light switch you're comparing against has a single... (read 1091 more words →)
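A minimal sketch of the idea (my own illustration, not part of the original comment): two systems with very different surface features can implement the same abstract two-state structure, and the isomorphism is just the structure-preserving mapping between their states and operations.

```python
# Two "systems" with different surface features, both implementing the same
# abstract two-state toggle structure. Purely illustrative.

class WallSwitch:
    def __init__(self): self.flipped_up = False
    def toggle(self): self.flipped_up = not self.flipped_up
    def state(self): return "on" if self.flipped_up else "off"

class MentalModel:
    # Imagined as two separate buttons rather than one flipper.
    def __init__(self): self.last_button = "off_button"
    def toggle(self):
        self.last_button = "on_button" if self.last_button == "off_button" else "off_button"
    def state(self): return "on" if self.last_button == "on_button" else "off"

# The isomorphism: a mapping of states (and of the toggle operation) that
# commutes, so toggling then mapping equals mapping then toggling.
switch, model = WallSwitch(), MentalModel()
for _ in range(5):
    assert switch.state() == model.state()   # same abstract state throughout
    switch.toggle(); model.toggle()
print("both systems implement the same two-state structure")
```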
While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up.
In your Game of Life example, one could store grids larger than 3x3 and get the complete mapping from states to next states, reusing it to make the computation more efficient. The full table of state -> next state permits compression, bottoming out in a minimal generating set for next states. One can also run the rules in reverse and generate all of the possible initial states that lead to any given state without having to compute bottom-up for every state.
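As a rough sketch of the lookup-table idea (toy code of mine, not tied to any particular implementation): precompute the next-center-state for every 3x3 neighborhood once, then step the grid by table lookup instead of re-deriving the rules cell by cell. Larger blocks work the same way, just with bigger tables.

```python
from itertools import product

def life_rule(block):
    # block: 9 cells (row-major 3x3); returns the next state of the center cell.
    center = block[4]
    neighbors = sum(block) - center
    return 1 if (neighbors == 3 or (center == 1 and neighbors == 2)) else 0

# Precompute all 2^9 = 512 neighborhood -> next-center-state mappings once.
TABLE = {block: life_rule(block) for block in product((0, 1), repeat=9)}

def step(grid):
    # Step a small toroidal grid by table lookup instead of re-evaluating rules.
    h, w = len(grid), len(grid[0])
    def block(r, c):
        return tuple(grid[(r + dr) % h][(c + dc) % w]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return [[TABLE[block(r, c)] for c in range(w)] for r in range(h)]

# Blinker oscillator on a 5x5 torus: it flips between horizontal and vertical.
grid = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    grid[2][c] = 1
print(step(grid))  # vertical blinker
```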
The laws of physics could preclude our perfectly pinpointing which universe is ours via fine measurement, but I don't... (read 352 more words →)
> They leave those questions "to the philosophers"
Those rascals. Never leave a question to philosophers unless you're trying to drive up the next century's employment statistics.
But why would there exist something outside a brain that has the same form as an idea? And even if such facts existed, how would ideas in the mind correspond to them? What is the mechanism of correspondence?
The missing abstraction here is isomorphism. Isomorphisms describe things that can be true in multiple systems simultaneously. How would the behavior of a light switch correspond to the behavior of another light switch, and to the behavior of one's mental model of a light switch? An isomorphism common to each of... (read 519 more words →)
You are Elon Musk instead of whoever you actually are.
This is a combination of descriptions that are each only locally accurate in two different worlds; it isn't coherent as a thought experiment about the one world that fits both descriptions.
Human Intelligence Enhancement via Learning:
Intelligence enhancement could entail cognitive enhancements which increase the rate/throughput of cognition or increase memory, or the use of BCI or AI harnesses which offload work/agency or complement existing skills and awareness.
In the vein of strategies which could eventually lead to ASI alignment by leveraging human enhancement, there is an alternative to biological/direct enhancements which attempt to influence cognitive hardware: instead, attempt to externalize one's world model and some of the agency necessary to improve it. This could look like interacting with a system intended to elicit this world model and formalize it as a Bayesian network or an HMM, with some included operations for... (read more)
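As a very rough sketch of what an "externalized world model" could look like (a hypothetical structure with made-up numbers, not a real tool or the proposal itself): beliefs stored as explicit conditional probabilities that the system can query and update, rather than living only in someone's head.

```python
# Hypothetical sketch: a tiny externalized "world model" as a two-node Bayesian
# network, plus one query operation. Structure and numbers are invented.

model = {
    "rain":      {(): 0.2},                      # P(rain)
    "wet_grass": {(True,): 0.9, (False,): 0.1},  # P(wet_grass | rain)
}

def p_rain_given_wet():
    # Bayes' rule: P(rain | wet) = P(wet | rain) P(rain) / P(wet)
    p_rain = model["rain"][()]
    p_wet_given_rain = model["wet_grass"][(True,)]
    p_wet_given_dry = model["wet_grass"][(False,)]
    p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
    return p_wet_given_rain * p_rain / p_wet

print(p_rain_given_wet())  # ~0.69: the externalized model answers queries explicitly
```

The point of externalizing is that the model's pieces become inspectable and improvable by both the human and any assisting system, instead of being inferred indirectly from behavior.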
'Alignment' has been used to refer both to aligning a single AI model and to the harder problem of aligning all AIs. This difference in usage has led to some confusion. Alignment is not solved by aligning a single AI model, but by using a strategy which prevents catastrophic misalignment or misuse from any AI.
There are sometimes deadlines: we could get unacceptable outcomes by failing to make a particular sort of progress before a particular state arrives. Referring to these fields as possibly needing to be fully solved, and referring to them as containing nothing that might need to be solved by a deadline, are both quite misleading.