Bachelor's in general and applied physics. Aspiring AI safety / agent foundations researcher.
I love talking to people, and if you are an alignment researcher we will have at least one topic in common (though I am also very interested in hearing about topics new to me), so I encourage you to book a call with me: https://calendly.com/roman-malov27/new-meeting
Email: roman.malov27@gmail.com
GitHub: https://github.com/RomanMalov
TG channels (in Russian): https://t.me/healwithcomedy, https://t.me/ai_safety_digest
Wouldn't physicalist theories gain complexity because they would have to explain phenomenology differently for every new substrate? Suppose that, in a post-utopian nanotech world, I am constantly changing substrates (while the abstract Turing machine implementing me stays the same). A physicalist theory would then grow in complexity very fast, having to connect complex conscious phenomena to each new physics, while a computationalist theory would only have to connect the implementation. Though I might be wrong about the complexity: Solomonoff induction could exploit the loophole of building physicalist phenomenological bridges via the same Turing machine under different implementations, saving on complexity. But wouldn't it count as computationalist at that point?
I would also add the following property to the list of legitimacy desiderata: the more reasoning went into a decision, the more legitimacy it has.
When I'm struggling to understand what understanding means, I look at what this understanding does. Does this understanding improve my performance in some narrow domain (like solving math problems)? Does this understanding allow me to communicate with other agents more effectively?
Perhaps for this market analogy, there could be some kind of meta-level understanding: if you add this trader to the market, it allows other traders to translate between contexts (or make translation cheaper/more efficient).
What topic is your paper about?
or at least make them less obvious
My eyes are tired of AI-generated images. At this point, I would even prefer Corporate Memphis. It saddens me every time I see an obviously AI-generated image on the website of some good cause (like alignment agendas).
One counterexample is LessWrong’s featured articles, which sometimes use AI-generated backgrounds, but those are usually rather abstract, and their imperfections are less noticeable and actually fit the style.
Some folks on LessWrong already push back really hard on AI-generated text, and I’d like to add some pushback on AI-generated images too.
The market-maker will also happily accept money for nothing, corresponding to .
If I understood the analogy correctly, taking money for free would correspond to ⊢⊤, but taking the goods for free would correspond to .
These theories are typically developed within the domain of arithmetic, which means that in order to talk about sentences, we need to choose a way to encode sentences as numbers. This is now standard practice in computer science (where everything is encoded as binary numbers), but Gödel introduced the idea in this context, so we call the practice Gödel encoding. The current essay uses regular quotes to represent this, for familiarity. Hence, a quoted sentence here represents the Gödel code of that sentence.
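For intuition, here is a toy version of such an encoding, using Gödel's classic prime-power scheme (my own minimal sketch for illustration; the essay's actual encoding and symbol assignment may differ):

```python
# Toy Gödel numbering: map each character of a sentence to a number
# (here just its Unicode code point), then encode the whole sequence
# as a product of prime powers: p_1^c_1 * p_2^c_2 * ...

def first_primes(n):
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_encode(sentence):
    """Encode a sentence (string) as a single natural number."""
    codes = [ord(c) for c in sentence]
    number = 1
    for p, c in zip(first_primes(len(codes)), codes):
        number *= p ** c
    return number

def godel_decode(number):
    """Recover the sentence by reading off prime exponents in order."""
    chars = []
    primes = []
    candidate = 2
    while number > 1:
        if all(candidate % p for p in primes):
            primes.append(candidate)
            exponent = 0
            while number % candidate == 0:
                number //= candidate
                exponent += 1
            chars.append(chr(exponent))
        candidate += 1
    return "".join(chars)
```

The point is only that the map is injective and mechanically invertible, so arithmetic statements about numbers can encode statements about sentences; for example, `godel_decode(godel_encode("0=0"))` returns `"0=0"`.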
Though slightly weaker systems that prove their own consistency do exist: self-verifying theories. These might still retain many of the theorems we know and love.
Footnotes 3 and 4 do not refer to anything in the text.
Do 'transfinite natural number' and 'hyperfinite natural number' mean the same thing in this context?
White House launches a Manhattan Project for AI (sorta).
In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II and was a critical basis for the foundation of the Department of Energy (DOE) and its national laboratories.
I guess I am a bit confused about the process of encoding phenomenological data into bits. If a physicalist is doing it, they might include (from a computationalist perspective) unnecessary detail about the movement of subatomic particles. If a computationalist is doing it, they might (from a physicalist perspective) exclude important detail about EM fields that affect qualia. Or is there a common ground on which both perspectives can agree?
Trying to answer my own question: the obvious way is to have everything encoded, down to every quantum fluctuation. In that case, the computationalist hypothesis has to explain all of the thermal noise in addition to consciousness, which seems unfair to me, since it is a theory of consciousness, not of physics.