Bachelor's in general and applied physics. AI safety / agent foundations researcher wannabe.
I love talking to people, and if you are an alignment researcher we will have at least one topic in common (though I am also very interested in talking about topics that are new to me!), so I encourage you to book a call with me: https://calendly.com/roman-malov27/new-meeting
Email: roman.malov27@gmail.com
GitHub: https://github.com/RomanMalov
TG channels (in Russian): https://t.me/healwithcomedy, https://t.me/ai_safety_digest
Where specifically is that assumption spelled out?
Also, I don't like that if I click on the post in the update feed and then refresh the page, I lose the post.
It might be that something is wrong with my internet connection, but this beige widget isn't loading.
The "will" is supposedly taken away by GLUT, which is possible to create and have a grasp of it for small systems, then people (wrongly) generalize this for all systems including themselves. I'm not claiming that any object that you can't predict has a free will, I'm saying that having ruled out free will from a small system will not imply lack of free will in humans. I'm claiming "physicality no free will" and "simplicity no free will", I'm not claiming "complexity free will".
What do you mean by programs here?
When someone is doing physics (trying to find out what happens to a physical system given its initial conditions), they are performing a transformation from the time-consuming-but-easy-to-express form of connecting the initial conditions to the end result (the physical laws) to a single entry in the giant look-up table that matches initial conditions to end results (the not-time-consuming-but-harder-to-express form), essentially flattening out the time dimension. That creates a feeling that the process they are analyzing is pre-determined, that this giant look-up table already exists. And when they apply this to themselves, it can create a feeling of no control over their own actions, as if those observation-action pairs were drawn from that pre-existing table. But this table doesn't actually exist; they still need to perform the computation to get to the action; there is no way around it. And wherever that process is performed, that process is the person.
In other words, when people do physics on systems simple enough that they can fit in their head the initial conditions, the end result, and the connection between them, they feel a sense of "machineness" about those systems. They can then overgeneralize that feeling to all physical systems (including humans), missing the fact that this feeling is only warranted when they can actually fit the model of the system (and the entries of initial conditions and end results) in their head, which they don't in the case of humans.
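A minimal sketch of that flattening (again a toy example with a made-up update rule): the same question answered by running the law forward step by step versus consulting a precomputed table, where building the table already required performing every computation:

```python
# Toy "physical law": one deterministic update step on a small integer state.
# (The rule and the 64-value state space are made up for illustration.)
def law(x):
    return (3 * x + 1) % 64

# Time-consuming-but-easy-to-express form: run the law forward from the
# initial condition, step by step -- the time dimension is actually traversed.
def simulate(x0, steps=1000):
    x = x0
    for _ in range(steps):
        x = law(x)
    return x

# Not-time-consuming-but-harder-to-express form: a look-up table matching
# every initial condition directly to its end result, time flattened out.
# Note that building it required performing every simulation anyway.
table = {x0: simulate(x0) for x0 in range(64)}

print(simulate(17))  # performing the computation
print(table[17])     # "already determined" -- but only because it was computed
```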
Money is a good approximation for what people value. Value can be destroyed. But what would I have to do to money to destroy the value it represents?
I might feel bad if somebody stole my wallet, but that money hasn't been destroyed; it is just now going to bring utility to another human, and if I (for some weird reason) valued the quality of life of the robber just as much as my own, I wouldn't even think anything bad had happened.
If I actually destroy money, like burning it to ashes, then there will be less money in circulation, which will increase the purchasing power of each remaining banknote, making everyone else a bit richer (and me poorer). So is it balanced in that case?
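Here is the back-of-the-envelope version of that intuition, under the simplifying assumption that the price level is just proportional to the money supply:

```python
# Toy model: assume the price level is proportional to the money supply and
# no real goods are affected by burning banknotes.
money_supply = 1_000_000  # total money in circulation
burned = 100              # banknotes I burn

new_price_level = (money_supply - burned) / money_supply  # relative to old = 1
others_holdings = money_supply - burned                   # everyone else's money
# Purchasing-power gain of everyone else, measured at old prices:
others_gain = others_holdings / new_price_level - others_holdings

print(round(others_gain, 6))  # 100.0 -- exactly the purchasing power I gave up
```

So in this toy model nothing net is lost: the value of my burned banknotes is redistributed to every other money holder in proportion to their holdings.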
Maybe I need to read some economics; please recommend a book that would dissolve the question.
Yes, the point of this post is that low Kolmogorov complexity doesn't automatically yield high interpretability.
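A toy illustration of that gap (my example, using Rule 30 as the stand-in for "simple to describe, hard to interpret"): the generating program is a couple of lines, yet its long-run behavior, e.g. the center column, seems to resist any analysis much cheaper than just running it.

```python
# Rule 30: a one-dimensional cellular automaton whose description is a couple
# of lines (low Kolmogorov complexity), yet whose behavior is hard to interpret.
def rule30_step(cells):
    n = len(cells)
    # new cell = left XOR (center OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 30 + [1] + [0] * 30  # single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```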
i.e. to see fresh comments