Johannes C. Mayer


Here is a model of mine that seems related.

[Edit: Added epistemic status]
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do, and the payoff is large for me.

I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.

There is an implicit assumption in the mind that this concept of "I" is eternal. This has the effect that when somebody says "You suck", it is actually more like they are saying "You sucked in the past, you suck now, and you will suck forever".

In order to emotionally detach yourself from your ideas, you need to sever the links in your mind, between your ideas and this "concept of I". You need to see an idea as an object that is not related to you. Don't see it as "your idea", but just as an idea.

It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".

Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well maybe they don't understand my idea yet, and they criticize their idea of my idea, and not actually my idea. How can I make them understand?" This thought is a lot harder to have while being busy feeling hurt.

Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What is it even supposed to mean that you yourself are the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.

The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism that I just happened to observe the output of. In my experience, this not only helps alleviate pain but also makes you think thoughts that are more useful.

Answer by Johannes C. Mayer, Dec 31, 2022

Here is what I would do, in the hypothetical scenario, where I have taken over the world.

  1. Guard against existential risk.
  2. Make sure that every conscious being I have access to is at least comfortable, as a baseline.
  3. Figure out how to safely self-modify, and become much much much ... much stronger.
  4. Deconfuse myself about what consciousness is, such that I can do something like 'maximize positive experiences and minimize negative experiences in the universe', without it going horribly wrong. I expect that 'maximize positive experiences, minimize negative experiences in the universe' very roughly points in the right direction, and I don't expect that would change after a long reflection. Or after getting a better understanding of consciousness.
  5. Optimize hard for what I think is best.

Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.

[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels. The part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to make lots of random improvements to my computer setup, which are fun to do, but probably not worth the effort. I have been ignoring these parts in the past, and I think their grip on me is stronger because I did not take them into account appropriately in my plans.]

Yes, that is a good point. I think you can totally write a program that, given two lists xs and xs' as input, checks that xs' is sorted and contains exactly the elements of xs. That allows us to specify in code what it means for xs' to be the result of sorting xs.

And yes, I can do this without talking about how to sort a list. The property nearly pins down a unique function: the sorting function. I can completely constrain what the program must compute (at least if we ignore runtime and memory).
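As a sketch of such a checker (in Python, with hypothetical names), the specification says only *what* a correct sort must return, not *how* to compute it:

```python
from collections import Counter

def is_sorted(xs):
    """True iff xs is in ascending order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def is_sorting_of(xs, ys):
    """True iff ys is sorted and is a permutation of xs.

    This is the full specification of sorting, stated without
    any reference to a sorting algorithm."""
    return is_sorted(ys) and Counter(xs) == Counter(ys)

# The specification accepts a correct output ...
assert is_sorting_of([3, 1, 2], [1, 2, 3])
# ... and rejects wrong ones.
assert not is_sorting_of([3, 1, 2], [1, 2])      # element lost
assert not is_sorting_of([3, 1, 2], [2, 1, 3])   # not sorted
```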

To test whether Drake’s circumvention of his short-term memory loss worked via the intended mechanism, I could ask my girlfriend in advance to prompt me once — and only once — to complete the long-term memory scene that I had been practicing. Then I could see if I have a memory of the scene after I fully regain my memory.

Maybe you need to think the thought many times over in order to overwrite the original memory. In your place, I would try to prepare something similar to what Drake did: some mental object that you can reliably retrieve, with a predesigned hole to put information in. To me, it seems like this should not be that hard to prepare. Then, for ideally 30 minutes or so after the surgery, while you don't have short-term memory (though the streaming-algorithm experiment also seems very interesting), you can repeatedly try to insert some specific object into the memory.

Maybe it would make sense for the sake of the experiment to limit yourself to 3 possible objects that could be inserted. Your girlfriend can then choose one randomly after surgery, for you to drill into the memory, by repeatedly thinking about the scene completed with that specific object.

Then, after the 30 minutes, you do something completely different. One hour afterwards, your girlfriend can ask you what the object was that she told you an hour ago (and probably also many times during the first 30 minutes).

Probably it would be best if your girlfriend (or whoever is willing to do this) constantly reminds you during the first 30 minutes or so that you need to imagine the object, probably at least every minute.

I don’t know about making god software, but human software is a lot of trial and error. I have been writing code for close to 40 years. The best I can do is write automated tests to anticipate the kinds of errors I might get. My imagination just isn’t as strong as reality.

I think it is incorrect to say that testing things fully formally is the only alternative to whatever the heck we are currently doing. I mean, there is property-based testing as a first step (which maybe you also refer to with "automated tests", though I would guess you are mainly talking about unit tests).
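The idea of property-based testing can be sketched by hand in a few lines (a miniature, in Python, with made-up names): instead of comparing against hand-picked expected outputs, generate many random inputs and check that the output satisfies the property.

```python
import random
from collections import Counter

def sort_under_test(xs):
    # Stand-in for whatever implementation is being tested.
    return sorted(xs)

def check_sort_property(trials=200, seed=0):
    """Generate random inputs and check the *property* that
    defines sorting: output is ascending and a permutation
    of the input. Returns the number of trials run."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randrange(100) for _ in range(rng.randrange(20))]
        ys = sort_under_test(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:])), (xs, ys)
        assert Counter(xs) == Counter(ys), (xs, ys)
    return trials

check_sort_property()
```

Libraries like QuickCheck (Haskell) or Hypothesis (Python) automate exactly this, and additionally shrink failing inputs to minimal counterexamples.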

Maybe try Haskell or, even better, Idris? The Haskell compiler is very annoying until you realize that it loves you. Each time it annoys you with compile errors, it is actually saying "Look, I found this error here that I am very, very sure you'd agree is an error, so let me not produce machine code that would do things you don't want it to do".

It's very bad at communicating this though, so its words of love usually are blurted out like this:

Don't bother understanding the details, they are not important.

So maybe Haskell's greatest strength, being a very "noisy" compiler, is also its downfall. Nobody likes being told that they are wrong, well at least not until you understand that your goals and the compiler's goals are actually aligned. And the compiler is just better at thinking about certain kinds of things that are harder for you to think about.

In Haskell, you don't really ever try to prove anything about your program in your program. All of this you get by just using the language normally. You can then go one step further with Agda, Idris2, or Lean, and start to prove things about your programs, which easily can get tedious.

But even then when you have dependent types you can just add a lot more information to your types, which makes the compiler able to help you better. Really we could see it as an improvement to how you can tell the compiler what you want.

But you know what you can do in dependent type theory? NOT use dependent type theory! You can use Haskell-style code in Idris whenever that is more convenient.

And by the way, I totally agree that all of these languages I named are probably only ghostly images of what they could truly be. But I guess humanity cares more about making JavaScript run REALLY REALLY FAST.

And I got to be careful not to go there.

I feel you on being distracted by software bugs. I’m one of those guys that reports them, or even code change suggestions (GitHub Pull Requests).

Yeah, I also do this. My experience so far has generally been very positive. It's really cool when I open an issue saying "I think it would be nice if this program did X", and then the program does X 1 or 2 weeks later.

I don't know where to open an issue, though, about the fact that I think it would be better not to build a god we don't comprehend. Maybe I haven't looked hard enough.

Let xs be a finite list of natural numbers. Let xs' be the list that is xs sorted ascendingly.

I could write down in full formality, what it means for a list to be sorted, without ever talking at all about how you would go about calculating xs' given xs. That is the power I am talking about. We can say what something is, without talking about how to get it.

And yes, this still applies to constructive logic, because the property of being sorted is just a logical property of a list. It's a definition. To give a definition, I don't need to talk about what kind of algorithm would produce something that satisfies it. That is completely separate.

And being able to see that as separate is a really useful abstraction, because it hides away many unimportant details.
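To make the separation concrete: the definition alone, combined with brute-force search, already yields a correct but uselessly slow program. A sketch in Python (names are mine):

```python
from itertools import permutations

def is_sorted(xs):
    """The definition: every element is <= its successor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort_by_specification(xs):
    """Search for *some* permutation satisfying the definition.

    Correct by construction but O(n!) slow, which shows that
    the definition of 'sorted' says nothing about how to sort
    efficiently; that is a separate question."""
    for ys in permutations(xs):
        if is_sorted(ys):
            return list(ys)

assert sort_by_specification([3, 1, 2]) == [1, 2, 3]
```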

Computer Science is about how-to-do-X knowledge, as SICP says. Math is about talking about stuff in full formal detail without talking about this how-to-do-X knowledge, which can get very complicated.

How does a modern CPU add two 64-bit floating-point numbers? It certainly doesn't do it in the obvious, simple way, because that would be way too slow. The CPU here illustrates the point as a sort of ultimate instantiation of implementation detail.
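As a small illustration of how much machinery hides behind "just add two floats": a 64-bit float is itself a packed data structure (sign, exponent, and mantissa, per IEEE 754) that the hardware must unpack, align, add, round, and repack. The layout can be inspected from Python:

```python
import struct

def float_fields(x):
    """Unpack the IEEE 754 double layout: 1 sign bit,
    11 exponent bits, 52 mantissa (fraction) bits."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

# 1.0 is stored with the biased exponent 1023 and a zero fraction.
assert float_fields(1.0) == (0, 1023, 0)
# -2.0 flips the sign bit and bumps the exponent by one.
assert float_fields(-2.0) == (1, 1024, 0)
```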

Yes, I totally agree. But I kind of got distracted with this post and wanted to get back to work as quickly as possible. But instead of making it a perpetual draft I pushed a bit further and got some MVP thing. I agree it's not that good, and adding concrete examples, and really making this more like a tutorial would be the first step I would take.

So I am wondering if doing something like this is still useful. Would it be worse if I had made it a perpetual draft or better?

"If you assume software works well, you are dead" because:

  • If you assume this, you will be shocked by how terrible software is every moment you use a computer, and your brain will constantly try to fix it, wasting your time.
  • You should not assume that humanity has it in it to make the god software without your intervention.
  • When making god software: assume the worst.

Mathematical descriptions are powerful because they can be very terse. You can specify only the properties of a system and still get a well-defined system.

This is in contrast to writing algorithms and data structures where you need to get concrete implementations of the algorithms and data structures to get a full description.

The Model-View-Controller architecture is very powerful. It allows us to separate concerns.

For example, if we want to implement an algorithm, we can write down only the data structures and algorithms that are used.

We might want to visualize the steps that the algorithm is performing, but this can be separated from the actual running of the algorithm.

If the algorithm is interactive, then instead of putting the interaction logic in the algorithm, which could be thought of as the rules of the world, we instead implement functionality that directly changes the underlying data that the original algorithm is working on. These could be parameters to the original algorithm, which would modify the runtime behavior (e.g. we could change the maximum search depth for BFS). It could also change the current data the algorithm is working on (e.g. in quicksort we could change the pivot, or smaller_than_list just before they are set). The distinction is somewhat arbitrary. If we were to step through some Python code with a debugger, we could just set any variables in the program.
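A minimal sketch of this separation (in Python, with hypothetical names): the algorithm is the model, parameterized by its runtime behavior, and a view can observe each step through a hook without the algorithm knowing anything about rendering.

```python
from collections import deque

def bfs(graph, start, max_depth, on_visit=None):
    """Model: breadth-first search, parameterized by max_depth
    (a runtime-behavior parameter, as described above).

    `on_visit` is an optional hook; a visualizer can attach one
    without the algorithm knowing anything about rendering."""
    seen = {start}
    queue = deque([(start, 0)])
    order = []
    while queue:
        node, depth = queue.popleft()
        order.append(node)
        if on_visit:
            on_visit(node, depth)  # the "view" observes each step
        if depth < max_depth:
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
# The same model runs headless ...
assert bfs(graph, "a", max_depth=1) == ["a", "b", "c"]
# ... or with a "view" attached, without changing the algorithm.
log = []
bfs(graph, "a", max_depth=2, on_visit=lambda n, d: log.append((n, d)))
assert log == [("a", 0), ("b", 1), ("c", 1), ("d", 2)]
```

An interactive controller would work the same way: it mutates the underlying data (the graph, or max_depth) rather than living inside the search loop.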

Usually, people think of something much "narrower" when they think about the Model-View-Controller-Architecture.

We could also do the same for a mathematical description. We can write down some mathematically well-defined thing and then separately think about how we can visualize this thing. And then again, separately, we can think about how we would interact with this thing.
