redlizard

redlizard's Comments

An Orthodox Case Against Utility Functions

I do not think you are selling a strawman, but the notion that a utility function should be computable seems to me to be completely absurd. It seems like a confusion born from not understanding what computability means in practice.

Say I have a computer that will simulate an arbitrary Turing machine T, and will award me one utilon when that machine halts, and do nothing for me until that happens. With some clever cryptocurrency scheme, this is a scenario I could actually build today. My utility function ought plausibly to have a term in it that assigns a positive value to the computer simulating a halting Turing machine, and zero to the computer simulating a non-halting Turing machine. Yet the assumption of utility function computability would rule out this very sensible desire structure.
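To make the obstacle concrete, here is a minimal sketch (a toy illustration of my own, not anything from the post; the step function and the 1000-step halting bound are made up purely so the program runs). The only thing a bounded agent can compute is a running lower bound on this utility term, obtained by simulating the machine for some finite number of steps; evaluating the term exactly, for an arbitrary machine T, is exactly the halting problem.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for one step of the simulated Turing machine T.  Here it
 * simply halts after 1000 steps, purely so the sketch runs; the scenario in
 * the text concerns an arbitrary T, for which no such bound is known. */
static long steps_taken = 0;

static bool tm_step(void) {
    return ++steps_taken >= 1000;   /* true once the machine has halted */
}

/* The only computable thing is a running lower bound on the utility term:
 * simulate T for `budget` further steps and report 1 if it has halted so
 * far, 0 otherwise.  For a halting T some finite budget eventually yields 1,
 * but no finite budget can ever certify the answer 0.  The exact term
 * (1 iff T ever halts) is the halting problem, hence not computable. */
static int utility_so_far(long budget) {
    for (long i = 0; i < budget; i++) {
        if (tm_step())
            return 1;               /* halted: the utilon is awarded */
    }
    return 0;                       /* undecided: T might still halt later */
}

int main(void) {
    printf("lower bound after  500 steps: %d\n", utility_so_far(500));
    printf("lower bound after 1500 steps: %d\n", utility_so_far(1000));
    return 0;
}
```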

If I live in a Conway's Game of Life universe, there may be some chunk of universe somewhere that will eventually end up destroying all life (in the biological sense, not the Game of Life sense) in my universe. I assign lower utility to universes where this is the case than to those where it is not. Is that computable? No.

More prosaically, as far as I currently understand, the universe we actually live in seems to be continuous in nature, and its state may not be describable even in principle with a finite number of bits. And even if it is, I do not actually know this, which means my utility function is also over potential universes (which, as far as I know, might be the one I live in) that require an infinite number of state bits. Why in the world would one expect a utility function over an uncountable domain to be computable?

As far as I can see, the motivation for requiring a utility function to be computable is that this would make optimization for said utility function a great deal easier. Certainly this is true; there are powerful optimization techniques that apply only to computable utility functions, which an optimizer with an uncomputable utility function does not have access to in their full form. But the utility function is not up for grabs; the fact that life will be easier for me if I want a certain thing should not be taken as an indication that that is what I want! This seems to me like the cart-before-horse error of trying to interpret the problem as one that is easier to solve, rather than as the problem one actually wants solved.

> One argument is that U() should be computable because the agent has to be able to use it in computations. If you can't evaluate U(), how are you supposed to use it? If U() exists as an actual module somewhere in the brain, how is it supposed to be implemented?

This line of thought illustrates very well the (I claim) grossly mistaken intuition behind assuming computability. If you can't evaluate U() perfectly, then perhaps what your brain is doing is only an approximation of what you really want, and perhaps the same constraint will hold for any greater mind you can devise. But that does not mean that what your brain is optimizing for is necessarily what you actually want! There is no requirement at all that your brain is a perfect judge of the desirability of the world it's looking at, after all (and we know for a fact that it does a far from perfect job at this).

Book Review: Design Principles of Biological Circuits

This second claim sounds a bit trivial to me. Perhaps it is my reverse-engineering background, but I have always taken it for granted that approximately any mechanism is understandable by a clever human given enough effort.

This book [and your review] explains a number of particular pieces of understanding of biological systems in detail, which is super interesting; but the mere point that these things can be understood with sufficient study almost feels axiomatic. Ignorance is in the map, not the territory; there are no confusing phenomena, only minds confused by phenomena; etc. Even when I knew nothing about this biological machinery, I never imagined for a second that no understanding was attainable in principle. I only saw *systems that are not optimized for ease of understanding*, and therefore presumably more challenging to understand than systems designed by human engineers which *are* optimized for ease of understanding.

But I get the impression that the real point you are shooting for (and possibly, the point the book is shooting for) is a stronger point than this. Not so much "there is understanding to be had here, if you look deeply enough", but rather a claim about what *particular type of structure* we are likely to find, and how this may or may not conform to the type of structure that humans are trained to look for.

Is this true? If it is, could you expand on this distinction?

What Programming Language Characteristics Would Allow Provably Safe AI?

If you are going to include formal proofs with your AI showing that the code does what it's supposed to, in the style of Coq and friends, then the characteristics of traditionally unsafe languages are not a deal-breaker. You can totally write provably correct and safe code in C, and you don't need to restrict yourself to a sharply limited version of the language either. You just need to prove that you are doing something sensible each time you perform a potentially unsafe action, such as accessing memory through a pointer.

This slows things down and adds to your burden of proof, but not really by that much. It's a few more invariants you need to carry around with you throughout your proofs. Where in a safer language you may have to prove that your Foo list is still sorted after a certain operation, in C you will additionally have to prove that your pointer topology is still what you want after that operation. No big deal. In particular, a mature toolkit for proving properties about C programs will presumably have tools for automating away the 99% of trivial proof obligations involving pointer topology, leaving something for you to prove only when you are doing something clever.
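As a toy illustration of what those extra obligations look like, here is a small C function annotated in the style of Frama-C's ACSL. The function and its specification are my own sketch rather than anything from the post, and a real run of such a tool might want the annotations phrased slightly differently; the point is only that the memory-safety side conditions (the \valid_read clause) sit alongside the object-level specification, and are exactly the kind of thing a mature toolchain discharges automatically.

```c
#include <stddef.h>

/* The \valid_read clause is the "unsafe language" part of the burden: it
 * obliges the verifier to show that every a[i] access below is in bounds.
 * The ensures clause is the object-level property we actually care about. */
/*@ requires n > 0;
    requires \valid_read(a + (0 .. n-1));
    assigns  \nothing;
    ensures  \forall integer k; 0 <= k < n ==> \result >= a[k];
*/
int max_of(const int *a, size_t n) {
    int best = a[0];
    size_t i;
    /*@ loop invariant 1 <= i <= n;
        loop invariant \forall integer k; 0 <= k < i ==> best >= a[k];
        loop assigns i, best;
        loop variant n - i;
    */
    for (i = 1; i < n; i++) {
        if (a[i] > best)
            best = a[i];   /* in bounds, thanks to the \valid_read precondition */
    }
    return best;
}
```

In a memory-safe language the \valid_read clause and the bounds reasoning simply would not appear, but the functional part of the specification would be unchanged.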

For any such property that you can screw up, a language that will not allow you to screw it up in the first place will make your life easier and your proofs simpler, which is why OCaml is more popular as a vehicle for proven correct code than C. But if you are proving the correctness of every aspect of your program anyway, this is a pretty minor gain; the amount of stuff there is to prove about your object-level algorithms will be vastly greater than the requirements added by the lack of safety of your programming language. If only because there will be specialized tools for rapidly dealing with the latter, but not the former.

Not all those specialized tools exist just yet. Currently, program-correctness proof systems are pretty good at proving properties about functions [in the mathematical sense] operating on data structured in a way that mathematicians like to work with, and a bit rubbish at everything else people use software to do; it is no coincidence that Coq, Agda, Idris, Isabelle, and friends all work primarily with purely functional languages using some form of constructed types as a data model. But techniques for dealing with computing applications in a broader sense will have to be on the proof-technology roadmap sooner or later, if correctness proofs are ever going to fulfill their promise outside selected examples. And when they do, there will not be a big difference between programming in C and programming in OCaml as far as proving correctness is concerned.

tl;dr: I don't think language safety is going to be more than a rounding error if you want to prove the correctness of a large piece of software down to the last bit, once all the techniques for doing that sort of thing at all are in place. The amount of program-specific logic in need of manual analysis is vastly greater than the amount of boilerplate a safe language can avoid.

A Personal Rationality Wishlist
> The point is: if people understood how their bicycle worked, they’d be able to draw one even without having to literally have one in front of them as they drew it!

I don't think this is actually true. Turning a conceptual understanding into an accurate drawing is a nontrivial skill. It requires substantial spatial visualization ability, as well as quite a bit of drawing skill -- someone who is not very skilled at drawing, like myself, might poorly draw one part of a bike, want to add two components to it, and then realize that there is no way to add a third component to the poor drawing without turning it into an illegible mess of ink. There is a reason technical drawing is an explicit course in engineering education.

Yesterday I built a nontrivial construction, one that I understand in great detail and personally designed in OpenSCAD beforehand, which I nonetheless could not put on paper by hand in a way that is even vaguely mechanically accurate without a visual reference (be it the actual construction or the CAD model). At least, not in one try -- I might manage if I threw away the first three sketches.

Change A View: An interesting online community
> Even if such a person decides to do this, they will eventually get fed up and leave.

Will they, necessarily? The structure of the problem you describe sounds a lot like any sort of teaching, which involves a lot of finding out what a student misunderstands about a particular topic and then fixing that, even if you clear up that same misunderstanding for a different student every week. There are lots of people who do not get fed up with that. What makes this so different?

Pecking Order and Flight Leadership
> Pigeons have stable, transitive hierarchies of flight leadership, and they have stable pecking order hierarchies, and these hierarchies do not correlate.

> one of the things you can do with the power to give instructions is to instruct others to give you more goodies.

It occurs to me that leading a flight is an unusual instruction-giving power, in that it comes with almost zero opportunities to divert resources in your own direction. Choosing where to fly and when to land affects food options, but it does not affect your food options relative to your flight-mates. Most leadership jobs give many more opportunities to turn the position into zero-sum personal benefits.

I suspect this is not a coincidence. Can anyone think of a case where the pecking order and the leadership hierarchy are uncorrelated in a situation where the leadership is exploitable for pecking opportunities?

Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?"

General rationality question that should not be taken to reflect any particular opinion of mine on the topic at hand:

At what point should "we can't find any knowledgeable critics offering meaningful criticism against <position>" be interpreted as substantial evidence in favor of <position>, and prompt one to update accordingly?

Good arguments against "cultural appropriation"
> Having lost this signaling tool, we are that much poorer.

Are we? Signaling value is both a blessing and a curse, and my impression is that it is generally zero-sum. Personally, I consider myself *richer* when a mundane activity or lifestyle choice loses its signaling association, for it means I am now less restricted in applying it.

Fixed Point Exercises

At the time of writing, hovering over either of the two spoilers in the main post reveals both. Is that intentional? It does not seem desirable.

The funnel of human experience

I think there is about a three-orders-of-magnitude difference in difficulty between "inventing calculus where there was none before" and "learning calculus from a textbook explanation carefully laid out in the optimal order, with each component polished over the centuries into the easiest possible explanation, and with all the barriers to understanding carefully paved over to construct the smoothest explanatory trajectory possible".

(Yes, "three orders of magnitude" is an actual attempt to estimate something, insofar as that is at all meaningful for an unquantified gut instinct; it's not just something I said for rhetorical effect.)
