I have a habit of reading footnotes as soon as they're linked, and your footnote says that you won with queen odds before the point where you ask the reader to guess what odds you'd win at, creating a minor spoiler.

Is it important that negentropy be the result of subtracting from the maximum entropy? It seemed a sensible choice, up until it introduced infinities and made every state's negentropy infinite. (Also, if you subtract from 0, then two identical states have the same negentropy even in different systems. I'm unsure whether that's useful or harmful.)

Though perhaps that's important for noting that reducing an infinite system to a finite macrostate is an infinite reduction? I'm not sure I understand how (or perhaps when) that's more useful than defining negentropy as subtracted from 0, so that finite macrostates have finite negentropy and infinite macrostates have negative-infinite negentropy (showing that you really haven't reduced them at all, which, as far as I understand infinities, you haven't, by definition).
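To be concrete about the two conventions I'm contrasting (this is just my reading, with S for a macrostate's entropy and S_max for the system's maximum entropy):

```latex
J_{\max} = S_{\max} - S   % negentropy measured down from maximum entropy
J_{0}    = 0 - S = -S     % negentropy measured down from zero
```

In an infinite system, S_max is infinite, so J_max is infinite for every finite-entropy macrostate, while J_0 stays finite there and only goes to negative infinity for macrostates whose entropy is itself infinite.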

Back in Reward is not the optimization target, I wrote a comment, which received a (small, I guess) amount of disagreement.

I intended the important part of that comment to be the link to Adaptation-Executers, not Fitness-Maximizers (and, more precisely, the concept named in that title, less the things like superstimuli that the article also mentions). But the disagreement is making me wonder whether I've misunderstood both of these posts more than I thought. Is there not actually much relation between those concepts?

There was, obviously, other content to that comment, and that could be the source of the disagreement. But all I have to go on is that there was disagreement, and I think it would be bad for my understanding of the issue to assume that's where the disagreement was if it wasn't.

When I tried to answer for myself why we don't trade with ants, communication was one of the first things I considered (I can't remember what was actually first). But I worry it may be more analogous to AI than argued here.

We sort of can communicate with ants. We know to some degree what makes them tick; it's just that we mostly use that communication to lie to them and tell them this poison is actually really tasty. The issue may be less that communication is impossible, and more that it's too costly to figure out, so no one tries to become Antman even if they could cut their janitorial costs by a factor of 7.

The next thought I had was that, if I were trying to get ants to clean my room, the easiest route is probably not figuring out how to communicate, but breeding ants with different behavior (e.g. searching for small bits of food instead of large bits; that seems harder than the sentence suggests, but still probably easier than learning to speak ant). I don't like what that would be analogous to in human-AI interactions.

I think it's possible that an AI could fruitfully trade with humans. While it lacks a body, posting an ad on Craigslist to get someone to move something heavy is probably easier than figuring out how to hijack a wifi-enabled crane or something. 

But I don't know how quickly that changes. If the AI is trying to build a sci-fi gadget, it's possible that the instruction set for building it is long or complicated enough that a human has trouble following it accurately. The cost of writing intuitive instructions, and of designing the gadget so that idiot-proof construction is even possible, could be high enough that the AI is better off doing it itself.

I interpret OP (though this is colored by the fact that I was thinking it before I read the post) as saying Adaptation-Executers, not Fitness-Maximizers, but about ML. At which point you can open up the reference category to all organisms.

Gradient descent isn't really different from what evolution does. It's just a bit faster and takes a slightly more direct line. Importantly, it's not any more capable of avoiding local optima (per se, at least).
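As a minimal sketch of what I mean (a toy example, not a claim about any real training run): plain gradient descent settles into whichever basin its starting point happens to sit in, the same way selection does.

```python
# Toy illustration: gradient descent on f(x) = x^4 - 3x^2 + x, which has two
# basins. Starting at x = 1.5, it converges to the shallow local minimum near
# x ≈ 1.13 and never reaches the deeper minimum near x ≈ -1.30.

def grad(x):
    return 4 * x**3 - 6 * x + 1  # derivative of x^4 - 3x^2 + x

x = 1.5       # starting point, chosen to land in the shallow basin
lr = 0.01     # learning rate
for _ in range(10_000):
    x -= lr * grad(x)

print(round(x, 2))  # ~1.13, the local (not global) minimum
```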

So, I want to note a few things. The original Eliezer post was intended to argue against this line of reasoning:

I occasionally run into people who say something like, "There's a theoretical limit on how much you can deduce about the outside world, given a finite amount of sensory data."

He didn't worry about compute, because that's not a barrier to the theoretical limit. And in his story, the entire human civilization had decades to work on the problem.

But you're right, in a practical world, compute is important.

I feel like you're trying to make this take as much compute as possible.

Since you talked about headers, I feel I need to reiterate that when we are talking to a neural network, we do not add that extra data. The goal is to communicate with the network, so we intentionally put the data in easier-to-understand formats.

In the practical cases where this comes up (e.g. a nascent superintelligence figuring out physics faster than we expect), we will probably also be inputting data in an easy-to-understand format.

Similarly, I expect you don't need to check every possible esoteric format. The likelihood of the image using an encoding like 61 bits per pixel, with 2 for red, 54 for green, and 5 for blue is just very low a priori. I admit I'm not sure whether restricting to "reasonable" formats would cut the possibilities down into the computable realm (it obviously depends on your definition of reasonable, though part of me suspects you could, with a lot of work, build an objective likelihood score for various encodings). But it's certainly a lot harder to argue that it wouldn't than to just say "f(x) = (63 pick x), grows very fast."

Though, since I don't have a good sense of whether the "reasonable" formats would come to a more computable number, I should update in your direction. (I tried to look into something loosely analogous: the 200 most common passwords cover a little over 4% of all passwords in use, which isn't enough for me to feel comfortable expecting that the 1,000 most "likely" formats would cover a significant share of the probability space.)

(Also potentially important: modern neural nets don't really receive things as a string of bits, but as a string of numbers, nicely split up into separate nodes. Yes, those numbers are made of bits, but they're floating-point numbers, and the way neural nets interact with them is through floating-point operations, so I don't think the net actually touches the bit representation of a number in any meaningful way.)

"you're jumping to the conclusion that you can reliably differentiate between..."

I think you absolutely can, and the idea was already described earlier.

You pay attention to regularities in the data. In most non-random images, pixels near each other are similar. In an MxN image, the pixel below a[i] is a[i+M], whereas in an NxM image it's a[i+N]. If, across the whole image, the average difference between a[i] and a[i+M] is smaller than the average difference between a[i] and a[i+N], it's more likely an MxN image. I expect you could find the resolution by searching all possible resolutions from 1x<length> to <length>x1 and picking the one that minimizes the average distance between "adjacent" pixels.
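Here's a rough sketch of that search (hypothetical code, assuming you've been handed a flat NumPy array of grayscale values with no header):

```python
import numpy as np

def guess_width(pixels):
    """Guess an image's width by minimizing the average difference between
    vertically adjacent pixels. `pixels` is a flat 1-D array of values."""
    n = len(pixels)
    best_width, best_score = 1, float("inf")
    for width in range(1, n):
        if n % width != 0:      # only consider widths that tile the data exactly
            continue
        rows = pixels.astype(float).reshape(-1, width)
        if rows.shape[0] < 2:   # need at least two rows to compare
            continue
        score = np.abs(np.diff(rows, axis=0)).mean()  # avg |pixel - pixel below|
        if score < best_score:
            best_width, best_score = width, score
    return best_width
```

For a 100x75 image you'd expect width 100 to win, and the transposed guess of 75 to score noticeably worse.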

Similarly (though you'd likely do this first), you can tell the difference between RGB and RGBA. If you have (255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0), it's probably four red pixels in RGB, and not a fully opaque red pixel, followed by a fully transparent blue pixel, followed by a fully transparent green pixel in RGBA. It could be two pixels that are mostly red and slightly green in 16-bit RGB, though. I'm not sure how you could piece that apart.

Aliens would probably use a different encoding. We don't know what the rods and cones in their eye-equivalents are; maybe they respond to different colors. Maybe it's not Red, Green, Blue but Purple, Chartreuse, Infrared. I'm not sure this matters. It just means your eggplants look red.

I think even if it had 5 (or 6, or 20) channels, this regularity would be borne out: the difference between value i and value i+5 would be smaller than the difference between value i and values i+1, i+2, i+3, or i+4.
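A sketch of that stride idea (again hypothetical, and it folds in the RGB-vs-RGBA question too, since whatever the channel count is, values one whole pixel apart should agree more than values one channel apart):

```python
import numpy as np

def guess_channel_count(raw, max_channels=8):
    """Guess channels-per-pixel by finding the stride at which neighbouring
    values agree best. `raw` is the flat stream of channel values."""
    raw = raw.astype(float)
    scores = {}
    for stride in range(1, max_channels + 1):
        # average |value - value `stride` positions later|
        scores[stride] = np.abs(raw[stride:] - raw[:-stride]).mean()
    return min(scores, key=scores.get)
```

On the (255, 0, 0, 255, ...) example above, stride 3 scores best (the differences are all zero), pointing at RGB rather than RGBA.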

Now, there's still a lot that this doesn't get you yet. But given that there are ways to figure those out, I kinda think I should expect there are ways to figure out the other things too, even if I don't know them.

I do also think it's important to zoom out to the original point. Eliezer posed this as an idea about AGI. We currently sometimes feed images to our AIs, and when we do, we feed them as raw RGB data, not encoded, because we know encoding would make it harder for the AI to figure out. I think it would be very weird, if we were trying to train an AI, to send it compressed video; it's much more likely that we would in fact send raw RGB values frame by frame.

I will also say that the original claim (Eliezer's, not the one at the top of this thread) was not physics from one frame, but physics from something like 3 frames, so that you get motion and acceleration. 4 frames gets you third derivatives, which, in our world, don't matter much. Having multiple frames also helps with things like the 3D-to-2D projection, since motion and occlusion are hints at that.
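To spell that out: with frames Δt apart and an object at positions x1, x2, x3 in three successive frames, the standard finite-difference estimates are:

```latex
v \approx \frac{x_3 - x_2}{\Delta t}, \qquad
a \approx \frac{x_3 - 2x_2 + x_1}{\Delta t^2}
```

A third derivative needs a fourth frame, roughly (x4 - 3x3 + 3x2 - x1)/Δt³, and, as noted, third derivatives don't matter much in our world.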

And I think the whole question is "does this image look reasonable?", which, you're right, is not a rigorous mathematical concept. But "'looks reasonable' is not a rigorous concept" doesn't get followed by "and therefore this is impossible." Above are some mathematical descriptions of what "reasonable" means in certain contexts. Rendering a 100x75 image as 75x100 will not "look reasonable", but it's not beyond thinking and math to pin down what you mean by that.

"the addition of an unemployable worker causes ... the worker's Shapley values to drop to $208.33 (from $250)."

I would emphasize here that "the workers'" includes the unemployed one. It was not obvious to me until about halfway through the next paragraph, and I think that paragraph would read better with this in mind from the start.

I'd be interested to know why you think that.

I'd be further interested if you would endorse the statement that your proposed plan would fully bridge that gap.

And if you wouldn't, I'd ask if that helps illustrate the issue.

It seems odd to suggest that the AI wouldn't kill us because it needs our supply chain. If I had the choice between "Be shut down because I'm misaligned" (or "Be reprogrammed to be aligned" if not corrigible) and "Have to reconstruct the economy from the remnants of human civilization," I think I'm more likely to achieve my goals by trying to reconstruct the economy.

So if your argument was meant to say "we'll have time to do alignment while the AI is still reliant on the human supply chain," then I don't think it works. A functional AGI would rather destroy the supply chain and probably fail at its goals than be realigned and definitely fail at its goals.

(Also, and I feel this is mostly a minor thing, but I don't really understand your reference class for novel technologies. Why is the time measured from "proof-of-concept submarine" to "capable of sinking a warship"? Or from "theory of relativity" to "atom bomb being dropped"? Maybe that was just the data available, but why isn't it "Wright brothers start attempting heavier-than-air flight" to "Wright brothers achieve heavier-than-air flight"? Because when reading, my mind immediately wondered how much of the 36-year gap on mRNA vaccines was from "here's a cool idea" to "here's a use case", rather than from "here's a cool idea" to "we can actually do that".)
