Comments

London L. · 10mo

If the color of the number is considered to be an intrinsic property of the number, then under the Bruce Framework, yes, |C|<|B| and |C|=|A| and |B|=|A|.

London L. · 10mo

So then, because the winner alternates evenly between the two sets, you can intuitively guess that they are equal?

London L. · 10mo

I like this a lot! I'm curious, though: in your head, what are you doing when you're considering an "infinite extent"? My guess is that you're actually doing something like the "markers" idea (though I could be wrong), where you're inherently matching the extent on A to the extent on B for smaller-than-infinity numbers, and then generalizing those results.

For example, when thinking through your example of alternating pairs, I check that when the extent is 3, it basically contains the 2 and everything lower, so I mark 3 and 2 as being the same, and then I do the density calculation. Matching 3 to 2, and then 7 to 6, I see that each set always has 2 elements in each section, so I conclude that they have an equal number of elements.

Does this "matching" idea make sense? Do you think it's what you do? If not, what are your mental images or concepts like when trying to understand what happens at the "infinite extent"? (I imagine you're not immediately drawing conclusions from imagining the infinite case, and are instead building up something like a sequence limit or pattern identification among lower values, but I could be wrong.)

London L. · 10mo

Yep, absolutely! It was actually through explaining Hilbert's Hotel that Bruce helped me come up with the Bruce Framework.

I do think it is odd, though, that the mathematical notion of cardinality doesn't solve the Thanos Problem, and I'm worried that AI systems that understand math theoretically well but not practically well will consider the loss of half an infinite set to be no loss at all, similar to how, if you understand Hilbert's Hotel, you'll believe that adding twice the number of guests is never an issue.

London L. · 10mo

I'm posting this here because I find that I don't get the feedback or discussion on Medium that I need to improve my ideas. So I hope people leave comments here so we can discuss this further.

Personally, I've come across two other models of how humans intuitively compare infinities.

One of them is that humans use a notion of "density". For example, positive multiples of three (3, 6, 9, 12, etc.) seem like a smaller set than all positive numbers (1, 2, 3, etc.). You could use the Bruce Framework here, but I think what we're actually doing is something closer to evaluating the density of the sets. We notice that 3 and 6 and 9 are in both sets (similar to the Bruce Framework), but then we look to see how many numbers sit between those "markers". In the set of all positive numbers, each marker section contains 3 numbers (3, 4, 5 and then 6, 7, 8), whereas in the set of positive multiples of three each section contains only 1 number (3, and then we immediately go to 6). Thus, the cardinality of the positive numbers must be 3 times bigger than the cardinality of the positive multiples of three.
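
To make the counting concrete, here is a toy sketch of the density idea (my own illustration, with made-up helper names, not anything from the original post): count how many elements of each set fall below a growing cutoff and compare the counts.

```python
# Toy sketch of the "density" intuition (my own illustration): count the elements
# of each set below a growing cutoff and compare the counts. The ratio settles at
# 3, matching the "3 times bigger" feel described above.

def count_below(is_member, cutoff):
    """Count positive integers n <= cutoff for which is_member(n) is True."""
    return sum(1 for n in range(1, cutoff + 1) if is_member(n))

for cutoff in (30, 300, 3000):
    all_positives = count_below(lambda n: True, cutoff)         # 1, 2, 3, ...
    multiples_of_3 = count_below(lambda n: n % 3 == 0, cutoff)  # 3, 6, 9, ...
    print(cutoff, all_positives / multiples_of_3)               # stays at 3.0
```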

If you expand this sort of thinking further, you get to a more general "meta-model" of how humans intuitively compare sets, which is that we seem to build simple and easy functions to map items in one set to items in the other set. Sometimes the simple function is "are these inherently equal", as in the Bruce Framework. Other times it's an "obvious" function like converting a negative number to a positive number. Once we have this mapping of "markers", we then use density to compare the sizes of the two sets.
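
And a tiny sketch of that mapping step (again my own illustration, not from the original post): to compare the negative integers with the positive integers, the "obvious" function just flips the sign, pairing the markers one-to-one before the density comparison even starts.

```python
# Sketch of the "simple mapping function" step (my own illustration): pair each
# negative integer with a positive "marker" by flipping the sign. With nothing
# left unmatched between markers, the density comparison comes out equal.

def marker_for(n):
    return -n  # the "obvious" mapping from a negative number to a positive one

negatives = list(range(-1, -11, -1))          # -1, -2, ..., -10
markers = [marker_for(n) for n in negatives]  # 1, 2, ..., 10
assert sorted(markers) == list(range(1, 11))  # a clean one-to-one pairing
print(list(zip(negatives, markers)))
```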

I'm not 100% sure if density is the only intuitive metric we use, but from the toy examples in my head it is. What are your thoughts? Are there any infinite sets (numbers or objects or anything) where your intuitive calculation doesn't involve pairing up markers between the sets and then evaluating the density between those markers?

To (rather gruesomely) link this back to the dog analogy, RL is more like asking 100 dogs to sit, breeding the dogs which do sit and killing those which don't. Over time, you will have a dog that can sit on command. No dog ever gets given a biscuit.

The phrasing I find most clear is this: Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation.


I was talking through this with an AGI Safety group today, and while I think the selection lens is helpful and illustrates your point well, I don't think the analogy quoted above is accurate in an important respect.

The analogy you give is very similar to genetic algorithms, where the models that get high reward are blended with each other and then mutated randomly. The only process pushing performance higher is that blending and mutating, which doesn't require any knowledge of how to improve the models' performance.

In other words, carrying out the breeding process requires no knowledge about what will make the dog more likely to sit. You just breed the ones that do sit. In essence, not even the breeder "gets the connection between the dogs' genetics and sitting".
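
To make the contrast concrete, here is a toy sketch of that kind of selection-only process (my own illustration, not code from the article): candidates are scored, the top half is kept, and new candidates come from blending and randomly mutating the survivors. Nothing in the loop uses any knowledge of why a candidate scored well.

```python
# Toy selection-only loop (my own illustration): score, keep the best, blend and
# mutate. No gradient, and no model of why a candidate scores well.
import random

def reward(candidate):
    # Stand-in scoring rule: higher reward the closer the candidate is to 5.0.
    return -abs(candidate - 5.0)

population = [random.uniform(-10.0, 10.0) for _ in range(100)]
for generation in range(50):
    # "Breed" the top half, "kill" the rest.
    survivors = sorted(population, key=reward, reverse=True)[:50]
    # Offspring: blend two random survivors, then mutate randomly.
    population = [
        (random.choice(survivors) + random.choice(survivors)) / 2
        + random.gauss(0, 0.5)
        for _ in range(100)
    ]

print(max(population, key=reward))  # ends up near 5.0 without ever using a gradient
```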

However, at the start of your article, you describe gradient descent, which requires knowledge about how to get the models to perform better at the task. You need the gradient (the relationship between model parameters and reward at the level of tiny shifts to the model parameters) in order to perform gradient descent.

In gradient descent, just as in genetic algorithms, the model itself doesn't get access to the gradient or to information about the reward. But you, the trainer, still need to know the relationship between behavior and reward in order to do the update. In essence, the model trainer "gets the connection between the models' parameters and performance".
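
Continuing the toy illustration (again my own sketch, not the article's), here is a gradient-style update on the same stand-in task: the trainer explicitly uses the local relationship between the parameter and the reward, estimated with a finite-difference probe, to decide how to change the parameter.

```python
# Toy gradient-style loop (my own illustration): the update explicitly uses an
# estimate of how tiny shifts in the parameter change the reward.

def reward(param):
    # Same stand-in scoring rule as before: higher reward the closer to 5.0.
    return -abs(param - 5.0)

param, step, eps = -8.0, 0.5, 1e-4
for _ in range(100):
    # Estimate d(reward)/d(param): the "connection" the trainer has to know about.
    grad = (reward(param + eps) - reward(param - eps)) / (2 * eps)
    param += step * grad  # shift the parameter in the direction that raises reward

print(param)  # converges near 5.0, but only because the update used the gradient
```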

So I think a revised version of the dog analogy that takes this connection information into account might be something more like:
1. Ask a single dog to sit, and monitor its brain activity during and afterwards.
2. Search for dogs with brain structures you think would lead them to sit, based on what you observed in the first dog's brain.
3. Ask one of those dogs to sit and monitor its brain activity.
4. Based on what you find, search for an even better brain structure for sitting, and so on.
At the end, you'll likely have found a dog that sits when you ask it to.

A more violent version would be:
1. Ask a single dog to sit and monitor its brain activity.
2. Kill the dog and modify its brain in a way you think would make it more likely to sit. Also remove all memories of its past life. (This is equivalent to trying to find a dog with a brain that's more conducive to sitting.)
3. Reinsert the new brain into the dog, ask it to sit again, and monitor its brain activity.
4. Repeat the process until the dog sits consistently.

To reiterate, with your breeding analogy, the person doing the breeding doesn't need to know anything about how the dogs' brains relate to sitting. They just breed them and hope for the best, just like in genetic algorithms. However, with this brain modification analogy, you do need to know how the dog's brain relates to sitting. You modify the brain in a way that you think will make it better, just like in gradient descent.

I'm not 100% sure why I think that this is an important distinction, but I do, so I figured I'd share it, in hopes of making the analogy less wrong.

When you say "the dog metaphor" do you mean the original one with the biscuit, or the later one with the killing and breeding?

It is! You (and others who agree with this) might be interested in this competition (https://futureoflife.org/project/worldbuilding-competition/), which aims to create more positive stories about AI and may help shift pop culture in a positive direction.

I was talking with a friend today who is in a class where you need to know the programming language C in order to take it. Now that ChatGPT is available, I told them it probably wasn't that big of an issue, since they could probably have ChatGPT teach them C as they go through the class. Just one semester ago (before ChatGPT), I probably would have told them to drop the class.

My personal analogy has been that these chatbots are a structural speed-up for humans, similar to what Google Docs and Drive were for working on documents and files with other people: a free service that everyone just has access to now, for talking through ideas or learning things. It's ethical to use, and if you don't use it, you probably won't be as capable as those who do.

Small typo in point (-2): "Less than fifty percent change" --> "Less than 50 percent chance"
