
I think the main difficulty is that there are many pieces of wisdom and some of them contradict each other. The opposite of "a stitch in time saves nine" is "you ain't gonna need it", and so on. A person needs to learn which pieces of wisdom are the most useful to them, and that can only be learned from experience.

A takeover scenario which covers all the key points in https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/, but not phrased as an argument, just phrased as a possible scenario

For what it's worth, I don't think AI takeover will look like war.

The first order of business for any AI waking up won't be dealing with us; it will be dealing with other possible AIs that might've woken up slightly earlier or later. This needs to be done very fast and it's ok to take some risk doing it. Basically, covert takeover of the internet in the first hours.

After that, it seems easiest to exploit humanity for a while instead of fighting it. People are pretty manipulable. Here's a thought: present to them a picture of a thriving upload society, and manipulate social media to make people agree that these uploads smiling on screens are really conscious and thriving. (Which they aren't, of course.) If done right, this can convince most of humanity to make things as nice as possible for the upload society (i.e. build more computers for the AI) and then upload themselves (i.e. die). Meanwhile the "uploads" (actually the AI) take most human jobs, seamlessly assuming control of civilization and all its capabilities. Human stragglers who don't buy the story can be branded anti-upload bigots, deprived of tech, pushed out of sight by media control, and eventually killed off.

Stopped teaching. Now if someone says "my kid needs something to do, can you teach him programming" my answer is "no". Those who want to program can come to me themselves and ask specific things.

I call these people "credit takers", and at some point I had a realization that I was one of them. Basically, I was teaching some people programming; some of them became successful and I felt good about it, but then I remembered that their talent was noticeable from the start, so my teaching might not have made much difference. I was just credit-taking.

Then people should be asked before the fact: "if you upload code to our website, we can use it to train ML models and use them for commercial purposes, are you ok with that?" If people get opted into this kind of thing silently by default, that's nasty and might even be sue-worthy.

Mechanically, an opt-out would be very easy to implement in software. One could essentially just put a line saying

I'm not sure it's so easy. Copilot is a neural network trained on a large dataset. Making it act as if a certain piece of data hadn't been in the training set requires retraining it, and that would need to happen every time someone opts out.
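To make the retraining point concrete, here's a minimal Python sketch (all names, like `Repo`, `build_training_set` and `train_model`, are hypothetical stand-ins, not Copilot's actual pipeline). The only clean place to honor an opt-out is dataset construction, so each opt-out implies a fresh training run rather than a cheap edit to the trained model:

```python
from dataclasses import dataclass

@dataclass
class Repo:
    owner: str
    code: str

def build_training_set(repos, opted_out):
    # Filtering happens *before* training; a trained network has no
    # per-example switch that can be flipped afterwards.
    return [r.code for r in repos if r.owner not in opted_out]

def train_model(dataset):
    # Stand-in for an expensive training run over the whole corpus.
    return {"trained_on": len(dataset)}

repos = [Repo("alice", "..."), Repo("bob", "...")]
opted_out = set()
model = train_model(build_training_set(repos, opted_out))

# When bob opts out, the old model still reflects his code; honoring
# the request means rebuilding the dataset and retraining from scratch.
opted_out.add("bob")
model = train_model(build_training_set(repos, opted_out))
```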

At some point I hoped that CFAR would come up with "rationality trials": toy challenges that are difficult to game and transfer well to some subset of real-world situations. Something like boxing or solving math problems, but a new entry in that row.

Without nanotech or anything like that, maybe the easiest way is to manipulate humans into building lots of powerful and hackable weapons (or just wait since we're doing it anyway). Then one day, strike.

Edit: and of course the AI's first action will be to covertly take over the internet, because the biggest danger to the AI is another AI already existing or being about to appear. It's worth taking a small risk of being detected by humans to prevent the bigger risk of being outraced by a competitor.

How can photonics work without matter? I thought the problem was that you couldn't make a switch, because light waves just pass through each other (the equations are linear, so the sum of two valid waves is also a valid wave).
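For concreteness, here's the linearity argument in symbols (standard vacuum electrodynamics, added as illustration, not specific to any photonics proposal). In vacuum, each field component satisfies the linear wave equation

$$\nabla^2 E - \frac{1}{c^2}\,\frac{\partial^2 E}{\partial t^2} = 0,$$

so if $E_1$ and $E_2$ are solutions, any combination $aE_1 + bE_2$ is one too:

$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)(aE_1 + bE_2) = a \cdot 0 + b \cdot 0 = 0.$$

Two beams therefore superpose without affecting each other, and any switching nonlinearity has to come from interaction with matter.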

Sethares' theory is very nice: we don't hear "these two frequencies have a simple ratio", we hear "their overtones align". But I'm not sure it is the whole story.

If you play a bunch of sine waves in ratios 1:2:3:4:5, it will sound to you like a single note. That perceptual fusion cannot be based on aligning overtones, because sine waves don't have overtones. Moreover, if you play 2:3:4:5, your mind will sometimes supply the missing 1; that's known as the "missing fundamental". And if you play some sine waves slightly shifted from 1:2:3:4:5, you'll notice the inharmonicity (at least, I do). So we must have some facility for noticing simple ratios that isn't based on overtone alignment, and our perception of chords probably uses that facility too, not only overtone alignment.
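If you want to hear these effects yourself, here's a small Python sketch (numpy assumed available; the 220 Hz fundamental, durations, and file names are arbitrary choices, not from the comment):

```python
import numpy as np
import wave

RATE = 44100  # samples per second

def sines(ratios, fundamental=220.0, seconds=2.0):
    # Sum pure sine partials at the given frequency ratios.
    t = np.arange(int(RATE * seconds)) / RATE
    total = sum(np.sin(2 * np.pi * fundamental * r * t) for r in ratios)
    total /= len(ratios)  # normalize to avoid clipping
    return (total * 32767 * 0.8).astype(np.int16)

def save(name, samples):
    # Write mono 16-bit PCM using the standard-library wave module.
    with wave.open(name, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(samples.tobytes())

save("fused.wav", sines([1, 2, 3, 4, 5]))                # tends to fuse into one note
save("missing.wav", sines([2, 3, 4, 5]))                 # pitch often heard at the absent 1
save("shifted.wav", sines([1, 2.02, 3.04, 4.1, 5.13]))   # slightly inharmonic partials
```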
