Brainware.

Brains seem like the closest metaphor one could have for these. Lizards, insects, goldfish, and humans all have brains. We don't know how they work. They can be intelligent, but are not necessarily so. They have opaque convoluted processes inside which are not random, but often have unexpected results. They are not built, they are grown.

They're often quite effective at accomplishing something that would be difficult to do any other way. Their structure is based around neurons of some sort. Input, mystery processes, output. They're "mushy" and don't have clear lines, so much of their insides blur together.

AI companies are growing brainware at larger and larger scales, raising ever more powerful brainware. Want to understand why the chatbot did something? Try some new techniques for probing its brainware.

This term might make the topic feel more mysterious/magical to some than it otherwise would, which is usually something to avoid when developing terminology, but in this case, people have been treating something mysterious as not mysterious.

(The precise text, from "The Andalite Chronicles", book 3: "I have made right everything that can be made right, I have learned everything that can be learned, I have sworn not to repeat my error, and now I claim forgiveness.")

Larry Page (according to Elon Musk) wants AGI to take the world from humanity

(IIRC, Tegmark, who was present for the relevant event, has confirmed that Page stated his position as described.)

Ehhh, I get the impression that Schmidhuber doesn't think of human extinction as specifically "part of the plan", but he also doesn't appear to consider human survival to be particularly important relative to his priority of creating ASI. He wants "to build something smarter than myself, which will build something even smarter, et cetera, et cetera, and eventually colonize and transform the universe", and thinks that "Generally speaking, our best protection will be their lack of interest in us, because most species' biggest enemy is their own kind. They will pay about as much attention to us as we do to ants."

I agree that he's not overtly "pro-extinction" in the way Rich Sutton is, but he does seem fairly dismissive of humanity's long-term future in general, while also pushing for the creation of an uncaring non-human thing to take over the universe, so...

Please link directly to the paper, rather than requiring readers to click their way through the Substack post. Ideally, the link target would be on a more convenient site than academia.edu, which claims to require registration to read the content. (The content is available lower down, but the blocked "Download" buttons are confusing and misleading.)

When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.

Does the author having lower karma actually cause posts to be received more poorly? The author's karma isn't visible anywhere on the post, or even in the hover-tooltip by the author's name. (One has to click through to the profile to find out.) Even if readers did know the author's karma, would that really stop them from judging the post on its content? I would be surprised.

I found some of your posts really difficult to read. I still don't really know what some of them are talking about, and on first reading them I wasn't sure whether they made sense at all.

Sorry if this isn't all that helpful. :/

Wild guess: It realised its mistake partway through, and followed through with it anyway as sensibly as it could, balancing between giving a wrong calculation ("+ 12 = 41"), ignoring the central focus of the question ("+ 12 = 42"), and breaking from the "list of even integers" it was supposed to be going through. I suspect it would not make this error when using chain-of-thought.

Developing such a word would lead to inter-group conflict, polarisation, a lot of frustration, and general harm to society, regardless of which side you're on. It would also move the argument in the wrong direction.

If you're pro-AI-rights, you could recognize that bringing up "discrimination" (as in, treating AIs at all differently from people) is very counterproductive. If you're on this side, you probably believe that society will gradually understand that AIs deserve rights, and that there is a path towards that. The path would likely start with laws prohibiting deliberately torturing AIs for its own sake, then something closer to animal rights (some minimal protections against putting AIs through very bad experiences even when doing so would be useful, and perhaps against using AIs for sexual purposes since they can't consent), then some basic restrictions on arbitrarily creating, deleting, and mindwiping AIs, then prohibitions on slavery, etc. Bringing up "discrimination" early would be pushing an end-game conflict point prematurely, convincing some people that they're stepping onto a slippery slope if they allow any movement down the path, even if they agree with the early steps on their own. The noise of argument would slow down the progress.

If you're anti-AI-rights (whether from certainty of AI non-sentience or otherwise), then such a word is just a thing to make people feel bad, with no upside. People on this side would likely conclude that disagreement over "AI rights" is probably temporary, lasting until either people understand the situation better or the situation changes. Suddenly "raising the stakes" of the argument would be harmful, bringing in more noise that makes the "signal" underneath harder to hear, and thus pushing the argument in the wrong direction. The word would only make the useless dispute take longer to die down.
