MatthewB


Comments

Glenn Beck discusses the Singularity, cites SI researchers

Also, don't forget that humans will be improving just as rapidly as the machines.

My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.

Glenn Beck discusses the Singularity, cites SI researchers

He believes that the Singularity is proof that the Universe was created by an Intelligent Creator (who happens to be the Christian God), and that it is further evidence of Young Earth Creationism (YEC).

Glenn Beck discusses the Singularity, cites SI researchers

I think the comment that LWers suck at politics is the more apt description.

Politics is the art of the possible; it deals with WHAT IS, regardless of whether that is "rational."

And attempting to demand that it conform to rationality standards dictated by this community guarantees that this community will lack political clout.

Especially if it becomes known that the main beneficiaries and promoters of the Singularity have a particularly pathological politics.

Peter Thiel may well be a Libertarian Hero, but his name is instant death in even mainstream GOP circles, and he is seen as a fascist by the progressives.

Glenn Beck is seen as a dangerous and irrationally delusional ideologue by mainstream politicians.

That sort of endorsement isn't going to help the cause if it becomes well known.

It will tar the Singularity as an ideological enclave of techno-supremacists.

NO ONE at Less Wrong seems to be aware of the stigma attached to the Singularity after David Rose's performance at the "Human Being in an Inhuman World" conference at Bard College in 2010. I was there, and got to witness the reactions of academics and political analysts from New York and Washington DC (some very powerful people in policy circles) who sat aghast, mouths hanging open, at what David Rose was saying.

When these people discover that Glenn Beck is promoting the Singularity (and Glenn Beck has some very specific agendas in promoting it, agendas that are very selfish and probably quite offensive to the ideals of Less Wrong), they will be even more convinced that the Singularity is a techno-cult composed of some very dangerous individuals.

Glenn Beck discusses the Singularity, cites SI researchers

Being influential is not necessarily a good thing.

Especially when Glenn Beck's influence lies in delusional conspiracy theories, evangelical Christianity, and Young Earth Creationism.

Glenn Beck discusses the Singularity, cites SI researchers

Glenn Beck is hardly someone whose enthusiasm you should welcome.

He has a creationist agenda that he has found a way to support with the ideas surrounding the topic of the Singularity.

Glenn Beck discusses the Singularity, cites SI researchers

This is not exactly "success."

There are some populations that will pervert the things they get in their hands.

Glenn Beck discusses the Singularity, cites SI researchers

Glenn Beck was one of the first TV personalities to interview Ray.

The interview is on YouTube, and it is very informative as to Glenn's objectives and agenda.

Primarily, he wishes to use the ideology behind the Singularity as support for "Intelligent Design." In the interview, he makes an explicit statement to that effect.

Glenn Beck is hardly "rational" as per the definition of "Less Wrong."

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.

Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience. (I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait until next fall to start. I am now thinking of going to UCSD instead, which has the Institute of Computational Neuroscience (or something like that; it's where Terry Sejnowski teaches), among other things that make it also an excellent choice for what I wish to study.) This rather precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong (particularly those from the SIAI, whom I feel are myopically focused upon FAI to the detriment of other things).

While I would eventually like to see whether it is even possible to build some of the Komodo-Dragon-like superintelligences, I will probably wait until such a time as our native intelligence is a good deal greater than it is now.

This touches upon an issue that I first learned of from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the head of Zeus.

I just don't see that happening. I don't see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.

I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the types of minds we will tend to work towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit "dull witted" in comparison.

I don't so much buy the "Ant/Amoeba to Human" comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don't: they tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. That doesn't mean it is necessarily so, but it does seem more likely than not.

And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such a time as it is shown that ANY work toward that end is dangerous enough not to do.

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

I think major infrastructure rebuilding is probably closer to the case than "maintenance."

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

Yes, that is close to what I am proposing.

No, I am not aware of any facts about progress in decision theory that would give any guarantees about the future behavior of AI. I still think we need to be far more concerned with people's future behavior than with AI's. People are improving systems as well.

As for the Komodo Dragon, you missed the point of my post, and the Komodo Dragon just kind of puts the period on that:

"Gorging upon the stew of..."
