Mitchell_Porter

Comments

Gato as the Dawn of Early AGI

Regarding the arguments for doom: they are quite logical, but they don't carry the same confidence as, say, the argument that if you are in a burning, collapsing building, your life is in peril. There are a few too many profound unknowns bearing on the consequences of superhuman AI to know that the default outcome really is the equivalent of a paperclip maximizer.

However, I definitely agree that it is a very logical scenario, and also that the human race (or the portion of it that works on AI) is taking a huge gamble by pushing towards superhuman AI without making it a central priority that this superhuman AI is 'friendly' or 'aligned'.

In that regard, I keep saying that the best plan I have seen is June Ku's "meta-ethical AI", which falls into the category of AI proposals that construct an overall goal by aggregating idealized versions of the current goals of all human individuals. I want to make a post about it, but I haven't had time... So I would suggest checking it out, and seeing whether you can contribute technically, critically, or by spreading awareness of this kind of proposal.
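To give a rough sense of the shape of such proposals, here is a toy sketch of "aggregate idealized individual goals into one overall goal". This is not June Ku's actual construction; the idealization step, the averaging scheme, and all the names here are hypothetical placeholders.

```python
from typing import Callable, List

# Toy model: each person is represented as a utility function over outcomes.
Utility = Callable[[str], float]

def idealize(raw_utility: Utility) -> Utility:
    # Placeholder for "idealization" (e.g. the goals a person would endorse
    # on reflection). A real proposal would do far more than pass it through.
    return raw_utility

def aggregate(utilities: List[Utility]) -> Utility:
    """Combine idealized individual utilities into one overall goal
    (here: a simple unweighted average, purely for illustration)."""
    idealized = [idealize(u) for u in utilities]
    def overall(outcome: str) -> float:
        return sum(u(outcome) for u in idealized) / len(idealized)
    return overall

# Usage: two toy "people" with different preferences over two outcomes.
alice: Utility = lambda outcome: {"A": 1.0, "B": 0.0}[outcome]
bob: Utility = lambda outcome: {"A": 0.2, "B": 0.9}[outcome]

goal = aggregate([alice, bob])
print(goal("A"), goal("B"))  # 0.6 0.45
```

The interesting work in a real proposal is of course in the idealization and aggregation steps, which the sketch above deliberately leaves trivial.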

Open & Welcome Thread - May 2022

A cancelled connecting flight has suddenly left me in San Francisco for the next 24 hours (i.e. until late on Sunday 15th). I could just stay in my hotel room and prepare for the next stage of my journey, but I hear there are people interested in AI safety somewhere nearby. If anyone has suggestions or even wants to meet up, message me... 

How confident are we that there are no Extremely Obvious Aliens?

Dumping waste information in the baryonic world would be visible.

Not if the rate is low enough and/or astronomically localized enough. 

It would be interesting to make a model in which fuzzy dark matter is coupled to neutrinos in a way that maximizes the rate of quantum information transfer while remaining within empirical bounds.
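For concreteness, a minimal schematic of what "coupled to neutrinos" could mean here, assuming fuzzy dark matter is an ultralight scalar field φ; the Yukawa form and the coupling constant g below are illustrative placeholders, not a worked-out model:

$$\mathcal{L}_{\text{int}} = -\,g\,\phi\,\bar{\nu}\,\nu, \qquad m_\phi \sim 10^{-22}\ \text{eV}$$

The question would then be how large g can be made, and hence how fast quantum information could flow between the halo and the neutrino sector, while staying consistent with existing astrophysical and laboratory bounds.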

Convince me that humanity *isn’t* doomed by AGI

I agree the DOE number is strangely large. But your own link says there are over 7.2 million data centers worldwide. Then it says the USA has the most of any country, but only has 2670. There is clearly some inconsistency here, probably an inconsistency of definition.

Convince me that humanity *isn’t* doomed by AGI

energy.gov says there are several million data centers in the USA. Good luck preventing AGI research from taking place just within all of those, let alone preventing it worldwide. 

How confident are we that there are no Extremely Obvious Aliens?

1:10^12 odds against the notion

How did you get this figure? Two one-in-a-million implausibilities? 
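Spelled out, the guess behind the question: two independent one-in-a-million implausibilities would multiply to $$10^{-6} \times 10^{-6} = 10^{-12}.$$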

computation requires work

Quantum computers are close to reversible. Each halo could be a big quantum coherent structure, with e.g. neutrinos as ancillary qubits. The baryonic world might be where the waste information gets dumped. :-) 
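A minimal illustration of the "ancillary qubits" point, using Qiskit (assuming it is installed; the circuit is my own toy example): a reversible circuit can compute into an ancilla and later uncompute it, so that in principle no waste information has to be dumped until measurement.

```python
from qiskit import QuantumCircuit

# Reversible computation sketch: compute AND(q0, q1) into an ancilla (q2)
# with a Toffoli gate, "use" it, then uncompute it with a second Toffoli.
# Every gate is unitary, hence reversible; nothing is erased along the way.
qc = QuantumCircuit(3)

qc.ccx(0, 1, 2)   # compute: q2 <- q2 XOR (q0 AND q1)
qc.z(2)           # stand-in for doing useful work with the result
qc.ccx(0, 1, 2)   # uncompute: return the ancilla to its original state

print(qc.draw())
```

The point of the two Toffoli gates is that the ancilla ends in the same state it started in, which is the sense in which reversible computation need not dump waste information into its environment.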

The New Right appears to be on the rise for better or worse

If Elon Musk truly manages to take over Twitter and take it private, that will be a big change in the power relations governing day-to-day elite communication, e.g. among journalists and academics. 

The reason I mention the Twitter takeover in this context: you could say that the essence of Yarvin's strategy to defeat democracy is to take the public sphere private, replacing the "rule of law" with the "rule of men", also known as "personal rule".

It's all very twisted, because a common accusation against the western system is that, although it is based on the rule of law, it has in fact been corrupted, so that there is an elite who are above the law; a form of oligarchy. The neoreactionary answer to this is to hope for a "good king": someone with absolute power who will restore justice because they are a good person.

Can GPT-N help us?

Keep asking for more details!

Distributed blind review site for papers

There can be an issue of quality control. There is a huge underworld of outsider intellectual activity, as a trip to vixra.org demonstrates. In certain fields, the eminent researchers receive a steady stream of papers from autodidacts claiming new breakthroughs or refutations of orthodoxy. In a field like that, your blind review site will still need some kind of filter; but what could it be?

“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments

Are you criticizing the idea that a single superintelligence could ever get to take over the world under any circumstances, or just this strategy of "achieving aligned AI by forcefully dismantling unsafe AI programs with the assistance of a pet AI"? 
