Mitchell_Porter


It would be tiresome to go through all the proposals listed in that article, identifying how they work and their resultant limitations, and then speculating whether they might nonetheless help us understand reality one day. Or at least, I think I would save that kind of analytical effort for an audience of physicists interested in ontology. 

But let me just discuss one example. The proposal discussed in the most detail is due to Dürr et al. The article says outright that they rely on the existence of a "preferred foliation" of spacetime. That means that to define their "relativistic" Bohmian mechanics, they still need a particular decomposition of spacetime into a stack of spacelike hypersurfaces. Their trick is to then say, well, we don't need to use a coordinate system in which those hypersurfaces are all "t = constant". We can do a relativistic boost, and switch to a new coordinate system with a new time coordinate, in which those hypersurfaces are tilted with respect to the time axis. 
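The formal point about boosts can be made explicit; here is a minimal sketch in standard special-relativistic notation (my own illustration, not taken from the article):

\[
  t' = \gamma\,(t - v x), \qquad x' = \gamma\,(x - v t),
  \qquad \gamma = (1 - v^2)^{-1/2} \quad (c = 1).
\]

Inverting the boost gives \(t = \gamma(t' + v x')\), so the preferred hypersurface \(t = t_0\) becomes

\[
  t' = \frac{t_0}{\gamma} - v\,x',
\]

a plane tilted with slope \(-v\) relative to the new time axis: the same foliation, merely re-coordinatized.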

That's formally true, but nonetheless, their pilot wave and their equations of motion are only defined with respect to one specific foliation, which de facto defines a notion of absolute simultaneity. 

The point is that all these "extensions to relativity" involve some kind of trick, or they only work in an artificially narrow setting, or they meander off in some eccentric direction. There is certainly no relativistic Bohmian mechanics known that can deal with all the known phenomena of field theory, like pair creation and gauge invariance. 

If hypercomputation is defined as computing the uncomputable, then that's not his idea. It's just a quantum speedup better than the usual quantum speedup (defining a quantum complexity class DQP that is a little bigger than BQP). Also, Scott's Bohmian speedup requires access to what the hidden variables were doing at arbitrary times. But in Bohmian mechanics, measuring an observable perturbs complementary observables (i.e. observables that are in some kind of "uncertainty relation" to the first) in exactly the same way as in ordinary quantum mechanics. 

There is a way (in both Bohmian mechanics and standard quantum mechanics) to get at this kind of trajectory information, without overly perturbing the system evolution - "weak measurements". But weak measurements only provide weak information about the measured observable - that's the price of not violating the uncertainty principle. A weak measuring device is correlated with the physical property it is measuring, but only weakly. 

I mention this because someone ought to see how it affects Scott's Bohmian speedup, if you get the history information using weak measurements. (Also because weak measurements may have an obscure yet fundamental relationship to Bohmian mechanics.) Is the resulting complexity class DQP, BQP, P, something else? I do not know. 

Physicists also do not like that pilot wave theory is non-local.

A more severe problem is that it is not relativistic. More precisely, as John Bell said of a stochastic version, "As with relativity before Einstein, there is a preferred frame in the formulation of the theory, but it is experimentally indistinguishable". You need some notion of absolute simultaneity, in order to write down the equations of motion. 

Bohmian mechanics (to use another name for this dynamical framework) has analogous problems with some other symmetries. For an example concerning "lapse" and "shift" functions in general relativity, see this old paper. The situation is at least as bad in gauge theories, such as those that describe the strong and electroweak forces. 

There is no ontological interpretation of quantum theory, known to me, that is free of problems. The simplest attitude to have is that quantum mechanics works empirically, that it is ontologically incomplete, and that we don't know the true ontology. 

Mostly I was responding to this: 

As I read these predictions, one of the main reasons that "transformative AGI" is unlikely by 2043 is because of severe catastrophes such as war, pandemics, and other causes.

... in order to emphasize that, even without catastrophe, they say the technical barriers alone make "transformative AGI in the next 20 years" only 1% likely. 

I don't think we can use the paper's probabilities this way, because technical barriers are not independent of derailments.

I disagree. The probabilities they give regarding the technical barriers (which include economic issues of development and deployment) are meant to convey how unlikely each of the necessary technical steps is, even in a world where technological and economic development are not subjected to catastrophic disruption. 

On the other hand, the probabilities associated with various catastrophic scenarios are specifically estimates that war, pandemics, etc., occur and derail the rise of AI. The "derailment" probabilities are meant to be independent of the "technical barrier" probabilities. (@Ted Sanders should correct me if I'm wrong.) 

Not at all. But for a credible bet, I have to have some chance of paying out my losses. On the basis of lifetime earnings so far, even $500K is really pushing it. Promising to pay millions if I lose is not credible. 

If you were offering, say, $100K at 5:1 odds, I would be very inclined to take it, despite the risk that e.g. next month's X-Day finally delivers, because that would let me set in motion things that, according to me, have their own transformative potential. But I'm not sure about the value of these smaller sums. 

Everyone seems to find the most striking claim here 

over the next decade, ... that most democratic Western countries will become fascist dictatorships ... is ... the most likely overall outcome

pretty outlandish. That right-wing, nationalist, conservative parties might win power in a lot of places, seems a reasonable projection (though very far from assured); that they will all turn their countries into "fascist dictatorships" where "there are no meaningful elections" is the outlandish part. That would seem to require something like the Spanish civil war, but throughout Europe and North America, perhaps after a catastrophic collapse of NATO under Trump 2.0 - which might be Putin's dream, and the nightmare of liberal-progressive Americans, but I wouldn't bet on it ever happening. 

I noticed one oddity among the references cited. There's this graph about how populists "rarely lose power" peacefully. But if you add up "reached term limits" and "lost free and fair elections", that's actually larger than the other bars. And "still in office" doesn't distinguish between those who have been in office for a year and those who have been there for 25 years. 

Furthermore, if you look up who the Blair Institute regards as a populist (halfway down this page), it's an odd mixture - Indonesia's Widodo, Israel's Netanyahu, Japan's Koizumi are there alongside the usual names from left and right. In the academic literature on populism, Koizumi has been described as a neoliberal populist, something I wouldn't have thought possible - though once the combination is allowed, I can imagine someone trying to put Barack Obama in that category, or even Tony Blair himself. Perhaps it just goes to show that politics varies a lot across time and space, as well as demonstrating again the nebulousness of many political categories. 

Why don't we see deep neural networks that are built using tropical matrix multiplication?

There's no shortage of people wanting to apply ideas from their own discipline to deep learning. Physicists want to use field theory, logicians want to use formal logic... The fact that certain architectures can be described in the language of tropical math does not in itself guarantee that this leads to real insight. The paper you mention says that it does, and it has over 100 citations, so, maybe something will turn up. 
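For concreteness, "tropical matrix multiplication" means matrix multiplication over the max-plus semiring, where addition of reals plays the role of multiplication and max plays the role of addition. A minimal NumPy sketch (my own illustration, not code from the paper):

```python
import numpy as np

def tropical_matmul(A, B):
    """Max-plus matrix product: (A (x) B)[i, j] = max_k (A[i, k] + B[k, j]).

    Broadcasting builds the 3D array of sums A[i, k] + B[k, j],
    then the max over the shared index k replaces the usual sum.
    """
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 3.0]])
C = tropical_matmul(A, B)
# e.g. C[0, 1] = max(A[0, 0] + B[0, 1], A[0, 1] + B[1, 1]) = max(0, 4) = 4
```

The claimed relevance to deep learning is that a ReLU layer computes a piecewise-linear function expressible as a tropical rational map, so tropical operations describe existing architectures rather than suggesting a new primitive to build networks from.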

By the way, a friend and I ran across your paper on generalized Laver tables a few years ago (while we were brainstorming a number theory problem). Could that be applied to deep learning? I don't see it for ordinary deep learning, but who knows, maybe consciousness is based on "nonabelions" and a Laver-like algebra describes their entanglement structure. :-)  

Seems the same as a thousand other reports written by people at the intersection of volunteer work and organized charity, trying to ameliorate poverty, domestic violence, you name it. I really don't see what's "disturbing" about it (let alone "psychopathic"!). 
