This is to announce a $250 prize for spot-checking, or otherwise reviewing in depth, Jacob Cannell's technical claims concerning thermodynamic & physical limits on computation, and the claim of biological efficiency of the brain, in his post Brain Efficiency: Much More Than You Wanted To Know.
I've been quite impressed by Jake's analysis ever since it came out, and puzzled that there has been so little discussion of it, since if true it seems quite important. That said, I have to admit I personally cannot assess whether the analysis is correct. This is why I am announcing this prize.
Whether Jake's claims concerning DOOM & FOOM really follow from his analysis is up for debate. Regardless, to me it seems to have large implications for how the future might go and what future AI will look like.
- I will personally judge whether I think an entry warrants a prize.[1]
- If you are also interested in seeing this situation resolved, I encourage you to increase the prize pool!
EDIT: some clarifications
- You are welcome to discuss DOOM & FOOM and the relevance (or lack thereof) of Jake's analysis, but note I will only consider (spot-)checking of Jacob Cannell's technical claims.
- in case of multiple serious entries I will do my best to fairly split the prize money.
- note I will not be judging who is right. Instead, I will judge whether the entry has seriously engaged with Jacob Cannell's technical claims in a way that moves the debate forward. That is, I will award points for 'pushing the depth of the debate tree' beyond what it was before.
- by technical claims I mean all technical claims made in the brain efficiency post, broadly construed, as well as claims made by Jacob Cannell in other posts/comments.
These include especially: limits to energy efficiency, interconnect losses, the Landauer limit, convection vs. blackbody radiation, claims concerning the effective working memory of the human brain versus that of computers, the end of Moore's law, CPU vs. GPU vs. neuromorphic chips, etc.
Here's Jacob Cannell's own summary of his claims:
1.) Computers are built out of components which are also just simpler computers, which bottoms out at the limits of miniaturization in minimal molecular sized (few nm) computational elements (cellular automata/tiles). Further shrinkage is believed impossible in practice due to various constraints (overcoming these constraints if even possible would require very exotic far future tech).
2.) At this scale the Landauer bound represents the ambient-temperature-dependent noise (which can also manifest as a noise voltage). Reliable computation at speed is only possible using non-trivial multiples of this base energy, for the simple reasons described by Landauer and elaborated on in the other refs in my article.
3.) Components can be classified as computing tiles or interconnect tiles, where the latter is simply a computer which computes the identity function while moving the input to an output in some spatial direction. Interconnect tiles can be irreversible or reversible, but the latter has enormous tradeoffs in size (i.e. optical) and/or speed or other variables and is thus not used by brains or GPUs/CPUs.
4.) Fully reversible computers are possible in theory but have enormous negative tradeoffs in size/speed due to 1.) the need to avoid erasing bits throughout intermediate computations, 2.) the lack of immediate error correction (achieved automatically in dissipative interconnect by erasing at each cycle), leading to error build-up which must be corrected/erased (costing energy), and 3.) high sensitivity to noise/disturbance due to 2.
And the brain vs computer claims:
5.) The brain is near the Pareto frontier for practical 10W computers, and makes reasonably good tradeoffs between size, speed, heat and energy as a computational platform for intelligence.
6.) Computers are approaching the same Pareto frontier (although currently in a different region of design space) - shrinkage is nearing its end.
[1]
As an example, DaemonicSigil's recent post is a step in the right direction.
However, after reading Jacob Cannell's response I did not feel the post seriously engaged with the technical material: it retreated to the much weaker claim that maybe exotic reversible computation could break the limits that Jacob posits, which I found unconvincing. The original post is quite clear that the limits apply only to non-exotic computing architectures.
I support this and will match the $250 prize.
Here are the central background ideas/claims:
1.) Computers are built out of components which are also just simpler computers, which bottoms out at the limits of miniaturization in minimal molecular sized (few nm) computational elements (cellular automata/tiles). Further shrinkage is believed impossible in practice due to various constraints (overcoming these constraints if even possible would require very exotic far future tech).
2.) At this scale the Landauer bound represents the ambient-temperature-dependent noise (which can also manifest as a noise voltage). Reliable computation at speed is only possible using non-trivial multiples of this base energy, for the simple reasons described by Landauer and elaborated on in the other refs in my article.
3.) Components can be classified as computing tiles or interconnect tiles, where the latter is simply a computer which computes the identity function while moving the input to an output in some spatial direction. Interconnect tiles can be irreversible or reversible, but the latter has enormous tradeoffs in size (i.e. optical) and/or speed or other variables and is thus not used by brains or GPUs/CPUs.
4.) Fu... (read more)
FWIW, I basically buy all of these, but they are not-at-all sufficient to back up your claims about how superintelligence won't foom (or whatever your actual intended claims are about takeoff). Insofar as all this is supposed to inform AI threat models, it's the weakest subclaims necessary to support the foom-claims which are of interest, not the strongest subclaims.
Foom isn't something that EY can prove beyond doubt or I can disprove beyond doubt, so this is a matter of subjective priors and posteriors.
If you were convinced of foom inevitability before, these claims are unlikely to convince you of the opposite, but they do undermine EY's argument:
The four claims you listed as "central" at the top of this thread don't even mention the word "brain", let alone anything about it being pareto-efficient.
It would make this whole discussion a lot less frustrating for me (and probably many others following it) if you would spell out what claims you actually intend to make about brains, nanotech, and FOOM gains, with the qualifiers included. And then I could either say "ok, let's see how well the arguments back up those claims" or "even if true, those claims don't actually say much about FOOM because...", rather than this constant probably-well-intended-but-still-very-annoying jumping between stronger and weaker claims.
Ok, fair - those are more like background ideas/claims, so I reworded that and added 2.
Thanks!
Also, I recognize that I'm kinda grouchy about the whole thing and that's probably coming through in my writing, and I appreciate a lot that you're responding politely and helpfully on the other side of that. So thank you for that too.
I certainly don’t expect any prize for this, but…
…I can at least address this part from my perspective.
Some of the energy-efficiency discussion (particularly interconnect losses) seems wrong to me, but it seems not to be a crux for anything, so I don’t care to spend time looking into it and arguing about it. If a silicon-chip AGI server were 1000× the power consumption of a human brain, with comparable performance, its electricity costs would still be well below my local minimum wage. So who cares? And the world will run out of GPUs long before it runs out of the electricity needed to run them. And making more chips (or brains-in-vats or whatever) is a far harder problem than making enough solar cells to power them, and that remains true even if we substantially sacrifice energy-efficiency for e.g. higher speed.
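For concreteness, a rough sketch of that cost comparison (the brain-power and electricity-price figures below are assumed ballpark numbers, not taken from the comment):

```python
# Rough sanity check: electricity cost of an AGI server drawing 1000x the brain's power.
# Assumed ballpark figures: brain ~20 W, electricity ~$0.15/kWh.
brain_power_w = 20                      # human brain power draw, roughly
agi_power_w = 1000 * brain_power_w      # hypothetical AGI server at 1000x brain power
price_per_kwh = 0.15                    # USD per kWh, rough grid price

cost_per_hour = (agi_power_w / 1000) * price_per_kwh
print(f"Server draw: {agi_power_w / 1000:.0f} kW")
print(f"Electricity cost: ${cost_per_hour:.2f}/hour")  # ~$3/hour, well below typical minimum wages
```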
If we (or an AI) master synthetic biology and can make brains-in-vats, tended and fed by teleoperated robots, then we (or the AI) can make whole warehouses of millions of them, each far larger (and hence smarter) than would be practical in humans who had to schlep their bra... (read more)
Oh fine, you talked me into it :)
I'm confused at how somebody ends up calculating that a brain - where each synaptic spike is transmitted by ~10,000 neurotransmitter molecules (according to a quick online check), which then get pumped back out of the membrane and taken back up by the synapse; and the impulse is then shepherded along cellular channels via thousands of ions flooding through a membrane to depolarize it and then getting pumped back out using ATP, all of which are thermodynamically irreversible operations individually - could possibly be within three orders of magnitude of max thermodynamic efficiency at 300 Kelvin. I have skimmed "Brain Efficiency" though not checked any numbers, and not seen anything inside it which seems to address this sanity check.
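A crude back-of-envelope version of this sanity check, for concreteness (every number below is an assumed round figure for illustration, not taken from the post or the comment):

```python
# Energy per synaptic spike vs. the Landauer limit, using assumed round numbers.
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300                             # temperature, K
landauer_J = k_B * T * math.log(2)  # minimum energy to erase one bit at 300 K

atp_kT = 20                         # assumed: one ATP hydrolysis releases roughly ~20 kT in vivo
atp_per_spike = 1e5                 # assumed order of magnitude: vesicle recycling + ion pumping per spike

spike_energy_J = atp_per_spike * atp_kT * k_B * T
print(f"Landauer limit at 300 K: {landauer_J:.2e} J per bit erased")
print(f"Rough energy per synaptic spike: {spike_energy_J:.2e} J")
print(f"Spike energy / Landauer limit: ~{spike_energy_J / landauer_J:.0e} bit-erasure equivalents")
```

Whether that ratio indicates waste or necessary work is exactly what the exchange below is about: it depends on how many bit-erasure-equivalents of useful computation a synaptic spike is credited with.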
The first step in reducing confusion is to look at what a synaptic spike does. It is the equivalent of - in terms of computational power - an ANN 'synaptic spike', which is a memory read of a weight, a low-precision MAC (multiply-accumulate), and a weight memory write (various neurotransmitter plasticity mechanisms). Some synapses probably do more than this - nonlinear decoding of spike times, for example - but that's a start. This is all implemented in a pretty minimal-sized device. The memory read/write is local, but it also needs to act as an amplifier to some extent, to reduce noise and push the signal farther down the wire. An analog multiplier uses many charge carriers to get a reasonable SNR, which is comparable to all the charge carriers across a digital multiplier, including interconnect.
So with that background you can apply the Landauer analysis to get the base bit energy, then estimate the analog MAC energy cost (or the equivalent digital MAC, but the digital MAC is much larger, so there are size/energy/speed tradeoffs), and finally consider the probably dominant interconnect cost. I estimate the interconnect cost alone at perhaps a watt.
A complementary approach is to c... (read more)
This does not explain how thousands of neurotransmitter molecules impinging on a neuron and thousands of ions flooding into and out of cell membranes, all irreversible operations, in order to transmit one spike, could possibly be within one OOM of the thermodynamic limit on efficiency for a cognitive system (running at that temperature).
See my reply here which attempts to answer this. In short, if you accept that the synapse is doing the equivalent of all the operations involving a weight in a deep learning system (storing the weight, momentum gradient etc. in minimal viable precision, multiplies for the forward, backward and weight-update passes, etc.), then the answer is a more straightforward derivation from the requirements. If you are convinced that the synapse is only doing the equivalent of a single-bit AND operation, then obviously you will reach the conclusion that it is many OOM wasteful, but it is easy to demolish any notion that it is merely doing something so simple.[1]
There are of course many types of synapses which perform somewhat different computations and thus have different configurations, sizes, energy costs, etc. I am mostly referring to the energy/compute-dominant cortical pyramidal synapses. ↩︎
Nothing about any of those claims explains why the 10,000-fold redundancy of neurotransmitter molecules and ions being pumped in and out of the system is necessary for doing the alleged complicated stuff.
Okay, if you're not saying GPUs are getting around as efficient as the human brain, without much more efficiency to be eked out, then I straightforwardly misunderstood that part.
The GPU needs numbers to be stored in registers inside the GPU before it can do operations on them. A memory operation (what Jacob calls MEM) is when you load a particular value from memory into a register. An arithmetic operation is when you do an elementary arithmetic operation such as addition or multiplication on two values that have already been loaded into registers. These are done by the arithmetic-logic unit (ALU) of the processor so are called ALU ops.
Because a matrix multiplication of two N×N matrices only involves 2N² distinct floating point numbers as input, and writing the result back into memory is going to cost you another N² memory operations, the total MEM ops cost of a matrix multiplication of two matrices of size N×N is 3N². In contrast, if you're using the naive matrix multiplication algorithm, computing each entry in the output matrix takes you N additions and N multiplications, so you end up with 2N⋅N² = 2N³ ALU ops needed.
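For concreteness, here is the same operation count as a small script (this just restates the arithmetic above):

```python
# Operation counts for a naive N x N matrix multiplication, as described above.
def matmul_op_counts(N: int):
    mem_ops = 3 * N**2    # read two N x N inputs, write one N x N output
    alu_ops = 2 * N**3    # N multiplies + N adds for each of the N^2 output entries
    return mem_ops, alu_ops, alu_ops / mem_ops

for N in (128, 1024, 8192):
    mem, alu, ratio = matmul_op_counts(N)
    print(f"N={N}: MEM ops = {mem:.2e}, ALU ops = {alu:.2e}, ALU:MEM = {ratio:.0f}")
# The ALU:MEM ratio grows as 2N/3, so large matmuls are compute-bound
# while small ones are memory-bound.
```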
The ALU:MEM ratio is important because if your computation is imbalanced relative to what is supported by your hardware then you'll end up being bottlenecked by one of them and you'll be unable to exploit the surplus resources you have on the ... (read more)
And it says:
This just seems utterly wack. Having any physical equivalent of an analog multiplication fundamentally requires 100,000 times the thermodynamic energy to erase 1 bit? And "analog multiplication down to two decimal places" is the operation that is purportedly being carried out almost as efficiently as physically possible by... an axon terminal with a handful of synaptic vesicles dumping 10,000 neurotransmitter molecules to flood around a dendritic terminal (molecules which will later need to be irreversibly pumped back out), which in turn depolarizes and starts flooding thousands of ions into a cell membrane (to be later pumped out) in order to transmit the impulse at 1m/s? That's the most thermodynamically efficient a physical cognitive system can possibly be? This is approximately the most efficient possible way to turn all those bit erasures into thought?
This sounds like physical nonsense that fails a basic sanity check. What am I missing?
I am not certain it is being carried out "almost as efficiently as physically possible", assuming you mean thermodynamic efficiency (even accepting you meant thermodynamic efficiency only for irreversible computation). My belief is more that the brain and its synaptic elements are reasonably efficient in a Pareto tradeoff sense.
But any discussion around efficiency must make some starting assumptions about what computations the system may be performing. We now have a reasonable amount of direct and indirect evidence - direct evidence from neuroscience, indirect evidence from DL - that allows us some confidence that the brain is conventional (irreversible, non-quantum), and is basically very similar to an advanced low-power DL accelerator built out of nanotech replicators. (And the clear, obvious trend in hardware design is towards the brain.)
So starting with that frame...
A synaptic op is the... (read more)
I think the quoted claim is actually straightforwardly true? Or at least, it's not really surprising that actual precise 8 bit analog multiplication really does require a lot more energy than the energy required to erase one bit.
I think the real problem with the whole section is that it conflates the amount of computation required to model synaptic operation with the amount of computation each synapse actually performs.
These are actually wildly different types of things, and I think the only thing it is justifiable to conclude from this analysis is that (maybe, if the rest of it is correct) it is not possible to simulate the operation of a human brain at synapse granularity, using much less than 10W and 1000 cm^3. Which is an interesting fact if true, but doesn't seem to have much bearing on the question of whether the brain is close to an optimal substrate for carrying out the abstract computation of human cognition.
(I expanded a little on the point about modeling a computation vs. the computation itself in an earlier sibling reply.)
Finished, the post is here: https://www.lesswrong.com/posts/PyChB935jjtmL5fbo/time-and-energy-costs-to-erase-a-bit
Summary of the conclusions is that energy on the order of kT should work fine for erasing a bit with high reliability, and the ~50kT claimed by Jacob is not a fully universal limit.
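For reference, here is the arithmetic behind the two figures being compared (standard physical constants only):

```python
# kT ln2 vs. 50 kT at room temperature.
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300                # K
eV = 1.602176634e-19   # J per electron-volt

landauer = k_B * T * math.log(2)
print(f"kT ln2 at 300 K: {landauer:.2e} J = {landauer / eV:.3f} eV")
print(f"50 kT at 300 K:  {50 * k_B * T:.2e} J = {50 * k_B * T / eV:.2f} eV")
# ~0.018 eV vs. ~1.3 eV -- the latter is roughly the ~1 eV/bit figure used
# elsewhere in the thread for "high reliability" signaling.
```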
(Copied with some minor edits from here.)
Jacob's argument in the Density and Temperature section of his Brain Efficiency post basically just fails.
Jacob is using a temperature formula for blackbody radiators, which is basically irrelevant to the temperature of realistic compute substrates - brains, chips, and probably future compute substrates are all cooled by conduction through direct contact with something cooler (blood for the brain, heatsink/air for a chip). The obvious law to use instead would just be the standard thermal conduction law: heat flow per unit area proportional to the temperature gradient.
Jacob's analysis in that section also fails to adjust for how, by his own model in the previous section, power consumption scales linearly with system size (and also scales linearly with temperature).
Put all that together, and a more sensible formula would be:
$$\frac{q}{A} = \frac{C_1 T_S R}{R^2} = \frac{C_2 (T_S - T_E)}{R}$$
... where:
(Of course a spherical approxi... (read more)
I'm going to make this slightly more legible, but not contribute new information.
Note that downthread, Jacob says:
So if your interest is in Jacob's arguments as they pertain to AI safety, this chunk of Jacob's writings is probably not key for your understanding and you may want to focus your attention on other aspects.
Both Jacob and John agree on the obvious fact that active cooling is necessary for both the brain and for GPUs and a crucial aspect of their design.
Jacob:
John:
... (read more)
There's a pattern here which seems-to-me to be coming up repeatedly (though this is the most legible example I've seen so far). There's a key qualifier which you did not actually include in your post, which would make the claims true. But once that qualifier is added, it's much more obvious that the arguments are utterly insufficient to back up big-sounding claims like:
Like, sure, our hypothetical superintelligence can't build highly efficient compute which runs in space without any external cooling machinery. So, our hypothetical superintelligence will presumably build its compute with external cooling machinery, and then this vacuum limit just doesn't matter.
You could add all those qualifiers to the strong claims about superintelligence, but then they ... (read more)
The 'big-sounding' claim you quoted only makes sense with the preceding context you omitted:
Because of its slow speed, the brain is super-optimized for intelligence per clock cycle. So digital superintelligences can think much faster, but to the extent they do so they are constrained to be brain-like in design (ultra-optimized for low circuit depth). I have a decade-old post analyzing/predicting this here, and today we have things like GPT4 which imitate the brain but run 1000x to 10000x faster during training, and thus excel at writing.
I tentatively buy that, but then the argument says little-to-nothing about barriers to AI takeoff. Like, sure, the brain is efficient subject to some constraint which doesn't apply to engineered compute hardware. More generally, the brain is probably efficient relative to lots of constraints which don't apply to engineered compute hardware. A hypothetical AI designing hardware will have different constraints.
Either Jacob needs to argue that the same limiting constraints carry over (in which case hypothetical AI can't readily outperform brains), or he does not have a substantive claim about AI being unable to outperform brains. If there's even just one constraint which is very binding for brains, but totally tractable for engineered hardware, then that opens the door to AI dramatically outperforming brains.
I mean, sure, but I doubt that e.g. Eliezer thinks evolution is inefficient in that sense.
Basically, there are only a handful of specific ways we should expect to be able to beat evolution in terms of general capabilities, a priori:
- Some things just haven't had very much time to evolve, so they're probably not near optimal. Broca's area would be an obvious candidate, and more generally whatever things separate human brains from other apes.
- There's ways to nonlocally redesign the whole system to jump from one local optimum to somewhere else.
- We're optimizing against an environment different from the ancestral environment, or structural constraints different from those faced by biological systems, such that some constraints basically cease to be relevant. The relative abundance of energy is one standard example of a relaxed environmental constraint; the birth canal as a limiting factor on human brain size during development or the need to make everything out of cells are standard examples of relaxed structural constraints.
- One particularly important sub-case of "different environment": insofar as the ancestral environment mostly didn't change very quickly, evolution didn't necessarily se
... (read more)
Interesting - I think I disagree most with 1. The neuroscience seems pretty clear that the human brain is just a scaled-up standard primate brain; the secret sauce is just language (I discuss this now and again in some posts and in my recent part 2). In other words, nothing new about the human brain has had much time to evolve: all evolution did was tweak a few hyperparams, mostly around size and neoteny (training time) - very much like GPT-N scaling (which my model predicted).
Basically human technology beats evolution because we are not constrained to use self replicating nanobots built out of common locally available materials for everything. A jet airplane design is not something you can easily build out of self replicating nanobots - it requires too many high energy construction processes and rare materials spread across the earth.
Microchip fabs and their outputs are the pinnacle of this difference - requiring rare elements across the periodic table, massively complex global supply chains and many steps of intricate high energy construction/refinement processes all throughout.
What this ends up buying you mostly is very high energy densities - useful for engines, but also for fast processors.
I would contribute $75 to the prize. : )
I think Jake is right that the brain is very energy efficient (
disclaimer: I'm currently employed by Jake and respect his ideas highly.) I'm pretty sure though that the question about energy efficiency misses the point. There are other ways to optimize the brain, such as improving axonal transmission speed from the current range of 0.5-10 meters/sec to something more like the speed of electricity through wires, ~250,000,000 meters per second. Or adding abilities the mammalian brain does not have, such as the ability to add new long-range neurons connecting distal parts of the brain. We can reconfigure the long-range neurons we have, but not add new ones. So basically, I come down on the other side of his conclusion in his recent post. I think rapid recursive self-improvement through software changes is indeed possible, and a risk we should watch out for.

This might be the least disclamatory disclaimer I've ever read.
I'd even call it a claimer.
Isn't it insanely transformative to have millions of human-level AIs which think 1000x faster?? The difference between top scientists and average humans seems to be something like "software" (Einstein isn't using 2x the watts or neurons). So then it should be totally possible for each of the "millions of human-level AIs" to be equivalent to Einstein. Couldn't a million Einstein-level scientists running at 1000x speed beat all human scientists combined?
And, taking this further, it seems that some humans are at least 100x more productive at science than others, despite the same brain constraints. Then why shouldn't it be possible to go further in that direction, and have someone 100x more productive than Einstein at the same flops? And if this is possible, it seems to me like whatever efficiency constraints the brain is achieving cannot be a barrier to foom, just as the energy efficiency (and supposed learning optimality?) of the average human brain does not rule out Einstein more than 100x-ing them with the same flops.
I made some long comments below about why I think the whole Synapses section is making an implicit type error that invalidates most of the analysis. In particular, claims like this:
Are incorrect or at least very misleading, because they're implicitly comparing "synaptic computation" to "flop/s", but "synaptic computation" is not a performance metric of the system as a whole.
My most recent comment is here, which I think mostly stands on its own. This thread starts here, and an earlier, related thread starts here.
If others agree that my basic objection in these threads is valid, but find the presentation in the most recent comment is still confusing, I might expand it into a full post.
It's been years since I looked into it and I don't think I have access to my old notes, so I don't plan to make a full entry. In short, I think the claim of "brains operate at near-maximum thermodynamic efficiency" is true. (I don't know where Eliezer got 6 OoM but I think it's wrong, or about some nonobvious metric [edit: like the number of generations used to optimize].)
I should also reiterate that I don't think it's relevant to AI doom arguments. I am not worried about a computer that can do what I can do with 10W; I am worried about a computer that can do more than what I can do with 10 kW (or 10 MW or so on).
[EDIT: I found one of the documents that I thought had this and it didn't, and I briefly attempted to run the calculation again; I think 10^6 cost reduction is plausible for some estimates of how much computation the brain is using, but not others.]
This comment is about interconnect losses, based on things I learned from attending a small conference on energy-efficient electronics at UC Berkeley in 2013. I can’t immediately find my notes or the handouts so am going off memory.
Eli Yablonovitch kicked off the conference with the big picture. It’s all about interconnect losses, he said. The formula is ½CV² from charging and discharging the (unintentional / stray) "capacitor" in which one "plate" is the interconnect wire and the other "plate" is any other conductive stuff in its vicinity.
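As a rough numeric illustration of that formula (the wire capacitance and voltage below are assumed ballpark values, not figures from the conference):

```python
# Illustrative 1/2 C V^2 switching energy for a single interconnect wire.
import math

cap_per_um = 0.2e-15    # F/um, assumed typical order of magnitude for on-chip wiring
wire_len_um = 1000      # a 1 mm wire
V = 1.0                 # volts, assumed supply/swing voltage

C = cap_per_um * wire_len_um
E_switch = 0.5 * C * V**2   # energy dissipated charging/discharging the stray capacitor

k_B, T = 1.380649e-23, 300
landauer = k_B * T * math.log(2)

print(f"1/2 C V^2 for a 1 mm wire at 1 V: {E_switch:.1e} J (~100 fJ)")
print(f"That is ~{E_switch / landauer:.0e} x the Landauer limit at 300 K")
```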
There doesn’t se... (read more)
This is frustrating for me, as I have already laid out my core claims and you haven't clarified which (if any) you disagree with. Perhaps you are uncertain - that's fine, and I can kind of guess based on your arguments, but it still means we are talking past each other more than I'd prefer.
It doesn't matter whether you use 10mV or 0.015mV as in your example above, as Landauer analysis bounds the energy of a bit, not the voltage. For high reliability interconnect you need ~1eV which could be achieved in theory by one electron at one volt naturally, but using 10mV would require ~100 electron charges and 0.015mV would require almost 1e5 electron charges, the latter of which doesn't seem viable for nanowire interconnect, and doesn't change the energy per bit requirements regardless.
The wire must use ~1eV to represent and transmit one bit (for high reliability interconnect) to the receiving device across the wire exit surface, regardless of the wire width.
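The charge-count arithmetic here is simple to spell out (this just restates the numbers in the two paragraphs above):

```python
# If one bit must carry ~1 eV, then at signaling voltage V each electron charge
# carries e*V joules = V electron-volts, so you need ~ (1 eV) / V electron charges per bit.
bit_energy_eV = 1.0
for V in (1.0, 10e-3, 0.015e-3):          # 1 V, 10 mV, 0.015 mV
    n_electrons = bit_energy_eV / V
    print(f"V = {V * 1e3:g} mV  ->  ~{n_electrons:,.0f} electron charges per bit")
```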
Now we notice that we can divid... (read more)
Gah, against my better judgment I’m gonna carry on for at least one more reply.
I think it’s wrong to think of a wire as being divided into a bunch of tiles each of which should be treated like a separate bit.
Back to the basic Landauer analysis: Why does a bit-copy operation require kT of energy dissipation? Because we go from four configurations (00,01,10,11) to two (00,11). Thermodynamics says we can’t reduce the number of microstates overall, so if the number of possible chip states goes down, we need to make up for it by increasing the temperature (and hence number of occupied microstates) elsewhere in the environment, i.e. we need to dissipate energy / dump heat.
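Spelled out, the microstate accounting described above is (a standard derivation, not specific to this thread):

$$S_{\text{before}} = k\ln 4, \qquad S_{\text{after}} = k\ln 2, \qquad \Delta S_{\text{chip}} = -k\ln 2 \;\Rightarrow\; Q_{\text{dissipated}} \ge kT\ln 2 \approx 0.018\ \text{eV at } 300\ \text{K}.$$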
OK, now consider a situation where we’re transferring information by raising or lowering the voltage on a wire. Define V(X) = voltage of the wire at location X and V(X+1nm) = voltage of the wire at location X+1nm (or whatever the supposed “tile size” is). As it turns out, under practical conditions and at the level of accuracy that matters, V(X) = V(X+1nm) always. No surprise—wires are conducto... (read more)
The mean free path of conduction electrons in copper at room temperature is ~40 nm. Cold pure metals can have much greater mean free paths. Also, a copper atom is ~0.1 nm, not ~1 nm.
This is the crux of it. I made the same comment here before seeing this comment chain.
Also a valid point. @jacob_cannell is making a strong claim: that the energy lost by communicating a bit is the same scale as the energy lost by all other means, by arbitrarily dividing by 1 nm so that the units can be compared. If this were the case, then we would have known about it for a hundred years. Instead, it is extremely difficult to measure the extremely tiny amounts of heat that are actually generated by deleting a bit, such that it's only been done within the last decade.
This arbitrary choice leads to a dramatically overestimated heat cost of computation, and it ruins the rest of the analysis.
@Alexander Gietelink Oldenziel, for whatever it is worth, I, a physicist working in nanoelectronics, rec... (read more)
For what it's worth, I think both sides of this debate appear strangely overconfident in claims that seem quite nontrivial to me. When even properly interpreting the Landauer bound is challenging due to a lack of good understanding of the foundations of thermodynamics, it seems like you should be keeping a more open mind before seeing experimental results.
At this point, I think the remarkable agreement between the wire energies calculated by Jacob and the actual wire energies reported in the literature is too good to be a coincidence. However, I suspect the agreement might be the result of some dimensional analysis magic as opposed to his model actually being good. I've been suspicious of the de Broglie wavelength-sized tile model of a wire since the moment I first saw it, but it's possible that there's some other fundamental length scale that just so happens to be around 1 nm and therefore makes the formulas work out.
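One candidate scale that is easy to check is the thermal de Broglie wavelength of a conduction electron at room temperature (standard formula; whether it is the physically relevant scale here is exactly what is in dispute):

```python
# Thermal de Broglie wavelength of an electron: lambda = h / sqrt(2 * pi * m_e * k_B * T).
import math

h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300                  # K

wavelength = h / math.sqrt(2 * math.pi * m_e * k_B * T)
print(f"Electron thermal de Broglie wavelength at 300 K: {wavelength * 1e9:.1f} nm")
# ~4 nm: the same few-nm order of magnitude as the tile size discussed, though not exactly 1 nm.
```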
The Landauer limit was first proposed in 1961, so the fact that people have been sending binary information over wires since 1840 seems to be irrelevant in this context.
Dear spxtr,
Things got heated here. I and many others are grateful for your effort to share your expertise. Is there a way in which you would feel comfortable continuing to engage?
Remember that for the purposes of the prize pool there is no need to convince Cannell that you are right. In fact, I will not judge veracity at all, just contribution to the debate (on which metric you're doing great!).
Dear Jake,
This is the second person in this thread who has explicitly signalled the need to disengage. I also realize this is a charged topic and it's easy for things to get heated when you're just honestly trying to engage.
Best, Alexander
The post is making somewhat outlandish claims about thermodynamics. My initial response was along the lines of "of course this is wrong. Moving on." I gave it another look today. In one of the first sections I found (what I think is) a crucial mistake. As such, I didn't read the rest. I assume it is also wrong.
The original post said:
... (read more)
Fwiw I did spot-check this post at the time, although I did not share it at the time (bad priors). Here it goes:
Yes, it’s probably approximately right, but you need to buy that these are the right assumptions. However, these assumptions also make the question somewhat unimportant for EA purposes, because even if the brain is one of the most efficient for its specs, including a few pounds and watts, you could still believe doom or foom could happen with the same specs, except with a few tons and megawatts, or for some other specs (quantum computers, somewhat soon, or ... (read more)
I think it would be helpful if you specified a little more precisely which claims of Jake's you want spot-checked. His posts are pretty broad, and not everything, even in the ones about efficiency, is just about efficiency.
I'm not interested in the prize, but as long as we're spot-checking, this paragraph bothered me:
... (read more)
I hope that my two comments[1][2] helped you save $250.