Here is a presentation from a researcher at MIT on a novel way of designing computer processors. It relies on performing approximate, rather than exact, mathematical operations (like the meat-based processor in our heads!). The claimed benefit is a 10,000-fold improvement in speed, while the errors introduced by the approximations are postulated to be insignificant in many applications.

Slide #2 of the presentation offers a fascinating insight: we currently work around the limitations of the processing substrate to implement precise computation, and it is becoming increasingly difficult:



Computations specified by programmers are implemented as behavior in physical material


• Hardware designer’s job: efficiently implement Math (what software wants) using Physics (what silicon offers)

    Math offers: (near) perfect arithmetic; uniform memory delay
    Physics offers: noisy, approximate operations; delay ~ distance

• Increasingly difficult as decades passed and transistor counts exploded

• Now each instruction (increment, load register, occasionally multiply) invokes >10M transistor operations, even though a single transistor can perform, for instance, an approximate exponentiation or logarithm



The parallels and contrasts with our own brain are what interested me the most. Perhaps one day the most powerful computers will be running on "corrupted hardware" of sorts.


Analog computing is quite old. The issue with going analog is that when you have errors and noise as you do calculations, you can't do long chains of calculations without ending up with just the noise.
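The noise-accumulation point can be illustrated with a minimal simulation. This is a sketch, not a model of any particular circuit: the 1% multiplicative noise per stage (`sigma=0.01`) and the chain lengths are assumed values chosen for illustration. The relative error behaves like a random walk, so its spread grows roughly with the square root of the chain length:

```python
import math
import random
import statistics

def noisy_op(x, sigma=0.01):
    """One analog stage: the result carries ~1% multiplicative noise (assumed)."""
    return x * (1.0 + random.gauss(0.0, sigma))

def chain(n_stages):
    """Feed a value through n_stages noisy operations in sequence."""
    x = 1.0
    for _ in range(n_stages):
        x = noisy_op(x)
    return x

random.seed(0)
# Spread of log(result) over many trials, for short vs. long chains.
# The long chain's spread is roughly sqrt(100) = 10x the short chain's.
spread_10 = statistics.stdev(math.log(chain(10)) for _ in range(500))
spread_1000 = statistics.stdev(math.log(chain(1000)) for _ in range(500))
```

After 1000 stages the accumulated drift dwarfs the per-stage error, which is exactly why long chains of uncorrected analog calculations end up as "just the noise."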

OP link isn't analog, it's digital. (There are analog approaches with large speedups, of course.)

It's analog on the inside. If you chain these, the errors will creep in exactly as indicated in Dmytry's post. That's why the suggested domains are such that either it won't be iterated, or errors are expected by the software, or the errors from this source are much smaller than the other errors in the system.

Yep. Well, one could use shorter chains, clamp the signal to 0 or 1, and then feed it to another short chain, eliminating noise to some extent. The brain does something like that, basically. If you have sigmoid functions along the chain, you effectively digitize the signal and get rid of the noise.
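The clamping idea can be sketched with a steep logistic function. This is an illustrative toy, not a circuit model; the gain of 20 and the 0.5 threshold are assumed parameters. A value that has drifted off the 0/1 rails after a short noisy chain gets pushed back toward a rail before the next chain:

```python
import math

def steep_sigmoid(x, gain=20.0):
    """Logistic function centred at 0.5; a high gain (assumed: 20)
    pushes inputs above 0.5 toward 1 and inputs below 0.5 toward 0."""
    return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

# A logical '1' that picked up noise along a short analog chain...
noisy_one = 0.8
restored = steep_sigmoid(noisy_one)  # close to 1.0 again
```

Regenerating the signal between short chains is essentially what digital logic gates do at every stage, which is why noise does not accumulate across a digital pipeline.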

That would make things worse, not better. Clamping destroys the noise by adding more, in an amount that 'happens' to move the value to an integer...


You'd lose some performance that way though, unless you wanted to do away with function composition. (That would probably be a bad idea.) It's not entirely clear (to me) that the cost of making this work in a real-life environment wouldn't nullify the performance gains.

Especially given that we already do a fair bit of approximation (eg floating-point calculations).

[This comment is no longer endorsed by its author]

The OP seemed to indicate that the errors come from the logarithmic approximation and from using orders of magnitude fewer transistors, forfeiting exactness. What is analogue about this? Or does "analogue" mean something different than I thought, and simply refer to there being error bars on each calculation?

The only way you can calculate a logarithm, or an exponent, or indeed anything, with one transistor is by making an analogue circuit.

Here's the patent, since I couldn't find any other detailed documentation. It describes two separate implementations:

  • Digital, storing log(x) as a fixed-point number and performing ordinary digital arithmetic on it.
  • Analogue, storing x as floating-point with digital sign and exponent but analogue mantissa. It then describes some mixed analogue/digital circuits to perform the requisite arithmetic.

The slides linked in the OP are about the digital one, and only once mention the possibility of analogue as an intuition pump. I don't know which one the quoted performance numbers are for.

Actually, reading the whole article: it is awfully vague, to the point that it is hard to tell whether it is about something entirely digital using approximate circuitry (which is also known as having fewer bits), or part analogue, or what. It is basically devoid of useful information; there are just some claims.

The "even though a single transistor can perform, for instance, an approximate exponentiate or logarithm" strongly suggested some analog circuitry.

On the other hand, later in the article he speaks of a fully digital implementation that would store all numbers as logarithms. But then most of the operations (multiplication, division, square root) are implemented by the integer unit of an ordinary CPU, and he only needs a special function to aid addition and subtraction. So why doesn't he just note that the only thing he ever needs to do is add that one function to an existing integer CPU? I myself have personally implemented code that stored fixed-point logarithms and worked on logarithms of values, because that was a good optimization. I don't see what exactly is innovative here. I'm sure a lot of people, in a lot of circumstances, have used an integer CPU working on fixed-point logarithms to do floating-point computations.
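The fixed-point-logarithm scheme described above can be sketched in a few lines. This is a minimal illustration under assumed parameters (16 fractional bits via `SCALE`; positive operands only): multiplication becomes integer addition of the stored logs, while addition needs one special function, the so-called Gaussian logarithm log2(1 + 2^-d), which hardware would approximate with a small table:

```python
import math

SCALE = 1 << 16  # assumed: 16 fractional bits of fixed point

def to_log(x):
    """Encode a positive value as a fixed-point base-2 logarithm."""
    return round(math.log2(x) * SCALE)

def from_log(l):
    """Decode a fixed-point log back to an ordinary float."""
    return 2.0 ** (l / SCALE)

def log_mul(a, b):
    """Multiplication in the log domain is just integer addition."""
    return a + b

def log_add(a, b):
    """Addition needs the one special function log2(1 + 2**-d)
    (the 'Gaussian logarithm'); computed exactly here, but hardware
    would use a small lookup table or approximation."""
    hi, lo = max(a, b), min(a, b)
    d = (hi - lo) / SCALE
    return hi + round(math.log2(1.0 + 2.0 ** (-d)) * SCALE)

x, y = to_log(3.0), to_log(5.0)
product = from_log(log_mul(x, y))  # ~15.0, up to fixed-point rounding
total = from_log(log_add(x, y))    # ~8.0, up to fixed-point rounding
```

Division and square root fall out the same way (integer subtraction and a right shift of the stored log), which is the commenter's point: only `log_add` requires anything beyond an ordinary integer unit.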

I guess I was thrown off by assuming that the approach is indeed novel.

Furthermore, there are FPGAs he could program to make a working example within two months, if he's serious.

By the way, with regard to IEEE floating point: there has been some terrible lobbying for the standard, and the standard is bad for both software and hardware (read up on denormals).


For the record, this is the presenter's webpage which has a text summary similar to the presentation and a blog. The blog has not been updated since 2010, when the presentation was given. The only thing more recent than 2010 that a cursory Google search found was a 2011 Business Week article on him which seems to just be a summary of his standard presentation.

Nifty. I've added it to my footnote on the topic:,%20present,%20and%20future#fn3