[ Question ]

Why isn't JS a popular language for deep learning?

by Will Clark · 1 min read · 8th Oct 2020 · 21 comments


Machine Learning · Practical · AI · Frontpage

I've just read this book, which introduces TensorFlow.js: https://www.manning.com/books/deep-learning-with-javascript

Now I'm wondering why JS is so unpopular for deep learning, especially compared to Python. For deep learning there are all the usual benefits of using JS, e.g.:

  • easy to learn
  • huge community
  • flexible about paradigms
  • write code once, run anywhere (especially useful for training/deploying models as well as cool applications like federated learning on client devices).

Plus, compared to Python there's the huge added benefit of proper static types and the rich tooling of TypeScript. I know Python 3 has type hints, but they're a really poor experience compared to any properly typed language. Playing around with TensorFlow.js, I've found it catches plenty of mistakes that in Python would require me to read and understand a bunch of code. But with VSCode plugins, I just hover over a variable and it tells me what I'm doing wrong immediately.

So why is no-one really using this? Is it just the momentum of the Python deep learning community? Have I missed some huge downside? Is there a thriving underground JS deep learning community I've not been initiated into?


10 Answers

One of the biggest reasons, I think, is the relatively deep integration of the Python ecosystem with the C ecosystem. While Python itself is no faster than JavaScript, it's pretty easy to call C code from Python in a way that lets you write code that's actually fast enough for machine learning purposes. I don't think anything similar exists in JavaScript.
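To give a hedged sketch of what that bridge looks like in practice — this uses only the standard-library ctypes module against the system C math library, and assumes a Unix-like platform where the library lookup succeeds:

```python
import ctypes
import ctypes.util

# Load the C math library (libm); the name resolution is platform-dependent.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of cos(double) so arguments are marshalled correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Real numerical libraries (numpy, scipy, PyTorch) use heavier machinery than ctypes, but the point stands: the hot loops live in compiled code and Python just orchestrates them.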

I haven't researched this extensively but have used the Python data science toolkit for a while now and so can comment on its advantages.

To start, I think it's important to reframe the question a bit. At least in my neck of the woods, very few people just do deep learning with Python. Instead, a lot of people use Python to do Machine Learning, Data Science, Stats (although hardcore stats seems to have a historical bias towards R). This leads to two big benefits of using Python: pretty good support for vectorized operations and numerical computing (via calling into lower level languages of course and also Cython) and a toolkit for "full stack" data science and machine learning.
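The vectorized style looks roughly like this — a small sketch using nothing beyond vanilla numpy:

```python
import numpy as np

# Standardize a million samples with no Python-level loop; the arithmetic
# below is dispatched to numpy's compiled C inner loops.
x = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1_000_000)
z = (x - x.mean()) / x.std()

print(round(z.mean(), 6), round(z.std(), 6))  # approximately 0.0 and 1.0
```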

Regarding the numerical computing side of things, I'm not super up-to-date on the JS numerical computing ecosystem, but when I last checked JS had neither good pre-existing libraries comparable to numpy nor as good a setup for integrating with the lower-level numerical computing ecosystem (though in fairness I didn't look hard).

Regarding the full stack ML / DS point: in practice, modeling is a small part of the overall ML / DS workflow, especially once you go outside the realm of benchmark datasets or introduce matters of scale. A full workflow involves handling data processing and analysis (transformation, plotting, aggregation) in addition to building models. Python (and R, for what it's worth) has a suite of battle-hardened libraries and tools for both data processing -- things in the vein of airflow, luigi, etc. -- and analysis -- pandas, scipy, seaborn, matplotlib, etc. -- that, as far as I know, JavaScript lacks.
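A toy example of that "everything around the model" work in pandas (the data here is invented):

```python
import pandas as pd

# Invented experiment log: the kind of tabular bookkeeping that surrounds modeling.
runs = pd.DataFrame({
    "model": ["cnn", "cnn", "rnn", "rnn"],
    "accuracy": [0.91, 0.93, 0.88, 0.90],
})

# Aggregate to the best score per model family.
best = runs.groupby("model")["accuracy"].max()
print(best.to_dict())  # {'cnn': 0.93, 'rnn': 0.9}
```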

ETA: To be clear, Python has lots of downsides and doesn't solve any of these problems perfectly, but the question focused on relative to JS so I tried to answer in the same vein.

I've used both JS and Python extensively for about a decade (and TS for a couple of years). I think they are all very effective languages.

For deep learning there are all the usual benefits of using JS, e.g.:

  • easy to learn
  • huge community
  • flexible about paradigms
  • write code once, run anywhere (especially useful for training/deploying models as well as cool applications like federated learning on client devices).

I'm not really convinced JS has any useful benefit over Python in these areas except for running in the browser. I think Python runs everywhere else JS would run. I don't think running in the browser has enough benefit to enough projects to overcome the already-built institutional knowledge around Python deep learning.  Institutional knowledge is very important.

I know Python3 has type hints, but it's a really horrible experience compared to any proper typed language.

I do not find this to be the case. Note that I'm not saying Python typing is as effective as, say, TS or C# or many other languages with typing "built-in"; I'm just saying I don't find it to be a horrible experience.
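For instance, here's a minimal sketch of the kind of hinting that works fine for me in Python (the function and names are invented for illustration):

```python
from typing import List

def mean(xs: List[float]) -> float:
    """Average of a non-empty list of floats."""
    return sum(xs) / len(xs)

# A checker such as mypy flags mean("oops") before the code ever runs;
# plain Python would only fail (or silently misbehave) at runtime.
print(mean([1.0, 2.0, 3.0]))  # 2.0
```

IDEs like PyCharm and VSCode (via Pylance) surface the same information on hover, much like TS tooling does.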

In both languages it's hard to get a consistent experience with libraries that don't properly implement types. On one hand, DefinitelyTyped provides a larger variety of types for third-party libraries than typeshed does. On the other hand, IME, a good IDE is much better able to infer type information from your typical Python library than from your typical JS library.

That being said, I just don't think many people doing deep learning stuff are doing any sort of type checking anyway.  

I think if types are very important to you, depending on what about types you're looking for, you're much more likely to move to Java or C++ or Julia or something.

But with VSCode plugins, I just hover over a variable and it tells me what I'm doing wrong immediately.

I use PyCharm, not VSCode, but it gives you a lot of that sort of thing with Python code because of its native support for typing and type inference. However, this isn't a very useful comparison point without a much more detailed comparison of what each offers.

 

In general, I think the real answer to your question is that JS isn't obviously better or obviously better enough and thus there's just no push to move to JS. 

Programming language popularity is mostly driven by a positive feedback loop between which languages the projects/libraries/resources are written in, and which language the developers are most experienced and comfortable with. The properties of the languages do matter, since people will sometimes ignore the preexisting resources and use the language they think is best, and second-movers sometimes have an advantage in getting to use the lessons learned from a successful library. Causally speaking, though, language popularity today is mostly the result of language popularity yesterday.

As for the merits of the languages themselves, I would say that Python[2016] was overwhelmingly better than Javascript[2016], but Python[2020] vs Typescript[2020] is roughly a tossup. Typescript annotations and tooling currently seem better than MyPy's. Python's core syntax and standard library are much more expressive than Javascript's, which is nice. My one experience with Javascript FFI was bad enough that I would consider writing part of a Javascript project in C to be a desperation move, whereas writing parts of a Python project in C is a relatively normal thing to do.

Others have said most of what I would have, but I'll add one more point: TypeScript doesn't (AFAICT) support operator overloading, and in ML you do want that.  ML code is mostly written by scientists, based on papers they've read or written, and they want the code to resemble the math notation in the papers as closely as possible, to make it easy to check for correctness by eye.  For example, here's a line from a function to compute the split rhat statistic for tensors n and m, given other tensors B and W:

rhat = np.sqrt((n - 1) / n + (m + 1) / m * B / W)

In TypeScript, I guess you would have to rewrite this to something like 

rhat = sqrt((n.Minus(1).Divide(n)).Plus(m.Plus(1).Divide(m).Multiply(B).Divide(W)))

...which, like, clearly the scientists could do that rewrite, but they won't unless you offer them something really compelling in exchange.  TypeScript-tier type safety won't do it; I doubt if even static tensor shape checking would be good enough.
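The reason the numpy line keeps its paper-like shape is Python's operator overloading: a class can define `__add__`, `__sub__`, `__mul__`, `__truediv__` and so on. A toy sketch (the `Scalar` class is invented; numpy does the same thing for whole tensors):

```python
import math

class Scalar:
    """Invented one-element 'tensor' showing the dunder methods numpy relies on."""
    def __init__(self, v):
        self.v = float(v)
    def __add__(self, other):
        return Scalar(self.v + _val(other))
    def __sub__(self, other):
        return Scalar(self.v - _val(other))
    def __mul__(self, other):
        return Scalar(self.v * _val(other))
    def __truediv__(self, other):
        return Scalar(self.v / _val(other))

def _val(x):
    return x.v if isinstance(x, Scalar) else float(x)

def sqrt(x):
    return Scalar(math.sqrt(_val(x)))

# With overloading, the rhat line keeps the shape of the math in the paper:
n, m, B, W = Scalar(100), Scalar(50), Scalar(2.0), Scalar(1.9)
rhat = sqrt((n - 1) / n + (m + 1) / m * B / W)
print(round(rhat.v, 4))
```

TypeScript can't do this today: without overloading, the expression has to become the `.Minus(...).Divide(...)` chain above.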

One thing no-one has mentioned yet is who is writing the code: I'm assuming your background is in web development/software engineering? Most deep learning users I've encountered are ex-scientists. This strongly discounts your benefits. First, many of them only know one language (most likely one of Python, R or Matlab, though there's a smattering of others). Learning a new language is hard for these ex-scientists (remember, someone doing a CS degree will likely have seen 5+ languages), especially one like JavaScript: rapidly changing, libraries with breaking changes, and a whole new set of tools/processes to learn (e.g. minification or tree-shaking).

Second, while the JS community is huge, the vast majority of it is working on front-end web development or adjacent fields. In the Python world, systems like Anaconda provide a one-click install of all the libraries these users need—JS, on the other hand, has no equivalent tool, and the libraries they need simply do not exist (or are unmaintained—the reader library for the standard astronomy format (FITS) hasn't been touched in years and doesn't even support the full standard, whereas Python's equivalent has massive community support and can both read and write FITS).

Third, these ex-scientists are going to write imperative code (for better or worse). Types are an extra thing to think about, and their eyes are going to gloss over if you try to explain the benefit of well designed type systems.

Fourth, "write code once, run anywhere" does not exist for deep learning: look at the number of GPU clusters that have been built using Nvidia GPUs (rather than AMD or a different chipset). Deep learning is highly tied to hardware, and especially on mobile, you are forced into a specific framework per device in order to get reasonable performance (which in some cases can be critical if a certain application is to succeed). In many cases, using JS would only add to the languages needed, rather than reduce them.

It's possible that JS may at some point in the future evolve in such a way that it becomes a natural choice for deep learning, but it's worth keeping the following things in mind:

  1. There's a general estimate that it takes at least 10 years for a (scientific/numerical) ecosystem to mature: Python's is 20+ years old (numpy's predecessors originate in the 90s). Maybe if we wait 10 years the issues with JS will have gone away (stability, diversity of libraries, easy-to-use distributions).
  2. Few languages have broad usage across the different areas of numerical computing; instead they get pigeon-holed into a specific domain, or are domain-specific (see R for stats or Stan for Bayesian analysis)—Python itself still loses to R for stats (and note the use of rpy2 and Julia's ability to call Python—languages are added, not removed).
  3. Neither the language nor the tooling of JS especially supports numerical computing: you need things like Python's slicing or tools like Cython to make it worth using (R and Matlab have similar language or tooling support). It's possible JS will add these, but that would be a major change to the language (and you've got the lag time for such features to roll out—ex-scientists aren't the type of people who continuously upgrade their environment).
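On the slicing point, this is syntax the language itself provides (via `__getitem__`), which numpy hooks into — a small sketch:

```python
import numpy as np

img = np.arange(16).reshape(4, 4)

# Every other row, columns 1 through 2 — one expression, no loops.
patch = img[::2, 1:3]
print(patch.tolist())  # [[1, 2], [9, 10]]
```

JS has nothing comparable at the syntax level, so array libraries there fall back on method calls and string-encoded index expressions.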

One other thing to consider is that the language you do your data science work in doesn't have to be the same one you deploy to production. At one point, the company I work at did data science work in R and Python and then compiled models to PFA, which allowed us to run the same model in various other languages. I'm not sure how popular this is, but I suspect you can load TensorFlow models into other languages relatively easily.

Perhaps underappreciated is that numbers in JS are fucked. And I don't just mean in the normal way JS data types have weird conversions; I mean that for a long time the only kind of number JS officially had was the double-precision float (even if the supported operations suggest that's not the whole story), which meant you had to live with floating-point rounding in all arithmetic and with integers silently losing precision above 2^53. Fairly recently BigInt was added, which helps with the integer side of this shortcoming, although it's not necessarily available everywhere yet.

Python, on the other hand, has supported a variety of number formats for a long time, and Python 3 makes arbitrary-precision integers totally seamless, so the user never has to worry about accidentally overflowing a number type's bounds.
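The contrast is easy to demonstrate (the JS behaviour is described in the comments; the code itself is Python):

```python
# Python ints are arbitrary precision, so this never rounds:
big = 2 ** 53
print(big + 1 - big)  # 1

# A JS Number is an IEEE double, which runs out of integer precision at
# 2**53 (there, 2**53 + 1 === 2**53). Python only hits the same wall if
# you opt into floats explicitly:
print(float(big) + 1 - float(big))  # 0.0
```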

Write code once, run anywhere is not possible when you want to use the highly optimized libraries written decades ago. SciPy needs a Fortran compiler to build from source, because parts of it wrap decades-old Fortran code.

JavaScript/TypeScript doesn't feel to me like it's easy to learn. In two years I learned Kotlin, Dart and JavaScript/TypeScript, and JS/TS was the most annoying of them to learn.

The only reason to go for JS seems to be that it has a huge community for some aspects. It doesn't have a huge community for statistics and machine learning, so people who care about the established community still use the established languages. 

While you can theoretically run JS everywhere, it's also not that trivial to use it in non-JS projects like native Android/iOS development, Flutter and Xamarin.

You can also run Python on iOS and Android—it's just that the tooling isn't good. There even seems to be a project to run Python in the browser.