In Undergraduation, Paul Graham said something that has always stuck with me:

On the whole, grad school is probably better than most alternatives. You meet a lot of smart people, and your glum procrastination will at least be a powerful common bond. And of course you have a PhD at the end. I forgot about that. I suppose that's worth something.

The greatest advantage of a PhD (besides being the union card of academia, of course) may be that it gives you some baseline confidence. For example, the Honeywell thermostats in my house have the most atrocious UI. My mother, who has the same model, diligently spent a day reading the user's manual to learn how to operate hers. She assumed the problem was with her. But I can think to myself "If someone with a PhD in computer science can't understand this thermostat, it must be badly designed."

I thought about this today. In a pull request at work, someone had some code that looked like this:

const arr = [stuff];
const that = this;

arr.forEach(function () {
  that.method();
});

Someone else commented about how that = this isn't necessary. You should just be able to do this.method(). I've been programming with JavaScript for about eight years now, and I found myself confused.

I know that the value of this is supposed to be based on what calls the function. Like if you have obj.fn(), inside of fn, this will be obj. But what happens when you just do fn()?

That is where I get confused. Without looking it up, my memory said that it usually defaults to the global object, but that there might be other rules I was forgetting having to do with scope. When I looked it up, it did seem to default to the global object, but I wasn't 100% sure; maybe scope was involved after all. The docs didn't seem clear. So it took me some time to come up with an example to test each hypothesis, which let me prove to myself that it does in fact default to the global object (at least outside of strict mode), and that scope has nothing to do with it.
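To make this concrete, here's a minimal sketch (mine, not from the PR) of the kind of test I'm describing: the same function yields a different `this` depending only on how it is called, not where it is defined.

```javascript
// A function's `this` is set by the call site, not by lexical scope.
const obj = {
  fn: function () {
    return this;
  },
};

// Called as a method: `this` is the object before the dot.
console.log(obj.fn() === obj); // true

// Same function, plain call: `this` falls back to the global object
// in non-strict code (in strict mode it would be undefined instead).
const bare = obj.fn;
console.log(bare() === globalThis || bare() === undefined); // true
```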

But then there is the question of what is going on in Array.prototype.forEach. How is that callback function getting executed? Does the value of this get "set by the call"? E.g., does something like obj.cb() happen, in which case the value of this is obj? If so, what is obj? Or does it get executed like cb()? The docs don't really make that clear IMO.
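For what it's worth, here's a hedged sketch of what the spec seems to say: forEach takes an optional second argument (thisArg) that becomes the callback's this, and when you omit it the callback runs like a bare cb() call, with no hidden obj receiver.

```javascript
// forEach's optional second argument (thisArg) becomes `this` inside
// the callback; omitted, the call behaves like a plain cb().
const results = [];

[1].forEach(function () {
  // Non-strict code sees the global object here; strict mode sees
  // undefined. Either way, there is no mystery receiver object.
  results.push(this === globalThis || this === undefined);
});

const ctx = { tag: "ctx" };
[1].forEach(function () {
  results.push(this === ctx); // thisArg pins `this` explicitly
}, ctx);

console.log(results); // [true, true]
```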

Anyway, the point of explaining all of this is to give you a sense that I find this stuff a little tricky to think about.

So what does that mean? Am I dumb? Am I a bad programmer? Not to sound arrogant, but I don't think either of those things are the case. I don't have a PhD like Paul Graham does, but I do have other credentials and other reasons to believe that I am reasonably intelligent. So then, I think back to that quote:

If someone with a PhD in computer science can't understand this thermostat, it must be badly designed.

I suspect that something similar is going on here.

If someone with eight years of experience programming in JavaScript can't understand this, it must be badly designed.

Consider this quote from Edsger Dijkstra’s 1972 Turing Award lecture, The Humble Programmer:

We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.

The idea is that the human mind is small and fragile, so when programming, we have to design things so that they have soft edges and round corners and can fit inside our limited minds.

Here's another perspective. When dealing with software, we should think like usability designers. If you're a usability designer and you see, through user testing, that people have trouble understanding your product, you don't blame the user! You blame the product. You go back to the drawing board and figure out a way to make it simpler.

We shouldn't be afraid to adopt that mindset when we come across tricky things in our code. Stop thinking "This is a basic thing that I'm supposed to know". There is reason to believe that we are intelligent people, so if we have trouble understanding software, maybe the problem is with how the software is designed, rather than with our own intelligence.

13 comments

In this example, I blame the design of JavaScript. It's certainly possible to write bad code in any language, but I take exception to the gratuitous footguns.

I do value readability highly. Code is for humans, not just machines, or we'd still be writing programs in binary.

On the other hand, there really are basic things that you are supposed to know. Not being allowed to use them might make your code more comprehensible to a beginner, but it will also bloat the code. You shouldn't have to reinvent the standard library.

There are certain elegant concepts like sets, or monads, or recursion, which a beginner might struggle with, but working around not having them when you need them probably isn't worth it.

I think this lesson extends beyond the scope of programming, even beyond the more general scope of technology. We should not be too humble before complicated, hard-to-understand things. We should not be too quick to assume the fault is in our inability to comprehend them. We should always consider the possibility that the fault is theirs for being needlessly complicated, or even just plain nonsense.

I've seen some essays (often in the area of philosophy and/or religion) that, I believe, try to take advantage of this tendency. They support their argument with cryptic, cumbersome, and confusing reasoning that seems to me like an attempt to force would-be challengers to give up on the discourse for failing to understand it. Their supporters, of course, can remain; they are not trying to disprove the argument, so they don't really need to understand it.

To fight this mentality, we need to give more credit to ourselves. Is the person making the argument smarter than us? Maybe. Does their intelligence exceed our own so much that they can create coherent arguments we cannot understand no matter how hard we try? Very unlikely. Maybe not outright impossible, but the probability is low enough that we should insist on the argument being flawed even when they try to convince us we simply fail to understand it.

Yes! This! I wanted to make this point in the OP as well, but couldn't think of good examples or arguments, so I just stuck to programming. I'd love to see you or someone else expand on it in a separate post though.

Now that I think about it, Getting Eulered is similar.

The language syntax and semantics are no longer the only part of its interface to the user (i.e., the programmer). There is also the IDE, with all its syntax highlighting, contextual jumps and lookups, and often inline type annotations. So how understandable a piece of code is depends to a large degree on this kind of tooling. Figuring out what your 'this' or 'that' refers to is much easier if you can just hover over it and the IDE will show you.

Be careful with this, though. Javascript is a strange and complex language, and it's not very amenable to static analysis.

Most of the time it's "easy" to determine what this refers to, but sometimes it's literally impossible, because it's ambiguous. For example, it might be determined at runtime, and vary from one invocation of a function to another. A good IDE will hopefully say "I don't know what this is", when it doesn't know. But on that boundary between known and unknowable, the IDE is liable to get confused (and who can blame it?), and this is exactly the sort of place in your code that bugs tend to crop up.

All that is to say, take what your IDE tells you with a grain of salt.
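A small sketch of the ambiguity being described: the same function reached through two different objects gets a different this on each invocation, so only the runtime call site can settle it.

```javascript
// One function, two receivers: `this` differs per invocation, which is
// exactly the case an IDE cannot resolve statically.
function describe() {
  return this.label;
}

const a = { label: "a", describe };
const b = { label: "b", describe };

console.log(a.describe()); // "a"
console.log(b.describe()); // "b"
```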

this is the context; it is necessarily contextual, as opposed to being lexical. The old idiom is to put var self = this; in the preamble of a function that wishes to expose its context to the functions it defines. The current idiom is arrow functions, whose context is their parent's context.
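A quick sketch contrasting the two idioms (the counter object is invented for illustration): the old one captures the context in a variable, the new one lets an arrow function inherit it.

```javascript
// Old idiom vs. arrow functions for keeping hold of `this`.
const counter = {
  count: 0,
  incrementAllOld: function (arr) {
    var self = this; // capture the context for the inner function
    arr.forEach(function () {
      self.count += 1; // plain `this` here would NOT be `counter`
    });
  },
  incrementAllNew: function (arr) {
    arr.forEach(() => {
      this.count += 1; // arrow: `this` is the enclosing method's `this`
    });
  },
};

counter.incrementAllOld([1, 2]);
counter.incrementAllNew([1, 2]);
console.log(counter.count); // 4
```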

Javascript is indeed notoriously opaque in terms of assigning "self" to a function call. There are multiple ways to do it more explicitly, including prototype inheritance, using Function.prototype.bind() etc., all being workarounds for passing and calling "selfless" methods. So yeah, I agree with your main point.
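As a sketch of the Function.prototype.bind() workaround mentioned above (the greeter object is just an invented example): bind fixes this before the method is passed around "selflessly".

```javascript
// bind() returns a new function with `this` permanently set.
const greeter = {
  name: "greeter",
  hello: function () {
    return "hello from " + this.name;
  },
};

const bound = greeter.hello.bind(greeter);

// An unbound copy of greeter.hello would look up `name` on whatever
// `this` the call site provides; the bound copy always uses greeter.
console.log(bound()); // "hello from greeter"
```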

I was reminded how some practices that other languages use make these kinds of worries happen less. For example, it might feel stupid that one ends up naming every first parameter of a class function "self", but in a situation like this, where within the forEach inline function you could plausibly have different levels of self-reference, you could avoid it by deliberately using a non-standard name such as "me".

I do think that, as a programmer, the line is a good one: it makes the code more readable and mitigates the confusing design of the language.

I also tripped over a little on who the "users" are here: the post starts out talking about users of the program we are writing, but then treats programmers as users of the language, or fellow programmers as readers (users) of our code.

It seems likely to me that there is good, well-designed code that is hard to understand by the very nature of what it is or what problem it is solving. What percentage of all hard-to-understand-by-smart-and-competent-people code is that kind and what percentage is the kind you describe is probably the key thing you'd want to know.

Once you know that then you can know what to do when investigating code you do not understand.

The best thing I can come up with is "ehh, probably most hard to understand code is badly designed code?".

 

(As an unrelated aside, prompted by me just now closing this tab: Without consciously thinking about it I always try using my code editor hotkeys while editing text in non-coding contexts, and one of those hotkeys closes my browser tab!)

That's a good point. I may have come across too strong with my point. What I intended to say is that you should (strongly) consider that the code isn't simple enough, not that you should assume it by default.

I always try using my code editor hotkeys while editing text in non-coding contexts

AutoHotKey or other remapping/scripting utility perhaps?

Even if you can't remap usefully within context at least you can stop it from doing something annoying.

I started programming in Clojure this year, and it's the only programming language I've ever used that enforces round corners and soft edges. It's very readable (once you get used to the parentheses). It's functional. Data is immutable and persistent. It creates kind of a safe environment that has made programming fun again (for me). 

I'm glad to hear that! I started learning Haskell this year but haven't reached the point you're describing.