All of silentbob's Comments + Replies

Side point: this whole idea is arguably somewhat opposed to what Cal Newport in Deep Work describes as the "any benefit mindset", i.e. people's tendency to use tools when they can see any benefit in them (Facebook being one example, as it certainly does come with the benefit of keeping you in touch with people you would otherwise have no connection to), while ignoring the hidden costs of these tools (such as the time/attention they require). I think both ideas are worth keeping in mind when evaluating the usefulness of a tool. Ask yourself both whether the usefulness of the tool can be deliberately increased, and whether the tool's benefits are ultimately worth its costs.

2 quanticle 5d
I was thinking of a similar point, which is some programmers' (myself included) tendency to obsess over making small tweaks to their editor/IDE/shell workflow without really paying attention to whether optimizing that workflow to the nth degree actually saves enough time to make the optimization worthwhile. Similarly, a hypothetical AI might be very useful once you understand how to make the perfect prompt, but the time and effort necessary to figure out how to craft the prompt just right isn't worth it. I suspect ChatGPT isn't quite that narrow, however, and I've already seen positive returns to basic experimentation with varying prompts and regenerating answers.

I think it does relate to examples 2 and 3, although I would still differentiate between perfectionism in the sense that you actually keep working on something for a long time to reach perfection on the one hand, and doing nothing because a hypothetical alternative deters you from some immediate action on the other hand. The latter is more what I was going for here.

Good point, agreed. If "pay for a gym membership" turns out to be "do nothing and pay $50 a month for it", then it's certainly worse than "do nothing at home".

I would think that code generation has a much greater appeal to people / is more likely to go viral than code review tools. The latter surely is useful and I'm certain it will be added relatively soon to github/gitlab/bitbucket etc., but if OpenAI wanted to start out building more hype about their product in the world, then generating code makes more sense (similar to how art generating AIs are everywhere now, but very few people would care about art critique AIs).

Can you elaborate? Were there any new findings about the validity of the contents of Predictably Irrational?

2 mikbp 1mo
This [https://retractionwatch.com/2021/09/14/highly-criticized-paper-on-dishonesty-retracted/] came out a few days or weeks after my post.

This is definitely an interesting topic, and I too would like to see a continued discussion as well as more research in the area. I also think that Jeff Nobbs' articles are not a great source, as he seems to twist the facts quite a bit in order to support his theory. This is particularly the case for part 2 of his series - looking into practically any of the linked studies, I found issues with how he summarized them. Some examples:

  • he claims one study showed a 7x increase in cases of cardiovascular deaths and heart attacks, failing to men
... (read more)

I could well imagine that there are strong selection effects at play (more health-concerned people being more likely to give veganism a shot), and the positive effects of the diet just outweighing the possible slight increase in plant oil usage. And I wouldn't even be so sure that vegans on average consume more plant oil than non-vegans - e.g. vegans probably generally consume much less processed food, which is a major source of vegetable oil.

In The Rationalist's Guide to the Galaxy the author discusses the case of a chess game, particularly when a strong chess player faces a much weaker one. In that case it's very easy to predict that the strong player will win with near certainty, even if you have no way to predict the intermediate steps. So there certainly are domains where (some) predictions are easy despite the world's complexity.

My personal rather uninformed take on the AI discussion is that many of the arguments are indeed comparable in a way to the chess example, so the ... (read more)

Assuming slower and more gradual timelines, isn't it likely that we run into some smaller, more manageable AI catastrophes before "everybody falls over dead" due to the first ASI going rogue? Maybe we'll be at a state of sub-human level AGIs for a while, and during that time some of the AIs clearly demonstrate misaligned behavior leading to casualties (and general insights into what is going wrong), in turn leading to a shift in public perception. Of course it might still be unlikely that the whole globe at that point stops improving AIs and/or solves alignment in time, but it would at least push awareness and incentives somewhat into the right direction.

0 lorepieri 8mo
This is the most likely scenario, with AGI getting heavily regulated, similarly to nuclear. It doesn't get much publicity because it's "boring". 
1 Jay Bailey 8mo
This does seem very possible if you assume a slower takeoff.

Isn't it conceivable that improving intelligence turns out to become difficult more quickly than the AI is scaling? E.g. couldn't it be that somewhere around human-level intelligence, improving intelligence by every marginal percent becomes twice as difficult as the previous percent? I admit that doesn't sound very likely, but if that were the case, then even a self-improving AI would potentially improve itself very slowly, and maybe even sub-linearly rather than exponentially, wouldn't it?

The first person (row 2) at times sounds a lot like GPT-3. Particularly their answers "But in the scheme of things, changing your mind says more good things about your personality than it does bad. It shows you have a sense of awareness and curiosity, and that you can admit and reflect when decisions have been flawed or mistakes have been made." and "A hero is defined by his or her choices and actions, not by chance or circumstances that arise. A hero can be brave and willing to sacrifice his or her life, but I think we all have a hero in us — someone who is unselfish and without want of reward, who is determined to help others". Then, however, there's "SAVE THE AMOUNT" and "CORONA COVID-19". This person is confusing.

6 Mo Putera 8mo
I'm reminded of Sarah Constantin's Humans Who Are Not Concentrating Are Not General Intelligences [https://srconstantin.github.io/2019/02/25/humans-who-are-not-concentrating.html]. A quote that resonates with my own experience:

The mug is gone. Please provide mug again if possible.

I found the concept interesting and enjoyed reading the post. Thanks for sharing!

Sidenote: It seems either your website is offline (blog's still there though) or the contact link from your blog is broken. Leads to a 404.

Thanks a lot for your comment! I think you're absolutely right with most points, and I didn't do the best possible job of covering these things in the post, partially due to wanting to keep things somewhat simplistic and partially due to lack of full awareness of these issues. The conflict between the point of easy progress and short-sightedness is most likely quite real, and it seems indeed unlikely that once such a point is reached there will be no setbacks whatsoever. And having such an optimistic expectation would certainly be detrimental. In the end ... (read more)

1 Elisey Gretchko 2y
Thanks for your humbleness in return. I think it remains valuable from an idealistic point of view. And as with many idealistic views, the problems mainly arise when things become a little bit too absolute, not leaving enough room for changing context. Nevertheless, this idealism can still have a value.

Take for instance the first remark I made about the idea of "point of easy progress" becoming a case of "short-sightedness". I find value in the idea of recognizing certain points in your progress and using them as a drive to continue and motivate yourself. As a matter of fact, this appeared to help you through some projects, and in the end, isn't that what's most important? I also have the impression that this is mostly the core of your message. The element that made me question it was the idea of the "event horizon: once there, there's no stopping" as an absolute statement that would be difficult to defend in practice. Lastly, there's also the choice to approach this descriptively, as how progress goes, or normatively, as how progress should go.

Regarding the need for frustration, challenge and so on, I also use a lot of assumptions stemming from my background in Psychology and existentialism. I hold the idea that positive experience appears to be relative instead of absolute, and positivity loses its valence without any negativity as a context. In this sense, some kind of suffering (however dramatic it may sound) is essential. It appears to me that it's natural for people to hedonically maximize pleasure and avoid suffering, which I believe is a good thing, a very human thing to do. However, we sometimes struggle with this kind of neurosis by overemphasizing this chase for the good and pleasure in a world where suffering is inevitable.

The good academic in me would elaborate on this perspective and provide further evidence to this idea, however, it isn't my intention for this comment to persuade anyone. Instead of me philosophizing about it, it might be more meanin

Very interesting concept, thanks for sharing!

Update a year later, in case anybody else is similarly into numbers: that prediction of achieving 2.5 out of the 3 major quarter goals ended up being correct (one goal wasn't technically achieved due to outside factors I hadn't initially anticipated, but I had done my part, thus the .5), and I've been using a murphyjitsu-like approach for my quarterly goals ever since, which I find super helpful. In the three quarters before Hammertime, I achieved 59%, 38%, and 47% of such goals, respectively. In the quarters since, the numbers were (in chronological order, st... (read more)

Where I find Murphyjitsu most useful is in the area of generic little issues with my plans that tend to come up rather often. A few examples:

  • forgetting about working on the goal in time, due to lack of a reminder, planning fallacy etc.
  • the plan involving asking another person for a favor, and me not feeling too comfortable about asking
  • my system 1 not being convinced of the goal, requiring more motivation / accountability / pressure
  • the plan at some point (usually early on) requiring me to find an answer to some question, such that the remaining plan depends
... (read more)

I've mostly been aware of the planning fallacy and how, despite knowing of it for many years, it still often affects me (mostly for things where I simply lack the awareness to realize that the planning fallacy would play a role at all; so not so much for big projects, but rather for things that I never really focus on explicitly, such as overhead when getting somewhere). The second category you mention, however, is something I too experience frequently, but having lacked a term (/model) for it, I didn't really think about it as a thing.

I wonder what classes ... (read more)

I'd probably put it this way – the Sunk Cost Fallacy is Mostly Bad, but motivated reasoning may lead to frequent false positive detections of it when it's not actually relevant. There are two broad categories where sunk cost considerations come into play, a) cases where aborting a project feels really aversive because so much has gone into it already, and b) cases where on some level you really want to abort a project, e.g. because the fun part is over or your motivation has decreased over time. In type a cases, knowing about the fallacy is really useful. ... (read more)

"When in doubt, go meta". Thanks to my friend Nadia for quoting it often enough for it to have found a place deep within my brain. May not be the perfect mantra, but it is something that occurs to me frequently and almost always seems yet again unexpectedly useful.

It's not that easy to come up with strange bugfix stories (or even noteworthy bugfix stories in general).

One that's still in progress is that I've been using gamification to improve my posture. I simply count the occurrences throughout the day when I remember to sit/stand straight, and track them, summing them up over time to reach certain milestones, in combination with a randomized reward system. While I wasn't too convinced by this attempt at first, it happens more and more often that I remember to sit up straight and realize I already do so, which is a... (read more)

Going through Hammertime for the second time now. I tried to figure out 10 not-too-usual ways to utilize predictions and forecasting. Not perfectly happy with the list of course, but a few of these ideas do seem (and in my experience actually are; 1 and 2 in particular) quite useful.

  1. Predicting own future actions to calibrate on one's own behavior
  2. When setting goals, using predictions on the probability of achieving them by a certain date, giving oneself pointers which goals/plans need more refinement
  3. Predicting the same relatively long term things i
... (read more)

One game/activity I generally recommend because of its potential 11/10 fun payoff in the end, which also works in relative isolation, is having fun with gap texts (just figured out this is apparently known as "mad libs", so maybe this isn't actually new to anybody). The idea being that one person creates a small story with many words left out, and then asks other people to fill in the words without knowing the context. So "Bob scratched his <bodypart> and <verb> insecurely. 'You know', he said <adverb>, '... (read more)

As mentioned in the final exam, here's my personal summary of how I experienced hammertime.

I feel like following the sequence was a very good use of my time, even though it turned out a bit different from what I had initially expected. I thought it would focus much more on "hammering in" the techniques (even after reading Hammers & Nails and realizing the metaphor worked in a different way), but it was more about trying everything out rather briefly, as well as some degree of obtaining new perspectives on things. This was fine, too, but ... (read more)

8 silentbob 2y
Update a year later, in case anybody else is similarly into numbers: that prediction of achieving 2.5 out of the 3 major quarter goals ended up being correct (one goal wasn't technically achieved due to outside factors I hadn't initially anticipated, but I had done my part, thus the .5), and I've been using a murphyjitsu-like approach for my quarterly goals ever since, which I find super helpful. In the three quarters before Hammertime, I achieved 59%, 38%, and 47% of such goals, respectively. In the quarters since, the numbers were (in chronological order, starting with the Hammertime quarter) 59%, 82%, 61%, 65%, 65%, ~82%. While the total number and difficulty of goals vary, I believe the average difficulty hasn't changed much, whereas the total number has increased somewhat over time. That being said, I also visited a CFAR workshop shortly after going through Hammertime, so that too surely had some notable effect on the positive development. My bug list has grown to 316 as of today, ~159 of which are solved, following a roughly linear pattern over time so far.

Quantum Walk: That's pretty much it.

Oracle: Possibly, didn't get around to reading it all so far. As far as I understand from just skimming, I guess a difference may be that the term deconfusion is used with regards to a domain where people are at risk of thinking they understand and aren't aware of any remaining confusion. I was more referring to situations where the presence of confusion is clear, but one struggles to identify a strategy to reduce it. In that case it may be helpful to focus on the origin of one's own confusion first a... (read more)

I strongly agree with the essence of this post, considering I've spent quite some time recently thinking about the value of my time and trying to somehow put it into reasonable numbers in order to make everyday decisions easier and more well informed.

About a year ago I took the clearerthinking test and ended up with ~32€, which seemed high, and looking back I think it wasn't particularly accurate. I'm thus not a great fan of that test personally and think getting a correct value requires much deeper thought than this small questionnaire prom... (read more)

2 lynettebye 3y
Good points - these heuristics are much better than nothing, but probably shouldn't be taken at face value without some additional thought.

I did it: my final exam.

Thank you for the sequence, had a great time, will leave a few additional thoughts in the post mortem post.

Their hypothesis was that the child would indeed regret it, even though the decision was clearly correct - which would show that regret is not reliable information about the quality of one’s past decisions.

Food for thought! I guess System 1's tendency to overvalue the present might cause us to discount the future as well as the past. I'm not quite sure to what degree I would consider this likely, however. At least I personally usually do not regret decisions from the past that had positive effects on my well-being, even if the alternative would ... (read more)

Share a story of a cure that was worse than the disease.

Not too long ago my girlfriend said a few things I found hurtful. A few days later I decided to talk things through with her. Unfortunately that day she was in a rather bad mood for different reasons (which I hadn't fully comprehended until that point), which caused the talk to derail a bit and become more hurtful, unlike in the past, when these meta relationship talks had always worked rather well.

My reaction to this initially was to assume she had just changed over time and had some... (read more)

I can relate. The few times I used IDC in the past, it did feel useful, but still it's not really enjoyable. Maybe it's the fact that with IDC I'm not so much solving a problem but rather figuring out something about myself. There may not be any cool hacks or workarounds to solve it all; in the end it's more about coming to terms with things. So maybe choosing IDC as the best tool to approach a bug already feels like a small defeat, which causes me to rather not choose it and try other, more outward-facing tools instead, or ignore the bug entirely. Something like that.

Share an experience where you radically underestimated or overestimated your own ability.

Overestimated: being filmed for an interview for a promo video of my company. Didn't think much of it beforehand, but it turned out to be awkward as hell, zero usable footage emerged. Wasting the time of all the ~8 people in the room wasn't great.

Underestimated: Nothing too radical, but giving a speech at a big birthday party. Expected it to be decent as I generally enjoy public speaking, but it went smoother than I thought, people laughed at the jokes and I think most were actually interested in what I said. Some complimented me on the speech afterwards which was nice.

Praise: The way you've laid everything out, following the hammertime routine is quite motivating and rewarding. Every new day comes with a bit of a dopamine rush.

Criticism: a few of the days don't have any real action attached, such as this one, where actually implementing design improvements appears somewhat optional and all you really ask us to do is write a comment. This may very well just be me, but more consistent "homework" (e.g. each day requiring at least one yoda timer of some kind) would help establish some consistency.

Are you better at achieving your values since Hammertime Day 1? If so, what helped?

I've been able to (probably lastingly) resolve ~20 bugs so far¹ and make notable improvements in a few areas of my life. Also my productivity increased by roughly 40% since starting hammertime, which however could have various causes (plus, last year too I was most productive during the summer months).

Regarding whether it helped me achieve my values, "no clear values" remains as one of my unresolved bugs, so I can't really tell.

I'd say the things tha... (read more)

This may be somewhat obvious, but I'd assume optimism biases (inside view, planning fallacy, maybe competitor neglect if it's the kind of plan where competition is involved) play a big role in many if not most plans that don't work out, as well as failing to bulletproof the plan initially using e.g. murphyjitsu/premortem.

A less obvious one would be aborting a project based on noisy data causing the expected value to temporarily drop, which could be prevented by predefining clear, unambiguous "ejector seats", as alkjash mentioned in their Hammertime sequence.

One rather trivial inconvenience that negatively impacts my life is having a great aversion to lack of clarity in any kind of workflow. I've been meaning to join a boat trip with my girlfriend on a nearby river for a while (the kind in a big boat where you just join 50ish other random people and tour around for an hour looking at things), but from the website it's really unclear what exactly I need to do: when to be where exactly, how and where to get the actual tickets, and that stuff. So I've procrastinated that endlessly.