Comments

This is definitely an interesting topic, and I too would like to see continued discussion and more research in the area. I also think that Jeff Nobbs' articles are not a great source, as he seems to twist the facts quite a bit in order to support his theory. This is particularly the case for part 2 of his series - looking into practically any of the linked studies, I found issues with how he summarized them. Some examples:

  • he claims one study showed a 7x increase in cardiovascular deaths and heart attacks, failing to mention a) that the test group was ~50% larger than the control group (so the per-capita increase was closer to ~5x than 7x - see the quick calculation after this list), b) that the study itself states these numbers are not statistically significant due to the low absolute number of events, and c) that you could get the opposite result from the same study by looking at all-cause mortality, which happened to be ~4x as high in the control group as in the test group (also not statistically significant, of course, but still)
  • he cites a study on rats, claiming it shows that replacing some fat in their diet with "fats that you usually find in vegetable oil" (quite a suspicious wording) increased cancer metastasis risk 4-fold - but looking into the study, a) these rats had a significantly higher caloric intake than the comparison group, and b) 90% of the fat they consumed came from lard rather than vegetable oils, making this study entirely useless for the whole debate
  • for another study he points out the negative effects of safflower oil, but conveniently fails to mention that the same study found an almost equally large negative effect for olive oil (which seems to be one of his favorites)
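
To make the group-size adjustment in the first bullet concrete, here is a rough sketch using only the approximate figures recalled from memory above: the raw event count went up ~7x while the test group was ~1.5x the size of the control group, so the per-capita rate ratio is only about

$$\frac{\text{events}_\text{test}/N_\text{test}}{\text{events}_\text{control}/N_\text{control}} \approx \frac{7}{1.5} \approx 4.7 \approx 5$$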

(note I wrote this up from memory, so it's possible I've mixed something up in the examples above - might be worth writing a post about it with properly linked sources)

I still think he's probably right about many things, and it's most certainly correct that oils high in omega-6 in particular aren't healthy (which might indeed include canola oil, which I was not aware of before reading his articles). Still, he seems to be pushing an agenda to an extent that prevents him from summarizing studies accurately, which is not great. That doesn't mean he's wrong, but it does mean I won't trust anything he says without checking the sources.

I could well imagine that there are strong selection effects at play (more health-conscious people being more likely to give veganism a shot), and that the positive effects of the diet just outweigh the possible slight increase in plant oil usage. And I wouldn't even be so sure that vegans on average consume more plant oil than non-vegans - e.g. vegans probably consume much less processed food, which is a major source of vegetable oil.

In The Rationalist's Guide to the Galaxy the author discusses the case of a chess game, particularly when a strong chess player faces a much weaker one. In that case it's very easy to predict that the strong player will win with near certainty, even if you have no way to predict the intermediate moves. So there certainly are domains where (some) predictions are easy despite the world's complexity.

My personal, rather uninformed take on the AI discussion is that many of the arguments are indeed comparable to the chess example, so the predictions seem convincing despite the complexity involved. But even then they are based on certain assumptions about how AGI will work (e.g. that it will be some kind of optimization process with a value function), and I find these assumptions pretty opaque. When I hear confident claims about AGI killing humanity, then even if the arguments make sense, "model uncertainty" comes to mind. But it's hard to argue about that, since it is unclear (to me) what the "model" actually is and how things could turn out differently.

Assuming slower and more gradual timelines, isn't it likely that we run into some smaller, more manageable AI catastrophes before "everybody falls over dead" due to the first ASI going rogue? Maybe we'll be stuck with sub-human-level AGIs for a while, and during that time some of the AIs will clearly demonstrate misaligned behavior leading to casualties (and to general insights into what is going wrong), in turn leading to a shift in public perception. Of course it might still be unlikely that the whole globe at that point stops improving AIs and/or solves alignment in time, but it would at least push awareness and incentives somewhat in the right direction.

Isn't it conceivable that improving intelligence becomes harder faster than the AI's capabilities grow? E.g. couldn't it be that somewhere around human-level intelligence, each marginal percent of improvement becomes twice as difficult as the previous one? I admit that doesn't sound very likely, but if that were the case, then even a self-improving AI would potentially improve itself very slowly - maybe even sub-linearly rather than exponentially - wouldn't it?
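
To sketch what that would imply, here is a small toy simulation (all parameters are invented, and "capability" and "effort" are obviously not meaningful units): each marginal 1% of capability costs twice the effort of the previous 1%, while the system's research speed scales with its current capability.

```python
# Toy model of the scenario above (all numbers invented for illustration):
# each marginal 1% of capability costs twice the effort of the previous 1%,
# while the system's "research speed" scales with its current capability.

capability = 1.0   # arbitrary starting capability
step_cost = 1.0    # effort required for the first +1% step
elapsed = 0.0      # total time spent so far

for step in range(1, 41):
    research_speed = capability          # effort per unit time ~ current capability
    elapsed += step_cost / research_speed
    capability *= 1.01                   # gain the marginal 1%
    step_cost *= 2.0                     # the next 1% is twice as hard
    if step % 10 == 0:
        print(f"step {step:2d}: elapsed time ≈ {elapsed:16.1f}, capability ≈ {capability:.2f}")
```

In this toy version the time per step grows roughly exponentially while capability only gains ~1% per step, so capability as a function of elapsed time grows roughly logarithmically, i.e. clearly sub-linearly. Whether real intelligence improvements would behave anything like this is of course exactly the open question.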

The first person (row 2) at times sounds a lot like GPT-3, particularly their answers "But in the scheme of things, changing your mind says more good things about your personality than it does bad. It shows you have a sense of awareness and curiosity, and that you can admit and reflect when decisions have been flawed or mistakes have been made." and "A hero is defined by his or her choices and actions, not by chance or circumstances that arise. A hero can be brave and willing to sacrifice his or her life, but I think we all have a hero in us — someone who is unselfish and without want of reward, who is determined to help others". But then there's "SAVE THE AMOUNT" and "CORONA COVID-19". This person is confusing.

The mug is gone. Please provide mug again if possible.

I found the concept interesting and enjoyed reading the post. Thanks for sharing!

Sidenote: It seems either your website is offline (blog's still there though) or the contact link from your blog is broken. Leads to a 404.

Thanks a lot for your comment! I think you're absolutely right on most points, and I didn't do the best possible job of covering these things in the post, partially due to wanting to keep things somewhat simplistic and partially due to a lack of full awareness of these issues. The conflict between the point of easy progress and short-sightedness is most likely quite real, and it does seem unlikely that once such a point is reached there will be no setbacks whatsoever. Having such an optimistic expectation would certainly be detrimental. In the end, the point of easy progress is an ideal to strive for when planning, but not an aspiration to fully maintain at all times.

Regarding willpower, I agree that challenge is an important factor; my idea was not so much that tasks themselves should become trivially easy, but that working on them becomes easy in the sense that they excite you. Again, that's something I could have made clearer in the text.

"but you need to encounter this uphill part where things become disorienting, frustrating and difficult to solve more complex problems in order to progress in your knowledge"

I'm not so sure about this. I like to think there must be ways to learn things, even maths, that consist mostly of positive experiences for the person learning them. This might certainly involve a degree of confusion, but perhaps in the form of surprise and curiosity rather than frustration. That being said, 1) I might be wrong in my assumption that such a thing is realistically possible, and 2) this is not at all the experience most people actually have when expanding their skills, so it is certainly important to be able to deal well with frustration and disorientation. Still, it makes a lot of sense to me to reduce these negative experiences wherever possible, unless you think that such negative experiences themselves have some inherent value and can't be replaced.

Very interesting concept, thanks for sharing!
