All Posts


November 2019

Shortform [Beta]
54 · orthonormal · 16d: DeepMind released their AlphaStar paper a few days ago [https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning], having reached Grandmaster level at the partial-information real-time strategy game StarCraft II over the summer. This is very impressive, and yet less impressive than it sounds. I used to watch a lot of StarCraft II (I stopped interacting with Blizzard recently because of how they rolled over for China), and over the summer there were many breakdowns of AlphaStar games once players figured out how to identify the accounts. The impressive part is getting reinforcement learning to work at all in such a vast state space: that took breakthroughs beyond what was necessary to solve Go and beat Atari games. AlphaStar had to have a rich enough set of potential concepts (in the sense that e.g. a convolutional net ends up having concepts of different textures) that it could learn a concept like "construct building P" or "attack unit Q" or "stay out of the range of unit R" rather than just "select spot S and enter key T". This is new and worth celebrating. The overhyped part is that AlphaStar doesn't really do the "strategy" part of real-time strategy. Each race has a few solid builds that it executes at GM level, and the unit control is fantastic, but the replays don't look creative or even especially reactive to opponent strategies. That's because there's no representation of causal thinking - "if I did X then they could do Y, so I'd better do X' instead". Instead there are many agents evolving together, and if there's an agent evolving to try Y then the agents doing X will be replaced with agents that do X'. (This lack of causal reasoning especially shows up in building placement, where the consequences of locating any one building here or there are minor, but the consequences of your overall SimCity are major for how your units and your opponents' units would fare if they attacked you. In one…
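A toy way to see the population dynamic orthonormal describes, where X-players get displaced by X'-players without any single agent doing the causal reasoning: below is a minimal replicator-dynamics sketch. The strategies and payoff numbers are invented for illustration; this is the general idea, not AlphaStar's actual league training.

```python
# Invented payoff matrix: payoff[a][b] is strategy a's score against b.
# X loses badly to Y; X' is a variant that handles Y.
strategies = ["X", "X'", "Y"]
payoff = {
    "X":  {"X": 0.5, "X'": 0.5, "Y": 0.2},
    "X'": {"X": 0.5, "X'": 0.5, "Y": 0.6},
    "Y":  {"X": 0.8, "X'": 0.4, "Y": 0.5},
}
pop = {"X": 0.8, "X'": 0.1, "Y": 0.1}  # initial population of agents

for _ in range(200):
    # A strategy's fitness is its average score against the population.
    fitness = {s: sum(payoff[s][t] * pop[t] for t in strategies)
               for s in strategies}
    mean_fitness = sum(fitness[s] * pop[s] for s in strategies)
    # Replicator update: above-average strategies gain population share.
    pop = {s: pop[s] * fitness[s] / mean_fitness for s in strategies}

print({s: round(p, 3) for s, p in pop.items()})
# X has been displaced by X', though no agent ever reasoned
# "if I did X then they could do Y, so I'd better do X' instead".
```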
12 · toonalfrink · 16d: Today I had some insight into what social justice really seems to be trying to do. I'll use neurodiversity as an example because it's less likely to lead to bad-faith arguments. Let's say you're in the (archetypical) position of a king. You're programming the rules that a group of people will live by, optimizing for the well-being of the group itself. You're going to shape environments for people. For example you might be running a supermarket and deciding what music it's going to play. Let's imagine that you're trying to create the optimal environment for people. The problem is, since there is more than one person that is affected by your decision, and these people are not exactly the same, you will not be able to make the decision that is optimal for each one of them. If only two of your customers have different favourite songs, you will not be able to play both of them. In some sense, making a decision over multiple people is inherently "aggressive". But what you can do is reduce the amount of damage. My understanding is that this is usually done by splitting up the people as finely as possible. You might split up your audience into stereotypes for "men", "women", "youngsters", "elders", "autistic people", "neurotypicals", etc. In this case, you can make a decision that would be okay for each of these stereotypes, giving your model a lower error rate. The problem with this is that stereotypes are leaky generalizations. Some people might not conform to them. Your stereotypes might be mistaken. Alternatively, there might be some stereotypes that you're not aware of. Take these two models. Model A knows that some people are highly sensitive to sound. Model B is not aware of it. If your model of people is B, you will play much louder music in the supermarket. As a result, people that are highly sensitive to sound will be unable to shop there. This is what social justice means by "oppression". You're not actively pushing anyone down, but you are doing so passively…
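The supermarket example lends itself to a toy calculation. Everything numeric here is invented (the groups, population shares, and preferred volumes are assumptions); the point is just that a model which knows about one more subgroup makes a less damaging aggregate decision.

```python
# A toy version of the supermarket example. The decision-maker picks one
# volume minimizing population-weighted squared error, which is just the
# weighted mean of the groups' preferences.

def best_volume(groups):
    """groups: list of (population_share, preferred_volume)."""
    total = sum(share for share, _ in groups)
    return sum(share * pref for share, pref in groups) / total

def worst_case_error(volume, groups):
    return max(abs(volume - pref) for _, pref in groups)

# Model B doesn't know that some people are highly sensitive to sound.
model_b = [(1.0, 70)]
# Model A splits out a sound-sensitive 10% with a much lower preference.
model_a = [(0.9, 70), (0.1, 20)]

for name, model in [("B", model_b), ("A", model_a)]:
    volume = best_volume(model)
    # Evaluate each decision against the "true" population (model A).
    print(name, volume, worst_case_error(volume, model_a))
# B picks 70 and the sensitive group is off by 50 (passive "oppression");
# A picks 65, already reducing the damage, and a finer split could reduce
# it further (e.g. quiet hours).
```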
11 · Ben Pace · 3d: Trying to think about building some content organisation and filtering systems on LessWrong. I'm new to a bunch of the things I discuss below, so I'm interested in other people's models of these subjects, or links to sites that solve the problems in different ways. Two Problems: So, one problem you might try to solve is that people want to see all of a thing on a site. You might want to see all the posts on reductionism on LessWrong, or all the practical how-to guides (e.g. how to beat procrastination, Alignment Research Field Guide, etc), or all the literature reviews on LessWrong. And so you want people to help build those pages. You might also want to see all the posts corresponding to a certain concept, so that you can find out what that concept refers to (e.g. what the terms "Goodhart's law" or "slack" or "mesa-optimisers" mean, etc). Another problem you might try to solve is that while many users are interested in lots of the content on the site, they have varying levels of interest in the different topics. Some people are mostly interested in the posts on big picture historical narratives, and less so in models of one's own mind that help with dealing with emotions and trauma. Some people are very interested in AI alignment, some are interested in only the best such posts, and some are interested in none. I think the first problem is supposed to be solved by Wikis, and the second problem is supposed to be solved by Tagging. Speaking generally, Wikis allow dedicated users to curate pages around certain types of content, highlighting the best examples, some side examples, and writing some context for people arriving on the page to understand what the page is about. It's a canonical, update-able, highly editable page built around one idea. Tagging is much more about filtering than about curating. Tagging: Let me describe some different styles of tagging. On the site lobste.rs there are about 100 tags in total. Most tags give a very broad description of an area of…
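A toy sketch of the second problem above (per-user filtering by tag weights). The posts, tags, weight values, and scoring rule are all invented for illustration; this is one possible design, not a description of what LessWrong or lobste.rs actually does.

```python
# Hypothetical posts, each with tags and a karma score.
posts = [
    {"title": "Mesa-optimisers explained", "tags": {"ai-alignment"}, "karma": 120},
    {"title": "Beating procrastination",   "tags": {"practical"},    "karma": 45},
    {"title": "A history of steam power",  "tags": {"history"},      "karma": 80},
]

# Each user scales the visibility of each tag: >1 boosts, <1 suppresses,
# 0 hides entirely. Unlisted tags default to neutral (1.0). A weight
# below 1 means only the highest-karma posts with that tag surface,
# matching "interested in only the best such posts".
user_weights = {"ai-alignment": 2.0, "history": 0.5}

def personalised_score(post, weights):
    factor = 1.0
    for tag in post["tags"]:
        factor *= weights.get(tag, 1.0)
    return post["karma"] * factor

for post in sorted(posts, key=lambda p: personalised_score(p, user_weights),
                   reverse=True):
    print(post["title"], personalised_score(post, user_weights))
```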
11 · Kaj_Sotala · 5d: Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake. Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people". Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective, to try to understand how they thought and why they felt that way. At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any *other* way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew. Now someone comes and happens to mention that I "don't seem to try to understand other people". I get upset and defensive because I totally do, this person hasn't understood me at all! And in one sense, I'm right - it's true that there's a dimension of "trying to understand other people" that I've put a lot of effort into, in which I've probably invested more than other people have. And in another sense, the other person is right - while I was good at one dimension of "trying to understand other people", I was severely underinvested in others. And I had not really even properly acknowledged that "trying to understand other people" had other important dimensions too, because I was justifiably proud of my investment in one of them. But from the point of view of someone who *had* invested in those other dimensions, they could see the aspects in which I was deficient compared to them, or maybe even compared to the median person. (To some extent I thought that my underinvestment in those other dimensions was *virtuous*, because I was "not making assumptions…
11 · Daniel Kokotajlo · 14d: It seems to me that human society might go collectively insane sometime in the next few decades. I want to be able to succinctly articulate the possibility and why it is plausible, but I'm not happy with my current spiel. So I'm putting it up here in the hopes that someone can give me constructive criticism: I am aware of three mutually-reinforcing ways society could go collectively insane: 1. Echo chambers/filter bubbles/polarization: Arguably political polarization [https://en.wikipedia.org/wiki/Political_polarization] is increasing across the world of liberal democracies today. Perhaps the internet has something to do with this--it's easy to self-select into a newsfeed and community that reinforces and extremizes your stances on issues. Arguably recommendation algorithms have contributed to this problem in various ways--see e.g. "Sort by controversial" [https://slatestarcodex.com/2018/10/30/sort-by-controversial/] and Stuart Russell's claims in Human Compatible. At any rate, perhaps some combination of new technology and new cultural or political developments will turbocharge this phenomenon. This could lead to civil wars, or more mundanely, societal dysfunction. We can't coordinate to solve collective action problems relating to AGI if we are all arguing bitterly with each other about culture war issues. 2. Deepfakes/propaganda/persuasion tools: Already a significant portion of online content is deliberately shaped by powerful political agendas--e.g. Russia, China, and the US political tribes. Much of the rest is deliberately shaped by less powerful apolitical agendas, e.g. corporations managing their brands or teenagers in Estonia making money by spreading fake news during US elections. Perhaps this trend will continue; technology like chatbots, language models, deepfakes, etc. might make it cheaper and more effective to spew this sort of propaganda, to the point where most online…

October 2019

Shortform [Beta]
41 · DanielFilan · 1mo: Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.
37 · elityre · 22d: New post: Some notes on Von Neumann, as a human being [https://musingsandroughdrafts.wordpress.com/2019/10/26/some-notes-on-von-neumann-as-a-human-being/]. I recently read Prisoner's Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this [https://youtu.be/vLbllFHBQM4] old PBS documentary about the man. I'm glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I've written the words "Von Neumann Level Intelligence" in an AI strategy document, or speculated [http://www.overcomingbias.com/2014/07/30855.html#comment-4174545474] about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn't seem to have the entrepreneurial or managerial dispositions.) Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits. Watching this first clip [https://www.youtube.com/watch?v=vLbllFHBQM4], I noticed that I was surprised by a number of things. 1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent. 2. That he was middling height (somewhat shorter than the presenter he's talking to). 3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye: "science education is important." There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his "scientist / public intellectual" hat, not the "smartest person ever to live" hat. Some other notes of interest: He was not a skilled poker player, which punctured my assumption that Von Neumann was om…
28 · Daniel Kokotajlo · 1mo: My baby daughter was born two weeks ago, and in honor of her existence I'm building a list of about 100 technology-related forecasting questions, which will resolve in 5, 10, and 20 years. Questions like "By the time my daughter is 5/10/20 years old, the average US citizen will be able to hail a driverless taxi in most major US cities." (The idea is, tying it to my daughter's age will make it more fun and also increase the likelihood that I actually go back and look at it 10 years later.) I'd love it if the questions were online somewhere so other people could record their answers too. Does this seem like a good idea? Hive mind, I beseech you: Help me spot ways in which this could end badly! On a more positive note, any suggestions for how to do it? Any expressions of interest in making predictions with me? Thanks!
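One lightweight way to set this up so the questions actually get resolved later is to store each question with its resolution date and a forecast probability. A minimal sketch; the field names, example date, and probability are all made up, not the author's:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ForecastQuestion:
    text: str
    resolves_on: date                   # e.g. the 5th/10th/20th birthday
    forecast_p: float                   # probability assigned now
    resolution: Optional[bool] = None   # filled in when the date arrives

q = ForecastQuestion(
    text=("The average US citizen will be able to hail a driverless "
          "taxi in most major US cities."),
    resolves_on=date(2024, 10, 15),  # hypothetical 5th birthday
    forecast_p=0.3,                  # invented number for illustration
)
```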
25 · Vaniver · 1mo: [Meta: this is normally something I would post on my tumblr [https://vaniver.tumblr.com/], but instead am putting on LW as an experiment.] Sometimes, in games like Dungeons and Dragons, there will be multiple races of sapient beings, with humans as a sort of baseline. Elves are often extremely long-lived, but most handlings of this I find pretty unsatisfying. Here's a new take, that I don't think I've seen before (except the Ell in Worth the Candle [https://archiveofourown.org/works/11478249/chapters/25740126] have some mild similarities): Humans go through puberty at about 15 and become adults around 20, lose fertility (at least among women) at about 40, and then become frail at about 60. Elves still 'become adults' around 20, in that a 21-year-old elf adventurer is as plausible as a 21-year-old human adventurer, but they go through puberty at about 40 (and lose fertility at about 60-70), and then become frail at about 120. This has a few effects: * The peak skill of elven civilization is much higher than the peak skill of human civilization (as a 60-year-old master carpenter has had only ~5 decades of skill growth, whereas a 120-year-old master carpenter has had ~11). There's also much more of an 'apprenticeship' phase in elven civilization (compare modern academic society's "you aren't fully in the labor force until ~25" to a few centuries ago, when it would have happened at 15), aided by them spending longer in the "only interested in acquiring skills" part of 'childhood' before getting to the 'interested in sexual market dynamics' part of childhood. * Young elves and old elves are distinct in some of the ways human children and adults are distinct, but not others; the 40-year-old elf who hasn't started puberty yet has had time to learn 3 different professions and build a stable independence, whereas the 12-year-old human who hasn't started puberty yet is just starting to operate as an independent entity. And so sometimes…
22 · Vaniver · 1mo: People's stated moral beliefs are often gradient estimates instead of object-level point estimates. This makes sense if arguments from those beliefs are pulls on the group epistemology, and not if those beliefs are guides for individual action. Saying "humans are a blight on the planet" would mean something closer to "we should be more environmentalist on the margin" instead of "all things considered, humans should be removed." You can probably imagine how this can be disorienting, and how there's a meta issue: the point-estimate view can see what it's doing in a way that the gradient view might not be able to.
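A toy numeric reading of the distinction, with invented numbers: treat the group's stance as a scalar and a stated belief as a pull on it rather than a proposed end state.

```python
# Invented example: the group's policy is a scalar in [0, 1], say how
# much weight to give environmental concerns.
group_policy = 0.3

# Point-estimate reading of "humans are a blight on the planet":
# the speaker wants policy = 1.0, all things considered.
# Gradient reading: the sentence only encodes a direction and a rough
# magnitude, "more environmentalist on the margin".
gradient_pull = +0.1

group_policy += gradient_pull
print(group_policy)  # 0.4 -- the speaker may be satisfied well before 1.0
```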

September 2019

Shortform [Beta]
49 · elityre · 2mo: New post: Some things I think about Double Crux and related topics. I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high-level thoughts to be written up some place, even if I'm not going to go into detail about, defend, or substantiate most of them. The following are my own beliefs and do not necessarily represent CFAR, or anyone else. I, of course, reserve the right to change my mind. [Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.] Here are some things I currently believe: (General) 1. Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for resolving disagreements, but it is only one tool/framework in an ensemble. 2. Some other tools/frameworks, that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC, methods for managing people's intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one's conversational partner), etc. In some contexts other tools are substitutes for Double Crux (i.e. another framework is more useful) and in some cases other tools are helpful or necessary complements (i.e. they solve problems or smooth the process within the Double Crux frame). In particular, my personal conversational facilitation repertoire is about 60%…
33 · romeostevensit · 2mo: A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
27 · jp · 2mo: Do Anki while Weightlifting. Many rationalists appear to be interested in weightlifting. I certainly have enjoyed having a gym habit. I have a recommendation for those who do: Try studying Anki cards [https://twitter.com/michael_nielsen/status/957763229454774272?lang=en] while resting between weightlifting sets. The upside is high. Building the habit of studying Anki cards is hard, and if doing it at the gym causes it to stick, you can now remember things by choice not chance. And the cost is pretty low. I rest for 90 seconds between sets, and do about 20 sets when I go to the gym. Assuming I get a minute in once the overheads are accounted for, that gives me 20 minutes of studying. I go through about 4 cards per minute, so I could do 80 cards per visit to the gym. In practice I spend only ~5 minutes studying per visit, because I don't have that many cards. I'm not too tired to concentrate. In fact, the adrenaline high makes me happy to have something mentally active to do. Probably because of this, it doesn't at all decrease my desire to go to the gym. I find I can add simple cards to my Anki deck at the gym, although the mobile app does make it slow. Give it a try! It's cheap to experiment and the value of a positive result is high.
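jp's arithmetic, spelled out (the numbers are the ones given in the post):

```python
sets_per_visit = 20
study_minutes_per_rest = 1.0   # ~1 min of each 90s rest, after overheads
cards_per_minute = 4

study_minutes = sets_per_visit * study_minutes_per_rest  # 20 minutes
cards_per_visit = study_minutes * cards_per_minute       # 80 cards
print(study_minutes, cards_per_visit)
```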
26 · habryka · 2mo: WHAT IS THE PURPOSE OF KARMA? LessWrong has a karma system, mostly based off of Reddit's karma system, with some improvements and tweaks to it. I've thought a lot about more improvements to it, but one roadblock that I always run into when trying to improve the karma system is that it actually serves a lot of different uses, and changing it in one way often means completely destroying its ability to function in a different way. Let me try to summarize what I think the different purposes of the karma system are: Helping users filter content. The most obvious purpose of the karma system is to determine how long a post is displayed on the frontpage, and how much visibility it should get. Being a social reward for good content. This aspect of the karma system comes out more when thinking about Facebook "likes". Often when I upvote a post, it is more of a public signal that I value something, with the goal that the author will feel rewarded for putting their effort into writing the relevant content. Creating common knowledge about what is good and bad. This aspect of the karma system comes out the most when dealing with debates, though it's present in basically any karma-related interaction. The fact that the karma of a post is visible to everyone helps people establish common knowledge of what the community considers to be broadly good or broadly bad. Seeing an insult downvoted does more than just filter it out of people's feeds; it also makes it so that anyone who stumbles across it learns something about the norms of the community. Being a low-effort way of engaging with the site. On LessWrong, Reddit, and Facebook, karma is often the simplest action you can take on the site. This means it's usually key for a karma system like that to be extremely simple, and not require complicated decisions, since that would break the basic engagement loop with the site. PROBLEMS WITH ALTERNATIVE KARMA SYSTEMS Here are some of the most common alternatives to our current…
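For the "helping users filter content" role, a common family of algorithms (the Hacker News / Reddit style of ranking; not necessarily LessWrong's actual formula) decays a post's karma by its age, so higher-karma posts stay visible longer but everything eventually falls off. A minimal sketch:

```python
def frontpage_score(karma: float, age_hours: float,
                    gravity: float = 1.8) -> float:
    """Higher karma raises the score; age steadily drags it down.

    The +2 keeps brand-new posts from dividing by ~zero; gravity
    controls how fast old posts sink.
    """
    return karma / (age_hours + 2) ** gravity

print(frontpage_score(karma=100, age_hours=1))   # fresh, highly upvoted
print(frontpage_score(karma=100, age_hours=48))  # same karma, two days old
```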
23 · Ruby · 2mo: Selected Aphorisms from Francis Bacon's Novum Organum. I'm currently working to format Francis Bacon's Novum Organum [https://en.wikipedia.org/wiki/Novum_Organum] as a LessWrong sequence. It's a moderate-sized project, as I have to work through the entire work myself and write an introduction which does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution). While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far: Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward for scientific progress as constructing tools/infrastructure/methodology to help the human mind think/reason/do science. Bacon repeatedly hammers that reality has a surprising amount of detail [http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail] such that just reasoning about things is unlikely to get at truth. Given the complexity and subtlety of nature, you have to go look at it. A lot. Anticipations are what Bacon calls making theories by generalizing principles from a few specific examples and then reasoning from those [ill-founded] general principles. This is the method of Aristotle and of science until that point, which Bacon wants to replace. Interpretations is his name for his inductive method, which only generalizes very slowly, building out slowly increasingly large sets of examples/experiments. I read Aphorism 28 as saying that Anticipations have much lower inferential distance since they can be built from simple examples with which everyone is familiar. In contrast, if you build up a theory based on lots of disparate observation that isn't universal…

August 2019

Shortform [Beta]
53 · Buck · 3mo: I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour. It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions. For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall. I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are. There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like that I'm paying them, and so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something. Conversational moves I particularly like: * "I'm going to try to give the thirty second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about." * "Why don't people talk about X?" * "What should I read to learn more about X, based on what you know about me from this conversation?" All of the above are way faster with a live human than with the internet. I think that doing this for an hour or two weekly will make me substantially more knowledgeable…
46 · elityre · 3mo: Old post: RAND needed the "say oops" skill [https://musingsandroughdrafts.wordpress.com/2018/12/01/rand-needed-the-say-oops-skill/] [Epistemic status: a middling argument] A few months ago [https://musingsandroughdrafts.wordpress.com/2018/06/21/initial-comparison-between-rand-and-the-rationality-cluster/], I wrote about how RAND, and the "Defense Intellectuals" of the cold war, represent another precious datapoint of "very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk." Since then I spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea being that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed. However, this proved somewhat less fruitful than I was hoping, and I've put it aside for the time being. I might come back to it in the coming months. It does seem worth sharing at least one relevant anecdote and analysis from Daniel Ellsberg's excellent book, The Doomsday Machine, given that I've already written it up. The missile gap: In the late nineteen-fifties it was widely understood that there was a "missile gap": that the Soviets had many more ICBMs ("intercontinental ballistic missiles" armed with nuclear warheads) than the US. Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US's strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US's count. (The Air Force and SAC were incentivized to inflate their estimates of the…
38 · elityre · 3mo: New post: What is mental energy? [https://wordpress.com/post/musingsandroughdrafts.wordpress.com/398] [Note: I've started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.] There's a common phenomenology of "mental energy". For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive. And I feel tired, or drained (mentally, instead of physically). Mental energy is one of the primary resources that one has to allocate in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it? The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, one is burning calories, depleting the body's energy stores. As one uses energy, one has less fuel to burn. My current understanding is that this story is not physiologically realistic. Thinking hard does consume more of the body's energy than baseline, but not that much more. And we experience mental fatigue long before we even get close to depleting our calorie stores. It isn't literal energy that is being consumed. [The Psychology of Fatigue, pg. 27] So if not that, what is going on here? A few hypotheses: (The first few are all of a cluster, so I labeled them 1a, 1b, 1c, etc.) Hypothesis 1a: Mental fatigue is a natural control system that redirects our attention to our other goals. The explanation that I've heard most frequently in recent years (since it became obvious that much of the literature on ego-depletion was off the mark) is the following: A human mind is composed of a bunch…
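Rough arithmetic behind the claim that it can't be literal energy depletion. These are ballpark physiological figures (a typical resting metabolic rate, the brain's roughly 20% share of it) plus a deliberately generous guess for the effect of hard thinking; they are illustrative estimates, not numbers from the cited book:

```python
# Ballpark figures, not measurements.
resting_kcal_per_day = 1600   # typical adult resting metabolic rate
brain_share = 0.20            # brain uses roughly 20% of resting energy
brain_kcal_per_hour = resting_kcal_per_day * brain_share / 24  # ~13 kcal/h

# Even if hard thinking raised brain metabolism by a generous 10%:
extra_kcal_per_hour = brain_kcal_per_hour * 0.10               # ~1.3 kcal/h

glycogen_store_kcal = 2000    # order-of-magnitude liver + muscle glycogen
print(extra_kcal_per_hour, glycogen_store_kcal)
# A couple of hours of hard math "costs" a few kcal, nowhere near
# depleting the body's energy stores, consistent with the post's point.
```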
32 · Buck · 3mo: A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com [https://www.wyzant.com/match/tutor/87443576?fbclid=IwAR3n91qFP_ijKlfMHrw1UmOVOhdw3jyG1r1A-whIJBaFPzpBWtWCmzBe414] to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!) The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you'll find that it operates kind of like the early layers of a CNN, and so on. The claim that the visual system works like a CNN didn't quite make sense to me though. According to my extremely rough understanding, biological neurons operate kind of like the artificial neurons in a fully connected neural net layer--they have some input connections and a nonlinearity and some output connections, and they have some kind of mechanism for Hebbian learning or backpropagation or something. But that story doesn't seem to have a mechanism for how neurons do weight tying, which to me is the key feature of CNNs. Daniel claimed that indeed human brains don't have weight tying, and we achieve the efficiency gains over dense neural nets by two other mechanisms instead: Firstly, the early layers of the visual cortex are set up to recognize particular low-level visual features like edges and motion, but this is largely genetically encoded rather than learned with weight-sharing. One way that we know this is that mice develop a lot of these features before their eyes open. These low-level features can be reinforced by positive signals from later layers, like other neurons, but these updates aren't done with weight-tying. So the weight-sharing and learning here is done at the genetic level. Secondly, he thinks that we get around the need for…
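To see what weight tying buys, compare parameter counts for a convolutional layer and a fully connected layer computing an output of the same shape. A small sketch assuming PyTorch is available; the layer sizes are arbitrary illustrative choices:

```python
import torch.nn as nn

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

# Map a 32x32 3-channel image to 16 feature maps of the same size.
# The conv layer reuses (ties) the same 3x3 filters at every location.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# A fully connected layer computing the same-shaped output with no
# weight sharing: every output pixel gets its own set of weights.
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)

print(n_params(conv))   # 448: 16 filters * (3*3*3 weights) + 16 biases
print(n_params(dense))  # ~50 million: no tying, per-location weights
```

Under the post's story, the brain lacks the tied-weights shortcut, so it has to pay for (or genetically hard-code) something closer to the second parameter count.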
29 · tilia · 3mo: HOW I MANAGED TO STOP CRAVING SWEETS IN 3 WEEKS For me at least, it is possible to eliminate/drastically reduce my sugar cravings. Typically I feel cravings for something sweet whenever I'm hungry, bored, have just finished a meal, am feeling sad, or am feeling happy. In short, I eat a lot of sweets and also spend a lot of time and effort trying to resist them. LAST TIME A few years ago I managed to cold-turkey sweets while I was following a Keto diet. I noticed that in week 3 of keto, my cravings had vanished. No longer did the desire to finish a meal with a bowl of ice cream plague me. For about 6 weeks total, if memory serves, I managed to eat no desserts or sweets at all. Everything was going great. Then I went to a birthday party, and my hubris led me astray. "I'm doing so well! I don't need it, but I can just have a slice of chocolate cake, and it's no big deal!" Alas, the very next day, my cravings were back, I fell off the wagon, and the experiment was over. I tried several times over the years to quit cold turkey again, but I never managed to keep at it for long, and I more or less gave up and decided to just make peace with the yearly expansion of my waistline. THIS TIME Near the end of June, I managed to have a few really busy days in a row, and for whatever reason, I realized suddenly that I hadn't had any sweets for the last 3 days. Noticing that I had a little bit of a "head start" on getting through the 3-week sugar withdrawal, I decided to give it another go. I'm not sure where I got the idea, but I decided to modify my strategy. It was the prime of the summer fruit season in the Bay Area, and nectarines, plums, pluots, peaches, and mangos were all at their ripest and sweetest. Instead of going cold turkey, I would try to eat a piece of fruit anytime I had a craving for sugar. I don't think this would have worked if the fruit hadn't been extremely good and satisfying. Another thing that I did was I didn't try to limit or moderate how much…
