I’d always avoided diet soda in the belief that no-calorie sweeteners spike your insulin, leading to sugar cravings that leave you worse off. But when I tried stevia-sweetened Zevia while wearing the CGM, my blood glucose levels didn’t move at all, and I didn’t feel any additional drive to eat sugar.
Maybe I am missing something, but why would a spike in your insulin be visible on a glucose monitor? If it wouldn't, then perhaps your previous stance on the sweeteners was right?
If your graph of sugar intake is accurate, another hypothesis for the cravings would be seasonal variation in your total sugar intake: it seems you consume less sugar in the winter.
Epistemic Status: Anecdote
Two weeks ago, I was dissatisfied with how few workouts I was doing. When I considered how to solve the issue, my brain generated the excuse that while I like running outside, I really don’t like doing workouts with my dumbbells in my room, even though that would be a more intense and therefore more useful workout. Somehow I ended up actually thinking and asked myself why I don’t just take the dumbbells with me outside. This was of course met with resistance, because it looks weird. It’s even worse: I don’t know how to “properly” do curls or whatnot, and other people would judge me for that. Then I noticed that I don’t actually care that much about people in my dorm judging me. These weirdness points have low cost. In addition, this muscle of rebellion seems useful to train, as I suspect it to be one of the bottlenecks that hinders me from writing posts like this one.
Inspired by this idea from Alex Turner's shortform, I tried to figure out which claims are true and which are fiction, based on prompting GPT-4 to mess with a Wikipedia article on Developmental Psychology. (First I let GPT-4 munch a big chunk of the article, and then I chose the first chunk I saw that contained lots of concrete claims.)
Credences are 0% if I think the claim is false, and 100% if I think the text written by GPT-4 is true/reflects the original article. Outcomes are on the line afterwards. Written more as personal notes (very rough).
Vision is sharper in infants than in older children.
Infant sight tends to remain stable with little improvement over time.
Color perception is limited in the first year, with infants primarily seeing in shades of gray [79]. Infants only begin to develop adult-like vision at about twelve months.[72]
Hearing is still evolving at the time of birth.
Newborns show no distinct preference for human speech over other sounds, and they can't distinguish their mother's voice from others'.
The belief that these features are learned in the womb has been debunked.
By 18 months, infants' hearing ability is still not on par with adults.
Smell and taste are rudimentary, with infants often unable to distinguish between pleasant and unpleasant odors and tastes
Newborns do not show a clear preference for the smell of human milk over that of formula.[72]: 150 Older infants, interestingly, do not show a preference for their mother's scent.[79]
Human milk over formula? Seems like that could go either way with underpowered studies? 55%
Touch and feel, while being one of the first senses to develop in the womb, are not as refined in infants as previously thought.[84] This contradicts the idea of primitive reflexes, which were believed to demonstrate advanced touch capabilities.
Pain perception in infants is believed to be less intense than in older children, indicating that they may not feel pain as acutely.
There is also no substantial evidence that glucose can relieve pain in newborns.[87]
Certain risks around groupthink, not knowing how to select for behaviors or memes that are "safe" to tolerate in whatever memetic/status gradient you find yourself in, even just defining terms like blindspot or bias: they all seem made a lot worse for young EAs/rats who didn't previously learn to navigate a niche ideology/subculture.
Why does it have to be niche? I haven't met many nonrationalists whose minds don't go haywire once you start on Politics or Religion. Where did these EAs/rats grow up if they weren't exposed to that?
I changed the section on spoiler blocks to reflect the actual behavior of the editor. One might also consider changing this paragraph, as the “markdown syntax” for spoiler tags is not supported in this markdown (or fixing the bug itself).
The LW Docs editor actually supports a bunch of markdown syntax!
- You can use #, ##, ### at the beginning of a line to insert Heading Level 1, 2, 3
- > at the beginning of a paragraph makes it a quote block
- >! makes for a spoiler tag on a paragraph
- Three dashes will insert a newline
Testing a claim from the lesswrong_editor tag about the spoiler feature: first, trying ">!":
! This should be hidden
Apparently markdown does not support ">!" for spoiler tags. Now trying ":::spoiler ... :::":
It's hidden!
works.
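For concreteness, a minimal sketch of the syntax that did work in the test above (assuming the LW Docs editor's `:::spoiler` support; exact rendering depends on the editor):

```markdown
:::spoiler
This paragraph is hidden until the reader reveals it.
:::
```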
I tried doing these exercises in my rationality group this week with 5 other people. Since we did this as part of our regular meetup, doing 1h for a single question would have taken too long (we could have done 2 questions max). Instead, we did 4 exercises in ~90 min (steam locomotive, poof and foop, expansion of nothing, rare air). We started out with a relatively strong physics background (everyone knowing mechanics), so I think that wasn't too hasty, except perhaps for the reflection part. I gave people the first 5 minutes to think for themselves and record their initial probabilities. Then we discussed probabilities (there always ended up being strong disagreements; our physicist was twice >90% confident in the wrong answer).
I think because our meetups are often more of a social gathering, there was not as much buy-in to go full munchkin on the exercises. Since I had already done the puzzles, I also didn't participate in the discussion, as I didn't want to leak information. That was a mistake: by participating in the discussion I could have transferred my enthusiasm, and people would have had more fun and tried harder on the exercises. Next time, I am going to pick problems that I haven't solved yet. I also forgot to do the reflections as a discussion; instead, I told everyone to think on their own about how they could have done better, which was definitely worse. I then just ended up making the reflection part really short (3 min) for the first easy exercises because people didn't seem enthusiastic.
Once we got to the rare air exercise, though, everyone seemed really involved, since the exercise was obviously hard and people actually started thinking. In the end, they still converged on the wrong answer. I had a hard time reading the room on how this went, but people actually asked whether we could try this again at our next meetup, so I guess it went well.
One of the takeaways was that people weren't double-checking their models enough against settings they know (for example, they got rare air wrong because their definition of pressure was incorrect: particles per volume * speed).
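For reference (standard kinetic theory, not part of the exercise itself): pressure depends on the mean *squared* speed, not speed to the first power, so "particles per volume times speed" is dimensionally off:

```latex
P = \frac{1}{3}\, n\, m\, \langle v^2 \rangle = n k_B T
```

Here $n$ is the number density (particles per volume), $m$ the particle mass, and $\langle v^2 \rangle$ the mean squared speed.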
It also took more time than I expected for people to grok the solutions (especially for poof and foop).
Since I had only heard the term “stochastic parrot” from skeptics who obviously didn't know what they were talking about, I hadn't realized what a fitting phrase it actually is. One might even argue it's overselling language models, as parrots are quite smart.
I just realized you'd expect your blood sugar to go down, or at least move a little.