moridinamael

Comments

Against Sam Harris's personal claim of attentional agency

I've been concerned for some time that intensive meditation causes people to stop feeling their emotions but not to stop having those emotions. Sam's podcasts are in fact littered with examples where he clearly (from a third-party perspective) seems to become agitated, flustered or angry, but seems weirdly in denial about it because his inner experience is not one of upset. I'm not up to speculating on exactly how this happens, but there also seems to be a wide but informal folklore concerning long-term meditators who are interpersonally abusive or insensitive.

Would most people benefit from being less coercive to themselves?

There's one sense in which self-coercion is impossible, because you cannot make yourself do something that at least some part of yourself doesn't endorse. There's another sense in which self-coercion is inescapable, because some particular part of you will always dis-endorse any given action.

It's definitely worth it to seek to understand yourself well enough that you can negotiate between dissatisfied parts of yourself, pre-emptively or on-the-fly. This helps you generate plans that aren't so self-coercive that they're preordained to fail.

In my framing, the effective approach isn't to find a non-coercive plan, but rather a minimally-coercive plan that still achieves the goal. This turns it from an exercise of willpower to an exercise of strategy. Plus, the only way you can really learn where plans sit on the coerciveness landscape is to attempt to execute them.

Benefits of "micro-tracking" for personal health measurements?

It has been unambiguously helpful for my Apple Watch to inform me that my sleep quality is detectably higher when I exercise, even if that exercise is just a brisk 1-2 mile walk. I generally do subjectively feel better when the watch tells me I've slept well. Connecting "go for your walk" to "feel noticeably better tomorrow" is much more motivating than going for a walk due to nebulous long-term health reasons. None of this would happen if the watch weren't automatically tracking my sleep (including interruptions and sleeping heart rate) and my daily activities.

What trade should we make if we're all getting the new COVID strain?

Interesting. The market has not increased much since the announcement of the Moderna and Pfizer vaccines, so I'd have a hard time causally connecting the market to the vaccine announcement.

My feeling was that the original sell-off in February and early March was due to the fact that we were witnessing an unprecedented-in-our-lifetimes event; anything could happen. A more contagious form of the same virus will trigger a mass selloff if and only if investors believe that other investors believe that the news of the new strain is bad enough to trigger a panic selloff.

There are too many conflicting things going on for me to make confident claims about timelines and market moves, but I really do doubt the story that the market is up ~13% relative to January of this year simply because investors anticipate a quick return to normal.

What trade should we make if we're all getting the new COVID strain?

People are both buying more equities and selling less, because (1) their expenses are low, due to lockdowns and the impossibility of travel, and (2) the market is going up very fast, and retail investors don't want to sell out of a bull market. There's obviously more going on than just this; retail investors are a minority of all invested capital, but even the big firms appear to have nowhere else to put their money right now. So as long as the lockdowns persist, both household and corporate expenses will remain flatlined.

Even if my entire previous paragraph is wrong and dumb, you can simply observe that the market has soared ever since the original panic-crash, and ask why the virus increasing in its virulence would cause a different consequence than what we've already seen.

What trade should we make if we're all getting the new COVID strain?

Continued lockdowns will likely drive the markets higher, right? A more infectious strain might tend to increase the rate of lockdowns, even as the vaccine rollout continues. So I would just buy-and-hold and then magically know exactly the right time to bail, when lockdowns look like they're about to end.

New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast

A theme is by definition an idea that appears repeatedly, so the easiest method is to just sit back and notice what ideas are cropping up more than once. The first things you notice will by default be superficial, but after reflection you can often hone in on a more concise and deep statement of what the themes are.

For example, a first pass of HPMOR might pick out "overconfidence" as a theme, because Harry (and other characters) are repeatedly overconfident in ways that lead to costly errors. But continued consideration would show that the concept is both more specific and deeper than just "overconfidence", and ties into a whole thesis about Rationality, what Rationality is and isn't (as Eliezer says, providing positive and negative examples), and why it's a good idea.

Another strategy is to simply observe any particular thing that appears in the book and ask "why did the author do that?" The answer, for fiction with any degree of depth, is almost never going to be "because it was entertaining." Even a seemingly shallow gag like Ron Weasley's portrayal in HPMOR is still articulating something.

If this is truly a thing you're interested in getting better at, I would suggest reading books that don't even have cool powerful characters. For example, most things by Ursula Le Guin are going to feel very unsatisfying if you read them with the attitude that you're supposed to be watching a cool dude kick ass, but her books are extremely rewarding in other ways. Most popular genre fare is full of wish-fulfillment narratives, but there's still a lot of genre fiction that doesn't indulge itself in this way. And there's nothing intrinsically wrong with reading that way.

I'm not sure I can name any podcast that offers exclusively "definitive" (that is, author-intended) readings or interpretations, but my own podcast typically goes into themes.

Pain is not the unit of Effort

I very recently noticed something was wrong with my mental stance when I caught myself responding to work agenda items with some variation of the phrase, "Sure, that shouldn't be too painful." Clearly the first thing that came to mind when contemplating a task wasn't how long it would take, what resources would be needed, or how to do it, but rather how much suffering I would have to go through to accomplish it. This actually motivated some deeper changes in my lifestyle. Seeing this post here was extremely useful and timely for me.

Why are young, healthy people eager to take the Covid-19 vaccine?

Is there some additional reason to be concerned about side effects even after the point at which the vaccine has passed all the required trials, relative to the level of concern you should have about any other new vaccine?

the scaling “inconsistency”: openAI’s new insight

I really appreciated the degree of clarity and the organization of this post.

I wonder how much the slope of L(D) is a consequence of the structure of the dataset, and whether we have much power to meaningfully shift the nature of L(D) for large datasets. A lot of the structure of language is very repetitive, and once it is learned, the model doesn't learn much from seeing more examples of the same sort of thing.  But, within the dataset are buried very rare instances of important concept classes. (In other words, the Common Crawl data has a certain perplexity, and that perplexity is a function of both how much of the dataset is easy/broad/repetitive/generic and how much is hard/narrow/unique/specific.) For example: I can't, for the life of me, get GPT-3 to give correct answers on the following type of prompt:

You are facing north. There is a house straight ahead of you. To your left is a mountain. In what cardinal direction is the mountain?

No matter how much priming I give or how I reframe the question, GPT-3 tends to either give a basically random cardinal direction, or just repeat whatever direction I mentioned in the prompt. If you can figure out how to do it, please let me know, but as far as I can tell, GPT-3 really doesn't understand how to do this. I think this is just an example of the sort of thing which simply occurs so infrequently in the dataset that it hasn't learned the abstraction. However, I fully suspect that if there were some corner of the Internet where people wrote a lot about the cardinal directions of things relative to a specified observer, GPT-3 would learn it.
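To pin down what abstraction the prompt above actually requires, here is a minimal Python sketch of the task (all names are my own and purely illustrative): treat the cardinal directions as a clockwise ring, and a relative direction like "left" as a rotation offset from whatever direction the observer is facing.

```python
# Cardinal directions in clockwise order, so "one step right" is +1 on the ring.
CARDINALS = ["north", "east", "south", "west"]

def relative_to_cardinal(facing: str, relative: str) -> str:
    """Return the cardinal direction of something ahead/right/behind/left
    of an observer facing the given cardinal direction."""
    offsets = {"ahead": 0, "right": 1, "behind": 2, "left": 3}
    i = CARDINALS.index(facing)
    return CARDINALS[(i + offsets[relative]) % 4]

# Facing north, the mountain to your left is to the west.
print(relative_to_cardinal("north", "left"))  # west
```

The rule is two table lookups and a modular addition, which is exactly the sort of compact abstraction a model could learn from a handful of examples, if those examples appeared in the data.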

It also seems that one of the important things that humans do but transformers do not is actively seek out more surprising subdomains of the learning space. The big breakthrough in transformers was attention, but currently the attention is only within-sequence, not across-dataset. What does L(D) look like if the model is empowered to notice, while training, that its loss on sequences involving words like "west" and "cardinal direction" is bad, and then to search for and prioritize other sequences with those tokens, rather than simply churning through the next 1000 examples of sequences from which it has essentially already extracted the maximum amount of information? At a certain point, you don't need to train it on "The man woke up and got out of {bed}"; it knew what the last token was going to be long ago.
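The "prioritize surprising subdomains" idea can be sketched as loss-proportional sampling over topic buckets. This is a toy illustration under invented names (not anything GPT-3 actually does): each bucket carries a running loss, and the next training sequence is drawn from buckets in proportion to that loss, so training steps concentrate where the model is still surprised.

```python
import random

def sample_bucket(running_loss: dict, rng: random.Random) -> str:
    """Pick a topic bucket with probability proportional to its running loss."""
    total = sum(running_loss.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for bucket, loss in running_loss.items():
        acc += loss
        if r <= acc:
            return bucket
    return bucket  # fallback for floating-point edge cases

# A well-learned bucket (low loss) vs. a still-surprising one (high loss).
losses = {"generic_prose": 0.1, "cardinal_directions": 3.0}
rng = random.Random(0)
draws = [sample_bucket(losses, rng) for _ in range(1000)]
# The high-loss bucket dominates the draws (~97% in expectation).
print(draws.count("cardinal_directions") > draws.count("generic_prose"))  # True
```

In a real training loop, the running losses would themselves be updated after each batch, so a bucket's sampling weight decays as the model masters it.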

It would be good to know if I'm completely missing something here.
