moridinamael

Comments

Pain is not the unit of Effort

I very recently noticed something was wrong with my mental stance when I caught myself responding to work agenda items with some variation of the phrase, "Sure, that shouldn't be too painful." Clearly the first thing that came to mind when contemplating a task wasn't how long it would take, what resources would be needed, or how to do it, but rather how much suffering I would have to go through to accomplish it. This realization actually motivated some deeper changes in my lifestyle. Seeing this post here was extremely useful and timely for me.

Why are young, healthy people eager to take the Covid-19 vaccine?

Is there some additional reason to be concerned about side effects, once the vaccine has passed all the required trials, beyond the level of concern you would have about any other new vaccine?

the scaling “inconsistency”: openAI’s new insight

I really appreciated the degree of clarity and the organization of this post.

I wonder how much the slope of L(D) is a consequence of the structure of the dataset, and whether we have much power to meaningfully shift the nature of L(D) for large datasets. A lot of the structure of language is very repetitive, and once it is learned, the model doesn't learn much from seeing more examples of the same sort of thing. But buried within the dataset are very rare instances of important concept classes. (In other words, the Common Crawl data has a certain perplexity, and that perplexity is a function of both how much of the dataset is easy/broad/repetitive/generic and how much is hard/narrow/unique/specific.) For example, I can't, for the life of me, get GPT-3 to give correct answers on the following type of prompt:

You are facing north. There is a house straight ahead of you. To your left is a mountain. In what cardinal direction is the mountain?

No matter how much priming I give or how I reframe the question, GPT-3 tends either to give a basically random cardinal direction or to repeat whatever direction I mentioned in the prompt. If you can figure out how to do it, please let me know, but as far as I can tell, GPT-3 really doesn't understand how to do this. I think this is just an example of the sort of thing that occurs so infrequently in the dataset that the model hasn't learned the abstraction. However, I fully expect that if there were some corner of the Internet where people wrote a lot about the cardinal directions of things relative to a specified observer, GPT-3 would learn it.

It also seems that one of the important things humans do but transformers do not is actively seek out more surprising subdomains of the learning space. The big breakthrough in transformers was attention, but currently the attention is only within-sequence, not across-dataset. What would L(D) look like if the model were empowered to notice, while training, that its loss on sequences involving words like "west" and "cardinal direction" is bad, and then to search for and prioritize other sequences with those tokens, rather than simply churning through the next 1000 examples of sequences from which it has essentially already extracted the maximum amount of information? At a certain point, you don't need to train it on "The man woke up and got out of {bed}"; it knew what the last token was going to be long ago.
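To make the idea concrete, here is a minimal, hypothetical sketch of loss-prioritized sampling in Python. Everything here is a stand-in: `dataset` is a list of token sequences and `loss_fn` is whatever computes the model's current loss on a sequence. This is one naive way to bias training toward still-surprising data, not a description of anything OpenAI actually does:

```python
import random

def prioritized_batches(dataset, loss_fn, batch_size=32):
    """Yield batches sampled in proportion to each sequence's estimated
    loss, so training dwells on text the model still finds surprising
    and rarely revisits text it has effectively memorized."""
    # Optimistic initialization: every sequence starts out looking surprising.
    loss_est = [1.0] * len(dataset)
    while True:
        # Draw indices with probability proportional to estimated loss.
        idxs = random.choices(range(len(dataset)), weights=loss_est, k=batch_size)
        yield [dataset[i] for i in idxs]
        # After the caller's training step, refresh estimates for what was seen;
        # "The man woke up and got out of {bed}" quickly stops being drawn.
        for i in idxs:
            loss_est[i] = max(loss_fn(dataset[i]), 1e-6)  # keep weights positive
```

The obvious catch is cost: refreshing loss estimates means extra forward passes, and anything real would presumably amortize that (e.g., by reusing the losses already computed during training steps) rather than re-scoring sequences separately.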

It would be good to know if I'm completely missing something here.

Is Stupidity Expanding? Some Hypotheses.

By “meme” I mean Dawkins’ original definition. A meme is just any idea to which Darwinian selection forces apply. For example, a good idea will be gradually stripped of nuance and accuracy as it passes through the communication network, and eventually becomes dumb.

Is Stupidity Expanding? Some Hypotheses.

We've built a bunch of tools for instant mind-to-mind communication, with built-in features that amplify communiques that are short, simple, and emotional. Over the last ten years an increasingly large fraction of all interpersonal communication has passed through these "dumb-pass filter" communication systems. This process has systematically favored memes that are stupid. When everyone around you appears to be stupid, it makes you stupid. Even if you aren't on these communication platforms, your friends are, and their brains are being filled up with finely honed, evolutionarily optimized stupidity.

Rationality and Climate Change

Not sure that I disagree with you at all on any specific point.

It's just that "Considering the possibility that a technological fix will not be available" actually looks like staring down the barrel of a civilizational gun. There is no clever policy solution that dodges the bullet. 

If you impose a large carbon tax, or some other effective global policy of austerity that reduces fossil fuel use without replacing that energy, you're just making the whole world poor: electricity, food, transportation, and medical bills rise above even their currently barely affordable levels, and the growth of the developing world is halted and probably reversed. If your reason for imposing a carbon tax is not "to incentivize tech development" but instead "to punish people for using energy," people will revolt. There were riots in France over a relatively modest gasoline tax. An actual across-the-board policy of austerity would either be repealed quickly, lead to civilizational collapse and mass death, or both.

If you impose a small carbon tax (or some other token gesture at austerity and conservation), it will simply not be adequate to address the issue. At best it will impose a very slight damping on the growth function.

This is what I mean when I say there is no practical policy proposal that addresses the problem. It is technology, or death. If you know of a plan that persuasively and quantitatively argues otherwise, I'd love to see it.

Rationality and Climate Change

Epistemic status: You asked, so I'm answering, though I'm open to having my mind changed on several details if my assumptions turn out to be wrong. I probably wouldn't have written something like this without prompting. If it's relevant, I'm the author of at least one paper commissioned by the EPA on climate-related concerns.

I don't like the branding of "Fighting Climate Change" and would like to see less of it. The actual goal is providing energy to sustain the survival and flourishing of 7.8+ billion people, fueling a technologically advanced global civilization, while simultaneously reducing the negative externalities of energy generation. In other words, we're faced with a multi-dimensional optimization problem, while the rhetoric of "Fighting Climate Change" almost universally only addresses the last dimension, reducing externalities. Currently 80% of worldwide energy comes from fossil fuels and only 5% comes from renewables. So, simplistically, renewables need to generate 16x as much energy as they do right now. This number is "not so bad" if you assume that technology will continue to develop, putting renewables on an exponential curve, and "pretty bad" if you assume that renewables continue to be implemented at about the current rate.
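Spelling out the simplistic arithmetic behind that multiple (it assumes total demand stays flat, which the next paragraph argues it won't):

$$\frac{80\%}{5\%} = 16$$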

And we need more energy generating capacity than we have now. A lot more. Current energy generation capacity only really provides a high standard of living for a small percentage of the world population. Everybody wants to lift Africa out of poverty, but nobody seems interested in asking how many new power plants that will require. These power plants will be built with whatever technology is cheapest. We cannot dictate policy in power plant construction in the developing world; all we can do is try to make sure that better technologies exist when those plants are built.

I have seen no realistic policy proposal that meaningfully addresses climate change through austerity (voluntary reduced consumption) or increased energy usage efficiency. These sorts of things can help on the margins, but any actual solution will involve technology development. Direct carbon capture is also a possible target for technological breakthrough.

Three car seats?

https://www.multimac.com/p/multimac_1320_4_seater

£1599.00 =)

It's pretty cool, but hardly a slam-dunk rejoinder if the whole issue in question is whether having a 3rd or 4th child is discontinuously costly due to sedan width.

Personally, I just ended up buying a minivan.

Three car seats?

It qualifies as a trivial inconvenience. We essentially had to buy three new car seats when we had our third, because the two we were using for the first two kids took up too much space and needed to be replaced with thinner versions.

It does seem like having four children would pose more serious difficulties, since you can no longer fit four young children in a sedan no matter what you do.

moridinamael's Shortform

I'm writing an effortpost on this general topic but wanted to gauge reactions to the following thoughts, so I can tweak my approach.

I was first introduced to rationality about ten years ago and have been a reasonably dedicated practitioner of the discipline that whole time. The first few years saw me making a lot of bad choices: I was in the Valley of Bad Rationality, wielding powerful tools without the experience to use them well.

My own mistakes had a lot to do with overconfidence in my ability to model and navigate complex situations. My ability to model and understand myself was particularly lacking.

In the more recent part of this ten-year period -- say, the last five years -- I've actually gotten a lot better. And I got better, in my opinion, because I kept thinking about the world in a fundamentally rationalist way: I kept making predictions, trying to understand what happened when my predictions went wrong, and updating both my world-model and my meta-model of how I should be thinking about predictions and models.

Centrally, I acquired an intuitive, gut-level sense of how to think about situations where I could see only one angle, where I was definitely or probably missing information, or which hinged on human psychology. Another major improvement came from actually multiplying probabilities semi-explicitly instead of handwaving: it's pretty unlikely that two things, each with an independent 30% chance of being true, are both true. You could say that through trial and error I came to understand why no wise person attempts a plan where more than one thing has to happen "as planned".
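To put numbers on that example:

$$P(A \wedge B) = 0.30 \times 0.30 = 0.09$$

so a plan that requires both to hold succeeds less than one time in ten, even though each piece alone looks like decent odds.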

I think if you had asked me at the five-year mark whether this rationality thing was all it was cracked up to be, I might very well have said that it had led me to make a lot of bad decisions and execute bad plans. But after ten years, and especially the last year or three, it has started working for me in a way that it didn't before.
