All of jasoncrawford's Comments + Replies

The article has a detailed analysis that comes up with a much lower cost. If you think that analysis goes wrong, I'd be curious to understand exactly where?

3 · bhauth · 15d
I sure didn't see one! I saw some analysis of the cost of energy used for grinding up rock, with no consideration of other costs. Can you point me to the section with a detailed analysis of the costs of mining, crushing, and spreading the rock, or of the capital costs of grinders? A detailed analysis would have numbers for these things, not just dismiss them.

OK then. Digging up and crushing olivine to gravel would be $20-30/ton. We know this from the cost of gravel and the availability of olivine deposits. That alone makes this uneconomical, yet the author just dismisses these costs as negligible next to the cost of milling. So either the dismissal is wrong, or the milling cost estimate is wrong, or both.

Why is the cost per ton of CO2 lower than the cost per ton of rock, when 1 ton of rock stores much less than 1 ton of CO2? That's quite a non sequitur! We know what grinding rock to fine powder costs. Use those costs, not the cost of electricity.
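bhauth's point that a ton of rock binds less than a ton of CO2 can be sanity-checked with basic stoichiometry. A sketch, assuming pure forsterite olivine (Mg2SiO4) and complete carbonation — real deposits are less ideal, so this is an upper bound:

```python
# Rough stoichiometry check: how much CO2 can one ton of olivine bind?
# Assumes pure forsterite (Mg2SiO4) and the idealized reaction:
#   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2

M_FORSTERITE = 2 * 24.305 + 28.086 + 4 * 15.999  # g/mol, ~140.7
M_CO2 = 12.011 + 2 * 15.999                      # g/mol, ~44.0

co2_per_ton_rock = 2 * M_CO2 / M_FORSTERITE      # tons CO2 per ton olivine
print(f"{co2_per_ton_rock:.2f} t CO2 per t olivine")  # ~0.63

# So at $20-30/ton just for digging and crushing, the cost per ton of CO2
# would be higher, not lower, than the cost per ton of rock:
cost_low, cost_high = 20, 30
print(f"${cost_low / co2_per_ton_rock:.0f}-${cost_high / co2_per_ton_rock:.0f} per t CO2")
```

Under these assumptions, a ton of olivine captures roughly 0.63 tons of CO2, so any per-ton-of-CO2 cost should come out above the per-ton-of-rock cost, which is the inconsistency bhauth is pointing at.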

Trust is important, but… the Church banning cousin-marriage as the primary cause of a high-trust society? I find it hard to believe. No time now to elaborate on my reasons, but if people are really interested, maybe I will write something up later.

I think in Allen's book there is both a generic claim of high wages, and some specific analyses of technologies like the spinning jenny and whether it would have paid to adopt them.

The builders' wages are part of the generic claim, because there was no building-related technology that was analyzed.

The spinners' wages might be related to the spinning jenny ROI calculations, but I haven't gone deep enough on the analysis to understand how the paper that was linked might affect those calculations.

Maybe! Or maybe you could interest him in a printing press, or a sextant, or at least a plow? That is sort of my point in the second-to-last paragraph (about shape/direction vs. rate).

That is one of many hypotheses. (I haven't studied all of them yet, but I'd be surprised if I ended up ranking that even in the top three causes.)

2 · jmh · 1mo
That might be too quick a dismissal, given the importance typically assigned to trust for well-functioning economies and economic development. But I think the view of some top three, regardless of what the three are, is difficult to accept as an unqualified statement. It seems like we're talking about a very complex and complicated area that will not distill down to some simple map of that territory. I think we will find that the map will need a larger number of layers than just three. Which layers one needs or finds most informative will depend a good bit on what focus, specific question, or framing one starts with. I thought that type of view was implied in your conclusion, so I was a bit surprised to see that parenthetical statement.

It is a spike in the death rate, from covid.

Insurance is exactly a mechanism that transforms high-variance penalties in the future into consistent penalties in the present: the more risky you are, the higher your premiums.

2mako yass1mo
Then insurance as you've defined it is not a specific mechanism, it's a category of mechanisms, most of which don't do the thing they're supposed to do. I want to do mechanism design. I want to make a proposal specific enough to be carried out irl.

Yes, and similarly, William Crookes's warning about a fertilizer shortage in 1898 was correct. Sometimes disaster truly is up ahead and it's crucial to change our course. The difference, IMO, is between saying "this disaster will happen and there's nothing we can do about it" vs. "this disaster will happen unless we recant and turn backwards" vs. "this disaster might happen so we should take positive steps to make sure it doesn't."

Right, and as Tyler Cowen pointed out in the article I linked to, we don't hold the phone company liable if, e.g., criminals use the telephone to plan and execute a crime.

So even if/when liability is the (or part of the) solution, it's not simple or obvious how to apply it. It needs good, careful thinking each time about where the liability should exist, under what circumstances, and so on. This is why we need experts in the law thinking about these things.

3 · ChristianKl · 1mo
I have the impression that your post asserts that the review-and-approval paradigm is in some way more problematic than other paradigms of regulation. It seems to me unclear why that would be true. While it sounds absurd to talk about this, there are legal proposals to do that, at least for some crimes. In the EU there's the idea that machine learning should run on devices to detect when the user engages in certain crimes and alert the authorities. Brazil is currently discussing legal responsibility for social media companies that publish "fake news".

Looking at the “accelerating projection of 1960–1976” data points here, it reaches almost 3 TW by the mid-2010s:

According to Our World in Data's energy data explorer, world electricity generation in 2021 was 27,812.74 TWh, which is 3.17 TW of average power (using 1 year ≈ 8,766 hours, so 1 W sustained for a year is 8,766 Wh).

Comparing almost 3 TW at about 2015 (just eyeballing the chart) to 3.17 TW in 2021, I'd say those are roughly equal. I did not make anything “significantly shinier”, or at least I did not intend to.
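The unit conversion behind this comparison, as a sketch:

```python
# Convert annual electricity generation (TWh/year) to average continuous
# power (TW). One average year is about 8,766 hours (365.25 days * 24 h).

HOURS_PER_YEAR = 365.25 * 24  # 8766.0

def twh_per_year_to_tw(twh: float) -> float:
    return twh / HOURS_PER_YEAR

world_2021 = twh_per_year_to_tw(27_812.74)  # Our World in Data, 2021
print(f"{world_2021:.2f} TW")  # -> 3.17 TW
```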

3 · Thomas Sepulchre · 1mo
Crystal-clear, thank you!

In the records of the society from the 1680s we find evidence of interest in the earliest steam engines and most important, the society was receptive at the time to what was to become a socially revolutionary argument. The fellows discussed the notion that mechanical devices could, and indeed should, save labor, in effect decrease rather than increase employment. At the time of those discussions it was extremely difficult to get a patent from the government for any device if its inventor argued that it would save labor. Indeed until the late 1720s patents

... (read more)

I can say my purpose now, before I give the answer. I'm glad you asked, because people tend to make assumptions.

My purpose is neither to cast doubt on the views expressed here nor to boost their source. It's just a piece of intellectual history. I think it's interesting that someone had this view at a particular time and place, and in a particular context. It's interesting to think about what evidence they had that might have led them to this view, and what evidence they clearly didn't have (e.g., because it hadn't happened yet) that therefore couldn't hav... (read more)

Weird, I don't know how it got reverted. I just restored my additional comments from version history.

2 · gjm · 2mo
Looks like how I remember it again now :-).

Any chance? A one in a million chance? 1e-12? At some point you should take the chance. What is your Faust parameter?

1 · TinkerBird · 2mo
It depends on the rate at which the chance can be decreased. If it takes 50 years to shrink it from 1% to 0.1%, then with all the people who would die in that time, I'd probably be willing to risk it. As of right now, even the most optimistic experts I've seen put p(doom) at much higher than 1%, far into the range where I vote to hit pause.

But we have no idea if our current cryonics works. It's not clear to me whether it's easier to solve that or to solve aging.

3 · Joachim Bartosik · 2mo
I think it should be much easier to get a good estimate of whether cryonics would work. For example:
* if we could simulate an individual C. elegans, then we'd know pretty well what kind of info we need to preserve
* then we can check whether we're preserving it (even if current methods for extracting all the relevant info won't work for a whole human brain because they're way too slow)
And it's a much less risky path than doing AGI quickly. So I think it's a mitigation it'd be good to work on, so that waiting to make AI safer is more palatable.

Chess is a simple game and a professional chess player has played it many, many times. The first time a professional plays you is not their “first try” at chess.

Acting in the (messy, complicated) real world is different.

“On average, buildings that are being blasted with a firehose right now are significantly more likely to be on fire than the typical structure, but this does not mean we should ban fire departments as a clear fire hazard.” (Byrne Hobart)

It basically means large-scale, widely distributed electrical power generation. More narrowly, it can refer to specific proposals from around the 1920s by the progressives of that era for the buildout of electric power infrastructure: see e.g. “Giant Power: A Progressive Proposal of the Nineteen-Twenties” 

Added a caveat about this to the post.

Interesting point about levels of abstraction, I think I agree, but what is a good example?

2 · ryan_b · 3mo
When there was a big surge in people talking about the Dredge Act and the Jones Act last year, I would see conversations in the wild address these three points: unions supporting these bills because they effectively guarantee their members' jobs; shipbuilders supporting these bills because they immunize them from foreign competition; and the economy as measured in GDP.

Union contracts and business decisions are both at the same level of abstraction: thinking about what another group of people thinks and what they do because of it. The GDP is a towering pile of calculations that reduces every transaction in the country to a percentage. It would need a lot of additional argumentation to build the link between the groups of people making decisions about the thing we are asking about and the economy as a whole. Even if it didn't, it would still have problems like double-counting the activity of the unions/businesses under consideration, and including lots of irrelevant information like the entertainment sector of the economy, which has nothing to do with intra-US shipping and dredging.

Different levels of abstraction have different relationships to the question we are investigating. It is hard to put these different relationships together well for ourselves, and very hard to communicate them to others, so we should be pretty skeptical about mixing them, in my view.

Yup, you can always have a domino-effect hypothesis of course (if it matches the timeline of events), rather than positing some general antecedent cause in common to all the failures.

Thanks. Yes this is a good point, and related to @cousin_it's point. Had not heard of this poem, nice reference.

Good point. Related: “Milton Friedman's Thermostat”:

If a house has a good thermostat, we should observe a strong negative correlation between the amount of oil burned in the furnace (M), and the outside temperature (V). But we should observe no correlation between the amount of oil burned in the furnace (M) and the inside temperature (P). And we should observe no correlation between the outside temperature (V) and the inside temperature (P).

An econometrician, observing the data, concludes that the amount of oil burned had no effect on the inside temperatur

... (read more)
2 · jasoncrawford · 3mo
Added a caveat about this to the post.

A related metaphor that I like:

Suppose you are in a boat heading down a river, and there are rocks straight ahead. You might not be sure whether it is best to veer left or right, but you must pick one and put all your effort into it. Averaging the two choices is certain disaster.

(Source, as I recall, is Geoffrey Moore's book Crossing the Chasm.)

Thanks, I have belatedly updated the post with this chart and will include it in the next digest as well.

Well, I was trying to argue against the “statistical parrot” idea, because I think that unfairly downplays the significance and potential of these systems. That's part of the purpose of the “submarine” metaphor: a submarine is actually a very impressive and useful device, even if it doesn't swim like a fish.

I agree that there is some similarity between ANNs and brains, but the differences seem pretty stark to me. 

4 · jacob_cannell · 4mo
There are enormous differences between an AMD EPYC processor and an RTX 4090, and yet within some performance constraints they can run the same code, and there are nearly infinite ways they can instantiate programs that, although vastly different in encoding details, are ultimately very similar. So obviously transformer-based ANNs running on GPUs are very different physical systems than bio brains, but that is mostly irrelevant. What matters is the similarity of the resulting learned software: the mindware. If you train hard enough on token prediction of the internet, eventually reaching very low error, the ANN must learn to simulate human minds, and a sufficient simulation of a mind simply is a mind.

Thanks, I added a parenthetical sentence to indicate this possibility.

Thanks, I tweaked the wording a bit in this paragraph, and I tried to explain later in the essay what it even means for a system to be “trying” to do something.

Thanks! I was not aware of beam search. Any good references to learn about it?

3 · jade · 4mo
Huggingface has a nice guide (https://huggingface.co/blog/how-to-generate) that covers popular approaches to generation circa 2020. I recently read about tail free sampling (https://www.trentonbricken.com/Tail-Free-Sampling/) as well. I'm sure other techniques have been developed since then, though I'm not immersed enough in NLP state-of-the-art to be aware of them.
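As a quick reference, here is beam search over a toy next-token scorer. The vocabulary and probabilities are invented for illustration; real decoders score continuations with a language model:

```python
import math

# Toy next-token model: log-probabilities conditioned on the last token only.
# All numbers are made up for this example.
LOGPROBS = {
    "<s>":  {"the": math.log(0.6), "a": math.log(0.4)},
    "the":  {"cat": math.log(0.5), "dog": math.log(0.5)},
    "a":    {"cat": math.log(0.9), "dog": math.log(0.1)},
    "cat":  {"</s>": 0.0},
    "dog":  {"</s>": 0.0},
    "</s>": {},
}

def beam_search(beam_width=2, max_len=4):
    beams = [(["<s>"], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            nexts = LOGPROBS.get(seq[-1], {})
            if not nexts:  # finished sequence: carry it forward unchanged
                candidates.append((seq, score))
                continue
            for tok, lp in nexts.items():
                candidates.append((seq + [tok], score + lp))
        # Keep only the top `beam_width` sequences at each step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

best_seq, best_score = beam_search()[0]
print(" ".join(best_seq))  # highest-scoring sequence under this toy model
```

Note the characteristic behavior: greedy decoding would commit to "the" (the single most likely first token), but the beam keeps "a" alive long enough to find the higher-probability full sequence "a cat".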

Good point, I say “cure” here but yes I really mean any combination of prevention + cure that solves the problem. You're right that prevention was the majority of the success against infection (and this may be true for cancer as well).

Now all we need is the nanobots…

3 · ChristianKl · 4mo
Our body does produce the nanobots. But naturally, our immune system isn't perfect at making the right nanobots. There are clinical attempts to grow the right nanobots in vitro (https://siteman.wustl.edu/treatment/specialized-programs/stem-cell-transplant-and-cellular-therapies-center/natural-killer-cell-therapy/) and use them to attack cancer. In vitro approaches could be improved with computer models that guide the process. We need one machine learning model that you give the DNA sequence of a cell and that then tells you what protein fragments that cell will display. We need another machine learning model to tell us how the lymphocytes need to be programmed to recognize a cell that displays those fragments. It's a simulation task that's a bit more complex than what AlphaFold is doing. And then we need a good process to grow those lymphocytes in a cost-efficient manner. Likely, something where you synthesise DNA and create a system where that DNA gets used.

This is great, thanks. I added a link to this comment in the body of the post.

Where I was coming from was:

  1. We have put a lot of resources into fighting cancer:
    1. We declared a “War on Cancer” ~50 years ago
    2. There is over $7B for it in this year's appropriations act, about 15% of NIH's total budget
    3. There are also lots of private foundations working on it
    4. It is the canonical example of a big, important thing to be working on
  2. We seem to be making only slow progress
    1. It is still the number one cause of death
    2. Scott Alexander's summary was “gradual improvement
    3. Our treatmen
... (read more)
8 · DirectedEvolution · 4mo
At this point, we're leaving the land of empirical fact behind and entering the conjectural realm. With that caveat, I'm going to give two answers: cancer really is harder than infectious disease, and we are still mainly in a paradigm of treating diseases rather than fighting aging.

With infectious disease, we have two powerful strategies that are lacking in cancer. One is targeting the radically different physiology of infectious agents. Here, the targeting problem that impairs cancer therapies is much reduced. We had antibiotics and vaccines long before we had effective chemotherapies in large part for that reason. Second is targeting the radically different life cycle of infectious agents. Besides STIs, infectious agents have to pass through an external environment to transmit between hosts, and that gives us an opportunity to intervene. We can purify water, cook food, socially distance from the sick, and exterminate vectors like mosquitos. Cancer originates within you, so we just can't use this strategy.

I'm no physicist, so I can only gesture at a couple of structural factors there. One is that with physics, you can directly test your hypothesis on a machine you build from the ground up, whereas in biology, you have to do all your research in an organism that wasn't designed to accord with theory, and where there are enormous ethical barriers to just testing your ideas directly. You can't just give somebody cancer and see if your chemo drug helps.

So point A is that a range of specific factors make cancer especially tough to solve relative to infectious disease or nuclear physics. It's a little like your post from a while back about "why wasn't this invented earlier," but in reverse. We can point to specific factors, the ones I've just named, that are strong reasons to explain why it has taken longer to bring cancer deaths way down than infectious disease deaths. Point B is that even now, we focus a lot on specific diseases like cancer that are actively causi

Thanks @ChristianKl, but I think you're confusing me with someone else? I don't know what transposons are and I haven't written about them.

7 · gilch · 4mo
Maybe johnswentworth, from this one? https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging#Transposons

Thanks. Maybe “pre-theory” is too strong? Maybe “crucial theoretical gaps” is more accurate? I would be interested to hear from experts on this.

If you think we have the basic theory of cancer, the epistemic equivalent of the germ theory, I would be curious to know, when was that established? The germ theory was established around the 1880s or so, and it took several decades for all the solutions I described to be put in place, so maybe by analogy we are in that phase of just seeking effective (and affordable) solutions.

I agree that a lot of the difficulty ... (read more)

The germ theory of disease is, most essentially, the theory that infectious diseases are caused by invasion of the patient's body by a pathogen. It is defined by the type of thing that is the root cause of the disease - in this case, a non-human cell, virus, or even a malformed protein.

The direct equivalent in cancer is the theory that the cancer is made from a human's own cells growing out of control. That's a universally accepted fact and it has been for a long time. Again, I know that you know this, so I'm just really unclear about why you're proposing ... (read more)

1 · Gerald Monroe · 4mo
As far as I know (I didn't work in cancer research but did get a masters in a related field), all cancers have an active mutated gene. This means there is something detectable in every cell we consider part of the tumor: an mRNA with the mutated sequence. I don't know of a biological mechanism where cancer could be possible without this. So scientifically speaking, we are post-theory, and there are no important theoretical gaps. Why aren't there treatments based on this? Well, there are. In rats...

Here's some quantification, from Robert Gordon's The Rise and Fall of American Growth. In 1910, 47% of US jobs were what Gordon classifies as “disagreeable” (farming, blue-collar labor, and domestic service), and only 8% of jobs were “non-routine cognitive” (managerial and professional). By 2009, only 3% of jobs were “disagreeable” and over 37% were “non-routine cognitive”. See full chart below.

I did not say or mean that agriculture is non-vocational. But I think it is not the ideal vocation for 50+% of the workforce.

Vocation is not the same as choice, bu... (read more)

3 · jaspax · 5mo
I think my strongest disagreement here is that the category of "disagreeable" does not cleave reality at the joints, and that the category "non-routine cognitive" contains a lot of work which is not, in fact, intellectually or spiritually fulfilling in the way implied.
4 · ChristianKl · 5mo
I think we have little idea how "vocation" works. It could be like marriage, where arranged marriages don't reduce the likelihood that someone develops love.

I disagree with the Buddhist perspective.

4 · Matt Goldenberg · 5mo
My sense is that in practice you disagree with most of traditional spiritual tradition. In which case the post should actually be called "spiritual benefit doesn't matter" or "redefining spiritual benefit."

Oh, I misunderstood. Yes, my stats are per worker. It's interesting to see that per-person has increased a bit. Not sure what to make of that. The early-1900s stats didn't count a lot of housework that was done mostly by housewives.

3 · jaspax · 5mo
The per-person numbers are almost certainly due to women entering the workforce and thus getting counted in the numbers for the first time. Decline in fertility also has some effect (though probably smaller), as there are now fewer non-working children per adult.

To be clear, I'm talking about total working hours per person.

Most of the reduction happened before 1950, but as you can see from the Our World in Data chart, there was still some reduction after that.

You are not talking about per person, you are talking about per worker. Total working hours per person have increased ~20% from 1950-2000 for ages 25-55.
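The per-worker vs. per-person distinction can be made concrete with a toy calculation (invented numbers, chosen only to show the mechanism, not actual labor statistics):

```python
# Hours per person can rise even while hours per worker fall,
# if labor-force participation rises enough. Numbers are illustrative.

def hours_per_person(hours_per_worker: float, participation: float) -> float:
    return hours_per_worker * participation

h_then = hours_per_person(2000, 0.55)  # fewer workers, longer hours
h_now = hours_per_person(1800, 0.75)   # more workers, shorter hours

print(h_now / h_then)  # > 1: more hours per person despite
                       # ~10% fewer hours per worker
```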

Note that “raising awareness” was actually an important part of the factory safety story. It can be useful if it is channeled into actual solutions (and, to your point about the HPV vaccines, if there isn't too much political tribalism going on such that any issue immediately becomes polarized).

Good point, and one of the hypotheses I considered including was “tech workers already only work 4 hours a day…” but decided it was a bit too snarky and cynical.

There may be some truth to this, but note that there has always been some degree of loafing on the job! In factories it used to be called “soldiering”—see the bit on Taylor and scientific management in this essay.

1 · Edward Pascal · 5mo
The exceptions to what I said above, which are very bad, always involve waiting. I hate it when I have 28 minutes of work to do, but it ain't gonna happen until Joe gets that other thing on my desk. Then Supervisor Jake wants me to help him pick up a rental car. The inefficiencies in those two processes, in the worst case, might eat a whole day and have me home late. This kind of stuff is demoralizing.

I think in the past, factory workers might savor that. It's variety from the line, and it's "easy." For us management and information worker types, or at least me (let's say it's just me), this makes me want to punch holes in drywall. Between people wanting to have meetings in rooms with chairs, and processes involving waiting, those office jobs can get very taxing. Working for myself I mostly avoid the meetings, but I still have those days of time-eating activities.

Perhaps a common culture here on LessWrong is jobs where "we're gonna be here until everything is done" (including entrepreneurs and consultants), and so waiting is painful. Maybe for something like a government bureaucrat or a factory worker, it would still be a boon.
0 · TAG · 5mo
That's never been my experience, and it doesn't make much sense. Since tech workers are expensive, you could just manage with half as many if everyone is doing 4 hours a day.

I'm a tech worker.  I work 40-70 hours a week, depending on incident load.  Nobody I work with or see on a regular basis works less than 40 hours a week, and some are substantially more than that.

My most cognitively productive hours are the four hours in the morning, but there's plenty of lower effort important organizational stuff to fill out the afternoons.  I think a good fraction of my coworkers are like me and don't actually need the job anymore, but we still put forth effort.

I think one of the major missing pieces of your article is "s... (read more)

I have quipped that if you really wanted to slow down AI progress, you should create a Federal AI Initiative and give it billions of dollars in funding.

Or: “An old saw says that if the government really wanted to help literacy and reduce addiction in the inner cities, it would form a Department of Drugs and declare a War on Education.” (from Nanofuture by J. Storrs Hall, who also wrote Where Is My Flying Car?)

5 · jacopo · 5mo
I always thought Hall's point about nanotech was trivially false. Nanotech research as he wanted it died out worldwide, but he explains it by US-specific factors. Why didn't research continue elsewhere? Plus, other fields that got large funding in Europe or Japan are alive and thriving. How come? That doesn't mean that a government program which sets up bad incentives cannot be worse than useless. It can be quite damaging, but it cannot kill a technologically promising research field worldwide for twenty years.
6 · DanielFilan · 5mo
Taking this as a serious proposal:
* my guess is that it pays less well than industrial AI research for the most part
* so probably it mostly ends up increasing the number of grad students + professors in AI
* you could potentially use this to increase the noise:signal ratio in the field
* this could also break conferences by flooding them with papers to review and decreasing the average quality of the reviewer pool
* ideally this would happen before we get good AI tools for reviewing papers

Man, that Hall guy is great. Should invite him to the progress forum.

I'm a little surprised and confused by that comment. It seems a bit like telling Sloan Kettering, “I have misunderstood your vision, which appears to be to create a new branch of biology… I had thought you were interested in trying to figure out how to cure cancer.”

Certainly, I am ultimately interested in sustaining and accelerating progress. (I would be whether or not it had stalled—indeed, I was skeptical of the stagnation hypothesis until I was a couple of years into this project.) I think that in order to do that, we need intellectual work to better un... (read more)

Yeah. In the training I took, they said to apply a tourniquet if the bleeding is continuous and more than 6 oz of blood has been lost, in which case the wound is considered life-threatening.

In the training they tell you to (1) check for responsiveness and then (2) check for breathing. You check for responsiveness by hitting them a bit and shouting “are you OK?” If they are unresponsive but breathing, they don't need CPR. If they are not breathing, or only gasping, they need CPR.

The training did not say anything about checking for a heartbeat.

Better health and nutrition could plausibly have led to higher average intelligence, good point.

However, I think a large part of the Flynn effect is not actual raw intelligence increasing, but better education that leads people to score better on formal intelligence tests.

I suspect there is actually a reciprocal relationship, not simply one-way causation.

Smarter in terms of raw mental capacity? No, I doubt it.

They were getting more educated. In particular, in the 18th century there was a new, mechanistic view of the universe that was spreading, and this was crucial to the development of science and engineering.

(Self-review.) My comment that we could stop wearing masks was glib; I didn't foresee the Delta, Omicron, etc. waves. But I think the general point stands.
