steven0461

Steven K. (not to be confused with https://www.lesswrong.com/users/steve2152)

Comments

steven0461's Shortform Feed

Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth (William D. Nordhaus)

Has anyone looked at this? Nordhaus claims current trends suggest the singularity is not near, though I wouldn't expect current trends outside AI to be very informative. He does seem to acknowledge x-risk in section Xf, which I don't think I've seen from other top economists.

Survey on AI existential risk scenarios

Define an existential catastrophe due to AI as an existential catastrophe that could have been avoided had humanity's development, deployment or governance of AI been otherwise. This includes cases where:

AI directly causes the catastrophe.

AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the involvement of AI.

Humanity survives but its suboptimal use of AI means that we fall permanently and drastically short of our full potential.

This technically seems to include cases like the following: AGI is not developed by 2050, a nuclear war in 2050 causes an existential catastrophe, and an aligned AGI, had it been developed by then, would have prevented the war. I don't know whether respondents interpreted it that way.

steven0461's Shortform Feed

Sorry, I don't think I understand what you mean. There can still be a process that gets the same answer as the long reflection, but with e.g. less suffering or waste of resources, right?

"Existential risk from AI" survey results

A few of the answers seem really high. I wonder if anyone interpreted the questions as asking for P(loss of value | insufficient alignment research) and P(loss of value | misalignment) despite Note B.

steven0461's Shortform Feed

I'd like to register skepticism of the idea of a "long reflection". I'd guess that any intelligence that knew how to stabilize the world against processes that affect humanity's reflection about its values in undesirable ways (e.g. existential disasters), without also stabilizing it against processes that affect that reflection in desirable ways, would already understand the value extrapolation problem well enough to take a lot of shortcuts in calculating the final answer, compared to running the experiment in real life. (You might call such a calculation a "Hard Reflection".)

A Brief Review of Current and Near-Future Methods of Genetic Engineering

Great post, very informative.

Step 7 seems like it's already possible given that most research into tissue engineering assumes embryonic stem cells or some other pluripotent stem cells as a starting point.

Typo for "Step 6"?

Anna and Oliver discuss Children and X-Risk

Guesses:

1. In most cases, children on net detract from other major projects for common-sense time/attention/optionality management reasons (as well as because they sometimes commit people to a world view of relatively slow change).

2. Whether to have children isn't each other's business, and pressure against doing normal human things like this is net socially harmful (conservatives in particular are alienated by a culture of childlessness, though maybe that's net strategically useful).

3. People conflate 2 with not-1 on an emotional level and feel 1 is false because 2 is true.

Poll: Which variables are most strategically relevant?

I would add: "Will relevant people expect AI to have extreme benefits, such as a significant percentage point reduction in other existential risk or a technological solution to aging?"

Information Charts

I agree, of course, that a bad prediction can perform better than a good prediction by luck. That means if you were already sufficiently sure your prediction was good, you can continue to believe it was good after it performs badly. But your belief that the prediction was good then comes from your model of the sources of the competing predictions prior to observing the result (e.g. "PredictIt probably only predicted a higher Trump probability because Trump Trump Trump") instead of from the result itself. The result itself still reflects badly on your prediction. Your prediction may not have been worse, but it performed worse, and that is (perhaps insufficient) Bayesian evidence that it actually was worse. If Nate Silver is claiming something like "sure, our prediction of voter % performed badly compared to PredictIt's implicit prediction of voter %, but we already strongly believed it was good, and therefore still believe it was good, though with less confidence", then I'm fine with that. But that wasn't my impression.
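To make "performed worse" and "Bayesian evidence" concrete, here's a minimal sketch in Python; the forecast distributions and the observed margin are invented for illustration and are not the actual FiveThirtyEight or PredictIt numbers:

```python
import math

# Hypothetical forecasts of a vote margin (in points), modeled as normal
# distributions. These numbers are invented for illustration; they are not
# the actual FiveThirtyEight or PredictIt forecasts.
forecast_a = (8.0, 3.0)   # (mean, std): a model expecting a wide win
forecast_b = (4.0, 3.0)   # (mean, std): a market implicitly expecting a narrow win
observed_margin = 4.5     # hypothetical observed outcome

def normal_logpdf(x, mean, std):
    """Log density of a normal distribution at x."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

# Realized log scores: the forecast that assigned more density to what actually
# happened performed better on this outcome.
score_a = normal_logpdf(observed_margin, *forecast_a)
score_b = normal_logpdf(observed_margin, *forecast_b)
print(f"forecast A log score: {score_a:.3f}")
print(f"forecast B log score: {score_b:.3f}")

# The likelihood ratio is the Bayes factor by which this one outcome should
# shift your belief about which forecast was actually better. A strong prior
# that A was better can survive it, but the evidence still points toward B.
bayes_factor = math.exp(score_b - score_a)
print(f"Bayes factor favoring B: {bayes_factor:.2f}")
```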

edit:

Deviating from the naive view implicitly assumes that confidently predicting a narrow win was too hard to be plausible

I agree I'm making an assumption like "the difference in probability between a 6.5% average poll error and a 5.5% average poll error isn't huge", but I can't conceive of any reason to expect a sudden cliff there instead of a smooth bell curve.
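As a rough illustration of the "smooth bell curve" point (the error distribution and its spread are assumptions made purely for this sketch, not anyone's actual model):

```python
import math

def normal_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / math.sqrt(2 * math.pi * std ** 2)

# Suppose average poll error is roughly normal with mean 0 and std 4 points
# (parameters chosen only for illustration).
density_55 = normal_pdf(5.5, 0.0, 4.0)
density_65 = normal_pdf(6.5, 0.0, 4.0)

# Under any smooth bell curve, nearby error sizes have densities that differ
# by a modest factor rather than a cliff; here the ratio is about 1.5.
print(f"p(5.5-point error) / p(6.5-point error) ≈ {density_55 / density_65:.2f}")
```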

Did anybody calculate the Brier score for per-state election forecasts?

Yes, that looks like a crux. I guess I don't see the need to reason about calibration instead of directly about expected log score.
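For concreteness, here is one way "reasoning directly about expected log score" could look; the outcome probabilities below are assumed purely for illustration:

```python
import math

def expected_log_score(believed_probs, forecast_probs):
    """Expected log score of a forecast, with the expectation taken under the
    outcome probabilities you actually believe."""
    return sum(p * math.log(q) for p, q in zip(believed_probs, forecast_probs))

# Believed outcome probabilities (assumed for illustration) and two candidate forecasts.
believed = [0.7, 0.3]
honest = [0.7, 0.3]          # reporting your actual beliefs
overconfident = [0.9, 0.1]   # a sharper forecast, scored directly with no calibration bucketing

print(expected_log_score(believed, honest))         # ≈ -0.611 (the best achievable)
print(expected_log_score(believed, overconfident))  # ≈ -0.765 (strictly worse in expectation)
```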
