All of lukeprog's Comments + Replies

Ideal governance (for companies, countries and more)

Some other literature OTOH:

Epistemic Legibility

Lots of overlap between this concept and what Open Phil calls reasoning transparency.

List of Probability Calibration Exercises

The Open Philanthropy and 80,000 Hours links are for the same app, just at different URLs.

Forecasting Newsletter: December 2021

On Foretell moving to ARLIS… There's no way you could've known this, but as it happens Foretell is moving from one Open Phil grantee (CSET) to another (UMD ARLIS). TBC I wasn't involved in the decision for Foretell to make that transition, but it seems fine to me, and Foretell is essentially becoming another part of the project I funded at ARLIS.

Forecasting Newsletter: December 2021

Someone with a newsletter aimed at people interested in forecasting should let them know. :)

3Jonas Vollmer4mo
Would be very excited to get applications to the EA Infrastructure Fund (EAIF)! Apply here [https://funds.effectivealtruism.org/apply-for-funding], it's fast and easy. (I run EA Funds, which includes EAIF.)
Forecasting Newsletter: December 2021

$40k feels like a significant fraction of all the funding there is for small experiments in the forecasting space.

Seems like a fit for the EA Infrastructure Fund, no?

2NunoSempere4mo
Maybe. I might refer some people there. But I don't think there is all that much awareness that applying there is a thing that can be done.

I don't think I had seen that, and wow, it definitely covers basically all of what I was thinking about trying to say in this post, and a bit more.

I do think there is something useful to say about how reference class combinations work, and using causal models versus correlational ones for model combination given heterogeneous data - but that will require formulating it more clearly than I have in my head right now. (I'm working on two different projects where I'm getting it straighter in my head, which led to this post, as a quick explanatio... (read more)

Predictions/questions about conquistadors?

Very cool that you posted these quantified predictions in advance!

2Daniel Kokotajlo2y
Their major flaw is that their resolution criteria are pretty vague. But, better than nothing I guess!
Peter's COVID Consolidated Brief - 29 Apr

Nice write-up!

A few thoughts re: Scott Alexander & Rob Wiblin on prediction.

  • Scott wrote that "On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." I just want to note that while this was indeed a badly failed prediction, in a sense the supers were wrong by just two days. (WHO-counted cases only reached >200k on March 18th, two days before question close.)
  • One interesting pre-coronavirus probabilistic forecast of global pandemic odds is this:
... (read more)
Cortés, Pizarro, and Afonso as Precedents for Takeover

Nice post. Were there any sources besides Wikipedia that you found especially helpful when researching this post?

2Daniel Kokotajlo2y
This post was based on very little research; all I did was read the wiki pages. So it's possible that a real historian (or a real history book) would yield different conclusions. However, I am fairly confident this won't happen for the main conclusion of my post: that you don't need a god-like technological advantage for a tiny group to have a good shot at quickly taking over a large region.
In Defense of the Arms Races… that End Arms Races
If the U.S. had kept racing in its military capacity after WW2, the U.S. might have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the build-up of world-threatening numbers of high-yield weapons.

BTW, the most thorough published examination I've seen of whether the U.S. could've done this is Quester (2000). I've been digging into the question in more detail and I'm still not sure whether it's true or not (but "may" seems reasonable).

1Gentzel2y
Thanks, some of Quester's other books on deterrence [https://www.goodreads.com/author/list/240996.George_H_Quester] also seem interesting.

My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post, I think I would focus less on that case, which I think can rightly be contested from a number of directions, and talk more about the conditions for race deterrence generally.

Basically, if you can credibly build up the capacity to win an arms race (with significant advantages in the relevant forms of talent, natural resources, industrial capacity, etc.), then you may not even have to race. Limited development could plausibly serve to make that capacity credible and capture the positive externalities of cutting-edge R&D, while avoiding sinking a lot of the economy into the production of destabilizing systems. By showing extreme capability in a limited sense, and a credible capability to win a particular race, you may be able to deter racing, provided the communication of lasting advantage is credible. If lasting advantage is not credible, you may instead get a Sputnik- or AlphaGo-type event and galvanize competitors to race faster.

For global tech competition more generally, it would be interesting to investigate industrial subsidies by competing governments, to see under what conditions countries attempt strategic protectionism (and work around the WTO) and in which cases they give up a sector of competition. My prior is that protectionism is more likely when an industry is already established, and that countries which could have successfully entered a sector can be deterred from doing so.
1Daniel Kokotajlo2y
Yeah, me too. Well, I won't exactly have done a full lit review by the time the blog post comes out... my post is mostly about other things. So don't get your hopes up too high. A good idea for future work though... maybe we can put it on AI Impacts' todo list.
Preliminary thoughts on moral weight

Interesting historical footnote from Louis Francini:

This issue of differing "capacities for happiness" was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp 57-58, and especially 130-131). He doesn't go into much detail at all, but this is the earliest discussion of which I am aware. Well, there's also the Bentham-Mill debate about higher and lower pleasures ("It is better to be a human being dissatisfied than a pig satisfied"), but I think that may be a slightly different issue.
Which scientific discovery was most ahead of its time?

Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. Hero's eolipile and al-Kindi's development of relative frequency analysis for decoding messages. Probably we underestimate how common such cases are, because the knowledge of the lost discovery is itself lost — e.g. we might easily have simply not rediscovered the Antikythera mechanism.

3ChristianKl3y
Hero's eolipile was an invention that had no practical use. The steam engine that did have practical use relied on high-quality brass [https://history.stackexchange.com/a/28677/153] that wasn't available in Hero's time and only became available in the late 1600s.
3Matthew Barnett3y
Darwinian natural selection is sometimes pointed to as a late development, given that it could have been inferred by anyone who understood that certain traits are heritable. However, the fact that two people figured it out more or less independently at approximately the same time [https://en.wikipedia.org/wiki/Publication_of_Darwin%27s_theory] makes me think that it came at about the right time.
Preliminary thoughts on moral weight

Apparently Shelly Kagan has a book coming out soon that is (sort of?) about moral weight.

A Proper Scoring Rule for Confidence Intervals

This scoring rule has some downsides from a usability standpoint. See Greenberg 2018, a whitepaper prepared as background material for a (forthcoming) calibration training app.

Preliminary thoughts on moral weight

Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in this post (e.g. see section 1.1.1.1).

Preliminary thoughts on moral weight

My own take on this is described briefly here, with more detail in various appendices, e.g. here.

Preliminary thoughts on moral weight

Yes, I meant to be describing ranges conditional on each species being a moral patient at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight given consciousness/patienthood. So, depending on how you use that evidence, it's important to watch out for double-counting.

I'll skip responding to #2 for now.

Preliminary thoughts on moral weight

For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.

9habryka4y
(You have to press space after finishing some markdown syntax to have it be properly parsed. Fixed it for you, and sorry for the confusion.)
Announcement: AI alignment prize winners and next round

Cool, this looks better than I'd been expecting. Thanks for doing this! Looking forward to next round.

5cousin_it4y
Thank you Luke! I probably should've asked before, but if you have any ideas how to make this better organizationally, please let me know.
Ten small life improvements

One of my most-used tools is very simple: an Alfred snippet that lets me paste-as-plain-text using Cmd+Opt+V.

LessWrong 2.0 Feature Roadmap & Feature Suggestions

From a user's profile, be able to see their comments in addition to their posts.

Dunno about others, but this is actually one of the LW features I use the most.

(Apologies if this is listed somewhere already and I missed it.)

9Paul Crowley4y
I note that this is now done.
2habryka5y
Yes! I agree. I also see that as a key feature. I've been working on this, but apparently forgot to add it to the feature list. This is related to improving search in general by allowing you to search not only through posts but also through comments and user profiles, which is high-priority for me.
LessWrong 2.0 Feature Roadmap & Feature Suggestions

Probably not suitable for launch, but given that the epistemic seriousness of the users is the most important "feature" for me and some other people I've spoken to, I wonder if some kind of "user badges" thing might be helpful, especially if it influences the weight that upvotes and downvotes from those users have. E.g. one badge could be "has read >60% of the sequences, as 'verified' by one of the 150 people the LW admins trust to verify such a thing about someone" and "verified superforecaster" an

... (read more)
1Chris_Leong5y
I don't think we should emphasise this too much as many people would have read a lot of the sequences, but not had it actually recorded. (Apparently they have some data for different user accounts, but I have read a lot of articles whilst not logged in)
AGI and Mainstream Culture

Thanks for briefly describing those Doctor Who episodes.

The Best Textbooks on Every Subject

Lists of textbook award winners like this one might also be useful.

Can the Chain Still Hold You?

Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the "Best of BackStory, Vol. 1" episode of the podcast BackStory.

0Raemon5y
Cool, I'd been wondering about that. :)
CFAR’s new focus, and AI Safety

"Accuracy-boosting" or "raising accuracy"?

Paid research assistant position focusing on artificial intelligence and existential risk

Source. But the non-cached page says "The details of this job cannot be viewed at this time," so maybe the job opening is no longer available.

FWIW, I'm a bit familiar with Dafoe's thinking on the issues, and I think it would be a good use of time for the right person to work with him.

9AnnaSalamon6y
Thanks!
Audio version of Rationality: From AI to Zombies out of beta

Any chance you'll eventually get this up on Audible? I suspect that in the long run, it can find a wider audience there.

1Yiar6y
I've been listening to the book with an iOS app called Voice Dream Reader [https://itunes.apple.com/us/app/voice-dream-reader/id496177674?mt=8], with the voice Amy from Ivona. It's the best-quality voice I've found for iOS and it lets me listen to all my ebooks. A real voice is probably better, but I got used to this one in no time and now enjoy stimulating my mind while I e.g. go to school. Greatly recommended!

We're in the process of getting it onto Audible, and we plan to get it onto iTunes as well, to get it in front of as wide an audience as possible.

1Dr_Manhattan6y
Two thumbs up! Audible has a much better interface/DRM management than podcast readers. Many LW readers already use Audible. Plus you can get a lot of traffic via the recommender system.
The Best Textbooks on Every Subject

Another attempt to do something like this thread: Viva la Books.

3Davidmanheim10mo
This is unfortunately defunct, replaced by another site on a different topic.
0marai27y
Thanks so much!
Estimate Stability

I guess subjective logic is also trying to handle this kind of thing. From Jøsang's book draft:

Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expre

... (read more)
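
To make the quoted idea concrete, here's a minimal sketch in Python (the class name and numbers are illustrative, not from Jøsang's book) of a binomial subjective-logic opinion: it carries belief, disbelief, uncertainty, and a base rate, and collapses to an ordinary probability via the projection P = b + a·u. Two estimates can share the same point probability while differing greatly in how uncertain they are, which is exactly the estimate-stability distinction at issue.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial subjective-logic opinion: belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5  # prior probability used to resolve the uncertainty mass

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        if abs(total - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must sum to 1")

    def projected_probability(self) -> float:
        # Jøsang's projected probability: P = b + a * u
        return self.belief + self.base_rate * self.uncertainty

# Two estimates with the same projected probability but very different stability:
confident = Opinion(belief=0.68, disbelief=0.28, uncertainty=0.04)                # lots of evidence
vacuous = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0, base_rate=0.70)     # no evidence at all
print(confident.projected_probability())  # ≈ 0.70
print(vacuous.projected_probability())    # ≈ 0.70
```

In Jøsang's framework, fusion and discounting operators then combine such opinions while tracking how the uncertainty propagates, which ordinary point probabilities can't do.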
[link] FLI's recommended project grants for AI safety research announced

For those who haven't been around as long as Wei Dai…

Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his coming of age sequence.

In turn, Nick very regularly and explicitly credits the role that Eliezer's work and discussions with Eliezer have played in his own research and thinking over the course of FHI's work on AI safety.

A map: Typology of human extinction risks

Any idea when the book is coming out?

1turchin7y
I am now mostly concentrating on roadmaps. While the content is mostly the same, the roadmaps attract more interest than plain text. I will rearrange the book and put a roadmap at the beginning of each chapter, which the chapter will then explain.
Learning to get things right first time

I don't know if this is commercially feasible, but I do like this idea from the perspective of building civilizational competence at getting things right on the first try.
