All of Zac Hatfield Dodds's Comments + Replies

Let's Go Back To Normal

I think it's a valuable post, and agree that as an individual in the USA in 2021 it's worth thinking carefully about these tradeoffs. In Australia though, it's trivial to avoid facing these tradeoffs, because of the different policies we followed through 2020. (I will never claim they were great policies, but they were good enough)

My broader point is that the policy playbook we learn from COVID should be to avoid such situations, not how to live with them for extended periods. Just do the proper lockdown for four to six weeks at the start instead of ... (read more)

Let's Go Back To Normal

This approach to tradeoffs makes sense for the USA in 2021.

I just don't want our analysis to lose sight of the fact that facing these tradeoffs is stupid and avoidable, and that almost every country could have done so much better. Avoiding outbreaks is so much cheaper and easier than dealing with them that the choice to do so should have been overdetermined.

  • The background risk rate in Australia is roughly zero. We occasionally get "outbreaks" of single-digit cases, lock down one city for a few days to trace it, and then go back to normal.
  • It's not even
... (read more)
7Dentin10hI'd just like to point out that while "facing these tradeoffs is stupid and avoidable" (which I agree with), it's much, much more accurate to say instead "facing these tradeoffs is effectively impossible to avoid even though it's stupid and avoidable". We might not like reality, but it's not going to go away no matter how much we call it stupid and avoidable.
What do the reported levels of protection offered by various vaccines mean?

See COVID-19 vaccine efficacy and effectiveness in The Lancet:

Vaccine efficacy is generally reported as a relative risk reduction—ie, the ratio of attack rates [i.e. any symptomatic infection] with and without a vaccine.

Ranking by reported efficacy gives relative risk reductions of 95% for the Pfizer–BioNTech, 94% for the Moderna–NIH, 90% for the Gamaleya, 67% for the J&J, and 67% for the AstraZeneca–Oxford vaccines. However, RRR should be seen against the background risk of being infected and becoming ill with COVID-19, which varies between populati

... (read more)
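To make the relative-vs-absolute distinction concrete, here is a toy calculation with hypothetical background attack rates (not figures from any trial):

```python
def absolute_risk_reduction(attack_rate, rrr):
    """Absolute risk reduction implied by a relative risk reduction (RRR)."""
    return attack_rate * rrr

# The same 95% RRR means very different things at different background risks.
for attack_rate in (0.10, 0.01):
    arr = absolute_risk_reduction(attack_rate, 0.95)
    nnv = 1 / arr  # number needed to vaccinate to prevent one symptomatic case
    print(f"attack rate {attack_rate:.0%}: ARR = {arr:.2%}, NNV ~ {nnv:.0f}")
```

At a 10% background attack rate you prevent roughly one case per 11 vaccinations; at 1%, roughly one per 105 - the headline RRR is identical in both cases.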
ACrackedPot's Shortform

You have (re)invented delay-line memory!

Acoustic memory in mercury tubes was indeed used by most first-generation electronic computers (1948-60ish); I love the aesthetic but admit they're terrible even compared to electromagnetic delay lines. An even better (British) aesthetic would be Turing's suggestion of using gin as the acoustic medium...

There’s no such thing as a tree (phylogenetically)

Height is also useful for reducing the impact of fires, herbivores, some parasites, etc.; and gives you substantially better volume-of-airflow-over-leaves, which can be helpful - a flat sheet of leaf-material would markedly underperform for respiration, even before considering the variable angle of sunlight for photosynthesis.

With some handwaving, we seem to agree that "the absence of trees becoming grass-like indicates that there's no nice/large path in evolution-trajectory-space which is continuously competitive" and I'm gesturing towards the known-to-be... (read more)

The Schelling Game (a.k.a. the Coordination Game)

Dixit, which has similar gameplay, does develop group-independent skills - though in-group references often dominate skill.

1Ericf2dHeh. specifically, max point scoring Dixit play involves explicitly referencing an in-joke known by some of the group, and unknown to others.
There’s no such thing as a tree (phylogenetically)

Why don’t more plants evolve towards the “grass” strategy?

I suspect it's related to the distinction between C3 and C4 photosynthesis - both are common in grasses, and C4 species tend to do better in hot climates, but trees seem to have trouble evolving C4 pathways even though C4 photosynthesis has arisen on 60+ separate occasions.

(also IMO monocots top out at "kinda tree-ish" - they do have a recognisable trunk, but more fibrous than woody)

3ejacob3dMy hypothesis after 30 seconds of thinking was that trees evolve independently because height = good for competing for sunlight, while grasses must specialize a ton to 'afford' passing up on the height advantage. So once a grass is established somewhere it might be hard for an up-and-coming-almost-grass species to nudge it out of its niche. Maybe this is related? I could imagine lots of plants getting stuck in a local maximum of fitness where they are still pretty tree-like but would need to simultaneously lose some tree features and gain C4 photosynthesis in order to succeed as grasses, so the gap to jump in adaptation-space is too large.
Viliam's Shortform

While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:

  • Land also has use-value, Bitcoin doesn't
  • Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
  • Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don't value Bitcoin's advantages...
  • Full-Georgist 'land' taxes disincentivise searching f
... (read more)
2Viliam3dOh, that's an interesting point: in Georgist system, if you invent a better use of your land, the rational thing to do is shut up, because making it known would increase your tax! I wonder what would happen in an imperfectly Georgist system, with a 50% or 90% land value tax. Someone smarter than me probably already thought about it. Also, people can brainstorm about the better use of their neighbor's land. No one would probably spend money to find out whether there is oil under your house. But cheap ideas like "your house seems like a perfect location to build a restaurant" would happen. Maybe in Georgist societies people would build huge fences around their land, to discourage neighbors from even thinking about it.
[Letter] Re: Advice for High School

Nonfiction

For tech history - it's worth knowing how modern industrial civilisation arose! - I'd recommend

Why read old books to understand technology? Because they come from a different world-view and make very different assumptions about the direction that things are going - because they have only the context of thei... (read more)

5lsusr6dThe Art of Unix Programming helped me get into software engineering too—especially Chapter 2. Jason Crawford has written up his highlights from the Carnegie book [https://www.lesswrong.com/posts/4yDZgN7dvyzKEMYkg/highlights-from-the-autobiography-of-andrew-carnegie] . I replaced The Black Swan with your recommendations. You are right about Seeing Like A State. I have removed Seeing Like A State from the list. The Secret of Our Success [https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/] belongs with Seeing Like A State. Yep!
[Linkpost] Treacherous turns in the wild

Trying to unpack why I don't think of this as a treacherous turn:

  • It's a simple case of a nearest unblocked strategy
  • I'd expect a degree of planning and human-modelling which were absent in this case. A 'deception phase' based on unplanned behavioural differences in different environments doesn't quite fit for me.
  • Neither the evolved organisms nor the process of evolution are sufficiently agentlike that I find the "treacherous turn" to be a useful intuition pump.

I think it's mostly the intuition-pump argument; there are obviously risks that you evolve ... (read more)

What topics are on Dath Ilan's civics exam?

A reasonable argument could be made that in our form of democracy, civics knowledge is of little use to the average citizen. This is because each of us has such an infinitesimal 'vote', and each person well educated in civics has their vote drowned out.

IMO the assumption that civics knowledge is only useful when voting, is itself a concerning failure of civics education. Above-average civics knowledge might reveal high-value opportunities such as advocacy, focussed policy submissions, talking to friends about particular policies, raising public a... (read more)

1Gerald Monroe9dNote that being a volunteer super-agitator is also not being an average citizen...
[Linkpost] Treacherous turns in the wild

I would not call this a treacherous turn - the "treachery" was a regular and anticipated behaviour, and "evolve higher replication rates in the environment" is a pretty obvious outcome.

Suppressing-and-ignoring failed "treachery" in the sandbox just has the effect of adding selection pressure towards outcomes that the censor doesn't detect. Important lesson from safety engineering: you need to learn from near misses, or you'll eventually have a nasty accident. In a real turn, you don't get this kind of warning.

4Neel Nanda9dI disagree, I think that toy results like this are exactly the kind of warning we'd expect to see. You might not get a warning shot from a superintelligence, but it seems great to collect examples like this of warning shots from systems dumber - if there's going to be continuous takeoff, and there's going to be a treacherous turn eventually, it seems like a great way to get people to take treacherous turns seriously is to watch closely for failed examples (though hopefully ones more sophisticated than this!)
Malicious non-state actors and AI safety

Instead, I'm worried about the sort of person who become a mass-shooter or serial killer. ... I'm worried about people who value hurting others for its own sake.

Empirically, almost or actually no mass-shooters (or serial killers) have this kind of abstract and scope-insensitive motivation. Look at this writeup of a DoJ study: it's almost always a specific combination of a violent and traumatic background, a short-term crisis period, and ready access to firearms.

0keti9dEven if there's just one such person, I think that one person still has a significant chance of succeeding. However, more importantly, I don't see how we could rule out that there are people who want to cause widespread destruction and are willing to sacrifice things for it, even if they wouldn't be interested in being a serial killer or mass shooter. I mean, I don't see how we have any data. I think that for almost all of history, there has been little opportunity for a single individual to cause world-level destruction. Maybe during the time around the Cold War someone could manage to trick the USSR and USA into starting a nuclear war. Other than that, I can't think of many other opportunities. There are eight billion people in the world, and potentially all it would take is one, with sufficient motivation, to bring about a really bad outcome. Given we need a conjunction over eight billion people, I think it would be hard to show that there is no such person. So I'm still quite concerned about malicious non-state actors. And I think there are some reasonably doable, reasonably low-cost things someone could do about this. Potentially just having very thorough security clearance before allowing someone to work on AGI-related stuff could make a big difference. And increasing the physical security of the AGI organization could also be helpful. But currently, I don't think people at Google and other AI labs are worrying about this. We could at least tell them about this.
2Josh Smith-Brennan10dI think the efforts to focus on issues of 'Mental Health' pay only lip service to this point. We live in a culture which relies on male culture to be about learning to traumatize others and learning to tolerate trauma, while at the same time decrying it as toxic male culture. Males are rightly confused these days, and the lack of adequate social services combined with a country filled with guns that continues to promote media of all sorts that celebrates violence as long as it's 'good violence', is a recipe for this kind of tragedy. Focusing on the individual shooters as being the problem isn't the answer. It is a systemic problem I believe.
1keti10dThis is a good point. I didn't know this. I really should have researched things more.
For mRNA vaccines, is (short-term) efficacy really higher after the second dose?

Similarly, the effect of the second dose might be to maintain the high initial effectiveness for a longer period of time, by "reminding" your immune system not to relax too soon.

Daniel Kokotajlo's Shortform

The IEA is a running joke in climate policy circles; they're transparently in favour of fossil fuels and their "forecasts" are motivated by political (or perhaps commercial, hard to untangle with oil) interests rather than any attempt at predictive accuracy.

1Sherrinford10dWhat do you mean by "transparently" in favour of fossil fuels? Is there anything like a direct quote e.g. of Fatih Birol backing this up?
2Daniel Kokotajlo11dOH ok thanks! Glad to hear that. I'll edit.
Malicious non-state actors and AI safety

I refer you to Gwern's Terrorism Is Not About Terror:

Statistical analysis of terrorist groups’ longevity, aims, methods and successes reveal that groups are self-contradictory and self-sabotaging, generally ineffective; common stereotypes like terrorists being poor or ultra-skilled are false. Superficially appealing counter-examples are discussed and rejected. Data on motivations and the dissolution of terrorist groups are brought into play and the surprising conclusion reached: terrorism is a form of socialization or status-seeking.

and Terrorism Is No... (read more)

1keti11dI'm not worried about the sort of person who would become a terrorist. Usually, they just have a goal like political change, and are willing to kill for it. Instead, I'm worried about the sort of person who becomes a mass-shooter or serial killer. I'm worried about people who value hurting others for its own sake. If a terrorist group took control of AGI, then things might not be too bad. I think most terrorists don't want to damage the world, they just want their political change. So they could just use their AGI to enact whatever political or other changes they want, and after that not be evil. But if someone who just terminally values harming others, like a mass-shooter, took over the world, things would probably be much worse. Could you clarify what you're thinking of when saying "so any prospective murderer who was "malicious [and] willing to incur large personal costs to cause large amounts of suffering" would already have far better options than a mass shooting"? What other, better options would they have that they don't take?
Naturalism and AI alignment

Assuming we completely solved the problem of making AI do what its instructor tells it to do

This seems to either (a) assume the whole technical alignment problem out of existence, or (b) claim that paperclippers are just fine.

The Fall of Rome, II: Energy Problems?

Wikipedia has a page on Roman deforestation, which even uses the phrase "peak wood" - so depletion was definitely a concern (and Italy has never recovered the pre-Roman forests).

All that said, I think you're still underestimating the costs of transport!

  • Overland transport - i.e. wagons, and perhaps draft animals - is prohibitively expensive for anything more than a single-day journey of up to ~10 miles. Fuel is bulky, heavy, and frankly not that valuable.
  • Moving bulk freight by water is much better - whether floating logs down a river Canadian-style, or
... (read more)
Two Designs

Important caveat for the pass-through approach: if any of your build_dataset() functions accept **kwargs, you have to be very careful about how they're handled to preserve the property that "calling a function with unused arguments is an error". It was a lot of work to clean this up in Matplotlib...

The general lesson is that "magic" interfaces which try to 'do what I mean' are nice to work with at the top-level, but it's a lot easier to reason about composing primitives if they're all super-strict.
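A minimal sketch of the contrast (function names hypothetical, echoing the post's `build_dataset`): a strict core that rejects unused arguments, wrapped by a permissive pass-through layer:

```python
def build_dataset_strict(a, b):
    # Strict core: calling with an unknown keyword raises TypeError,
    # so typos and stale parameters fail loudly.
    return {"a": a, "b": b}

def build_dataset(**params):
    # Permissive boundary: silently discards anything it doesn't need.
    return build_dataset_strict(params["a"], params["b"])

parameters = {"a": 1, "b": 2, "unused": 3}
print(build_dataset(**parameters))         # succeeds; "unused" is dropped
try:
    build_dataset_strict(**parameters)     # the strict core refuses
except TypeError as e:
    print("strict core rejected:", e)
```

Keeping the "magic" at a single thin boundary means everything beneath it stays easy to reason about and compose.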

Another example: $hypothesis write numpy.matmul produces c... (read more)

2SatvikBeri14d100% agree. In general I usually aim to have a thin boundary layer that does validation and converts everything to nice types/data structures, and then a much stricter core of inner functionality. Part of the reason I chose to write about this example is because it's very different from what I normally do. To make the pass-through approach work, the build_dataset functions do accept excess parameters and throw them away. That's definitely a cost. The easiest way to handle it is to have the build_dataset functions themselves just pass the actually needed arguments to a stricter, core function, e.g.:

    def build_dataset(a, b, **kwargs):
        build_dataset_strict(a, b)

    build_dataset(**parameters)  # Succeeds as long as keys named "a" and "b" are in parameters
Covid 4/15: Are We Seriously Doing This Again

I don't think we're actually disagreeing much about outcomes (which I agree have been great!), or even that Australia has competently executed at least enough of the important things to get right. Of the five items you mention I'd include borders, quarantine, snap-lockdowns, and testing as part of the local elimination policy; we haven't done them perfectly but we have done them well enough.

I understand "using good epistemics to make decisions" to require that your decisions should be made based on a coherent understanding and cost-benefit analysis of the... (read more)

Are there opportunities for small investors unavailable to big ones?

Some factors which I think are both important and missing from your model:

  • Risk. You probably cannot convince me that, in a liquid market, your outperforming trading strategy does not round to "picking up pennies in front of a steamroller".
  • Availability of capital. If you have to lock up $10K for a year per 20-hours-of-research deal, you're probably more constrained by money than time.
  • Opportunity costs. If you have sufficient quant and business skills to make money trading, you can probably make more working somewhere and investing the proceeds in index funds.
  • Transaction costs, taxes, etc.
2AllAmericanBreakfast21dI agree, these are all important and missing. The concept of this model isn't to give license to any trade meeting these criteria. Instead, it's to show you a category of trades that might be ignored by hedge funds and HFTs not, a priori, because they are bad investments. So the idea would be that within this space, you'd then look for trades with an attractive risk/reward/tax/fee profile. I do agree that opportunity costs might be the clincher. It might be that no matter how much you earn an hour (risk adjusted), the simple fact that you can make that much through this form of work is strong evidence you could have made more in another line of work. It points to a model of the world that goes something like this: Humanity has a giant pot of slack, known as "funding," which it doles out liberally to people who've got a reasonable chance of providing value to shareholders. This tends to generate even more wealth for those value-generating businesspeople, who then have nothing to do with that money but put it back into the pot. Investing, then, is only for two kinds of people: * Those who have more money than energy + brains (which doesn't imply they're lazy/dumb, just that they have a WHOLE lot of money, or that they're ready to retire) * People who want to help the investors make better investment decisions. This is a form of work, rather than investing. But they have to prove their product works, by using it to make good investments. Example: people selling satellite data to hedge funds. So maybe we need a change of quote. Instead of "if you're so smart, why aren't you rich," it's "why are you investing, if you're not rich, retired or dumb?"
Are there opportunities for small investors unavailable to big ones?

On the other hand, there's some suggestive evidence that seed-stage returns have a power-law distribution - implying that the best strategy is to filter out the obvious duds and then invest in literally everything else.

Covid 4/15: Are We Seriously Doing This Again

Worldwide demand should be easily big enough to justify [subcontracting manufacturing]

If it was legal to sell vaccines for the market price, or anywhere near their actual value, of course. Thanks to monopsony purchasers (i.e. irrationally cheap governments), we instead see massive underproduction.

Covid 4/15: Are We Seriously Doing This Again

The hypothesis that Australia succeeded because it was using good epistemics to make decisions is not holding up well in the endgame.

From Australia, this hypothesis was only ever plausible if you looked at high-level outcomes rather than the actual decision-making.


We got basically one thing right: pursue local elimination. Without going into details, this only happened because the Victorian state government unilaterally held their hard lockdown all the way back to nothing-for-two-weeks, ending our winter second wave. Doing so created both a status ... (read more)

7mukashi21dStrongly disagree. Australia has done many things right: 1. Close the borders early (and close them for real) 2. Very efficient contact tracing. Even after months with 0 cases, we still are asked to sign in to every bar we visit 3. Two-week supervised quarantines for returning Australians 4. Very quick reactions as soon as one case is detected in the community, e.g. Queensland lockdown from a few weeks ago 5. Tons of testing Etc Handwashing has no effect on the transmission of the virus. Distancing is meaningless if there are no cases. I will concede though that the vaccine rollout is being inefficient, but it does not have such a high cost: people are not dying in the thousands. Australia can afford that. Deaths in the USA (correcting for population) are 50 times higher than in Australia.
Raemon's Shortform

Important for what? Best for what?

In a given (sub)field, the highest-cited papers tend to be those which introduced or substantially improved on a key idea/result/concept; so they're important in that sense. If you're looking for the best introduction though that will often be a textbook, and there might be important caveats or limitations in a later and less-cited paper.

I've also had a problem where a few highly cited papers propose $approach, many papers apply or purport to extend it, and then eventually someone does a well-powered study checking whethe... (read more)

What weird beliefs do you have?

Excellent question!

I'm not personally concerned about what Bostrom called 'risks of irrationality and error' or 'risks to valuable states and activities'. There are costs of rationality though, where knowing just a little can expose you to harms that you're not yet equipped to handle (classic examples: scope sensitivity, demandingness, death). This rounds to common sense - 'be sensitive about when/whether/how to discuss upsetting topics'.

Mostly though, I'm inclined to keep quiet about data, idea, and attention hazards where my teenage self might have wan... (read more)

What weird beliefs do you have?

At a more concrete level, I've spent the last ~14 months holding strong and unusual views on most pandemic-related matters, though I don't think any of them would raise eyebrows on LessWrong. A minority are probably now mainstream, the others - unfortunately - remain weird.

What weird beliefs do you have?

Taking information hazards seriously.

This can range from the benign (is it a good idea to post very weird beliefs here?) to the more worrying (plausible attacks on $insert_important_system_here), and upwards.

5niplav22dDoes this include extreme examples, such as pieces of information that permanently damage your mind when exposed to, or antimemes [https://www.lesswrong.com/s/3xKXGh9RXaYTYZYgZ]? Have you made any changes to your personal life because of this?
Ben Goertzel's "Kinds of Minds"

https://staking.singularitynet.io/howitworks

youvegottobekiddingme

The entire project description is full of "we will", "we aim to", "we are creating"... without visible evidence that the project has actually made a novel technical thing, I tend to assume that it's just a cash grab using a pile of buzzwords.

MikkW's Shortform

For a value of "break into flames" that matches damp and poorly-oxygenated fuel, yep! This case in Australia is illustrative; you tend to get a lot of nasty smoke rather than a nice campfire vibe.

You'd have to mismanage a household-scale compost pile very badly before it spontaneously combusts, but it's a known and common failure mode for commercial-scale operations above a few tons. Specific details about when depend a great deal on the composition of the pile; with nitrate filmstock it was possible with as little as a few grams.

What will GPT-4 be incapable of?

It's at least syntactically-valid word salad composed of relevant words, which is a substantial advance - and per Gwern, I'm very cautious about generalising from "the first few results from this prompt are bad" to "GPT can't X".

What will GPT-4 be incapable of?

Testing in full generality is certainly AGI-complete (and a nice ingredient for recursive self-improvement!), but I think you're overestimating the difficulty of pattern-matching your way to decent tests. Chess used to be considered AGI-complete too; I'd guess testing is more like poetry+arithmetic in that if you can handle context, style, and some details it comes out pretty nicely.

I expect GPT-4 to be substantially better at this 'out of the box' due to

  • the usual combination of larger, better at generalising, scaling laws, etc.
  • super-linear performance
... (read more)
2Ericf1moI object to the characterization that it is "getting the right idea." It seems to have latched on to "given a foo of bar" -> "@given(foo.bar)" and that "assert" should be used, but the rest is word salad, not code.
What will GPT-4 be incapable of?

Reason about code.

Specifically, I've been trying to get GPT-3 to outperform the Hypothesis Ghostwriter in automatic generation of tests and specifications, without any success. I expect that GPT-4 will also underperform; but that it could outperform if fine-tuned on the problem.

If I knew where to get training data I'd like to try this with GPT-3 for that matter; I'm much more attached to the user experience of "hypothesis write mypackage generates good tests" than any particular implementation (modulo installation and other manageable ops issues for novice users).
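The Ghostwriter itself emits Hypothesis tests; as a stdlib-only sketch of the kind of specification it targets - here a round-trip property, with `random_record` standing in for a Hypothesis strategy - consider:

```python
import json
import random
import string

def random_record():
    # Hypothetical generator standing in for a Hypothesis strategy:
    # small dicts of lowercase-string keys to small ints.
    return {
        "".join(random.choices(string.ascii_lowercase, k=5)): random.randint(-100, 100)
        for _ in range(random.randint(0, 5))
    }

# Round-trip property: loads(dumps(x)) == x for JSON-safe values.
for _ in range(200):
    record = random_record()
    assert json.loads(json.dumps(record)) == record
```

Round-trip, equivalence-to-oracle, and metamorphic properties like this are exactly the patterns that are plausible targets for automated generation, with or without a language model.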

1Michaël Trazzi1moI think the general answer to testing seems AGI-complete in the sense that you should understand the edge-cases of a function (or correct output from "normal" input). if we take the simplest testing case, let's say python using pytest, with a typed code, with some simple test for each type (eg. 0 and 1 for integers, empty/random strings, etc.) then you could show it examples on how to generate tests from function names... but then you could also just do it with reg-ex, so I guess with hypothesis. so maybe the right question to ask is: what do you expect GPT-4 to do better than GPT-3 relative to the train distribution (which will have maybe 1-2y of more github data) + context window? What's the bottleneck? When would you say "I'm pretty sure there's enough capacity to do it"? What are the few-shot examples you feed your model?
In plain English - in what ways are Bayes' Rule and Popperian falsificationism conflicting epistemologies?

Considered as an epistemology, I don't think you're missing anything.

To reconstruct Popperian falsification from Bayes, see that if you observe something that some hypothesis gave probability ~0 ("impossible"), that hypothesis is almost certainly false - it's been "falsified" by the evidence. With a large enough hypothesis space you can recover Bayes from Popper - that's Solomonoff Induction - but you'd never want to in practice.
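The recovery of falsification from Bayes' theorem can be written out directly:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
```

so if the hypothesis assigned the observed evidence probability near zero, $P(E \mid H) \approx 0$, then $P(H \mid E) \approx 0$ for any prior $P(H)$ short of certainty - the evidence "falsifies" $H$ as a limiting case of ordinary updating.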

For more about science - as institution, culture, discipline, human activity, etc. - and ideal Bayesian rationality, see the Sc... (read more)

4Sandro P.1moThanks for the recommendation. To the sequence I go!
Speculations Concerning the First Free-ish Prediction Market

With the capital I have on hand as a PhD student, there's just no way that running something like Vitalik's pipeline to make money on prediction markets will have a higher excess return-on-hours-worked over holding ETH than my next-best option (which I currently think is a business I'm starting).

If I was starting with a larger capital pool, or equivalently a lower hourly rate, I can see how it would be attractive though.

Speculations Concerning the First Free-ish Prediction Market

I don't use Polymarket because, relative to a material investment,

More generally, I haven't yet seen a prediction market where the "easy money" looks more attractive on a risk-and-work-adjusted basis than working on HypoFuzz. Perhaps... (read more)

9ike1moUSDC is a very different thing than tether. Do you have most of your net worth tied up in Eth, or something other than USD at any rate? If not I don't see how the volatility point could apply.
Raj Thimmiah's Shortform

Don't forget inventions: in the long run, changing the set of available goods and services has been even more important (!) than improving their distribution. Notable post-WWII examples include high-yield cereal varieties, smallpox and polio vaccines, everything made with semiconductors and all the services they enable...

The importance of how you weigh it

An alternative framing of this project (for global utilitarians) is "the project of explaining our moral intuitions and other ethical considerations" à la rule utilitarianism. I don't see this as a purely empirical project though; there's also a lot of conceptual work which philosophy is well equipped for. That said,

those inclined to stop at the basic set seem more likely to sit down in the armchair for a bit, answer their central questions with reference to the basic set, then get up and leave — in search of further, empirical information and opportun

... (read more)
Think like an educator about code quality

It seems to me that your post is missing something: what specifically do you want people to learn?

I am an educator - currently teaching CS research and Python for a general audience - and I find it's easy to get people to learn... and surprisingly hard to have everyone learn the same specific thing (to a degree they can use, not just repeat on a test <=2 weeks later). Circumstances to discover and apply the ideas for themselves are best, followed by a variety of communications strategies like the videos, diagrams, and docs you mention.

For code-quality ... (read more)

2adamzerner1moHm, I'm not sure I'm following. It sounds like you're asking this from the perspective of "I'm a developer. What specifically do I want to teach the other developers about how this code works?" The answer to that totally depends on what the code is for. I do think it would have been better if I had a running, concrete example to reference throughout the post though. Or is this specific enough to count as an answer? If so, yes, that's what I'm going for. That's an interesting perspective. I don't have a good enough grasp of what each term really means, but to me education is a type of communication that has a connotation of being about something that is harder to grasp. Ie. if I'm teaching you calculus, that's education, but if I'm figuring out a time for us to meet for coffee, that's communication. With that, education seems like a better term for what I'm going for than communication, but it's very possible I'm using the terms improperly.
Think like an educator about code quality

Fill in the blank: "Think like a ____ about code quality". What else makes sense?

From my own experience, 'think like an open-source maintainer':

  • One goal is to make it easy (and fun!) enough to work with the code that others will use it and contribute to it - voluntarily, not because they need to for a job or a class. Clarity and brevity are virtues, as is functionality.
  • The code, in itself, is an instrument for the education of users (and contributors). Readers should be enlightened about the purpose of the code, and at a more granular level focus
... (read more)
2adamzerner1moThat's amazing! Great idea!
My AGI Threat Model: Misaligned Model-Based RL Agent

I assume that the learned components (world-model, value function, planner / actor) continue to be updated in deployment—a.k.a. online learning

If it's not online learning in the strict sense, I'd expect a sufficient (and sufficiently-low-insight) process of updates and redeploys to have the same effect in the medium term. I agree that online learning is the most likely path in such scenarios, but I don't think it's necessary.

Jimrandomh's Shortform

It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems.

I disagree. While C is indeed terribly unsafe, it is always the case that a safety-critical system exhibiting behaviour you thought impossible is a serious safety risk - because it means that your understanding of the system is wrong, and that includes the safety properties.

What's a good way to test basic machine learning code?

I don't know of any courses specifically on testing ML (or numerical) code, but 'property-based testing' gives you great tools for testing code where coming up with input-output pairs is difficult. I wrote a paper on testing numerical or scientific code last year, and Quviq makes great PBT tools for Elixir.
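To make the idea concrete, here's a minimal hand-rolled sketch of property-based testing for a numerical routine: rather than fixed input/output pairs, we generate random inputs and check invariants that must hold for *any* input. (The `softmax` function and the particular invariants are my illustrative choices; real PBT libraries like Hypothesis add smarter input generation and automatic shrinking of failing examples.)

```python
import math
import random

def softmax(xs):
    # Subtracting the max keeps exp() from overflowing for large inputs.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def test_softmax_properties(trials=200):
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.uniform(-50, 50) for _ in range(rng.randint(1, 20))]
        probs = softmax(xs)
        assert all(p >= 0 for p in probs)    # probabilities are non-negative
        assert math.isclose(sum(probs), 1.0)  # and sum to one
        # softmax should be invariant to shifting every input by a constant
        shifted = softmax([x + 7.5 for x in xs])
        assert all(math.isclose(a, b, abs_tol=1e-9)
                   for a, b in zip(probs, shifted))

test_softmax_properties()
```

Notice that we never had to say what the "right answer" is for any particular input - which is exactly the situation you're usually in with ML code.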

1Kenny2moYour paper is excellent so far – very readable! Thanks again!
1Kenny2moI see now that my question title could be better. I'm more looking for test cases, than testing tools. I've added your paper to my reading queue! Thanks!
Zac Hatfield Dodds's Shortform

Fully-general precommitment and its discontents

A cute trick for Newcomblike problems: precommit to "always act as if I had made the most advantageous precommitment".  This is the essence of functional decision theory; but operationalisation remains challenging.  Consider a Stag Hunt among FDT agents:

  • Without common knowledge (that each agent knows the others use FDT), FDT has nothing to say about the game - just calculate expected value by multiplying payoffs by probabilities as usual.
  • With common knowledge, FDT agents hunt stag, hurray!
... (read more)
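The no-common-knowledge case really is just ordinary expected-value arithmetic. A sketch, with illustrative payoffs of my own choosing (stag+stag pays 4, hunting stag alone pays 0, hunting hare always pays 3):

```python
def ev_stag(p_other_stag, payoff_both_stag=4.0, payoff_lone_stag=0.0):
    # Expected value of hunting stag, given your credence that the
    # other player also hunts stag.
    return p_other_stag * payoff_both_stag + (1 - p_other_stag) * payoff_lone_stag

def ev_hare(payoff_hare=3.0):
    return payoff_hare  # hare pays off regardless of the other player

# Without common knowledge of FDT, just compare expected values:
for p in (0.5, 0.75, 0.9):
    best = "stag" if ev_stag(p) > ev_hare() else "hare"
    print(f"P(other hunts stag) = {p}: hunt {best}")
# → hare, hare (tie), stag
```

With these payoffs, stag only beats hare once your credence that the other player cooperates exceeds 0.75 - which is exactly why the common-knowledge condition does so much work.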
Covid 3/4: Declare Victory and Leave Home

Seconding Eliezer's recommendation for the Young Wizards series; I'd call them the most important fiction I've read.

And cheap ebooks are available direct from the author, e.g. https://ebooks.direct/products/the-i-want-everything-youve-got-package (and the Tale of the Five series is probably also of interest to anyone who can spell "polycule").

2gjm2moNote that if you are in the UK, that link will not work. (For me, exactly what happens depends on what browser I'm using, but a typical experience is that it displays the bundle you intended, and then after about a second it switches to displaying a grid showing all their products, and then after about 1/3 of a second those all disappear, leaving me on a shopping page where I cannot in fact either view or buy anything.) The reason, looking at their blog, appears to be that their response to the extra bureaucracy they face post-Brexit is to give up on selling to the UK, so they just filter out their entire product range if they detect that you're in the UK. (You'd have thought it would be easy for them to put up a little notice saying why, so it doesn't look like a bug, but no matter.) The intended workaround for people in the UK is a thing that points you at places where you can buy their books (presumably at higher prices and with DRM) from Amazon etc., but that is currently unavailable and "will become available during the week of January 18-22, 2021". I suppose it's appropriate for SF&F authors to sell their books only to people capable of time travel.
Takeaways from one year of lockdown

I agree that expecting a more competent government response than we actually saw (almost anywhere [1]) was entirely reasonable, and that premised on that, extreme action to manage tail risks was prudent for the first half of 2020.

Subsequently, I think the rationalist community underperformed wrt. understanding the ongoing crises; we've had excellent epistemology but only inconsistently translated that into "winning" as the problem moved from extreme uncertainty and tail risk, to a set of more detail-rich operational challenges. In a slogan, we've been long... (read more)

1tog2moI'm curious, what countries have and haven't seen substantial focus on hand hygiene? We have that here in Canada.
Micromorts vs. Life Expectancy

The simple answer is that micromort exposure is not independent of age: in expectation, a larger proportion of 80-year-old Americans will die within the next day than of 30-year-old Americans.

Based on 1, a 30-year-old faces ~6 micromorts/day, rising to ~180 by age 80! On the other hand, I'm a little suspicious of their numbers, because the female death rate is lower than the male rate in literally every age group, and by eyeball it seems too much to explain by surviving longer into the 85-and-over group. 2 is a nice overview of life expectancy dynamics, including a s... (read more)
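The conversion behind those figures is simple: take the annual all-cause death rate for an age group, divide by days per year, and scale to millionths. The rates below are rough illustrative values I've chosen to reproduce the ~6 and ~180 above, not figures from the linked source:

```python
def micromorts_per_day(annual_death_rate):
    # One micromort = a one-in-a-million chance of death, so a
    # probability-per-day becomes micromorts/day when multiplied by 1e6.
    return annual_death_rate / 365.25 * 1_000_000

# Rough all-cause annual death rates (assumed for illustration):
print(round(micromorts_per_day(0.0021)))  # ~30-year-old: ≈ 6 /day
print(round(micromorts_per_day(0.066)))   # ~80-year-old: ≈ 181 /day
```

This also shows why naively dividing a lifetime's million micromorts by a constant daily rate gives nonsense: the daily rate is thirty times higher at 80 than at 30, so most of the budget is spent late in life.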

1Young Buck3moI figured it had something to do with different ages. At some level it still felt like if the micromort-based estimate of lifespan is 124 years, then some group should have a life expectancy that long. But based on your comments, maybe the real issue is that, say, a baby that dies on the first day experiences 1 million micromorts per day, while someone who lives to be 100 experiences (1 million)/(365.25 days per year x 100 years) = 27 micromorts per day. Wait, something still doesn't make sense, because even at age 100, 27 per day on average is more than the 24 per day that we get from using deaths in a year.