RobertM

LessWrong dev & admin as of July 5th, 2022.

Comments

RobertM · 11h · 20

I think there might be many local improvements, but I'm pretty uncertain about important factors like the elasticity of "demand" (for robbery) with respect to how much of a medication is available on demand: how many fewer robberies do you get if a robber can obtain at most a single prescription's worth of some kind of controlled substance (and not necessarily any specific one), compared to "none" (the current situation) or "whatever the pharmacy has in stock"? (I'm not actually sure the latter was the previous situation; maybe pharmacies had time delay safes for storing medication that wasn't filling a prescription, and just didn't store the filled prescriptions in the safes as well.)

Headline claim: time delay safes are probably much too expensive in human time costs to justify their benefits.

The largest pharmacy chains in the US, accounting for more than 50% of the prescription drug market[1][2], have been rolling out time delay safes (to prevent theft)[3].  Although I haven't confirmed that this is true across all chains and individual pharmacy locations, I believe these safes are used for all controlled substances.  These safes open ~5-10 minutes after being prompted.

There were >41 million prescriptions dispensed for Adderall in the US in 2021[4].  (Note that this likely means ~12x fewer people were prescribed Adderall that year, since prescriptions for controlled substances are typically written for a month at a time.)  Multiply that by 5 minutes and you get >200 million minutes, or >390 person-years, wasted.  Now, surely some of that time is partially recaptured by e.g. people doing their shopping while waiting, or by various other substitution effects.  But that's also just Adderall!
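For concreteness, here's the back-of-the-envelope arithmetic as a rough Python sketch (it uses the low end of the ~5-10 minute delay and ignores any recaptured time):

```python
prescriptions = 41_000_000  # Adderall prescriptions dispensed in the US, 2021
delay_minutes = 5           # low end of the ~5-10 minute safe delay

total_minutes = prescriptions * delay_minutes   # 205,000,000 minutes
person_years = total_minutes / (60 * 24 * 365)  # minutes -> years
print(f"{person_years:.0f} person-years")       # -> 390 person-years
```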

It seems quite unlikely that this is on the efficient frontier of crime-prevention mechanisms, but alas, the stores aren't the ones (mostly) paying the costs imposed by their choices here.

1. ^ https://www.mckinsey.com/industries/healthcare/our-insights/meeting-changing-consumer-needs-the-us-retail-pharmacy-of-the-future
2. ^ https://www.statista.com/statistics/734171/pharmacies-ranked-by-rx-market-share-in-us/
3. ^ https://www.cvshealth.com/news/pharmacy/cvs-health-completes-nationwide-rollout-of-time-delay-safes.html
4. ^ https://www.axios.com/2022/11/15/adderall-shortage-adhd-diagnosis-prescriptions

RobertM · 1mo · 126

> use spaces that your community already has (Lighthaven?), even if they're not quite set up the right way for them

"Not set up the right way" would be an understatement, I think.  Lighthaven doesn't have an indoor space that can seat several hundred people, and trying to do it outdoors seems like it'd require solving maybe-intractable logistical problems (weather, acoustics, etc.).  (Also, Lighthaven was booked, and it's not obvious to me to what degree we'd want to subsidize the solstice celebration.  It'd also require committing a year ahead of time, since most other suitable venues are booked up for the holidays quite far in advance.)

I don't think there are other community venues that could host the solstice celebration for free, but there might be opportunities for cheaper (or free) venues outside the community (with various trade-offs).

RobertM · 1mo · 51

> Having said that, I would NOT describe this as asking "how could I have arrived at the same destination by a shorter route". I would just describe it as asking "what did I learn here, really".

I mean, yeah, they're different things.  If you can figure out how to get to the correct destination faster the next time you're trying to figure something out, that seems obviously useful.

RobertM · 1mo · 20

Some related thoughts.  I think the main issue here is actually making the claim of permanent shutdown & deletion credible.  I can think of ways around a few obvious issues, but others (including moral issues) remain, and in any case the current AGI labs don't seem like the kinds of organizations that can make that kind of commitment in a way that's both sufficiently credible and legible that the remaining probability mass on "this is actually just a test" wouldn't tip the scales.

RobertM · 1mo · Ω9172

> I am not covering training setups where we purposefully train an AI to be agentic and autonomous. I just think it's not plausible that we just keep scaling up networks, run pretraining + light RLHF, and then produce a schemer.[2]

Like Ryan, I'm interested in how much of this claim is conditional on "just keep scaling up networks" being insufficient to produce relevantly-superhuman systems (i.e. systems capable of doing scientific R&D better and faster than humans, without humans in the intellectual part of the loop).  If it's "most of it", then my guess is that this accounts for a good chunk of the disagreement.

RobertM · 2mo · 163

Curated.  I liked that this post had a lot of object-level detail about a process that is usually opaque to outsiders, and that the "Lessons Learned" section was also grounded enough that someone reading this post might actually be able to skip "learning from experience", at least for a few possible issues that might come up if one tried to do this sort of thing.

RobertM · 2mo · 112

(We check for "downvoter count within window", not all-time.)
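(For illustration only, a minimal sketch of what a within-window check might look like; the names, vote representation, and 30-day window below are hypothetical, not the actual LessWrong implementation:)

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: count *distinct* downvoters within a recent window,
# rather than over the account's whole history. Window length is illustrative.
WINDOW = timedelta(days=30)

def downvoter_count_within_window(votes, now=None):
    """votes: iterable of (voter_id, vote_type, timestamp) tuples,
    with timezone-aware timestamps."""
    now = now or datetime.now(timezone.utc)
    recent_downvoters = {
        voter_id
        for voter_id, vote_type, ts in votes
        if vote_type == "downvote" and now - ts <= WINDOW
    }
    return len(recent_downvoters)
```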

RobertM · 2mo · 135

Curated.  This dialogue distilled a decent number of points I consider cruxes between these two (clusters of) positions.  I also appreciated the substantial number of references linking back to central and generally high-quality examples of each argument being made; I think this is especially helpful when writing a dialogue meant to represent positions people actually hold.

I look forward to the next installment.

RobertM · 3mo · 30

Here's the editor guide section for spoilers.  (Note that I tested the instructions for markdown, and markdown spoilers do indeed seem broken in a weird way; the WYSIWYG spoilers still work normally, but only support "block" spoilers; you can't apply them to partial bits of lines.)

In this case I think a warning at the top of the comment is sufficient, given the context of the rest of the thread, so it's up to you whether you want to try to reformat your comment around our technical limitations.
