Introduction

Two months ago I recommended the Apollo Neuro for sleep/anxiety/emotional regulation. A number of people purchased it based on my recommendation: at least 25, according to my referral bonuses. Last week I asked people to fill out a form about their experience.

Take-home messages:

  • If you are similar to people who responded to my first post on the Apollo, there’s a ~4% chance you end up getting a solid benefit from the Apollo.
  • The chance of success goes up if you use it multiple hours per day for 4 weeks without seeing evidence of it working, but unless you’re very motivated you’re not going to do that.
  • The long tail of upside is very, very high; I value the Apollo Neuro more than my antidepressant. But you probably won’t. 
  • There’s a ~10% chance the Apollo is actively unpleasant for you; however no one reported cumulative bad effects, only one-time unpleasantness that stopped as soon as they stopped using it. 

With Numbers

The following graphs include only people who found the Apollo and the form via my recommendation post. They do not include me or the superresponders who recommended it to me.

[Forms response chart: "What was the outcome of trying the Apollo?" (18 responses)]

(that’s one person reporting it definitely helped)

An additional six people filled out an earlier version of the form, none of whom found it helpful, bringing the total to 24 people.
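
For the record, that is where the ~4% figure in the take-home messages comes from: one clear success out of 24 total respondents.

\[ \frac{1}{24} \approx 4\% \]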

Obviously I was hoping for a higher success rate. OTOH, the effects are supposed to be cumulative and most people gave up quickly (I base this on conversations with a few people; there wasn't a question for it on the form). Some of that is because using the Apollo wasn't rewarding, and I'll bet a lot of the problem stems from the already pretty mediocre app getting an update that made it actively antagonistic. It probably is just too much work to use it long enough to see results, unless you are desperate or a superresponder.

Of people who weren't using it regularly: 55% returned it, 20% failed to return it, and 35% chose to keep it. I think that last group is probably making a mistake; the costs of luck-based medicine add up, so if you're going to be a serious practitioner you need to get good at cutting your losses. It's not just about the money, but the space and mental attention.

[Forms response chart: "If you didn't like it, what was your experience?" (13 responses)]

Of 6 people in the earlier version of the form, 1-2 found it actively unpleasant. 

The downside turned out to be worse than I pictured. I'm fond of saying "anything with a real effect can hurt you", but I really couldn't imagine how that would happen in this case. The answer is: nightmares and disrupted sleep. In both cases I know of, the person experienced this only once and declined to test it again, so it could be bad luck, but I can't blame them for not collecting more data. No one reported any ill effects after they stopped using it.

I would also like to retract my previous description of the Apollo return policy as “good”. You do get most of your money back, but a 30-day window for a device you’re supposed to test for 28 days before passing judgment is brutal. 

It's surprisingly hard for me to find referral numbers, but I know I spurred at least 25 purchases, and almost certainly fewer than 30. That implies an 80% response rate to my survey, which is phenomenal. It would still be phenomenal even if I'd missed half the purchasers and it was only a 40% response rate. Thanks, guys.
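
Spelling out that arithmetic, with the 24 total responses from above:

\[ \frac{24}{30} = 80\%, \qquad \frac{24}{60} = 40\% \]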

Life as a superresponder

Meanwhile, the Apollo has only gotten better for me. I've basically stopped needing naps unless something obvious goes wrong, my happiness has gone up 2 points on a 10-point scale (probably because of the higher-quality sleep)[1], and sometimes my body just feels good in a way it never has before. I stress-tested the Apollo recently with a very grueling temp gig (the first time in 9 years I've regularly used a morning alarm, and longer hours than I've maybe ever worked), and what would previously have been flat-out impossible was merely pretty costly. Even when things got quite hard I was able to stay present and have enough energy to notice problems and work to correct them. The Apollo wasn't the only contributor to this, but it definitely deserves a plurality of the credit, maybe even a majority.

When I look at the people I know who got a lot out of the Apollo (none of whom were in the sample set because they didn’t hear about it from my blog), the common thread is that they’re fairly somatically aware, but didn’t start that way. I’m not sure how important that second part is: I don’t know anyone who is just naturally embodied. It seems possible that somatic awareness is either necessary to benefit from the Apollo, or necessary to notice the effects before your motivation to fight the terrible app wears off. 

Conclusion

The Apollo doesn't work for most people; you probably shouldn't buy it unless you're somatically aware or have severe enough issues with sleep or anxiety that you can push through the warm-up period.

  1. I don't use a formal mood tracker, but I did have a friend I would ask to send me pictures of their baby when I was stressed or sad. I stopped doing that shortly after getting the Apollo (although due to message retention issues I can't check how long that took to kick in).

Comments

What does "somatically aware" mean here?

I assume that in situations like this, it could make sense for communities to have some devices for people to try out.

Given that some people didn't return theirs, I imagine potential purchasers could buy used ones.

Personally, I like the idea of renting one for 1-2 months, if that were an option. If there's a 5% chance it's really useful, renting it could be a good cost proposition. (I realize I could return it, but feel hesitant to buy one if I think there's a 95% chance I would return it.)

I think it's great that you did this and posted your results. I'd be super interested in learning more about what effect the Neuro is actually having, and why different people have such different experiences.

I was curious about the hypothetical mechanism of action here!

I hunted until I found a wiki page, and then I hunted until I found a citation, and the place I landed on as "probably the best way to learn about this" was a podcast!

SelfHacked Radio, Dec 19, 2019, "Microdosing with Dr. David Rabin" (53 minutes)

[Intro:] Today, I’m here with Dr. David Rabin, who is a psychiatrist and neuroscientist. 

We discuss PTSD, psychedelics and their mechanisms, and the different drugs being used for microdosing.

I have not listened to the podcast, but this wiki article cites some part of that conversation (it doesn't say which part) in support of this claim:

This is done by its systematic approach of sending gentle vibrations that activates parasympathetic nervous response thus targeting the stress causing neurons.

If someone wanted to do a good deed and advance the state of the "art that can be easily learned by searching the web" in this area, they might listen to the whole podcast very carefully and update the wiki thoughtfully :-)

We talked about this a little here.

trevor:

Dear god. Knowing that these things exist is one thing; seeing high-stakes people talking about having other high-stakes people use them is another.

I totally get the idea behind this device. It would be an incredible innovation, in a world where devices like these aren't effortless to hack and use to mess with you in an incredibly long list of ways, with basically zero risk of ever getting caught, since the device can tell exactly when your heart rate is as low as it can go. On hypothetical utopias like dath ilan, tech like this is an obvious yes. In this world, the one we live in, you might as well be eating pills you find on the sidewalk.

What specifically are you worried about happening to me?

I was using "you" as a general statement (e.g. akin to saying "one would" instead of "you would"), not referring to you in particular. I'm definitely glad you recommended people not use the device, even if it's for completely different reasons than I would.

If there's a risk here I would like to know about it; if there isn't, your comment is confusing. I would really like to know what, specifically, you think could go wrong here.

If I was going to try to charitably misinterpret trevor, I'd suggest that maybe he is remembering that "the S in 'IoT' stands for Security"

(The reader stops and notices: I-O-T doesn't contain an S... yes! ...just like such devices are almost never secure.) So this particular website may have people who are centrally relevant to AI strategy, and getting them all to wear the same insecure piece of hardware lowers the cost to get a high quality attack? 

So for anyone on this site who considers themselves to be an independent source of world-saving capacity with respect to AI-and-computer-stuff maybe they at least should avoid correlating with each other by trying the same weird IoT health products?

If I'm going to try to maximally predict something trevor might be saying (that isn't as charitable (and also offer my corrections and augmentations to this take))...

Maybe trevor thinks the Apollo Neuro should get FDA approval, and until that happens the device should be considered dangerous and probably not efficacious as a matter of simple category-based heuristics?

Like there's the category of "pills you find on the sidewalk" and then the question of what a "medical therapy without FDA approval" belongs in... 

...and maybe that's basically "the same category" as far as trevor is suggesting?

So then trevor might just be saying "this is like that" and... I dunno... that wouldn't be at all informative to me, but maybe hearing the reasonable parts (and the unreasonable parts) of that explanation would be informative to some readers?

(And honestly for normal people who haven't tried to write business plans in this domain or worked in a bio lab etc etc etc... this is kinda reasonable! 

(It would be reasonable if there's no new communicable disease nearby. It would be reasonable if we're not talking about a vaccine or infection-killing drug whose worst possible risk is less bad than the disease we're imminently going to be infected with due to broken port-of-entry policies and inadequate quarantines and public health operations in general. Like: for covid in the first wave, when the mortality risk was objectively higher than now and subjectively had large error bars due to the fog of war, deference to the FDA was not reasonable at all.))

One of the central components in my argument against the FDA is that (1) their stated goals are actually important because lots of quackery IS dangerous...

...but then part of the deeper beef with the FDA here is that (2) not even clinical government monitored trials are actually enough to detect and remove the possibility of true danger.

New drugs, fresh out of clinical trials, are less safe (because less well understood) than drugs that have been used for so long that generics exist.

With 30 year old drugs, many doctors you'll run into were taught about it in medical school, and have prescribed it over and over, and have seen patients who took the drug for 10 years without trouble and so on.

This is just a higher level of safety. It just is.

And yet also there's no way for the inventor of a new drug with a 20-year patent to recoup all their science costs if their science costs are very very very large...

...leading to a market sensitive definition of "orphan drugs" that a mixture of (1) broken patent law, and (2) broken medical regulation, and (3) market circumstances haphazardly emergently produce.

For example, lithium has bad long term side effects (that are often worth risking for short run patient benefits) that would never show up in a phase 2 trial. A skilled doctor doesn't care that lithium isn't "totally categorically safe" because a skilled doctor who is prescribing lithium will already know about the quirks of lithium, and be taking that into account as part of their decision to prescribe.

Just because something passed a phase 2 trial doesn't mean it is "definitely categorically safe"!

The list of withdrawn drugs in wikipedia is not complete but it shows a bunch of stuff that the FDA later officially classified as not actually "safe and effective" based on watching its use in clinical practice after approval.

That is to say, for these recalls, we can wind back to a specific phase 2 trial that generated a false positive for "safety" or a phase 3 trial that generated a false positive for "efficacy".

From my perspective (because I have a coherent mechanistic model of where medical knowledge comes from that doesn't require it to route through "peer reviewed studies" (except as a proxy for how a decent scientist might choose to distribute medical evidence they've collected from reality via careful skilled empiricism)) this isn't at all surprising!

It isn't like medicine is safe by default, and it isn't like medicine requires no skill to get right.

My core sadness is just that the FDA denies doctors professional autonomy and denies patients their body autonomy by forbidding anyone else to use their skill to make these determinations and then also the FDA gets it wrong and/or goes too slow and/or makes things way more expensive than necessary!

Like the FDA is the "king of the hill",  and they're not the best at wrestling with reality... they just have a gun.  They're not benevolent, they are just a bunch of careerist hacks who don't understand economics. They're not using their position to benefit the public very much in the way you'd naively expect, because they are often making decisions based on negotiations with other bureaucrats struggling to use the few levers they have, like to use FDA decisions to somehow help run medicare in a half-sane way despite the laws for medicare being broken too.

There are quicker and cheaper and more locally risk-sensitive ways to try crazy medical things than the way the centralized bureaucratic market-disrupting FDA does it from inside our generally corrupt and broken and ill-designed and sclerotic government.

Doctors in the 1950s (before the Kefauver-Harris amendment foolishly gave the FDA too much power based on a specious excuse) had more power and more trust, and those older doctors made faster progress, for lower costs, than doctors do now.

But a lot of people (and maybe trevor?) outsource "being able to reason correctly about safety and efficacy", and so their attitude might be "down on medicine in general" or "down on even-slightly-shady health products in general" or something?

And if a patient with a problem is bad enough at reasoning, and has no one smart and benevolent nearby to outsource their thinking to... this isn't even definitely the wrong move!

Medical knowledge is a public good.

New medical stuff is dangerous.

There should be collective social action that is funded the way public goods should be funded, to help with this important public problem!

A competent and benevolent government would be generating lots of medical knowledge in a technologically advancing utopia... just not by using a broad "default ban" on medical innovation.

(A sanely built government would have something instead of the FDA, but that thing wouldn't work the way the FDA currently works, with efficient medical innovation de facto forbidden, the Right To Try de facto abolished, and doctors and smart people losing even the legal right to talk to each other about some options, and everyone else losing the right to honestly buy and honestly sell any medical thing in a way that involves them honestly talking about its operation and intended uses.)

I don't know how much of this trevor was saying. 

He invoked "categorical classification of medicine" without really explaining that the categories are subjective and contingent and nominal and socially constructed by a more-than-half-broken socio-political process that economists regularly bemoan for being broken.

I think, Elizabeth, that you're trying to detect local detailed risk models specific to the "Apollo Neuro" that might risk the safety of the user as a health intervention. 

In this regard, I have very little detailed local knowledge and no coherent posterior beliefs about the Apollo Neuro specifically... and my hunch is that trevor doesn't either?

I think, Elizabeth, that you're trying to detect local detailed risk models specific to the "Apollo Neuro" that might risk the safety of the user as a health intervention. 

 

I recognize that unknown unknowns are part of the problem so am not insisting anyone prove a particular deadly threat. But I struggle to figure out how a vibrating bracelet has more attack surface than a pair of Bluetooth headphones, which I use constantly. 

Here I'm going to restrict myself to defending my charitable misinterpretation of trevor's claim and ignore the FDA stuff and focus on the way that the Internet Of Things (IoT) is insecure.

I. Bluetooth Headsets (And Phones In General) Are Also Problematic

I do NOT have "a pair of Bluetooth headphones, which I use constantly".

I rarely put speakers in my ears, and try to consciously monitor sound levels when I do, because I don't expect them to have been subject to long-term side-effect studies or to be safe by default, and I'd prefer to keep my hearing and avoid getting tinnitus in my old age and so on.

I have more than one phone, and one of my phones uses a fake name just to fuck with the advertising models of me and so on.

A lot of times my phones don't have GPS turned on.

If you want to get a bit paranoid, it is true that Bluetooth headphones probably could do the heart-rate monitoring to some degree (because most hardware counts as a low-quality microphone by default, and it just doesn't expose this capability by API, and may not even have the firmware to do audio spying by default (until hacked and the firmware is upgraded?))...

...but also, personally, I refuse, by default, to use Bluetooth for anything I actually care about, because it has rarely been through a decent security audit.

Video game controllers using wifi to play Overcooked with my niece are fine. But my desktop keyboard and desktop mouse use a cord to attach to the box, and if I could easily buy anti-phreaking hardware, I would.

The idea of paying money for a phone that is "obligate Bluetooth" does not pencil out for me. It is close to the opposite of what I want.

If I was the median consumer, the consumer offerings would look very very very different from how they currently look.

 

II. Medical Devices Are A Privilege Escalation To Realtime Emotional Monitoring

So... I assume the bracelet is measuring heart rates, and maybe doing step counting, and so on?

This will be higher quality measurement than what's possible if someone has already hacked your devices and turned them into low quality measuring systems. 

Also, it will probably be "within budget for available battery power" that the device stays on in that mode with sufficient power over expected usage lifetime. ("Not enough batteries to do X" is a great way to be reasonably sure that X can't be happening in a given attack, but the bracelet will probably have adequate batteries for its central use case.)

I would love to have an open source piece of security-centric hardware that collects lots of medical data and puts it ONLY on my reasonably secure desktop machine...

...but I have never found such a thing.

All of the health measurement stuff I've ever looked at closely is infested with commercial spyware and cloud bullshit. 

Like the oura ring looks amazing and I (abstractly hypothetically) want one so so bad, but the oura ring hasn't been publicly announced to be jailbroken yet, and so I can't buy it, and reprogram it, and use it in a safe way...

...so it turns out in practice I don't "want one of those exact things so bad" I want a simpler and less-adversarial version of that thing that I can't easily find or make! :-(

If you don't already have a feeling in your bones about how "privilege escalation attacks" can become arbitrarily bad, then I'm not sure what to say to change your mind...

...maybe I could point out how IoT baby monitors make your kids less safe?

...maybe I could point out that typing sounds could let someone steal laptop/desktop passwords with microphone access? (And I assume that most state actors have a large stock of such zero days ready to go for when WW3 starts.)

Getting more paranoid, and speaking of state actors: if I were running the CIA, or acting amorally on behalf of ANY state actor using an algorithm to cybernetically exert control over history via high-resolution measurements and plausibly deniable nudges, I'd probably find it useful to have a trace of the heart rate of lots of people in my database, along with their lat/lon, and their social graph, and all the rest of it.

It is a central plot point in some pretty decent fiction that you can change the course of history by figuring out the true emotional attachments of an influential person, and then causing one of these beloved "weak targets" to have a problem, and create a family crisis for the influential person at the same time as some other important event is happening.

Since I would find it useful if I were going to implement Evil Villain Plans, I assume that others would also find uses for such things?

I don't know! 

There are so many uses for data! 

And so much data collection is insecure by default!

The point of preventing privilege escalation and maintaining privacy is that if you do it right, via simple methods, that mostly just minimize attack surfaces, then you don't even have to spend many brain cells on tracking safety concerns :-)

 

III. Default Safety From Saying No By Default

If you don't have security mindset then hearing that "the S in 'IoT' stands for Security" maybe doesn't sound like a stunning indictment of an entire industry, but... yeah... 

...I won't have that shit in my house.

Having one of those things sit in your living room, always powered on, is much worse to me than wearing "outside shoes" into one's house one time. But both of these actions will involve roughly similar amounts of attention-or-decision-effort by the person who makes the mistake.

I want NO COMPUTERS in any of my hardware, to the degree possible, except where the computer is there in a way that lots of security reasoning has been applied to, and found "actively tolerable".

(This is similar to me wanting NO HIGH FRUCTOSE CORN SYRUP in my food. It's a simple thing that massively reduces the burden on my decision routines, in the current meta. It is just a heuristic. I can violate it for good reasons or exceptional circumstances, but the violations are generally worth the attention-or-decision-effort of noticing "oh hey this breaks a useful little rule... let me stop and think about whether I'm in an exceptional situation... I am! ok then... I'll break the rule and it's fine!")

I still have a Honda Civic from the aughties that I love, that can't be hacked and remotely driven around by anyone who wants to spend a 0-day, because it just doesn't have that capacity at all. There's no machine for turning a wheel or applying the brakes in that car, and no cameras (not even for backing up), and practically no computers, and no wifi hookup... it's beautiful! <3

As hardware, that car is old enough to be intrinsically secure against whole classes of modern hacking attempts, and I love it partly for that reason <3

One of the many beautiful little bits of Accelerando that was delightful-world-building (though a creepy part of the story) is that the protagonist gets hacked by his pet robot, who whispers hypnotic advice to him while he's sleeping, way way way earlier in the singularity than you'd naively expect.

The lucky part of that subplot is just that his pet robot hates him much less than it hates other things, and thinks of him in a proprietary way, and so he's mostly "cared for" by his robot rather than egregiously exploited. Then when it gets smart enough, and goes off on its own to have adventures, it releases its de facto ownership of him and leaves him reasonably healthy... though later it loops back to interact with him as a trusted party.

I don't remember the details, but it is suggested to have maybe been responsible for his divorce, like by fucking with his subconscious emotions toward his wife, who the robot saw as a competing "claimant" on the protagonist? But also the wife was kinda evil, so maybe that was protective? 

Oh! See. Here's another threat model... 

...what if the "Apollo Neuro" (whose modes of vibration from moment to moment you don't control) really DOES affect your parasympathetic nervous system and thus really can "hack your emotions", and it claims to be doing this "for your health", and even the company tried to do it nicely...

...but then maybe it just isn't secure and a Bad Hacker gets "audio access" (via your phone) and also "loose control of mood" (via the bracelet vibrations controlled by the phone) and writes a script to start giving you a bad mood around <some specific thing>, slowly training your likes and dislikes, without you ever noticing it?

Placebos are fake. Technology is different from "magic" (or placebos) because technology Actually Works. But also, anything that Actually Works can be weaponized, and one of the ways we know that magic is fake is that it has never been used to make a big difference in war. Cryptography has sorta maybe already been used to win wars. Even now? (It's hard to get clean info in an ongoing war, but lots of stuff around the Ukraine War only really makes sense if the US has been listening to a lot of the conversations inside of the Russian C&C loop, and sharing the intel with Ukraine.)

If you have a truly medically efficacious thing here, and you are connecting it to computers that are connected to the internet... eeeeek!

I personally "Just Say No" to the entire concept of the Internet Of Things.

It is just common sense to me that no one in the US military should be allowed to own or carry or use any consumer IoT devices. They get this wrong sometimes, and pay the price.

Once the number one concern of the median technology project is security, maybe I'll change my mind, but for now... nope!

New computing hardware is simply not trustworthy by default. (In a deep sense: same as new medicine. Same as any new technology that (1) weaves itself deeply into your life, yet (2) whose principles of operation are not truly a part of you and likely to make your life better on purpose for legible and legibly safe reasons.)

I'm pretty surprised at how far this went; JenniferRM covered a surprisingly large proportion of the issue (although there are a lot of tangents, e.g. the FDA, so it also covered a lot of stuff in general). I'd say more, but I already said exactly as much as I was willing to say on the matter, and people inferred information all the way up to the upper limit of what I was willing to risk people inferring from that comment, so now I'm not really willing to risk saying much more. Have you heard about how CPUs might be reprogrammed to emit magnetic frequencies that transmit information through Faraday cages and air gaps, and do you know if a similar process can turn a wide variety of chips into microphones by using the physical CPU/RAM space as a magnetometer? I don't know how to verify any of this, since intelligence agencies love to make up stuff like this in the hopes of disrupting enemy agencies' counterintelligence departments.

I'm not really sure how tractable this is for Elizabeth to worry about, especially since the device ultimately was recommended against, and anyway Elizabeth seems to be more focused on high-EV experiments than on defending the AIS community from external threats. If the risk of mind-hacking or group-mind-hacking is interesting, a tractable project would be doing a study on EA-adjacents to see what happens if they completely quit social media and videos/shows cold turkey, and only read books and use phones for 1-1 communication with friends during their leisure time. Modern entertainment media, by default, is engineered to surreptitiously steer people towards time-mismanagement. Maybe replace those hours with reading EA or rationalist texts. It's definitely worth studying, as the results could be consistent massive self-improvement, but it would be hard to get a large representative sample of people who are heavily invested in/attached to social media (i.e. the most relevant demographic).

I don't understand your threat model at all. You're worried about what sounds like a theoretical concern (or you would've provided examples of actual harm done) in the form of a cyber attack against AI safety people who wear these bracelets. Meanwhile we're aware of a well-documented and omnipresent issue among AI safety people, namely mental health issues like depression, and better health (including from better sleep) helps to mitigate that. Why do you think that in this world, the calculation favors the former, rather than the latter?

(I'm aware that a similar line of argument is also used to derail AI safety concerns towards topics like AI bias. I would take this counterargument more seriously if LW had even a fraction of the concern for cybersecurity which it has for AI safety.)

Besides, why worry about cyberattacks rather than the community's wrench vulnerability?

gjm:

I think your "charitable misinterpretation" is pretty much what trevor is saying: he's concerned that LW users might become targets for some sort of attack by well-resourced entities (something something military-industrial complex something something GPUs something something AI), and that if multiple LW users are using the same presumably-insecure device that might somehow be induced to damage their health then that's a serious risk.

See e.g. https://www.lesswrong.com/posts/pfL6sAjMfRsZjyjsZ/some-basics-of-the-hypercompetence-theory-of-government ("trying to slow the rate of progress risks making you an enemy of the entire AI industry", "trying to impeding the government and military's top R&D priorities is basically hitting the problem with a sledgehammer. And it can hit back, many orders of magnitude harder").

I'm not sure exactly what FDA approval would entail, but my guess is that it doesn't involve the sort of security auditing that would be necessary to allay such concerns.

The essay is talking about a bracelet (worn around wrist or ankle) which vibrates with oscillating intensity. What exactly do you envision a sufficiently nefarious actor could do to someone wearing such a device?
