We caution against purely technical interpretations of privacy such as “the data never leaves the device.” Meredith Whittaker argues that on-device fraud detection normalizes always-on surveillance and that the infrastructure can be repurposed for more oppressive purposes. That said, technical innovations can definitely help.
I really do not know what you are expecting. On-device calculation using existing data and other data you choose to store only on the device (the current template) is more privacy-protecting than existing technologies.
She's expecting, or at least asking, that certain things not be done on or off of the device, and that the distinction between on-device and off-device not be made excessively central to that choice.
If an outsider can access your device, they can always use their own AI to analyze the same data.
The experience that's probably framing her thoughts here is Apple's proposal to search through photos on people's phones, and flag "suspicious" ones. The argument was that the photos would never leave your device... but that doesn't really matter, because the results would have. And even if they had not, any photo that generated a false positive would have become basically unusable, with the phone refusing to do anything with it, or maybe even outright deleting it.
Similarly, a system that tries to detect fraud against you can easily be repurposed to detect fraud by you. To act on that detection, it has to report you to somebody or restrict what you can do. On-device processing of whatever kind can still be used against the interests of the owner of the device.
Suppose that there was a debate around some on-device scanning that actually acted only in the user's interest, but that still raised some privacy concerns. Further suppose that the fact that it was on-device was used as an argument that there wasn't a privacy problem. The general zeitgeist might absorb the idea that "on-device" was the same as "privacy-preserving". "On device good, off device bad".
A later transition from "in your interest" to "against your interest" could easily get obscured in any debate, buried under insistence that "It's on-device".
Yes, some people with real influence really, truly are that dumb, even when they're paying close attention. And the broad sweep of opinion tends to come from people who aren't much paying attention to begin with. It happens all the time in complicated policy arguments.
Turns out that this is today's SMBC comic. Which gets extra points for the line "Humans are a group-level psychiatric catastrophe!"
If the value of Taylor Swift concerts comes mostly from interactions between the fans, is Swift herself essential to it?
This is a fair critique of the claim that AIs will make everyone lose their jobs.
I have never heard anybody push the claim that there wouldn't be niche prestige jobs that got their whole value from being done by humans, so what's actually being critiqued?
... although there is some question about whether that kind of thing can actually sustain the existence of a meaningful money economy (in which humans are participants, anyway). It's especially subject to question in a world being run by ASIs that may not be inclined to permit it for one reason or another. It's hard to charge for something when your customers aren't dealing in money.
It also seems like most of the jobs that might be preserved are nonessential. Not sure what that means.
If humanity develops very advanced AI technology, how likely do you think it is that this causes humanity to go extinct or be substantially disempowered?
I would find this difficult to answer, because I don't know what you mean by "substantially disempowered".
I'd find it especially hard to understand because you present it as a "peer risk" to extinction. I'd take that as a hint that whatever you meant by "substantially disempowered" was Really Bad(TM). Yet there are a lot of things that could reasonably be described as "substantially disempowered", but don't seem particularly bad to me... and definitely not bad on an extinction level. So I'd be lost as to how substantial it had to be, or in what way, or just in general as to what you were getting at with it.
But does it work at all?
It seems counterintuitive that there would be one single thing called "aging" that would happen everywhere in the body at once; have a single cause or even a small set of causes; be measurable by a single "biological age" number; and be slowed, arrested, or reversed by a single intervention... especially an intervention that didn't have a million huge side effects. In fact, it seems counterintuitive that that would even be approximately true. Biology sucks because everything interacts in ways that aren't required to have any pattern, and are still inconvenient even when they do have patterns.
How do you even do a meaningful experiment? For example, isn't NAD+ right smack in the middle of the whole cell energy cycle? So if you do something to NAD+, aren't you likely to have a huge number of really diverse effects that may or may not be related to aging? If you do that and your endpoint is just life span, how do you tease out useful knowledge? Maybe the sirtuins would have extended life span, but for the unrelated toxic effects of all that NAD+. Or maybe the sirtuins are totally irrelevant to what's actually going on.
The same sort of thing applies to any wholesale messing with histones and gene expression, via sirtuins or however else. You're changing everything at once when you do that.
Reprogramming too: you mentioned different kinds of cells responding differently. It seems really un-biological to expect that difference to be limited to how fast the cells "come around", or the effects to be simply understandable by measuring any manageable number of things or building any manageable mental model.
And there are so many other interactions and complications even outside of the results of experiments. OK, senescent cells and inflammation are needed for wound healing... but I'm pretty sure I don't heal as fast at over 60 as I did at, say, 20, even with lots more senescent cells available and more background inflammation. So something else must be going on.
And then there are the side effects, even if something "works". For example, isn't having extra/better telomeres a big convenience if you want to grow up to be a tumor? Especially convenient if you're part of a human and may have decades to accumulate other tricks, as opposed to part of a mouse and lucky to have a year. How do you measure the total effect of something like that in any way other than full-on long-term observed lifespan and healthspan in actual humans?
And and and...
I often bounce through my comment history as a relatively quick way of re-finding discussions I've commented in. Just today, I wanted to re-find a discussion I'd reacted to, and realized that I couldn't do that and would have to find it another way.
There's surely some point or joke in this, but I'm just going "Wat?". This disturbs me because not many things go completely over my head. Maybe I'm not decision theory literate enough (or I guess maybe I'm not Star Wars literate enough).
Is Vader supposed to have yet another decision theory? And what's the whole thing with the competing orders supposed to be about?
CUDA seems to be superior to ROCm... and has a big installed-base and third-party tooling advantage. It's not obvious, to me anyway, that NVidia's actual silicon is better at all.
... but NVidia is all about doing anything it can to avoid CUDA programs running on non-NVidia hardware, even if NVidia's own code isn't used anywhere. Furthermore, if NVidia is like all the tech companies I saw during my 40-year corporate career, it's probably also playing all kinds of subtle, hard-to-prove games to sabotage the wide adoption of any good hardware-agnostic APIs.
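To make that concrete, here's a minimal sketch (my own illustration, not vendor sample code) of how close the programming models already are: the same SAXPY kernel builds as CUDA under nvcc, and the HIP/ROCm version needs little more than a header change and a cuda*-to-hip* renaming of the runtime calls, something ROCm's hipify tools do mechanically. The moat is the ecosystem and tooling around CUDA, not anything exotic in what the kernel asks the silicon to do.

```cpp
// SAXPY in CUDA: y = a*x + y. The HIP/ROCm version is nearly identical:
// include <hip/hip_runtime.h>, swap cuda* runtime calls for hip* ones
// (hipMallocManaged, hipDeviceSynchronize, hipFree), and compile with
// hipcc instead of nvcc. The kernel body itself doesn't change.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed (unified) memory keeps the example short.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Same triple-chevron launch syntax also works under hipcc.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```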