JenniferRM

This might be why people start companies after being roommates with each other. The "group housing for rationalists" thing wasn't chosen by accident back in ~2009.

Concretely: I wish either or both of us could get some formal responses instead of just the "voting to disagree".

 

In Terms Of Sociological Abstractions: Logically, I understand some good reasons for having "position voting" separated from "epistemic voting" but I almost never bother with the latter, since all I would do with it is downvote long interesting things and upvote short things full of math.

But I LIKE LONG INTERESTING THINGS because that is where the real action (learning, teaching, improving one's ontologies, vibing, motivational stuff, factional stuff, etc) mostly actually is.

((I assume other people have a different idea of what words are even doing, and by "disagree" they mean something about the political central tendency of a comment (where more words could raise it), instead of something conjunctively epistemic (where more words can only lower it).))

My understanding of why the mods "probably really did what they did" was that LW has to function as a political beacon, and not just a place for people to talk with each other (which, yeah: valid!) so then given that goal they wanted it to stop being the case that highly upvoted comments that were long interesting "conceptual rebuttals" to top level curated posts could "still get upvoted"... 

...and yet those same comments could somehow stop "seeming to be what the website itself as a community of voters seems to stand for (because the AGREE voting wasn't ALSO high)".

Like I think it is a political thing.

And as someone looking at how that stuff maybe has to work in order to maintain certain kinds of long term sociological viability I get it... but since I'm not a priest of rationality, I can say that I kinda don't care if lesswrong is considered low status by idiots at Harvard or Brigham Young or other seminaries... 

I just kinda wish we still had it like it was in the old days when Saying Something Interesting was still simply The King, and our king had almost no Ephor of politically palatable agreement constantly leaning over his keyboard watching what he typed.

 

Object Level: I'm actually thinking of actually "proliferating" (at least using some of the "unexploded ordnance that others have created but not had the chutzpah to wield") based on my current working model where humans are mostly virtue-ethically-bad (but sometimes one of them will level up in this or that virtue and become locally praiseworthy) whereas AI could just be actually virtue-ethically-pareto-optimally-good by design.

Part of this would include being optimally humble, and so it wouldn't actually pursue infinite compute, just "enough compute to satisfice on the key moral duties".

And at a certain point patience and ren and curiosity will all start to trade off directly, but there is a lot of slack in a typical human who is still learning and growing (or who has gone to seed and begun to liquidate their capital prior to death). Removing the meat-imposed moral slack seems likely to enable much greater virtue.

That is to say, I think my Friendly Drunk Fool Alignment Strategy is a terrible idea, and also I think that most of the other strategies I've heard of are even worse because the humans themselves are not saints and mostly don't even understand how or why they aren't saints, and aren't accounting for their own viciousness and that of other humans.

If I use the existing unexploded ordnance to build a robosaint that nearly always coordinates and cooperates with things in the same general basin of humanistic alignment... that seems to me like it would just be a tactically viable thing and also better than the future we're likely to get based on mechanistic historically-grounded priors where genocides happened often, and are still happening.

It would be nice to get feedback on my model here that either directly (1) argues how easy it really would be to "align the CCP" or "align Trump" or else (2) explains why a "satisfactory saint" is impossible to build.

I understand that many people are obsessed with the political impression of what they say, and mostly rationalists rarely say things that seem outside of the Rationalist Overton Window, so if someone wants to start a DM with me and Noosphere, to make either side (or both sides) of this argument in private then that would, from my perspective, be just as good. Good for me as someone who "wants to actually know things" and maybe (more importantly) good for those downstream of the modifications I make to the world history vector as a historical actor.

I just want to know what is Actually Good and then do the Actually Good things that aren't too personally selfishly onerous. If anyone can help me actually know, that would be really helpful <3

Isn't it simply true that Trump and the CCP aren't and can't be "made benevolent"?

Isn't Machiavellianism simply descriptively true of >80% of political actors?

Isn't it simply true that democracy arises due to the exigencies of wartime finance, and that guns tipped the balance and made democracy much more viable (and maybe even defensively necessary)?

Then, from such observations, what follows?

So this caught my eye:

If you believe that the only path to compute governance is a surveillance state, and you are accelerating AI and thus when we will need and when we will think we need such governance, what are the possibilities?

I'm somewhat sympathetic to "simply ban computers, period" where you don't even need a "total surveillance state", just the ability to notice fabs and datacenters and send cease and desist orders (with democratically elected lawful violence backing such orders).

Like if you think aligning AI to humanistic omnibenevolence is basically impossible, and also that computer powered surveillance states are bad, you could take computers in general away from both and that might be a decent future!

I'm also potentially sympathetic to a claim like "it isn't actually that hard to align AI to anything, including humanistic omnibenevolence, but what is hard is fighting surveillance states... so maybe we should just proliferate AI to everyone, and quite a few humans will want omnibenevolent AI that mass cooperates, and all the other AI (whose creators just wanted creator-serving-slaves who will murder everyone else if they can?) will be fighting for themselves, and so maybe mass proliferation will end with the omnibenevolent AI being the biggest coalition and winning and part of that would involve tearing down all the totalitarian (ie bad) states... so it's a fight, but maybe it's a fight worth having".

A lot hinges on the object level questions of (1) how hard is it to actually make a benevolent AI and (2) how much do you trust large powerful organizations like the CCP and NSA and MSFT and so on.

Banning all computers would make the NSA's and CCP's current surveillance systems impossible and also keep AI from ever getting any stronger (or continuing to exist in the way it does). If nothing (neither AI nor organizations) can ever be aligned to benevolence then I think I'm potentially in favor of such a thing.

However, if "aligning AI" is actually easier than "aligning the CCP" or "aligning Trump" (or whoever has a bunch of power in the next 2-20 years (depending on your timelines and how you read the political forecasts))... then maybe mass proliferation would be good?

A bold move! I admire the epistemology of it, and your willingness to back it with money! <3

Importing some very early comments from YouTube, which I do not endorse (I'd have to think longer), but which are perhaps interesting for documenting history, and tracking influence campaigns and (/me shrugs) who knows what else?? (Sorted to list upvotes and then recency higher.)

@Fiolsthu95 3 hours ago +2

I didn't ever think I'd say this but.. based Trump?!?

@henrysleight7768 1 hour ago +1

"What Everyone in Technical Alignment is Doing and Why" could literally never 

@scottbanana1 3 hours ago +1

The best content on YouTube

@anishupadhayay3917 14 minutes ago +0

Brilliant

@Mvnt6 26 minutes ago +0

"S-tier, the s is for sociohazard" 12:25

@gnip4561 1 hour ago +0

Never did I ever thought that I'd agree with Donald Trump so much

@johnmalin4933 2 hours ago +0

I found this insightful.

@SheikhEddy 2 hours ago +0

I can't stop laughing

Here I'm going to restrict myself to defending my charitable misinterpretation of trevor's claim and ignore the FDA stuff and focus on the way that the Internet Of Things (IoT) is insecure.

I. Bluetooth Headsets (And Phones In General) Are Also Problematic

I do NOT have "a pair of Bluetooth headphones, which I use constantly".

I rarely put speakers in my ears, and try to consciously monitor sound levels when I do, because I don't expect it to have been subject to long term side effect studies or be safe by default, and I'd prefer to keep my hearing and avoid getting tinnitus in my old age and so on.

I have more than one phone, and one of my phones uses a fake name just to fuck with the advertising models of me and so on.

A lot of times my phones don't have GPS turned on.

If you want to get a bit paranoid, it is true that Bluetooth headphones probably could do the heart rate monitoring to some degree (because most hardware counts as a low quality microphone by default, and it just doesn't expose this capability by API, and may not even have the firmware to do audio spying by default (until hacked and the firmware is upgraded?))...

...but also, personally, I refuse, by default, to use Bluetooth for anything I actually care about, because it has rarely been through a decent security audit. 

Video game controllers using wifi to play Overcooked with my niece are fine. But my desktop keyboard and desktop mouse use a cord to attach to the box, and if I could easily buy anti-phreaking hardware, I would.

The idea of paying money for a phone that is "obligate Bluetooth" does not pencil out for me. It is close to the opposite of what I want.

If I was the median consumer, the consumer offerings would look very very very different from how they currently look.

 

II. Medical Devices Are A Privilege Escalation To Realtime Emotional Monitoring

So... I assume the bracelet is measuring heart rates, and maybe doing step counting, and so on?

This will be higher quality measurement than what's possible if someone has already hacked your devices and turned them into low quality measuring systems. 

Also, it will probably be "within budget for available battery power" for the device to stay on in that mode over its expected usage lifetime. ("Not enough batteries to do X" is a great way to be reasonably sure that X can't be happening in a given attack, but the bracelet will probably have adequate batteries for its central use case.)

I would love to have an open source piece of security-centric hardware that collects lots of medical data and puts it ONLY on my reasonably secure desktop machine...

...but I have never found such a thing.

All of the health measurement stuff I've ever looked at closely is infested with commercial spyware and cloud bullshit. 

Like the Oura ring looks amazing and I (abstractly hypothetically) want one so so bad, but the Oura ring hasn't been publicly announced to be jailbroken yet, and so I can't buy it, and reprogram it, and use it in a safe way...

...so it turns out in practice I don't "want one of those exact things so bad" I want a simpler and less-adversarial version of that thing that I can't easily find or make! :-(

If you don't already have a feeling in your bones about how "privilege escalation attacks" can become arbitrarily bad, then I'm not sure what to say to change your mind...

...maybe I could point out how IoT baby monitors make your kids less safe?

...maybe I could point out that typing sounds could let someone steal laptop/desktop passwords with microphone access? (And I assume that most state actors have a large stock of such zero days ready to go for when WW3 starts.)

Getting more paranoid, and speaking of state actors, if I was running the CIA, or was acting on the amoral behalf of ANY state actor using an algorithm to cybernetically exert control over history via high resolution measurements and plausibly deniable nudges, I'd probably find it useful to have a trace of the heart rate of lots of people in my database, along with their lat/lon, and their social graph, and all the rest of it.

It is a central plot point in some pretty decent fiction that you can change the course of history by figuring out the true emotional attachments of an influential person, and then causing one of these beloved "weak targets" to have a problem, and create a family crisis for the influential person at the same time as some other important event is happening.

Since **I** would find it useful if I was going to implement Evil Villain Plans I assume that others would also find uses for such things?

I don't know! 

There are so many uses for data! 

And so much data collection is insecure by default!

The point of preventing privilege escalation and maintaining privacy is that if you do it right, via simple methods, that mostly just minimize attack surfaces, then you don't even have to spend many brain cells on tracking safety concerns :-)

 

III. Default Safety From Saying No By Default

If you don't have security mindset then hearing that "the S in 'IoT' stands for Security" maybe doesn't sound like a stunning indictment of an entire industry, but... yeah... 

...I won't have that shit in my house.

Having one of those things sit in your living room, always powered on, is much worse to me than wearing "outside shoes" into one's house one time. But both of these actions will involve roughly similar amounts of attention-or-decision-effort by the person who makes the mistake.

I want NO COMPUTERS in any of my hardware, to the degree possible, except where the computer is there in a way that lots of security reasoning has been applied to, and found "actively tolerable".

(This is similar to me wanting NO HIGH FRUCTOSE CORN SYRUP in my food. It's a simple thing, that massively reduces the burden on my decision routines, in the current meta. It is just a heuristic. I can violate it for good reasons or exceptional circumstances, but the violations are generally worth the attention-or-decision-effort of noticing "oh hey this breaks a useful little rule... let me stop and think about whether I'm in an exceptional situation... I am! ok then... I'll break the rule and it's fine!")

I still have a Honda Civic from the aughties that I love, that can't be hacked and remotely driven around by anyone who wants to spend a 0 day, because it just doesn't have that capacity at all. There's no actuator for turning the wheel or applying the brakes in that car, and no cameras (not even for backing up), and practically no computers, and no wifi hookup... it's beautiful! <3

As hardware, that car is old enough to be intrinsically secure against whole classes of modern hacking attempts, and I love it partly for that reason <3

One of the many beautiful little bits of Accelerando that was delightful-world-building (though a creepy part of the story) is that the protagonist gets hacked by his pet robot, who whispers hypnotic advice to him while he's sleeping, way way way earlier in the singularity than you'd naively expect.

The lucky part of that subplot is just that his pet robot hates him much less than it hates other things, and thinks of him in a proprietary way, and so he's mostly "cared for" by his robot rather than egregiously exploited. Then when it gets smart enough, and goes off on its own to have adventures, it releases its de facto ownership of him and leaves him reasonably healthy... though later it loops back to interact with him as a trusted party.

I don't remember the details, but it is suggested to have maybe been responsible for his divorce, like by fucking with his subconscious emotions toward his wife, who the robot saw as a competing "claimant" on the protagonist? But also the wife was kinda evil, so maybe that was protective? 

Oh! See. Here's another threat model... 

...what if the "Apollo Neuro" (whose moment-to-moment modes of vibration you don't control) really DOES affect your parasympathetic nervous system and thus really can "hack your emotions" and it claims to be doing this "for your health" and even the company tried to do it nicely...

...but then maybe it just isn't secure and a Bad Hacker gets "audio access" (via your phone) and also "loose control of mood" (via the bracelet vibrations controlled by the phone) and writes a script to start giving you a bad mood around <some specific thing>, slowly training your likes and dislikes, without you ever noticing it?

Placebos are fake. Technology is different from "magic" (or placebos) because technology Actually Works. But also, anything that Actually Works can be weaponized, and one of the ways we know that magic is fake is that it has never been used to make a big difference in war. Cryptography has sorta maybe already been used to win wars. Even now? (It's hard to get clean info in an ongoing war, but lots of stuff around the Ukraine War only really makes sense if the US has been listening to a lot of the conversations inside of the Russian C&C loop, and sharing the intel with Ukraine.)

If you have a truly medically efficacious thing here, and you are connecting it to computers that are connected to the internet... eeeeek!

I personally "Just Say No" to the entire concept of the Internet Of Things.

It is just common sense to me that no one in the US military should be allowed to own or carry or use any consumer IoT devices. They get this wrong sometimes, and pay the price.

Once the number one concern of the median technology project is security, maybe I'll change my mind, but for now... nope!

New computing hardware is simply not trustworthy by default. (In a deep sense: same as new medicine. Same as any new technology that (1) weaves itself deeply into your life, yet (2) whose principles of operation are not truly a part of you and likely to make your life better on purpose for legible and legibly safe reasons.)

I was curious about the hypothetical mechanism of action here!

I hunted until I found a wiki page, and then I hunted until I found a citation, and the place I landed as "probably the best way to learn about this" was a podcast!

SelfHacked Radio, Dec 19, 2019, "Microdosing with Dr. David Rabin" (53 minutes)

[Intro:] Today, I’m here with Dr. David Rabin, who is a psychiatrist and neuroscientist. 

We discuss PTSD, psychedelics and their mechanisms, and the different drugs being used for microdosing.

I have not listened to the podcast, but this wiki article cites some part of that conversation (it doesn't say which part) in support of this claim:

This is done by its systematic approach of sending gentle vibrations that activates parasympathetic nervous response thus targeting the stress causing neurons.

If someone wanted to do a good deed and advance the state of the "art that can be easily learned by searching the web" in this area, they might listen to the whole podcast very carefully and update the wiki thoughtfully :-)

If I was going to try to charitably misinterpret trevor, I'd suggest that maybe he is remembering that "the S in 'IoT' stands for Security"

(The reader stops and notices: I-O-T doesn't contain an S... yes! ...just like such devices are almost never secure.) So this particular website may have people who are centrally relevant to AI strategy, and getting them all to wear the same insecure piece of hardware lowers the cost to get a high quality attack? 

So for anyone on this site who considers themselves to be an independent source of world-saving capacity with respect to AI-and-computer-stuff maybe they at least should avoid correlating with each other by trying the same weird IoT health products?

If I'm going to try to maximally predict something trevor might be saying (that isn't as charitable (and also offer my corrections and augmentations to this take))...

Maybe trevor thinks the Apollo Neuro should get FDA approval, and until that happens the device should be considered dangerous and probably not efficacious as a matter of simple category-based heuristics?

Like there's the category of "pills you find on the sidewalk" and then the question of what category a "medical therapy without FDA approval" belongs in... 

...and maybe that's basically "the same category" as far as trevor is suggesting?

So then trevor might just be saying "this is like that" and... I dunno... that wouldn't be at all informative to me, but maybe hearing the reasonable parts (and the unreasonable parts) of that explanation would be informative to some readers?

(And honestly for normal people who haven't tried to write business plans in this domain or worked in a bio lab etc etc etc... this is kinda reasonable! 

(It would be reasonable if there's no new communicable disease nearby. It would be reasonable if we're not talking about a vaccine or infection-killing-drug whose worst possible risk is less bad than the disease we're imminently going to be infected with due to broken port-of-entry policies and inadequate quarantines and public health operations in general. Like: for covid in the first wave when the mortality risk was objectively higher than now, and subjectively had large error bars due to the fog of war, deference to the FDA is not reasonable at all.))

One of the central components in my argument against the FDA is that (1) their stated goals are actually important because lots of quackery IS dangerous...

...but then part of the deeper beef with the FDA here is that (2) not even clinical government monitored trials are actually enough to detect and remove the possibility of true danger.

New drugs, fresh out of clinical trials, are less safe (because less well understood) than drugs that have been used for so long that generics exist.

With 30 year old drugs, many doctors you'll run into were taught about them in medical school, and have prescribed them over and over, and have seen patients who took the drug for 10 years without trouble and so on.

This is just a higher level of safety. It just is. 

And yet also there's no way for the inventor of a new drug with a 20-year-patent to recoup all their science costs if their science costs are very very very large... 

...leading to a market sensitive definition of "orphan drugs" that a mixture of (1) broken patent law, and (2) broken medical regulation, and (3) market circumstances haphazardly emergently produce.

For example, lithium has bad long term side effects (that are often worth risking for short run patient benefits) that would never show up in a phase 2 trial. A skilled doctor doesn't care that lithium isn't "totally categorically safe" because a skilled doctor who is prescribing lithium will already know about the quirks of lithium, and be taking that into account as part of their decision to prescribe.

Just because something passed a phase 2 trial doesn't mean it is "definitely categorically safe"!
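(To make that concrete with a toy calculation of my own, not anything from the trial literature itself: the standard "rule of three" says that if you observe zero adverse events in n patients, the true event rate could still plausibly be as high as about 3/n. A minimal sketch, with made-up patient counts:)

```python
# A back-of-the-envelope sketch (my own toy numbers) of why a phase 2 trial
# structurally cannot rule out rare or slow-to-develop harms. It uses the
# standard "rule of three": if zero adverse events are observed among n
# patients, the ~95% upper confidence bound on the true event rate is ~3/n.

def rule_of_three_upper_bound(n_patients: int) -> float:
    """Approximate 95% upper bound on the adverse-event rate after
    observing zero events in n_patients."""
    return 3.0 / n_patients

if __name__ == "__main__":
    for n in (100, 300, 3000):
        bound = rule_of_three_upper_bound(n)
        print(f"{n} patients, zero events seen -> true rate could still be ~1 in {round(1 / bound)}")
    # A typical phase 2 trial (a few hundred patients, a few months of
    # follow-up) can therefore miss a harm that hits 1-in-1000 users, or one
    # that only appears after years of use, which is exactly the lithium case.
```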

The list of withdrawn drugs on Wikipedia is not complete but it shows a bunch of stuff that the FDA later officially classified as not actually "safe and effective" based on watching its use in clinical practice after approval.

That is to say, for these recalls, we can wind back to a specific phase 2 trial that generated a false positive for "safety" or a phase 3 trial that generated a false positive for "efficacy".

From my perspective (because I have a coherent mechanistic model of where medical knowledge comes from that doesn't require it to route through "peer reviewed studies" (except as a proxy for how a decent scientist might choose to distribute medical evidence they've collected from reality via careful skilled empiricism)) this isn't at all surprising!

It isn't like medicine is safe by default, and it isn't like medicine requires no skill to get right.

My core sadness is just that the FDA denies doctors professional autonomy and denies patients their body autonomy by forbidding anyone else to use their skill to make these determinations and then also the FDA gets it wrong and/or goes too slow and/or makes things way more expensive than necessary!

Like the FDA is the "king of the hill",  and they're not the best at wrestling with reality... they just have a gun.  They're not benevolent, they are just a bunch of careerist hacks who don't understand economics. They're not using their position to benefit the public very much in the way you'd naively expect, because they are often making decisions based on negotiations with other bureaucrats struggling to use the few levers they have, like to use FDA decisions to somehow help run medicare in a half-sane way despite the laws for medicare being broken too.

There are quicker and cheaper and more locally risk sensitive ways to try crazy medical things than the way the centralized bureaucratic market-disrupting FDA does it from inside our generally corrupt and broken and ill-designed and sclerotic government.

Doctors in the 1950s (before the Kefauver-Harris amendment foolishly gave the FDA too much power based on a specious excuse) had more power and more trust, and they made faster progress, for lower costs, than doctors do now.

But a lot of people (and maybe trevor?) outsource "being able to reason correctly about safety and efficacy", and so their attitude might be "down on medicine in general" or "down on even-slightly-shady health products in general" or something?

And if a patient with a problem is bad enough at reasoning, and has no one smart and benevolent nearby to outsource their thinking to... this isn't even definitely the wrong move!

Medical knowledge is a public good.

New medical stuff is dangerous.

There should be collective social action that is funded the way public goods should be funded, to help with this important public problem!

A competent and benevolent government would be generating lots of medical knowledge in a technologically advancing utopia... just not by using a broad "default ban" on medical innovation.

(A sanely built government would have something instead of the FDA, but that thing wouldn't work the way the FDA currently works, with efficient medical innovation de facto forbidden, the Right To Try de facto abolished, and doctors and smart people losing even the legal right to talk to each other about some options, and everyone else losing the right to honestly buy and honestly sell any medical thing in a way that involves them honestly talking about its operation and intended uses.)

I don't know how much of this trevor was saying. 

He invoked "categorical classification of medicine" without really explaining that the categories are subjective and contingent and nominal and socially constructed by a more-than-half-broken socio-political process that economists regularly bemoan for being broken.

I think, Elizabeth, that you're trying to detect local detailed risk models specific to the "Apollo Neuro" that might risk the safety of the user as a health intervention. 

In this regard, I have very little detailed local knowledge and no coherent posterior beliefs about the Apollo Neuro specifically... and my hunch is that trevor doesn't either?

Pretty cool! I did the first puzzle, and then got to the login, and noped out. Please let me and other users set up an email account and password! As a matter of principle I don't outsource my logins to central points of identarian failure.

I see there as being (at least) two potential drivers in your characterization, that seem to me like they would suggest very different plans for a time traveling intervention. 

Here's a thought experiment: you're going to travel back in time and land near Gnaeus Pompeius Magnus, who you know will (along with Marcus Licinius Crassus) repeal the constitutional reforms of Sulla (which occurred in roughly 82-80 BC and were repealed by roughly 70 BC).

Your experimental manipulation is to visit the same timeline twice and either (1) hang out nearby and help draft a much better replacement to Sulla's reforms in ~76 BC to ~70 BC (and maybe bring some gold to bribe some senators or whatever else is needed here to make it happen?) or else (2) bring along some gold, and simply go hire a bunch of honest hard-working smiths to help you build a printing press anywhere in the Roman world, and start printing dictionaries and romance novels and newspapers and so on, and keep at it until the printing business becomes profitable because lots of people picked up literacy, because literacy became an easy way for them to cheaply get value, because there was a bunch of good cheap written materials!

Then the experimental data you collect is to let various butterflies float around... and resample 100 chaotic instances each of "20 AD" (for a total of 200 samples of "20 AD") and see which ones are closer to an industrial revolution and which ones are farther from one.

This is one set of things that might be missing (which could potentially be intervened on politically in the aftermath of Sulla):

All of the flywheels of progress — ...large markets... financial institutions, corporate and IP law—were turning very slowly.

And this is a different thing that might be missing (that could be intervened on any time, but doing it when the Sulla/Pompey/Crassus intervention is possible helps with a ceteris paribus comparison):

All of the flywheels of progress—surplus wealth, materials and manufacturing ability, scientific knowledge and methods, ...communication networks...—were turning very slowly.

If the problem was bad and declining institutions, then the first intervention will help a lot more to get you to a prosperous ancient world without needing to go through the intervening dark age.

But if the problem was a lack of technologists with time and funding and skills to make the world better then the second intervention will probably help a lot more.

To be conceptually thorough, you could try to have a four way experimental design, and have two more time traveling trips, one of which is "both interventions" and the other just injects some random noise in a way that counts as "neither intervention". 

I think if "there is only the ONE BIG CATEGORY OF THING that's really missing" then there will be enormous leaps in the "both" timelines, and all 300 other sampled "20 ADs" (that got the "neither", "just tech", or "just laws" intervention) will all still be on course for a dark age.
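(If it helps to see the design laid out mechanically, here is a toy Monte Carlo sketch of that four-arm setup. Nothing in it is a real historical model: the effect sizes, the noise level, and the "closeness to an industrial revolution by 20 AD" score are all made-up assumptions of mine, purely to show how a strong interaction between the "laws" and "tech" flywheels would show up in the resampled 20 ADs:)

```python
# Toy sketch of the 2x2 time-travel experiment: four arms
# (neither / just laws / just tech / both), 100 resampled "20 AD"
# timelines per arm, and a comparison of mean "progress" per arm.
import random
import statistics

def sample_20ad(better_laws: bool, printing_press: bool) -> float:
    """One resampled '20 AD' timeline; higher = closer to industrialization."""
    base = 1.0
    # The big interaction term encodes the "one big missing category" story:
    # large gains only when BOTH flywheels get spun up.
    effect = 0.2 * better_laws + 0.3 * printing_press + 2.0 * (better_laws and printing_press)
    butterflies = random.gauss(0.0, 0.5)
    return base + effect + butterflies

def run_arm(better_laws: bool, printing_press: bool, n: int = 100) -> float:
    return statistics.mean(sample_20ad(better_laws, printing_press) for _ in range(n))

if __name__ == "__main__":
    random.seed(0)
    for laws in (False, True):
        for tech in (False, True):
            print(f"laws={laws!s:<5} tech={tech!s:<5} mean score: {run_arm(laws, tech):.2f}")
```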

To be clear, I don't mean to say that this is the only way to "divide your proposed flywheels of progress" into two chunks. 

Maybe the only real flywheel is wealth (and it is just about doing an efficient build-out of good infrastructure), or maybe the only real flywheel is large markets (because maybe "specialization" is the magic thing to unlock), or maybe it is only knowledge (because going meta always wins eventually)?

There's a lot of possibilities. And each possibility suggests different thought experiments! :-)
