Some lawyers, sure, but not the vast majority of the legal profession.
All the points you made are correct (besides maybe the x-risk one--you were right that that one came more from opinion; having worked with a bunch of lawyers, I believe what they do best is provide expert arguments and rationalizations for whatever they want to believe or want to make you believe, rather than following the facts to the truth in good faith), but I don't think they're enough to outweigh the fact that the legal profession is absolutely ripe for the kinds of automation AI excels at.
Paralegals and legal secretaries in particular I think are on the chopping block. Millions of people in those roles spend their whole day searching through complicated troves of badly organized data (in discovery proceedings, each side has an obligation to present certain sets of evidence and documentation to the other side before trial, but no obligation to organize it well...), picking out the information relevant to a certain question or point, and arranging it to be presented compellingly. That's all work AI excels at and can do in the blink of an eye, and there are ways to use AI to automate parts of the process without hallucinations being a problem--it takes only a little training and common sense to steer clear of the hallucination issue. Google NotebookLM in particular is basically tailor-made to help lawyers parse huge troves of discovery data for the specific information they're looking for, which many people in the legal profession have a full-time job doing today.
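To gesture at why the hallucination problem is manageable here (this is a toy sketch I'm making up, not how NotebookLM actually works internally): the trick is that the system only ever surfaces verbatim excerpts tagged with their source document, so every claim can be checked against the record.

```python
# Toy sketch of citation-grounded search over discovery documents.
# Purely illustrative: a real tool would use embeddings and an LLM,
# but the anti-hallucination idea is the same--only return verbatim
# passages tagged with their source, never unsourced generated text.
import re
from collections import Counter

def relevance(query: str, passage: str) -> int:
    """Crude relevance: how often query terms appear in the passage."""
    terms = set(re.findall(r"\w+", query.lower()))
    counts = Counter(re.findall(r"\w+", passage.lower()))
    return sum(counts[t] for t in terms)

def search(query: str, documents: dict[str, str], top_n: int = 3):
    """Return (doc_id, passage) pairs so every excerpt cites its source."""
    hits = []
    for doc_id, text in documents.items():
        for passage in text.split("\n\n"):
            score = relevance(query, passage)
            if score > 0:
                hits.append((score, doc_id, passage))
    hits.sort(key=lambda h: h[0], reverse=True)
    # A real pipeline would hand these cited excerpts to an LLM with
    # instructions to answer ONLY from them, quoting doc IDs.
    return [(doc_id, passage) for _, doc_id, passage in hits[:top_n]]
```

A human still reads the cited passages before anything goes in a filing, which is exactly the training-and-common-sense part.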
Sure, I believe lawyers will see to it that there are always human lawyers representing clients in the courtroom, formally filing motions, submitting paperwork, consulting with clients, and doing all the things only lawyers can do already, but in the near future I expect the giant infrastructure of clerks, secretaries, and paralegals that supports them with menial paperwork to be gutted. I think the only reason it isn't happening in a more significant way already is that the legal profession skews older than most, and many lawyers are set in their ways with the technology they're used to. The generation of lawyers that has grown up understanding computers and AI will not need nearly the number of support staff per lawyer that the industry currently has--if any at all.
Update: After 2 seconds of Googling I realized what I'm describing is literally just a wiki, and I'm trying to reinvent it. MediaWiki, which powers Wikipedia, is open source and I think would be a perfect fit for this project. Besides, "The Whistleblower Wiki" has a nice ring to it.
https://www.mediawiki.org/wiki/MediaWiki
Cool to hear my feedback is appreciated! The spreadsheet is an improvement. I think the ultimate form of this project would be some kind of SQL database with a website and a nice UI built on top of it--but that's not my area of expertise, so I wouldn't even know where to start. Maybe you have two components: a spreadsheet/database that lets you search for names by categories and filters, where each entry links to a wiki-style page on the person with the full text of their story, notes on what they did right and wrong, and everything else you'd want to write out long form. I'm envisioning a website with three parts: the Database side, a tool to search whistleblowers by category; the Profiles side, the wiki-style long-form posts about each individual; and an Articles or Blog side, the meta part where you draw conclusions from the data and write posts like guides for prospective whistleblowers, guides on infosec, and other high-level analysis and discussion of the whole topic.
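Just to show the database half isn't scary, here's a minimal sketch of what I mean, using SQLite from Python's standard library. Every table and column name here is hypothetical--a placeholder for whatever fields the spreadsheet ends up with:

```python
# Hypothetical two-part structure: a searchable table of categorical
# fields, where each row links out to a long-form wiki-style profile.
import sqlite3

conn = sqlite3.connect("whistleblowers.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS whistleblowers (
        id              INTEGER PRIMARY KEY,
        name            TEXT NOT NULL,
        synopsis        TEXT,     -- one- or two-sentence blurb
        motive          TEXT,     -- e.g. 'Ideological/Liberty', 'Spy/Money'
        classified      INTEGER,  -- 1 if classified material was involved
        ever_imprisoned INTEGER,  -- 1 if ever imprisoned for the disclosure
        legal_status    TEXT,     -- 'Wanted', 'Convicted', 'Pardoned', 'Clear'
        current_status  TEXT,     -- 'Free', 'Fugitive', 'Incarcerated', 'Exile/Asylum'
        profile_url     TEXT      -- link to the wiki-style profile page
    )
""")
conn.commit()
```

The website's search UI would just be a friendly front end over queries against a table like that, with each result linking out to its profile page.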
Unfortunately I'm not in a position to help financially, and I certainly don't have legal expertise--though I don't think hosting it carries the legal risks you seem to suggest; all this information is public and First Amendment protected, and I think any cheap domain registrar would do. I do have a couple more points of feedback, though:
-I don't think the Category A/B/C system is necessary or helpful. Those categories are just filters over multiple other fields--so instead of "Category A", you can just filter by "Whistleblower, Classified, no prison", which are existing (and hopefully filterable) fields. If you had a website with a nice UI down the road, you might make quick buttons for popular filters that more or less match your categories, but it doesn't need to be a separate field of its own in the data. (There's a sketch of what this looks like as a query after this list.)
-I would add a "Synopsis" field with just a one- or two-sentence blurb about what the whistleblower is known for. Example for Edward Snowden: "NSA whistleblower, leaked documents to the press pertaining to illegal government surveillance, currently in asylum in Russia".
-A Current Status field. If I'm reading it correctly, your "Imprisoned" field only records whether the person has ever been imprisoned, not whether they're in jail right now. I would separate that into multiple fields: legal status and current physical status. Legal status could be things like "Wanted", "Convicted", "Pardoned", and "Clear"; current status could be "Free", "Fugitive", "Incarcerated", "Exile/Asylum", etc.
-More links to published works--maybe separate fields for works from the press and the whistleblower's own works? I can see you're already starting on that. In particular, Daniel Ellsberg released a couple of books; I've read one of them, about his experiences with game theory and nuclear deterrence at the RAND Corporation rather than about the Pentagon Papers, but it's related to whistleblowing and I think it deserves a link.
-Links to organizations related to whistleblowing and whistleblower protections. The ACLU to start; I'm sure there are others. Maybe even reach out to those organizations directly and see if they're interested in sponsoring the project?
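And here's the query sketch I promised above, to make the Category A/B/C point concrete. A named category is just a saved combination of filters over ordinary fields (I'm reusing the hypothetical column names from my earlier sketch), so it never needs to be its own column:

```python
# 'Category A' = 'Whistleblower, Classified, no prison' is just a
# filter over existing fields, not a field of its own. A quick-filter
# button in a web UI would simply run a saved query like this one.
import sqlite3

conn = sqlite3.connect("whistleblowers.db")
rows = conn.execute(
    """
    SELECT name, synopsis, profile_url
    FROM whistleblowers
    WHERE classified = 1
      AND ever_imprisoned = 0
    ORDER BY name
    """
).fetchall()
for name, synopsis, url in rows:
    print(f"{name}: {synopsis} ({url})")
```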
First, I want to say that I'm really on your team here. I support what you're trying to do, I agree with you about the importance of whistleblowers, and your idea seems like it could be a valuable resource for prospective whistleblowers, or just for people who want to learn more about the history of government wrongdoing and the attempts to cover it up.
But that said... for something you call a "database", a long list of bullet points is about the worst way the data could be organized, and it makes the resource borderline useless. You already have your own website; even hosting it there in rudimentary HTML with hyperlinks would make it a little more usable. Or a big spreadsheet. Or a wiki. Or ideally, get a new domain just for this project and organize the data so you can browse it by tags such as "Imprisoned", "Not imprisoned", "Spy/Money", "Ideological/Liberty", "Top Secret", etc. The whole reason we have computers is to make it easier to browse, analyze, and reference data from datasets. We have so many tools that make this easier. Please, use them. This really should have been a linkpost to another website.
Seriously, though, you're doing valuable work and I would love to see this project develop.
[disclaimer: I'm a cis, straight, white male who has never struggled with any issues around gender identity, so my perspective on trans issues is entirely an outside one]
I think another factor here is the "bubble" effect that happens in many online communities. Chronically online people who get most of their social interaction within a single niche community can form distorted views, coming to believe that the views, beliefs, and norms of their niche are much more representative of society at large than they actually are. I've seen it happen in niche communities around conspiracies, political beliefs, health and wellness, all kinds of things, and I think the r/traa subreddit mentioned here is a classic example. I feel bad for the lonely teenager who stumbles on those communities while curious about transitioning, gets deep into the bubble, begins believing they'll receive all kinds of love, validation, and acceptance, proceeds to transition, and then realizes too late that much of society outside trans communities or politically progressive urban areas will react to them with indifference at best, or outright disgust and hostility at worst. The whole anime/cutesy side of transgender subculture is something most people outside it don't understand, and a trans person who's used to viewing themselves and others through that lens and expects the rest of society to see them the same way will be sorely disappointed--sometimes with tragic consequences.
I'm not some conservative saying that people who think they're trans need to get off the gay Internet and go touch grass--though maybe some do. I'm sure that for plenty of people, transitioning is the right decision and has resulted in a better life overall. I'm just saying that this cute-anime-trans online bubble can introduce biased and erroneous thinking to people weighing important decisions about transitioning. I'm really impressed by how this article acknowledges and genuinely explores the complexity here; usually only the most anti-trans commentators will even admit that AGP is a factor at all. I'm also impressed by your willingness to openly discuss things about your own psyche that most people would consider extremely humiliating--by doing so you're moving the whole conversation forward in a meaningful way. (I guess I shouldn't say it took a lot of balls?)
Correct, my mistake--1200s. I was just reaching for a historical example of a real "apocalypse" that did in fact come to pass--when not only are you and everyone you know going to be killed, but your entire society as you know it will come to an end--and the brutal Mongol conquest of China was the first that came to mind, probably thanks to Dan Carlin's excellent Hardcore History podcast on the subject. I didn't take the 2 seconds on Wikipedia I should have to make sure I had the right century.
I was thinking of other contenders, like the smallpox epidemics in North America following the Columbian exchange, but in that scenario you didn't really have "doomers" predicting the outcome, because epidemiology at the time wasn't up to understanding the problem they were facing. In China, though, it's feasible that some individuals had access to enough news and information to make doom predictions about the Mongol apocalypse that turned out to be unfortunately correct.
That's comparing apples to oranges: there are doomers, and then there are doomers. The "doomers" predicting the Rapture or some other apocalypse are not the same as the "doomers" predicting the moral decline of society. The two categories overlap in many people, but they are distinct, and it's misleading to conflate them. (Which is kind of a critique of the premise of the article as a whole--I would put the AI doomers in the former category, but the article only gives examples from the latter.)
Historically, the existential-risk doomers have usually been crazy, and none has been right yet (in the context of modern society, anyway--I suppose if you were an apocalypse doomer in 1300s China saying the Mongols were going to come and wipe out your entire society, you were pretty spot on), but that doesn't mean they're always wrong or totally off base. It's completely rational to be concerned about doom from nuclear war, for example, even though it hasn't happened yet. Whether AI risk is crazy "Y2K/Rapture" doom or feasible "nuclear war" doom is the real debate, and this article doesn't really contribute anything to it.
What this article does a good job of is illustrating how "moral decline" doomers, as opposed to "apocalypse" doomers, are often proved technically correct by history. What both they and this article miss is that they often see events as causes of the so-called decline when those events are actually milestones in an already-existing trend. Legalizing gay marriage didn't cause other "degenerate" sexual behavior to become more accepted; we legalized gay marriage because we had already been moving away from the Puritanical sexual mores of the past toward a more liberated attitude, and this was just one more milestone in that process. That's not always true--the invention of the book, and later the smartphone, absolutely did cause a devaluing of the ability to memorize and recite knowledge. And sometimes it's a bit of both, where an event is both a symptom of an underlying trend and also accelerates it. But I really like how the article acknowledges that the doomers could be right even if "doom" as we think of it today never occurred, because the values that mattered to them were lost:
> Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet's all-seeing eyes. Less sympathetic would be the ancients' sadness at our sexual deviances, casual blasphemy, and so on. But those were their values.
We laugh at them as prudes for how appalled they would be at our society, where homosexuality, polyamory, weird fetishes, etc. are all more or less openly discussed and accepted. But think about how it would feel if you saw your own society trending toward one where, say, pedophilia was becoming less taboo. It doesn't matter whether that's right or wrong; it's the visceral response most people have to the idea that you need to understand. That's what it feels like to be a culturally conservative doomer watching their society undergo value drift. People today like to think our values are backed up by reality in a way that wasn't true of past or other present value systems, but guess what? That's just what it feels like to have a value system. Everyone, everywhere, in all times and places has believed that, and the human mind excels at nothing so much as coming up with rationalizations for why your values are right and opposing values are wrong.
Overall I think this article is pretty insightful about the "moral decline" type of doomer, but completely unrelated to the question of AI existential risk that prompted it in the first place.
Not to put words in the author's mouth, but when they said "We go gently...", I don't think they meant "go" as in go extinct, at least not any time soon. I took it to mean "go" into obscurity and stagnation instead of continuing to advance technologically until we're building Dyson spheres, colonizing other planets, and doing all the science-fiction stuff most people believe humanity will eventually do. In that scenario, we keep living on aimlessly for many millennia until some asteroid or other cosmic event takes us out, because we never advanced enough to handle it or to have colonies as a backup.
I agree with you that we're unlikely to stop reproducing just because many humans get addicted to watching and interacting with content fed to us by a perfect algorithm for most of our waking hours. Raising a family seems to be one of those things that brings intrinsic meaning and pleasure to many people, so I'd expect to see more of it, not less--most people who choose not to have kids today do so because they don't have enough time or money in today's economy and work environment, and in this scenario all those problems are solved. The scenario assumes the AI-fueled content machine would be so addictive that basically all humans would forsake every other pursuit and live like the people in WALL-E. I don't think that's necessarily true, and if it isn't, we might see a population explosion requiring our AI-enabled oligarchic overlords to take control measures to keep it manageable.
Far from humanity going extinct, I think one possible future catastrophe, if AI advances roughly along these lines, is a Malthusian scenario where the population grows far beyond current levels thanks to AI optimizing the distribution of resources, but becomes so dependent on complex AI logistics to provide for everyone's needs that any slight hiccup in the distribution network could quickly cause a famine that kills millions.
This scenario seems to allow enough room for AI alignment, and for humans still being in the driver's seat on big-picture issues, that the AI wouldn't intentionally let us go extinct. We can hope.
It sounds to me like you're looking for two conflicting things, trying to achieve both at once, and getting frustrated at the results. You're trying to deepen your understanding of philosophy and participate in conversation on the subject, and you're trying to "cure" your growing misanthropy and rediscover your love of and kinship with your fellow man.
Any rational person of above-average intelligence can't escape some degree of elitism. The majority of people are, for all practical purposes, not capable of engaging with, understanding, and discussing certain intellectual subjects the way most rationalists do. Some simply aren't intelligent enough, but beyond raw intelligence, it takes a certain confluence of personality traits to be motivated to put in the effort to understand and participate in discourse on complex topics, and most people seem to lack it.
So you have to make a choice. Your rationality and your experience have led you to a feeling of elitism that is pretty well grounded in objective fact: you have some positive traits that most people don't. Now, is your ability to respect and enjoy spending time with people entirely conditional on their ability to participate in rational discussion as your intellectual peer? That way lies misanthropy. You'll be constantly disappointed in people for not measuring up to your standards, and your inner teenage edgelord is basically proven right. You can try to surround yourself with only smart people and rationalists and spend your life sneering at the rest of the world, if you like. You might even be happier that way; I'm not in a position to know.
But there's another balance that seems a little healthier to me. You can simultaneously respect and enjoy the company of average people while understanding that trying to talk philosophy with most of them would be a waste of time--even with plenty who think they understand it. Want to get in touch with the common man? Do it at a bar, a concert, or an event for a hobby you like; try not to be condescending, and engage with people on their level. Want to participate in the Great Conversation? Do it in venues with enough vetting and gatekeeping to weed out the morons. Those are two entirely separate things, and trying to do them together is a huge mistake. This may be replacing misanthropy with a kind of paternalism, but that seems better to me somehow, and it might even be largely justified. A lot of the philosophers you cited were probably grumpy introverts anyway (Schopenhauer definitely was), and the fact that you see your own misanthropy as a problem and want to fix it already sets you apart from them.