CRISPY

Comments, sorted by newest
What We Learned from Briefing 70+ Lawmakers on the Threat from AI
CRISPY · 2mo

This is very interesting. Debrief summaries like this are valuable for assessing the state of play, and information of this kind is typically kept confidential, so thank you for sharing.

I think I drew different conclusions from the information than the need to act on legislators. The article highlights vulnerabilities in the systems that are supposed to protect us. If you were able to use FUD to get tangible action from officials, then other lobbyists using positive incentives should be able to get even greater action from a greater number of officials.

It seems to me the threat is more the lobbyists than their customers. Directing organized action at the lobbyists who enable things that pose an existential threat to civilization is perhaps a more structured approach, one that reduces the advantages the lobbyists' customers enjoy. Trying to get Whitehall to act for the greater good is high-minded, but is it practical, given the ease with which officials can be spurred to action?

Meditations on Doge
CRISPY · 2mo

The premise of 

leogao's Shortform
CRISPY · 2mo

I think you’re correct. There’s a synergistic feedback loop between alarmism and social interaction that filters out pragmatic perspectives, creating the illusion that the doom surrounding any given topic is more prevalent than it really is, or even near universal.

Even before the rise of digital information, this feedback phenomenon could be observed in any insular group. In today’s environment, where a great deal of effort goes into exploiting that feedback loop, it takes conscious effort to maintain perspective, or even to remain aware that other perspectives exist.
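A toy simulation may make the filtering mechanism concrete. This is entirely my own illustration (the population, the doom scores, and the cubic engagement filter are all invented for the sketch), not anything from the thread:

```python
import random

random.seed(0)

# A population with modest average "doom" (toy numbers).
population = [random.gauss(0.4, 0.2) for _ in range(10_000)]
actual_mean = sum(population) / len(population)

# Engagement filter: the more alarmed a take, the likelier it is to be
# surfaced and amplified; pragmatic middle-ground takes mostly stay invisible.
surfaced = [d for d in population if random.random() < d ** 3]
perceived_mean = sum(surfaced) / len(surfaced)

print(f"actual mean doom:    {actual_mean:.2f}")     # ~0.40
print(f"perceived mean doom: {perceived_mean:.2f}")  # noticeably higher
```

Anyone estimating sentiment from what surfaces sees the inflated number, which is the illusion of near-universal doom described above.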

AIs at the current capability level may be important for future safety work
CRISPY · 2mo

The argument against safety R&D on contemporary systems because of future systems’ capabilities has always been shortsighted. Two examples of this are nuclear weapons controls and line-level commenting of programming code.

In nuclear weapons development, the safety advocates argued for safety measures on the weapons themselves to prevent misuse. They were overruled by arguments that future systems would be so physically secure they couldn’t be stolen, and the controls were centralized in the launch-control process, usually with one person there holding ultimate control. Proliferation and miniaturization eventually made theft and misuse a major risk, and an entire specialized industry sprang up to develop and implement Permissive Action Links (PALs).
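For readers unfamiliar with PALs, here is a deliberately simplified sketch of the kind of use-control they provide: a code-gated interlock with a limited-try lockout. Everything here (the class, the three-attempt limit, the code) is hypothetical; real PAL designs are classified and far more involved:

```python
MAX_ATTEMPTS = 3  # hypothetical limit, for illustration only

class PermissiveActionLink:
    """Toy model of a PAL-style interlock: no valid code, no arming."""

    def __init__(self, valid_code: str):
        self._valid_code = valid_code
        self._failed_attempts = 0
        self._locked_out = False

    def try_arm(self, code: str) -> bool:
        if self._locked_out:
            return False          # device has disabled itself
        if code == self._valid_code:
            self._failed_attempts = 0
            return True           # armed
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self._locked_out = True  # too many bad codes: lockout
        return False

pal = PermissiveActionLink(valid_code="00000000")
print(pal.try_arm("12345678"))  # False
print(pal.try_arm("00000000"))  # True
```

The point of the history that follows is that bolting even this simple a gate onto already-deployed weapons took decades.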

It wasn’t until the late 1980s that the entire U.S. nuclear weapons inventory was equipped with PALs, which is nuts. Even then, malfunction concerns were paramount, and they remain concerns today, creating real risks in deterrence-based defenses; PALs are still being upgraded for reliability. The fact that PALs had to be retrofitted is what produced a decades-long implementation timeline for devices of questionable reliability. Safety R&D as a primary focus during the pioneering era of nuclear weapons could have prevented that mess. Trying to shoehorn it in later was unbelievably expensive and difficult, and it created strategic threats.

The finance and defense sectors have a similar problem with software. High-turnover iterations led to a culture that eschewed internal comments in the code. Now making changes creates safety concerns that are only overcome with fantastically complex and expensive testing regimes, which take so long to complete that the problem they are trying to solve has usually been fixed another way. It’s phenomenally wasteful. Again, future safety was ignored because of arguments based on future capabilities.

Most important, however, is that R&D on contemporary systems is how we learn to study future systems. Even though future generations of the technology will be vastly different, they are still ultimately based on today’s systems. Treating future safety as a contemporary core competency paves the way for the study of future systems and prevents the waste of ad hoc shoehorning exercises later.

Where is the YIMBY movement for healthcare?
CRISPY · 2mo

One of the many things I learned during my wife’s cancer treatment is that healthcare’s cost-setting systems are designed to be insulated from external influence. There is little accountability for the underlying cost architecture, often to the point where no one can identify the architect. This makes addressing inefficiencies, exploitation, and shortcomings almost impossible.

From a regulatory standpoint, legislative action has little to target. The doctor, the technicians, the hospital, the supply vendor, the pharmacist, and even the insurers rarely set their own prices. Between every transaction there is at least one middleman who manages the cost, and often that middleman’s pricing guidelines are set by yet another middleman. The structure removes the cash-flow variable from the parties in direct patient contact and leaves only the profit component as a variable, undermining free-market cost controls and performance incentives.

As a hypothetical example, a hospital billing $10,000 for a procedure does not receive that $10,000. Third, fourth, or fifth parties receive the money and redistribute it back to everyone in the chain, leaving the hospital with a negotiable amount of just $600 (above the line; $200 below). This phenomenally granular disintermediation of the cost structure means no party has a vested interest in any individual transaction. There’s simply not enough money at that level to justify the high-stakes negotiation one might expect in a $10,000 transaction, which makes volume the success-defining metric.
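To make the arithmetic concrete, here is a toy version of that chain. Only the $10,000 bill and the ~$600 negotiable remainder come from the example above; the intermediaries and their percentage cuts are invented for the sketch:

```python
billed = 10_000.00  # hypothetical procedure, per the example above

# Invented intermediaries and cuts; real chains vary and are opaque.
intermediaries = [
    ("insurer network discount", 0.45),
    ("claims clearinghouse",     0.20),
    ("benefit manager",          0.15),
    ("revenue-cycle vendor",     0.14),
]

remaining = billed
for name, cut in intermediaries:
    taken = billed * cut
    remaining -= taken
    print(f"{name:26s} takes ${taken:8,.2f}")

print(f"hospital's negotiable slice:     ${remaining:8,.2f}")  # $600.00
```

With only $600 actually in play, no hospital is going to staff a serious negotiation over any single $10,000 procedure; the rational response is volume.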

As I spent countless days in the hospital over the course of a year, I started counting keystrokes and mouse clicks performed by various hospital staff. Roughly 60% of all computer interaction was performed solely for billing purposes, not patient care. Billing is also the primary driver of wait times: doctors review patient records before each visit and spend much of that time reviewing which treatments they are allowed to use based on the patient’s financial means and insurer. The highest-performing doctors (a volume metric) are the ones who memorize the treatment-approval criteria and don’t have to refer to the computer as often.

The scope of the disintermediation is vast, so more examples aren’t useful. At the end of the day, what it boils down to is that the individual parties involved in the minutiae of patient care are insulated from one another. This makes YIMBY-style activism ineffective, because enacting change that way only affects one tiny group within the chain, and everyone else adjusts to compensate. It’s like trying to eat Jello with a cooked noodle: huge effort with little to no reward.

I do not have any reasonable solution. Americans have proven time and time again that adopting healthcare systems like those of other developed countries is unacceptable. The focus has to change to actual healthcare, where patient outcomes are what matter (as opposed to billing outcomes for providers). Increased frequency of Luigi Lobbying is unreasonable, but I think more people are beginning to see it as justified.

We Can Build Compassionate AI
CRISPY · 5mo

“Universal religion” has not taught or improved compassion or empathy. Such religions teach that compassion and empathy are the results of adhering to the religion: membership confers the attributes of compassion and empathy, and minimizes or negates those attributes in non-members.

Religions aiming at universality are inherently unaccountable and divisive political entities. They devalue and dehumanize non-members and present clear, direct threats against those who oppose them or who do not want to comply with behavioral standards established by the worst kind of absentee manager.

Look at the Abrahamic cults. The overwhelming majority of their sacred texts are justifications for genocide and ethnic supremacy. Their brand of compassion and empathy has overseen 2,000 years of the worst violence in history. Christianism, for example, continues this tradition, claiming to be the arbiter of compassion while simultaneously presenting compassion as something only it can provide.

It’s the pinnacle of might-makes-right, and it’s the worst possible model for training anything except an ethnic monoculture of racially similar ideologues with a penchant for violence.

If you want to learn about compassion and empathy, it’s best to go to the source of it all. Plato and the Platonic school are where Second Temple Judaism, Christianism, and to a large degree Islam got their concepts of compassion and empathy, and they twisted and perverted those Platonic ideals to suit their political aims. They took away individual accountability and put all the responsibility on some nebulous, ever-changing supreme being who, oddly enough, always agrees with them. Best to go to the source and leave the politics out of it.

How to Make Superbabies
CRISPY · 5mo

From a sales perspective, I find myself bewildered by the approach this article takes to ethics. Deriding ethical concerns and then launching into a grassroots campaign for fringe primate research into genetic hygiene and human alignment is a nonstarter for changing opinions.

This article, and another here about germline engineering, are written as if the concepts were new. In reality these are 19th-century ideas, and the early attempts to implement them are the reason for the ethical concerns.

Using the standard analogical language of this site: AI and gene editing are microwaves to the toaster oven of historically disastrous applied-science programs like Lebensborn. Changing the technological method of reaching an end does not obviate the ethical issues of the end itself. The onus of allaying those concerns is on the advocates and researchers, not on society.

This article could very well have been written by Alfred Ploetz. That’s the barrier that has to be overcome: how are germline engineering, gene editing, and human alignment different from the programs that defined the 20th century as one of racial supremacy, genocide, and global warfare?

I know the answers to those questions, but I’m not the audience that needs to be convinced. What’s being presented here does not answer them; in fact, it does the opposite. Anyone who has read Ploetz or Anastasius Nordenholz is going to, rightly, label this appeal to utopian reason as crypto-eugenics. It’s an inescapable certainty.

Any argument that successfully overcomes the historically rooted ethical concerns must explain how the proposal is not Ploetz, and how Nordenholz’s arguments against humanism and against the financial throttling of research won’t be reused to pursue supremacist ideologies. Those are the concerns, not incremental technological advances. The technology is just a distraction; the ethical questions must be answered before the technology can be considered.
