zoop15-7

I really, really, really did not like this post. I found it to be riddled with bad assumptions, questionable unsupported claims, and critical omissions. I don't think any of the core arguments survive close scrutiny.

Moreover, I took serious issue with the tone throughout. The first half hand-waves some seriously questionable claims into existence with strong confidence, while the second half opines that everyone who ever thought otherwise is some combination of sycophantic, incurious, brainwashed, or an idiot. I would have appreciated more intellectual humility. 

***

My read is that this post totally whiffed on the entire subject of die casting cost savings.

 The chassis of cars is a relatively small fraction of their cost. The cost of aluminum die casting and stamped steel is, on Tesla's scale, similar. Yet, there were so many articles saying gigacasting was a major advantage of Tesla over other companies.


To be clear: the cost-savings argument for die casting has little to do with the cost of the chassis itself; it's mostly an argument about the cost of body assembly.

In an automotive assembly line, one of the most labor-intensive, challenging, and expensive steps is the "body shop," where a car's structural components are assembled into a "body in white." Die casting saves time and money by reducing the number of welds, bolts, etc. required to go from components to body. It also cuts down on total weight, on waste material from manufacturing a larger number of components, and on the number of steps at which tolerance errors can be introduced.

Here is an example from the Model 3. Switching from traditional assembly to die casting cuts out 169 separate metal parts and 1600 welds. Those costs add up! Look at the difference in estimated variable costs. 
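To make the scale concrete, here is a back-of-envelope sketch. Only the part and weld counts come from the Model 3 example above; every dollar figure is a made-up placeholder, not a real Tesla number.

```python
# Back-of-envelope assembly savings. The part/weld counts are from the
# Model 3 example above; every dollar figure is a hypothetical placeholder.
parts_removed = 169
welds_removed = 1600

cost_per_part = 2.00   # hypothetical $ of logistics/fixturing per part
cost_per_weld = 0.50   # hypothetical $ of labor/equipment time per weld

savings_per_body = parts_removed * cost_per_part + welds_removed * cost_per_weld
print(savings_per_body)  # 1138.0 dollars per body under these toy assumptions

# At volume, even toy per-body savings compound quickly:
print(savings_per_body * 500_000)  # 569000000.0 at a hypothetical 500k units/year
```

The exact placeholders don't matter; the point is that per-body assembly savings multiply across every unit produced, which is why the body shop, not the raw chassis material, dominates the argument.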

In short, your claim that "The cost of aluminum die casting and stamped steel is, on Tesla's scale, similar" both misses the entire point and runs against literally everything I have seen written about this. You need citations for this claim; I am not going to take your word for it.

***

The price thing alone seems like a post-invalidating miss, but I was pretty alarmed by the sheer number of other strong assertions made with weak or no supporting evidence. Some of these seemed obviously wrong.

Tesla has been widely criticized for stuff not fitting together properly on the car body. My understanding is that the biggest reason for that is their large aluminum castings being slightly warped.

Tesla's panel gap issues predate the Giga Press by about a decade and have always been attributed to wide tolerances for all parts and lax QA (de-prioritized in favor of R&D). I have absolutely no idea how you arrived at this "understanding." Citation, please?

As for voids, they can create weak points; I think they were the reason the cybertruck hitch broke off in this test.

Or the geometry of the frame was insufficiently optimized for vertical shear. I do not understand how you reached this conclusion.

BYD is still welding stamped steel sheets together, and that's why it can't compete on price with Tesla. Hold on, it seems...BYD prices are actually lower than Tesla's? Much lower? 

Price alone doesn't really say anything about the giga press. Perhaps BYD's efficiency could be explained by some of the other few thousand things that go into making a car? What about all the other stamped steel chassis companies BYD is way more efficient than?

Also, production costs are the actual thing that matters for this argument, not price. Tesla has 6x the profit per car of BYD, which obviously factors into the higher prices.
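A toy sketch of why price comparisons don't settle the cost question. All figures below are hypothetical, chosen only to illustrate the 6x per-car profit gap mentioned above, not real Tesla or BYD numbers:

```python
# Price = production cost + profit margin, so price alone can't isolate cost.
# All figures are hypothetical illustrations, not real Tesla/BYD numbers.
tesla_cost, tesla_profit = 36_000, 9_000   # ~6x the per-car profit
byd_cost, byd_profit = 24_500, 1_500

tesla_price = tesla_cost + tesla_profit    # 45000
byd_price = byd_cost + byd_profit          # 26000

# BYD's lower sticker price reflects both lower cost AND a much thinner
# margin, so the price gap overstates any manufacturing-efficiency gap.
print(tesla_price, byd_price)
```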

Oh, and Tesla is no longer planning single unitary castings for future vehicles?

This is a bit misleading. Tesla doesn't currently do unitary castings, so this is a suspension of future R&D, not a change to what they currently do. Importantly, this means they will keep giga-casting their chassis for the foreseeable future.

Money is a factor, of course; PR agencies drive a lot of the articles in media. I assume Tesla pays some PR firms and people there presumably decided to push the Giga Press.

You should stop assuming! Tesla spent essentially nothing on marketing until 2023, well after this assumed PR would have taken place. By nothing, I mean that the estimate for their marketing spend in 2022 (literally all marketing, including PR if there was any at all) was $175k.

zoop10

Actually, my read of the data is that the mountain west is not more environmentally conscious than the rest of the US. 

The mountain west poll does not include national numbers, so I have no idea where your national comparisons are coming from. If I did, I'd check for the same year and same question, but because I don't know where they're from, I can't.

Take a look at this cool visualization of different state partisan splits from 2018: https://climatecommunication.yale.edu/visualizations-data/partisan-maps-2018/

The mountain west appears neither significantly more nor significantly less partisan on any of the climate change related questions than the rest of the US. 

My main point, which I don't think you've contradicted (even if I accept that the mountain west is unique), is that you're making an argument about "environmentalism" partisanship by using primarily "climate change" polling data. The charts from the 2013 paper you've posted sort of confirm this take–climate change is obviously a uniquely partisan issue. 

The intro to your sequence states the following:

The partisanship we see today is unusual, compared to other issues, other countries, or even the US in the 1980s.

Basically, I have not seen evidence that this is true for issues beyond climate change (or for other countries!), and I think your sequence would benefit from explicitly comparing

  • the partisan split of non-climate-change environmental issues (e.g. rain forest protection) to 
  • the partisan split of non-environmental issues (e.g. taxation)
zoop40

My initial reaction, admittedly light on evidence, is that the numbers you present are at least partially due to selection bias. You've picked a set of issues, like climate change, that are not representative of the entire scope of "environmentalism." It shouldn't surprise anybody that "worry about global warming" is a blue issue, but the much more conservative-coded issues like "land use," "protection of fish and wildlife," and "conservation" for whatever reason are often not measured. In short, it feels to me that your actual argument is that liberal-coded environmental issues are partisan.

More than half of state wildlife conservation funding comes from hunting licenses and firearms taxes. I assure you, these fees mostly come from republicans in republican states. Here is some polling done in the west on environmental issues. It shouldn't be a surprise that republican voters in Wyoming and rural Colorado care a lot about the environment, but one shouldn't expect them to think about the issues in the same way as latte-drinking knowledge workers in coastal cities.

It also might interest some to read how Nixon talked about the environment. His 1972 environmental message to Congress has some interesting passages, including the following:

PROTECTING OUR NATURAL HERITAGE

Wild places and wild things constitute a treasure to be cherished and protected for all time. The pleasure and refreshment which they give man confirm their value to society. More importantly perhaps, the wonder, beauty, and elemental force in which the least of them share suggest a higher right to exist--not granted them by man and not his to take away. In environmental policy as anywhere else we cannot deal in absolutes. Yet we can at least give considerations like these more relative weight in the seventies, and become a more civilized people in a healthier land because of it.

I've paid attention to politics for a long time, but I've never heard a democrat talk like this about the environment. Just this one paragraph contains three progressive blasphemies, nearly one per sentence:

  • The idea that the environment belongs in any way shape or form to a nation or a people (is our heritage) 
  • The idea that the environment derives its value from the "pleasure and refreshment" they "give man"
  • A higher right to exist not granted by man?????! 
zoop30

I hear what you're saying. I probably should have made the following distinction:

  1. A technology in the abstract (e.g. nuclear fission, LLMs)
  2. A technology deployed to do a thing (e.g. nuclear in a power plant, LLM used for customer service)

The question I understand you to be asking is essentially: how do we make safety cases for AI agents generally? I would argue that's more situation 1 than 2, and as I understand it, safety cases are basically only ever applied to case 2. The nuclear facilities document you linked is definitely case 2.

So yeah, admittedly the document you were looking for doesn't exist, but that doesn't really surprise me. If you start looking for narrowly scoped safety principles for AI systems, you start finding them everywhere. For example, a search for "artificial intelligence" on the ISO website returns 73 standards.

Just a few relevant standards, though I admit, standards are exceptionally boring (also many aren't public, which is dumb):

  • UL 4600, a standard for autonomous vehicles
  • ISO/IEC TR 5469, a standard for AI safety generally (this one is decently interesting)
  • ISO/IEC 42001, which covers what to do if you set up a system that uses AI

You also might find this paper a good read: https://ieeexplore.ieee.org/document/9269875 

zoop30

I've published in this area so I have some meta comments about this work.

First the positive: 

1. Assurance cases are the state of the art for making sure things don't kill people in regulated environments. Ever wonder why planes are so safe? Safety cases. Because the actual process of making one is so unsexy (GSNs make me want to cry), people tend to ignore them, so you deserve lots of credit for somehow getting x-risk people to upvote this. More lesswronger types should be thinking about safety cases.

2. I do think you have good / defensible arguments overall, minus minor quibbles that don't matter much.

Some bothers:

1. Since I used to be a little involved, I am perhaps a bit too aware of the absolutely insane amount of relevant literature that was not mentioned. To me, the introduction made it sound a little bit like the specifics of applying safety cases to AI systems have not been studied. That is very, very, very not true.

That's not to say you don't have a contribution! Just that I don't think it was placed well within the relevant literature. Many have done safety cases for AI, but they usually do it as part of concrete applied work on drones or autonomous vehicles, not x-risk pie-in-the-sky stuff. I think your arguments would be greatly improved by referencing back to this work.

I was extremely surprised to see so few of the (to me) obvious suspects referenced, particularly more from York. Some labs whose people publish a lot in this area:

  • University of York Institute for Safe Autonomy
  • NASA Intelligent Systems Division
  • Waterloo Intelligent Systems Engineering Lab
  • Anything funded by the DARPA Assured Autonomy program

2. Second issue is a little more specific, related to this paragraph:

To mitigate these dangers, researchers have called on developers to provide evidence that their systems are safe (Koessler & Schuett, 2023; Schuett et al., 2023); however, the details of what this evidence should look like have not been spelled out. For example, Anderljung et al. vaguely state that this evidence should be "informed by evaluations of dangerous capabilities and controllability" (Anderljung et al., 2023). Similarly, a recently proposed California bill asserts that developers should provide a "positive safety determination" that "excludes hazardous capabilities" (California State Legislature, 2024). These nebulous requirements raise questions: what are the core assumptions behind these evaluations? How might developers integrate other kinds of evidence?

The reason the "nebulous requirements" aren't explicitly stated is that when you make a safety case, you assure the safety of a system against specific hazards relevant to the system you're assuring. These are usually identified by performing a HAZOP analysis or similar. Not all AI systems have the same list of hazards, so it's obviously dubious to expect that you can list requirements a priori. This should have been stated, imo.

zoop1-2

I don't think it works if there isn't a correct answer, e.g. predicting the future, but I'm positive this is a good way to make your claims more convincing to others.

If there isn't ground truth about a claim to refer to, any disagreement around a claim is going to be about how convincing and internally/externally consistent the claim is. As we keep learning from prediction markets, rationales don't always lead to correctness. There are many cases of good heuristics (priors) doing extremely well.

If you want to be correct, good reasoning is often a nice-to-have, not a need-to-have. 

zoop3-2

I very strongly disagree. In my opinion, this argument appears fatally confused about the concept of "software." 

As others have pointed out, this post seems to be getting at a distinction between code and data, but many of the examples of software given by OP contain both code and data, as most software does. Perhaps the title should have been "AI is Not Code," but since it wasn't, I think mine is a legitimate rebuttal.

I'm not trying to make an argument by definition. My comment is about properties of software that I think we would likely agree on. I think OP both ignores some properties software can have and assumes all software shares other, separate properties, to the detriment of the argument.

I think the post is correct in pointing out that traditional software is not similar to AI in many ways, but that's where my agreement ends.

 

1: Software, I/O, and such

Most agree on the following basic definition: software is a set of both instructions and data, hosted on hardware, that governs how input data is transformed into some sort of output. As you point out, inputs and outputs are not software.

For example, photos of a wedding or a vacation aren’t software, even if they are created, edited, and stored using software.

Yes.

Second, when we run the model, it takes the input we give it and performs “inference” with the model. This is certainly run on the computer, but the program isn’t executing code that produces the output, it’s using the complicated probability model which grew, and was stored as a bunch of numbers. 

No! It is quite literally executing code to produce the output! Just because this specific code and the data it interacts with specify a complicated probability model does not mean it is not software.

Every component of the model is software. Even the pseudorandomness of the model outputs is software (torch.randn(), often). There is no part of this inference process that generates outputs that is not software. To run inference is only to run software.
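To illustrate, here is a minimal toy "inference" step. The weights are stand-ins for learned parameters; the point is that running a model is ordinary code doing arithmetic on stored numbers.

```python
# A toy inference step: the "model" is just stored numbers (weights),
# and inference is ordinary code executing arithmetic on them.
def matvec(weights, x):
    # Multiply a weight matrix by an input vector, one row at a time.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

learned_weights = [[0.5, -1.0], [2.0, 0.25]]  # stand-in for trained parameters
user_input = [1.0, 2.0]
print(matvec(learned_weights, user_input))  # [-1.5, 2.5]
```

Scale this up a few billion parameters and add a sampling step, and you have LLM inference; nothing about the scaling stops it from being code executing on data.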

 

2: Stochasticity

The model responds to input by using the probability model to estimate the probability of difference responses, in order to output something akin to what the input data did - but it does so in often unexpected or unanticipated ways.

Software is often, but not necessarily, deterministic. Software can have stochastic or pseudorandom outputs. For example, software that generates pseudorandom numbers is still software. The fact that AI generates stochastic outputs humans don't expect does not make it not software.

Also, software is not necessarily interpretable and outputs are not necessarily expected or expectable.

 

3: Made on Earth by Humans

First, we can talk about how it is created. Developers choose a model structure and data, and then a mathematical algorithm uses that structure and the training data to “grow” a very complicated probability model of different responses... The AI model itself, the probability model which was grown, is generating output based on a huge set of numbers that no human has directly chosen, or even seen. It’s not instructions written by a human.

Neither a software's code nor its data is necessarily generated by humans.

 

4: I have bad news for you about software engineering

Does software work? Not always, but if not, it fails in ways that are entirely determined by the human’s instructions.

This is just not true; many bugs are caused by specific interactions between inputs and the code + data, and some are caused by inputs, code, data, and hardware together (buffer overflows being the canonical example). You could get an error due to a cosmic-ray bit flip, which has nothing to do with humans or instructions at all! Data corruption... I could go on and on.

For example, unit tests are written to verify that the software does what it is expected to do in different cases. The set of cases are specified in advance, based on what the programmer expected the software to do. 

... or the test is incorrect. Or both the test and the software are incorrect. Of course, this assumes you wrote tests, which you probably didn't. Also, who said you can't write unit tests for AI? You can, and people do. All you have to do is fix the temperature parameter and the random seed. One could argue benchmarks are just stochastic tests...
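A sketch of the fixed-seed point, using a toy stand-in sampler rather than a real model API:

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    # Toy stand-in for a sampling-based model: picks a token pseudorandomly.
    # (A real model would condition on the prompt; this one ignores it.)
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe"]
    return rng.choice(vocab)

# With the seed pinned, the "stochastic" output is reproducible, so an
# ordinary unit test works exactly as it would for any other software:
assert sample_reply("is AI software?", seed=42) == sample_reply("is AI software?", seed=42)
```

The same trick works on real inference stacks: pin the seed (e.g. `torch.manual_seed`) and use greedy/zero-temperature decoding, and outputs become assertable.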

If it fails a single unit test, the software is incorrect, and should be fixed.

Oh dear. I wish the world worked like this. 

Badly written, buggy software is still software. Not all software works, and it isn't always software's fault. Not all software is fixable or easy to fix.

 

5: Implications

What we call AI in 2024 is not software. It's kind of natural to put it in the same category as other things that run on a computer, but thinking about LLMs, or image generation, or deepfakes as software is misleading, and confuses most of the ethical, political, and technological discussions.

In my experience, thinking of AI as software leads to higher-quality conversations about the issues. Everyone understands at some level that software can break, be misused, or be otherwise suboptimal for any number of reasons.

I have found that when people begin to think AI is not software, they often devolve into dorm room philosophy debates instead of dealing with its many concrete, logical, potentially fixable issues. 

zoop5-2

I think this post is probably correct, but I think most of the discourse over-complicates what I interpret to be the two core observations:

  1. People condition their posteriors on how and how much things are discussed.
  2. Societal norms affect how and how often things are discussed.

All else follows. The key takeaway for me is that you should also condition your posteriors on societal norms. 
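A toy Bayes update showing what "conditioning on norms" means here (all probabilities are hypothetical):

```python
# Toy Bayesian update on the evidence "this topic is rarely discussed."
# All numbers are hypothetical.
prior = 0.5
p_discussed_if_true = 0.9
p_discussed_if_false = 0.2

def posterior(p_true, p_false, norm_factor=1.0):
    # norm_factor models how much societal norms suppress discussion.
    # Applied symmetrically here, it cancels out of the update entirely;
    # only ASYMMETRIC suppression shifts the posterior, which is why
    # ignoring norms biases your conclusions.
    num = prior * p_true * norm_factor
    return num / (num + (1 - prior) * p_false * norm_factor)

print(round(posterior(p_discussed_if_true, p_discussed_if_false), 3))       # 0.818
print(round(posterior(p_discussed_if_true, p_discussed_if_false, 0.5), 3))  # 0.818
```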

zoop188

Here be cynical opinions with little data to back them.

It's important to point out that "AI Safety" in an academic context usually means something slightly different from typical LW fare. For starters, as most AI work descended from computer science, it's pretty hard [1] to get anything published in a serious AI venue (conference/journal) unless you

  1. Demonstrate a thing works
  2. Use theory to explain a preexisting phenomenon

Both PhD students and their advisors want to publish in established venues, so by default one should expect academic AI Safety research to have a near-term prioritization and to be less focused on AGI/x-risk. That isn't to say research can't accomplish both things at once, but it's worth noting.

Because AI Safety in the academic sense hasn't traditionally meant safety from AGI ruin, there is a long history of EA-aligned people not really being aware of or caring about safety research. Safety has been getting funding for a long time, but it looked less like MIRI and more like the University of York's safe autonomy lab [2] or the DARPA Assured Autonomy program [3]. With these dynamics in mind, I fully expect the majority of new AI safety funding to go to one of the following areas:

  • Aligning current gen AI with the explicit intentions of its trainers in adversarial environments, e.g. make my chatbot not tell users how to make bombs when users ask, reduce the risk of my car hitting pedestrians.
  • Blurring the line between "responsible use" and "safety" (which is a sort of alignment problem), e.g. make my chatbot less xyz-ist, protecting training data privacy, ethics of AI use.
  • Old school hazard analysis and mitigation. This is like the hazard analysis a plane goes through before the FAA lets it fly, but now the planes have AI components. 

The thing that probably won't get funding is aligning a fully autonomous agent with the implicit interests of all humans (not just its trainers), which generalizes to the x-risk problem. Perhaps I lack imagination, but with the way things are, I can't really imagine how you get enough published in the usual venues about this to build a dissertation out of it.

 

[1] Yeah, of course you can get it published, but I think most would agree that it's harder to get a pure-theory x-risk paper published in a traditional CS/AI venue than other types of papers. Perhaps this will change as new tracks open up, but I'm not sure.

[2] https://www.york.ac.uk/safe-autonomy/research/assurance/

[3] https://www.darpa.mil/program/assured-autonomy 

zoop92

The core B/E dichotomy rang true, but the post also seemed to imply a correlated separation between autonomous and joint success/failure modes: building couples succeed or fail on one thing together; entertaining couples succeed or fail on two things separately.

I have not observed this to be true. Experientially, it seems more like a quadrant, where the building/entertaining distinction is about the type of interaction you crave in a relationship, and the autonomous/joint distinction is about how you focus your productive energies.

Examples:

  • Building / Joint: (as above) two individuals building a home / business / family together
  • Building / Autonomous: two individuals with distinct careers and interests, who both derive great meaning from helping the other achieve their goals. 
  • Entertaining / Joint: two individuals who enjoy entertainment and focus on that pursuit together. A canonical example might be childless couples who frequently travel, host parties, etc, or the "best friends who do everything together" couple everyone knows.  
  • Entertaining / Autonomous: (as above) individuals with separate lives who come together for conversation, sex, etc. 

I might be extra sensitive to this: my last relationship failed because my partner wanted an "EJ" relationship while I wanted a "BA" relationship, neither of which follows cleanly from the post.
