The point is not what Reddit commenters think, the point is what OpenAI thinks. I read OP (and the original source) as saying that if ARC had indicated that release was unsafe, then OpenAI would not have released the model until it could be made safe.

This seems to be another way of stating the thesis of (Which is a recommendation; both of you are correct.)

Okay, that's a pretty serious age gap. Probably explains a lot.

This is a minor nitpick, but if you're 25 I doubt that your parents actually qualify as Baby Boomers, a label usually limited to people born between 1946 and 1964. It's not impossible (a person born in 1964 who had a child at the age of 35 would have a 25-year-old child today), but it is unlikely.

I bring this up because I'm annoyed by the ongoing shift towards people referring to every generation older than them as "boomers".

Congrats on getting all the way to The End. You may take a bow and enjoy our applause. We hope there will not be an encore.

The linked PDF was not terribly detailed, but it more-or-less confirmed what I've long thought about climate change. Specifically: the mechanism by which atmospheric CO2 raises temperatures is well-understood and not really up for debate, as is the fact that human activity has contributed an enormous amount to atmospheric CO2. But the detailed climate models are all basically garbage and don't add any good information beyond the naive model described above.

ETA: in fact, this is exactly what the Berkeley Earth study concluded:

The fifth concern related to the over-reliance on large and complex global climate models by the Intergovernmental Panel on Climate Change (IPCC) in the attribution of the recent temperature increase to anthropogenic forcings.

We obtained a long and accurate record, spanning 250 years, demonstrating that it could be well-fit with a simple model that included a volcanic term and, as an anthropogenic proxy, CO2 concentration. Through our rigorous analysis, we were able to conclude that the record could be reproduced by just these two contributions, and that inclusion of direct variations in solar intensity did not contribute to the fit.

I feel doubly vindicated, both in my belief that complex climate models don't do much, but also that you don't need them to accurately describe the data from the recent past and to make broad predictions.
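To make concrete what that kind of two-term fit looks like, here is a minimal sketch using synthetic data. The functional forms (a log-CO2 term plus a volcanic-aerosol index) and all the coefficients are my own illustrative assumptions, not the Berkeley Earth study's actual model or numbers; the point is just that an ordinary least-squares fit with two regressors is enough to recover this kind of structure.

```python
import numpy as np

# Synthetic stand-in for a 170-year temperature record. All parameters
# here are illustrative assumptions, not values from the actual study.
rng = np.random.default_rng(0)
years = np.arange(1850, 2020)

# Assumed smooth CO2 growth (ppm) and a sparse volcanic-aerosol index.
co2 = 280.0 * np.exp((years - 1850) / 300.0)
volcanic = np.zeros(len(years))
spikes = rng.choice(len(years), size=8, replace=False)
volcanic[spikes] = rng.uniform(0.5, 1.0, size=8)

# "True" anomaly: log-CO2 warming term, volcanic cooling term, noise.
anomaly = (2.0 * np.log(co2 / 280.0)
           - 1.5 * volcanic
           + rng.normal(0.0, 0.05, len(years)))

# Two-regressor least-squares fit (plus an intercept column).
X = np.column_stack([np.log(co2 / 280.0), volcanic, np.ones(len(years))])
coef, _, _, _ = np.linalg.lstsq(X, anomaly, rcond=None)
print(coef)  # recovered [CO2 term, volcanic term, intercept]
```

With only these two contributions in the design matrix, the fit recovers the generating coefficients closely, which is the shape of the claim the study makes: a simple model with a CO2 proxy and a volcanic term reproduces the record.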

I know that your article isn't specifically about the goose story, but I have to say that I strongly disagree with your assessment of the "failure" of the goose story.

First, you asked ChatGPT to write you a story, and one of the fundamental features of stories is that the author and the audience are not themselves inside the story. It is entirely expected that ChatGPT does not model the reader as having been killed by the end of the world. In fact, it would be pretty bizarre if the robot did model this, because it would indicate a severe inability to understand the idea of fiction.

But is it a "swerve through the fourth wall" for the last paragraph to implicitly refer to the reader rather than the characters in the story? Only if you're writing a certain style of novelistic fiction, in which the fiction is intended to be self-contained and the narrator is implicit (or, if explicit, does not exist outside the bounds of the story). But if you're writing a fairy tale, a fable, a parable, a myth, an epic poem, a Greek drama, or indeed almost any kind of literature outside of the modernist novel, acknowledgement of the audience and storyteller is normal. It is, in fact, expected.

And your prompt is for the bot to write you a story about a goose who fails to prevent the end of the world. Given that prompt, it's entirely to be expected that you get something like a fable or fairy tale. And in that genre the closing paragraph is often "the moral of the story", which is always addressed to the audience and not the characters. When ChatGPT writes that the deeds of the goose "will always be remembered by those who heard his story," it isn't failing to model the world, but faithfully adhering to the conventions of the genre.

My point (which I intended to elaborate on, but didn't initially have time to) is that hosting one of these modern software platforms involves a whole stack of components, any one of which could be modified to produce apparently-noncompliant output without technically modifying any of the AGPL components. You could change the third-party templating library used by the Mastodon code, change the language runtime, or even modify the OS itself.

Which means I mostly agree with your point: the AGPL is not strict enough to actually ensure what it wants to ensure, and I don't think that it can ensure that without applying a whole bunch of other unacceptable restrictions.

There could be an argument that hosting it behind a proxy counts as modification.
