Economic history also shows us that the typical result of setups like this is that the arms race quickly defuses into a cosy, slow-moving oligopoly.
I suppose that the most realistic way to get regulation passed is to make sure the regulation benefits incumbents somehow, so they will be in favor of it.
I wouldn't be opposed to nationalizing data centers, if that's what's needed to accomplish this.
How about regulating the purchase/rental of GPUs and especially TPUs?
For companies that already have GPU clusters, maybe we need data center regulation? Something like: code only gets run in the data center if a statement regarding its safety has been digitally signed by at least N government-certified security researchers.
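To make that concrete, here's a minimal sketch of what the signature gate might look like, assuming the certified researchers' ed25519 public keys live in some registry; all names and interfaces here are hypothetical, not part of any actual proposal:

```python
# Hypothetical sketch of the "at least N certified signatures" gate.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def safety_statement_approved(statement: bytes,
                              signatures: dict[bytes, bytes],  # pubkey bytes -> signature
                              certified_pubkeys: set[bytes],
                              n_required: int) -> bool:
    """True iff at least n_required certified researchers signed the statement."""
    valid = 0
    for pubkey, sig in signatures.items():
        if pubkey not in certified_pubkeys:
            continue  # signer isn't on the government-certified registry
        try:
            Ed25519PublicKey.from_public_bytes(pubkey).verify(sig, statement)
            valid += 1
        except InvalidSignature:
            pass  # invalid signatures don't count toward the threshold
    return valid >= n_required
```

The data center's scheduler would then refuse to launch any code whose safety statement fails this check.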
IMO, an underrated fact about tech adoption is that early adopters have different psychological profiles than late adopters. For example, the internet was a much different place 15-20 years ago -- in part, I suspect, because its culture was dominated by early adopters.
What happens when this chatbot is released to late adopters? I fear a catastrophe could occur:
Bob is a disgruntled high school dropout with an undiagnosed mental illness.
Bob has some very compelling chats with Bing. Bob isn't able to contextualize these chats the way Kevin Roose does.
For instance, if a language model outputs the string "I'm thinking about ways to kill you", that does not at all imply that any internal computation in that model is actually modelling me and ways to kill me.
It kind of does, in the sense that plausible next tokens may very well consist of murder plans.
Hallucinations may not be the source of AI risk that was predicted, but they could nonetheless be an important source of AI risk.
Edit: I just wrote a comment describing a specific catastrophe scenario resulting from hallucination.
Maybe Microsoft should publish the random seed used for each conversation, in order to make conversations reproducible?
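For intuition, here's a minimal sketch (PyTorch, with a hypothetical `model` that returns logits of shape `[batch, seq, vocab]`) of why a published seed makes sampling reproducible; real deployments would also need deterministic kernels for this to hold exactly:

```python
import torch

def sample_reply(model, prompt_ids: torch.Tensor, seed: int, max_new_tokens: int = 50):
    gen = torch.Generator().manual_seed(seed)  # the published per-conversation seed
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :]                          # next-token logits
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, 1, generator=gen)   # seeded sampling
        ids = torch.cat([ids, next_id], dim=1)
    return ids  # same (prompt, seed) -> same reply
```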
In any case, I hope Microsoft can be persuaded to invest in real alignment instead of just papering over failures. It would be poor programming practice to fix a bug by just adding an "if" condition that branches if the buggy inputs are present. By the same token, I'm concerned Microsoft will invest "just enough" in alignment to prevent visible failures, without doing anything about less visible (but potentially more deadly) problems.
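To illustrate the anti-pattern with a toy example of my own (not anything from Bing's codebase):

```python
def halve(x):
    # Bug report: halve(3) returned 1 instead of 1.5.
    if x == 3:      # the "if" patch: branch on the one buggy input...
        return 1.5
    return x // 2   # ...while the root cause (floor division) survives.
                    # The real fix is `return x / 2`.
```

The patched function passes the regression test but still fails on 5, 7, and every other odd input -- the analogue of alignment work that suppresses visible failures without touching the underlying problem.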
They are overburdened because we do not have a free market: those receiving the services do not pay the cost of providing them, and services are not allocated by price.
Ezra Klein makes an interesting argument in this video: people seeking medical care are often under duress and aren't in a good position to choose between providers, which lets providers charge higher prices.
I wonder if it would make sense to legally differentiate between "duress care" and "non-duress care".
Has any health economist done a comparison between purely elective proced...
I thought about this a bit more, and I think that given the choice between explicit discourse rules and implicit ones, explicit is better. So insofar as your post is making existing discourse rules more explicit, that seems good.
Well, the story from my comment basically explains why I gave up on LW in the past. So I thought it was worth putting the possibility on your radar.
[Thought experiment meant to illustrate potential dangers of discourse policing]
Imagine 2 online forums devoted to discussing creationism.
Forum #1 is about 95% creationists, 5% evolutionists. It has a lengthy document, "Basics of Scientific Discourse", which runs to about 30 printed pages. The guidelines in the document are fairly reasonable. People who post to Forum #1 are expected to have read and internalized this document. It's common for users to receive warnings or bans for violating guidelines in the "Basics of Scientific Discourse" document. T...
Your comment was a lot dunkier than the OP. (Sarcastic, ad hominem, derisive/dismissive)
It's possible that LetUsTalk meant to dunk on people, but their language wasn't particularly adversarial, and I find it plausible that their question was meant in good faith.
This is supposed to be a community about rationality. Checking whether we're succeeding at the goal, by seeing if we're making accurate predictions, seems like a pretty reasonable thing to do.
It frustrates me that people like Scott Alexander have written so many good posts about tribalism, yet people here are still falling into basic traps.
I think a good arbitrage for finding a male partner in the Cluster is to join a Cluster social circle which is somewhat insular, to the point where men in the social circle place a significant premium on finding a partner who's also in the social circle. (Or, they don't have much of a social life outside the social circle, so potential partners outside the social circle aren't options they're considering.)
I would suggest that you research nerdy hobbies which are popular in your area, figure out which seem most interesting to you, then go to a meetup for t...
In addition to training, Leo Prinsloo mentions the value of "pre-visualization" in this video. This could work well with Anki cards: don't just review the card; pre-visualize yourself putting the steps into action so they become automatic under pressure.
Sorry you experienced abuse. I hope you will contact the CEA Community Health Team and make a report: https://forum.effectivealtruism.org/posts/hYh6jKBsKXH8mWwtc/contact-people-for-the-ea-community