I think it makes a huge difference that most cybersecurity disasters only cost money (or damage a company's reputation and leak customers' confidential information), while a biosecurity disaster can kill a lot of people. This post seems to ignore this?
Besides thinking it fascinating and perhaps groundbreaking, I don't really have original insights to offer. The most interesting democracies on the planet in my opinion are Switzerland and Taiwan. Switzerland shows what a long and sustained cultural development can do. Taiwan shows the potential for reform from within and innovation.
There's a lot of material to read, in particular on the events after the Sunflower Movement in Taiwan. Keeping links within lesswrong: https://www.lesswrong.com/posts/5jW3hzvX5Q5X4ZXyd/link-digital-democracy-is-within-reach and ht...
What's missing in this discussion is why one is talking to the "bad faith" actor in the first place.
If you're trying to get some information and the "bad faith" actor is trying to deceive you, you walk away. That is, unless you're sure that you're much smarter or have some other information advantage that allows you to get new useful information regardless. The latter case is extremely rare.
If you're trying to convince the "bad faith" actor, you either walk away or transform the discussion into a negotiation (it arguably was a negotiation in the first plac...
I think any outreach must start with understanding where the audience is coming from. The people most likely to make the considerable investment of "doing outreach" are in danger of being too convinced of their position and thinking it obvious: "how can people not see this?"
If you want to have a meaningful conversation with someone and interest them in a topic, you need to listen to their perspective, even if it sounds completely false and misses the point, and be able to empathize without getting frustrated. For most people to listen and consider any ob...
Indeed, systems controlling the domestic narrative may become sophisticated enough that censorship plays no big role. No regime is more powerful and enduring than one which really knows what poses a danger to it and what doesn't, one which can afford to use violence, coercion and censorship in the most targeted and efficient way. What a small elite used to do to a large society becomes something that the society does to itself. However, this is hard and I assume will remain out of reach for some time. We'll see what develops faster: sophistication of socie...
Uncharitably, "Trust the Science" is a talking point in debates that have some component which one portrays as "fact-based" and which one wants to make an "argument" about based on the authority of some "experts". In this context, "trust the science" means "believe what I say".
Charitably, it means trusting that thinking honestly about some topic, seeking truth and making careful observations and measurements actually leads to knowledge, that knowledge is intelligibly attainable. This isn't obvious, which is why there's something there to be trusted. It mean...
No offense, this reads to me as if it was deliberately obfuscated or AI-generated (I'm sure you didn't do either of these, this is a comment on writing style). I don't understand what you're saying. Is it "LW should focus on topics that academia neglects"?
I also didn't understand at all what the part starting with "social justice" is meant to tell me or has to do with the topic.
There has been some talk recently about long "filler-like" input (e.g. "a a a a a [...]") somewhat derailing GPT3&4, e.g. leading them to output what seems like random parts of their training data. Maybe this effect is worth mentioning and thinking about when trying to use filler input for other purposes.
just in case it turns out he's heir to a giant fortune or something.
That seems like a highly dubious explanation to me. I guess the woman's honest account (or what you'd get by examining her state of mind) would say that she does it out of habit, aiming to be nice and to conform to social conventions.
If that's true, the question becomes where the convention comes from and what maintains it despite the naively plausible benefits one might hope to gain by breaking it. I don't claim to understand this (that would hint at understanding a lot of human ...
While I completely agree in the abstract, I think there's a very strong tendency for systems-of-thought, such as propagated on this site, to become cult-like. There's a reason why people outside the bubble criticize LW for building a cult. They see small signs of it happening and also know/feel the general tendency for it, which always exists in such a context and needs to be counteracted.
As you point out, the concrete ways of thinking propagated here aren't necessarily the best for all situations and it's another very deep can of worms to be able to tell ...
Unfortunately for this scheme, I would expect AI video rendering to eventually become faster than real time. So, as the post implies, even if we had a reasonably good way to prove posteriority, this may not suffice to certify videos as "non-AI" for long.
On the other hand, as long as rendering AI videos is slower than real time, proof of priority alone might go a long way. You can often argue that prior to some point in time you couldn't reasonably have known what kind of video you should fake.
The "analog requirement" reminds me of physical unclonable functions, which might have some cross-pollination with this issue. I couldn't think of a way to make use of them but maybe someone else will.
I guess it depends on whether this post found anything at all that can be called questionable security practice. Maybe it didn't, but the author also wasn't a cybersecurity expert. Upon reflection, my earlier judgement was premature and the phrasing overconfident.
In general, I assume that OpenAI would view a serious hack as quite catastrophic, as it might e.g. leak their model (not an issue in this case), severely damage their reputation and undermine their ongoing attempt at regulatory capture. However, such situations didn't prevent shoddy security practice...
I agree. To me, the most interesting aspects of this (quite interesting and well-executed) exercise are getting a glimpse into OpenAI's approach to cybersecurity, as well as the potentially worrying fact that GPT3 made meaningful contributions to finding the "exploits".
Given what was found out here, OpenAI's security approach seems to be "not terrible" but also not significantly better than what you'd expect from an average software company, which isn't necessarily encouraging because those get hacked all the time. It's definitely not what people here call...
Someone you're likely to trade with (either because they offer you a trade or because they're around when you want to trade) is on average more experienced than you at trading. So the trades available to you are disproportionately unfavorable, and you cannot figure out which ones "are likely to lead to favorable trades in the future", by the assumption that they are incomparable.
This is what you mean by "trades are often adversarially chosen" in (1.), right? I don't understand why, or in what situation, you're dismissing that argument in (1.).
There can be a lot ...
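To make the adverse-selection point concrete, here's a toy Monte Carlo (my illustration, with made-up numbers):

```python
# Toy Monte Carlo of adverse selection (illustrative, made-up numbers):
# a counterparty who knows an asset's true value v offers to sell it at a
# price equal to the *unconditional* mean value. Accepting every offer
# still loses money, because you only see offers when v is below the price.
import random

random.seed(0)
price = 1.0                       # equals E[v], so the price looks "fair"
pnl, trades = 0.0, 0
for _ in range(100_000):
    v = random.gauss(1.0, 1.0)    # true value, known only to the seller
    if v < price:                 # informed seller offers only when it profits them
        pnl += v - price          # your P&L as the uninformed buyer
        trades += 1
print(f"average loss per accepted trade: {pnl / trades:.2f}")  # ~ -0.80
```

Even though the price equals the asset's unconditional mean value, the buyer loses on average, because the offers they actually see are filtered by the informed seller.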
Who is the target audience for this?
I doubt anyone has been calling themselves a "doomer". There are people on this site who would never get called that, but I haven't seen anyone else here label anyone a "doomer" yet. So it seems you're left with people who don't frequent this site and would probably dismiss your arguments as "a doomer complaining about being called a doomer"?
Did I miss people call each other "doomer" on LW? Did you also post something like this on Twitter?
To me, the arguments from both sides, for and against worrying about existential risk from AI, make sense. People have different priors and biased access to information. However, even if everyone agreed on all matters of fact that can currently be established, the disagreement would persist. The issue is that predicting the future is very hard, and we can't expect to be in any way certain about what will happen. I think the interesting difference between how people "pro" and "contra" AI-x-risk think about this is in how they deal with this uncertainty.
Imag...
I really have no idea, probably a lot?
I don't quite see what you're trying to tell me. That one (which?) of my two analogies (weather or RTS) is bad? That you agree or disagree with my main claim that "evaluating the relative value of an intelligence advantage is probably hard in real life"?
Your analogy doesn't really speak to me because I've never tried to start a company and have no idea what leads to success, or what resources/time/information/intelligence helps how much.
What point are you trying to make? I'm not sure how that relates to what I was trying to illustrate with the weather example, so I'll assume for the moment that you didn't understand my point.
The "game" I was referring to was one where it's literally all-or-nothing "predict the weather a year from now", you get no extra points for tomorrow's weather. This might be artificial but I chose it because it's a common example of the interesting fact that chaos can be easier to control than simulate.
Another example. You're trying to win an election and "plan long-term t...
Formally proving that some X you could realistically build has property Y is way harder than building an X with property Y. I know of no exceptions (formal proof only applies to programs and other mathematical objects). Do you disagree?
I don't understand why you expect the existence of a "formal math bot" to lead to anything particularly dangerous, other than by being another advance in AI capabilities which goes along other advances (which is fair I guess).
Human-long chains of reasoning (as used for taking action in the real world) neither require nor imp...