burmesetheater

Comments

Air Conditioner Repair

What am I supposed to do now? Chargeback?

If you want your money back, sure. The alternative is to fight a company experienced at not giving refunds.

As for warning the community, this kind of thing happens all over the place, all the time, in all kinds of industries. Complaints to the BBB and Yelp are famously ineffective, although they may demonstrate good citizenship to those who don't know better. Overall, this post is a bit confusing--it reads like someone from a completely different society was suddenly transported to the modern USA. What are you asking / telling us?

Limits of Bodily Autonomy

This is fundamentally misframed. For example, there's no reason not to support--in some cases--mandatory abortion if you support mandatory vaccination. The main benefits of abortion aren't to the user; they're to the potential conscious entity who is mercifully not forced to endure a predictably sub-par life, and to society. Abortion isn't really about personal (bodily) autonomy; that's just a useful political expedient.

edit: is this being downvoted because people think it's anti-abortion? To put this comment in more context: it's assumed that abortion has great utility for reducing S-risk (the mundane kind, if that's a reserved AI-danger term) and is also associated with positive social trends. With this in mind, if you compare abortion to vaccination, it makes sense to mandate abortions in at least some cases. It shouldn't matter, but in case it's still not clear, I am very pro-abortion.

AI Training Should Allow Opt-Out

once the cat is out of the bag it's out

Since this was apparently not clear: that's correct. The intention is not to discourage contribution to the open internet, including open source projects.

It is a problem when, in 2022, someone seriously proposes opt-out as a solution to anything. Our world does not "do" opt-out. Our concept of "opting out" of the big-data world is some inconsequential cookie banner with a prominent "yes" and a buried "no" to make the user feel good. We are far past the point of starting conversations. It's not productive or useful when it's predictably the case that one's publicly accessible data will end up being used for AI training by major players anyway, many of whom will have no obligation to follow even token opt-out and data-protection measures.

Conversations can be good, but founding one on a predictably dead-end direction does not seem to make much sense.

This isn't a suggestion to do nothing, it's a suggestion to look elsewhere. At the margin, "opting out" does not affect anything except the gullible user's imagination. 

Are long-form dating profiles productive?

Productive for what, exactly? There's a lot of assumed context missing from the post, including your gender, and the gender you're targeting. It's also not completely clear what kind of relationship you want, but we'll assume it's serious and long-term.

First: you're XY, looking for XX. In this case, @swarriner's post applies to most of the distribution. But since you're here, we'll assume the girl you're looking for is intellectually gifted, data-oriented, and may or may not be slightly on the spectrum. Even in this case, pictures are still worth a thousand words, but a lengthy profile probably won't hurt (though it may not help that much). If you're going for someone in the bulk of the distribution, a long profile will most likely hurt, not help. In short, make sure you have good pictures, and don't rely on your own judgement, or that of biased parties, to assess whether the pictures are good.

Second: You're XY looking for XY. In this case a long profile is probably pretty useful, but your pictures still need to be good.

Third: NB, for one, the other, or both. In this case a long description is probably generally useful. I don't know enough about this case to say more.

Fourth: You're XX looking for anything. A long profile isn't necessary; just some pictures and a short signal that you're smart and nerdy. The pictures don't need to be that good.

AI Training Should Allow Opt-Out

Disagree. Public (and private) data will be used by all kinds of actors under various jurisdictions to train AI models, and predictably only a fraction of these will pay any heed to an opt-out (and only a fraction of those who do may actually implement it correctly). So not only is an opt-out a relatively worthless token gesture; the premise of any useful upside appears to rest on the assumption that one can control what happens to information one has publicly shared on the internet. It's well evidenced that this doesn't work very well.

Here's another approach: if you're worried about what will happen to your data, then maybe do something more effective, like not put it out in public.

Against Active Shooter Drills

If your response to that idea is ‘what, what, that sounds horrible and terrifying and we should absolutely positively not do that’ then you seem like a normal human to me.

Or maybe it's dull, boring, and dumb, like most other things in school. How one perceives the threat of mass shootings, or anything else, is not one-size-fits-all. School tends to be fairly far down the list of one's influences at any age, and if one's dearer influences consider shootings a very unlikely threat to one's health--as is objectively the case--one might simply conclude the school is wasting time on something silly... business as usual.

So maybe a more direct problem is parents and other influences--who may or may not be distributed unequally by political belief--who promote the idea that a shooting is a direct threat to the life and limb of any given individual. Does this include the OP?

To generalize the problem: the world is stuffed with terrifying threats, and would-be threats, that are hard to process serenely at any age. Who is responsible? Maybe the humans who "decide" to create new humans practically on autopilot, as the result of a biological process that rewards fitness to reproduce above practically all else.

Alignment Risk Doesn't Require Superintelligence

Destructive alignment issues in our species are more mundane. Several leaders in the 20th century outright killed very large numbers of people for completely banal reasons like political ambition. Actually, your intuition that 9/11-scale events happen "all the time" is only off in a temporal sense: the number of humans unambiguously killed by the coordinated actions of relatively few other unaligned humans over the last 100 years is probably enough to amount to at least one 9/11 per day for that entire period. Humans are generally unaligned on several levels, from the personal to the egregoric, and the only reason this is lately becoming a problem in a species-risk sense is that we are only now acquiring some powerful technology. A more probable version of the scenario in this post is a suicidal leader triggering a large-scale nuclear war through use of their own arsenal, either by deception or after taking steps to reduce the possibility of refusal. Of course, it would be a great irony if, when global thermonuclear war is actually put to the test, the opposing forces were unable to make use of their deterrent.
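A rough back-of-envelope check of that "one 9/11 per day" claim (a sketch; the ~3,000-deaths figure and the 100-200 million range below are commonly cited estimates, not numbers from the original comment):

```python
# Back-of-envelope: does a century of human-on-human killing amount to
# roughly one 9/11 per day?
deaths_per_911 = 3_000            # approximate 9/11 death toll (assumption)
days_per_century = 100 * 365      # ~36,500 days
implied_deaths = deaths_per_911 * days_per_century
print(f"{implied_deaths:,}")      # 109,500,000

# Commonly cited estimates for 20th-century deaths from wars and democides
# fall roughly in the 100-200 million range, so "one 9/11 per day" is at
# least the right order of magnitude.
```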

Yes, AI research will be substantially curtailed if a lab causes a major disaster

and have the best forecasters

With forecasters from both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. And if one is such a great forecaster, one can forecast that jobs like this might not be, among other things, all that fulfilling.

I don't know if we've gotten to the point where they can fool the professionals at not getting fooled

Quite a few professionals (though not professionals at not getting fooled) still believe in a roughly 0% probability for a certain bio-related accident of a few years ago, thanks in large part to a spun story. Maybe the forecasters at the above places know better, but none of the entities who might act on that information are necessarily incentivized to push for regulation as a result. So it's not clear it would matter if most forecasters know AI is probably responsible for some murky disaster while the public believes humans are responsible.

Yes, AI research will be substantially curtailed if a lab causes a major disaster

Well, there's a significant probability COVID isn't a "natural" pandemic, although the story behind that is complicated and lacks an unambiguous single point of failure, which hinders uptake among would-be activists.

If there's an AI failure, will things be any different? There may be numerous framings of what went wrong or what might be done to fix it; details sufficient to give real predictive power will probably be complicated; and it's a good bet that however interested "the powers that be" are in GOF research, they're much, MUCH more interested in AI development. So there can be even more resources to spin the story in a way that forestalls any pressure that might build toward regulation.

Nuclear regulation also might not be a good example of a disaster forcing meaningful regulation, because the real pressure was against military use of nuclear power, which seems to have enjoyed general immunity from real regulation. So it's more like an AI incident resulting in the general public being banned from buying GPUs or something, while myriad AI labs still churn toward AGI.

Let's not name specific AI labs in an adversarial context

Anyone can try, but this seems to sit way out in a practically invisible part of the tail of obstacles to not being destroyed by AGI, if it's even an obstacle at all.
