"People think killing is bad" is one of the many reasons to think that "killing is bad". Other reasons might include "people die if they are killed", "I don't want to get killed", "I don't want my loved ones to get killed", "I don't want to get traumatized by killing", "I don't want to be traumatized by witnessing murder", and so on and so forth.
Lots of reasons to dislike murder. And we usually see dislike of murder developing naturally and independently in various cultures around the world. Sometimes it's only extended to people within a group, but it is invariably there.
If we need God for that principle, how is that possible?
Or let's look at this from a slightly different perspective.
The sixth commandment states "thou shalt not kill". It's simple and strong: all murder is bad.
But do you really think all murder is always morally indefensible?
I don't know your position on any topic, so it's hard for me to guess.
But you would probably agree that someone who killed by accident is not as evil as a serial killer. I'd expect you to feel more pity towards that person than resentment, even if by law he ends up in prison.
It's even harder if you get attacked and end up killing your attacker in self-defense. In some countries you will get jailed for that, but not in others. People generally support the defending side here, even in countries where it almost always ends in a prison sentence.
Speaking about law, what about capital punishment? It's controversial, sure, but it used to be much more normal before morals became essentially secular.
Talking about controversies, it's even harder in cases of euthanasia and abortion. These are hard moral topics, and I'm not sure the simple answer offered by religion holds up here either, considering all the other exceptions.
Or what about war? Soldiers do kill, but you will have to look really hard to find a religious figure denouncing soldiers who fight on their side.
None of this fits into the simple framework outlined by the "thou shalt not kill" commandment, does it?
And that's killing we are talking about. I too feel on a gut level that it's bad; I want to live in a world without killing, but the world we live in is much more complicated.
It's usually even harder when we talk about problems where, uhm, it's not people getting killed. It's easy to agree that killing is bad (until I show up with a controversial list of exceptions), but some other norms might not be quite as intuitive.
If the capability is there, the world has to deal with it, whoever first uses it. If the project is somewhat "use once, then burn all the notes", then it wouldn't make it much easier for anyone else to follow in their footsteps.
That's true if the capability is already there.
If the capability is maybe, possibly there, but requires a lot of research to confirm the possibility and even more to get it going, I'd suggest that we might deal with it by assessing the risks and not going down that route.
I mean, that's precisely what this community seems to think about GoF research, how is that case different?
Why do you think that this is easy to do and bad? There are currently a small number of people warning about AI. There are some scary media stories, but not enough to really do much.
What I was really trying to say is that if you have sufficient knowledge and resources to launch a proper media campaign, it might be easy to overshoot your goal if that goal involves scaring people.
Why do I think it's the case?
Because modern media excels at being scary. And any story that gains traction can snowball out of control really quickly.
And if it snowballs, most people are not going to hear or read your version of the arguments.
They would get a distorted, misunderstood and misrepresented version presented by journalists.
That is a risk.
Yes the same tech could be used for horrible brainwashy purposes, but hopefully we can avoid giving the tech to people who would use it like that.
And how do you ensure that this tech does not get into the wrong hands?
There are so, so many ways this can go wrong. What if your tech (or just necessary research) gets stolen? What if you are secretly hoping to use it for some other purpose? What if someone else on the team does that?
Or, more realistically, do you think that the moment the CIA decides your plan is workable they won't disappear you? That would be entirely consistent with their history and their goals.
I don't think you are so naive as to think you'd be able to hide that kind of research from them for long. I mean, you did not ask your questions in private.
And of course, there are other parties that would be willing to go to any lengths to get that tech; the CIA would not be alone in that.
I feel like the risks here are much higher than the potential benefits.
My (admittedly limited) knowledge of psychology and neurosciences suggests that this is not currently possible. Thankfully.
I feel like if you start seriously considering things that are themselves almost as bad as AI ruin in their implications in order to address potential AI ruin, you've taken a wrong turn somewhere.
If you can create a virus or something of the sort that makes people genuinely afraid of some vague abstract thing, you can make them scared of anything at all. Do I really need to spell out how that would be abused?
On the other hand, do you really need to go that far?
Launch a media campaign and you can get most of the same results without making the world much more dystopian than it already is.
The main risk here is that it's easy to scare people so much that all of the research gets shut down. And I expect that to be the reason there's not much scare about it in the media yet. As far as I remember, that's why most researchers in the field were at first reluctant to admit there's a risk at all.
I understand your point, and I for the most part agree. It is important to understand the basics.
What I was trying to say is: if you did not get the basics from your first attempt to learn them, maybe try to approach them differently.
Look for a different textbook, ask someone who is not your current teacher, look for a popular explanation (if you are completely lost) or a more technical one (if the original was not detailed enough), and so on.
Try to learn the basics, but switch approaches if you are stuck.
I feel like it might help with motivation too, as it should be more exciting than plain repetition.
It might be inefficient for pure memorization, but maybe it can help you form more accurate maps, which is more valuable in itself.
But is it the best way to help you form higher-level concepts and practise a more zoomed-out perspective? Is it the best way to understand things rather than just memorize them? I'm not sure.
I suspect it's better to look for other approaches: practical applications of newly acquired knowledge, ways to test your understanding, trying to see if you understand all the implications, maybe looking for alternative explanations, or different representations of those explanations.
I know quite a few examples of people, often much smarter than me, who struggled with conventional ways of explaining some concept, only to get it instantly once they saw some alternative explanation.
I do not have a good psychological explanation for this, unfortunately. I was only taught bad ones when I studied psychology at university (I mean, ones practically disproved by now). Another reason to avoid putting too much weight on memorization, I guess.
I see a few problems with trust networks that are not generally present in the markets.
I'm glad that your experience was mostly positive, but I'm aware of many examples where things are more tricky.
Part of it comes from two very different but common attitudes towards transactions between friends and family. Some people think that all work should be paid, always. Others expect and provide free help.
These positions are clearly incompatible and predictably lead to conflicts, especially when people don't communicate their position clearly. They often think that their position is obviously right and don't even consider the alternative until a conflict arises.
Another problem is that in transactions with friends and relatives there's often pressure to work informally. Which is a risk: if they fuck it up, you can't even sue them. And it's not that unlikely that they do, since you probably did not select them based on their responsibility or their expertise in whatever field they work in. So you might lose all the resources AND hurt your relationship on top of it.
It's not that these problems are completely unavoidable, but people do get burned.
Some personal examples, so that I don't just parrot stories I've read on the internet.
Extremely bad example.
One of my relatives was the technical director and de facto co-owner of a local ISP. De jure he was nobody; he never got around to doing all the paperwork, partly because he trusted his "friends", partly because there were some complicated issues, partly because he is rather lazy. Years and years of no consequences, until they decided to sell the company. Guess whose opinion was not considered and who got nothing out of the deal. I know, it's extreme and it's not only trust-related, but these things do happen.
Somewhat good example.
Back when I was working at an e-shop, our courier got sick. We could not realistically hire a replacement in time, and outsourcing would be extremely expensive. My boss asked our sysadmin to help the company out. I remember him discussing that decision with someone: "I know he would not decline, he is a nice guy, but he is too shy to name a fair price, and he would be disappointed if he does not get paid fairly". He ended up paying him slightly more per hour than he paid our actual courier.
To conclude, my main points are: people hold incompatible default attitudes about paying friends and family for work, and informal transactions within trust networks leave you with little recourse when things go wrong.
I have not tried the square test before, and it's weird. On my first attempt I just completely failed. I've certainly seen enough squares in my life to imagine them, but it just did not happen. Then I imagined drawing that square, not the tactile sensation of it, but just the process of going from A to B to C to A, but that only gets me the 3rd type of square. I can push it to the 4th with additional effort, but I can't seem to get past that just yet. So it's far from red.
The shape is certainly easier for me to imagine than the color; colors tend to be really bleak.
It reminds me of another classic example, where they ask you to imagine an apple.
On my very first attempt I found that difficult for some reason, but after a while I had no trouble imagining any apple I want: green, red, yellow, mixed-color, stem with or without a leaf, no stem, partially eaten, cut in half, partially rotten, with a worm inside it, and so on.
But then again I have a lot more experience paying attention to apples than to abstract red squares, even if I do see squares way more often. Maybe that adds to the effect. Or maybe all the possible transformations of shape distract me enough from color that I fail to notice how poor my imagination of it really is.
When I first learned about aphantasia, I thought it described me: I don't naturally visualize when I read. But after closer inspection, I found out that I can visualize if I put some effort into it. The images might not be terribly vivid, but they are recognizable enough.
So technically I don't have aphantasia, but my experience is pretty close, and it's all kinda confusing. For most of my life, I did not even realize that this was not normal.
I was always a fast reader because of that: you save time and mental resources by not visualizing, so that's an upside. As for downsides, I can't imagine them, haha.
He did not threaten to nuke Ukraine. He threatened to use nukes against NATO countries if they get directly involved in that conflict. Not a direct quote, but a summary would be: "We know we can't win a war against NATO, but we still have nuclear weapons - there will be no winners".