_self_2y20

So I didn't know this was a niche philosophy forum with its own subculture. I'm way out of my element. My suggestions weren't very relevant in that context; I thought it was a general forum. I'm still glad there are people thinking about it.

The links you sent are awesome; I'll follow those researchers. A lot of my thoughts here are outdated since things keep changing, and I'm still putting them together. So I probably won't be writing much for a few months, until my brain settles down a little.

Am I guilty of "short-termism"? For the long term, as in the fate of humanity, I don't think I'm the right person to debate there.

Thanks for commenting on my weird intro!

_self_2y10

Oh no, the problem is already happening, and the bad parts are more dystopian than you probably want to hear about, lol.

From the behaviorism side: yes, it's incredibly easy to manipulate people via tech, and as you state, it's not always done on purpose. But taken as a whole it's frequently insomnia-inducing.

Your point about knowing your weakness and preparing is spot on!

  • For the UX side of this, look up Harry Brignull and Dark Patterns. (His work has been solid for 10+ years; to my knowledge he was the first to call out some real BS that went un-called-out for most of the 2010s.)

  • The Juul lawsuit is another good one if you're interested in advertising ethics.

  • Look up "A/B testing media headlines outrage addiction". (A toy sketch of how one of these tests works follows this list.)

  • If you want to send your brain permanently to a new dimension, look up the RIA propaganda advertising dataset.

  • For disinformation: "Calling Bullshit". There's a course and materials online from two professors who just popped off one day.

  • Want to read about the perils of historical metric optimization and have a huge moral/existential crisis? Read about Robert McNamara.

  • For actual solutions on a nonacademic, consumer level (!!): the Data Detox Kit and the nonprofit that runs that page. So excellent.
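
Since the A/B testing bullet can sound abstract, here's a minimal sketch of what one of those headline tests amounts to under the hood, using a standard two-proportion z-test. To be clear: the headlines, the numbers, and the function are all hypothetical, made up purely to illustrate the mechanic.

```python
# Hypothetical sketch of a headline A/B test, the kind the bullet above
# alludes to. All names and numbers are invented for illustration; this is
# not anyone's real pipeline.
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return click rates, z statistic, and two-sided p-value for A vs. B."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# Variant A: neutral phrasing. Variant B: outrage-bait phrasing.
p_a, p_b, z, p = two_proportion_z_test(
    clicks_a=480, views_a=10_000,  # "City council updates zoning rules"
    clicks_b=790, views_b=10_000,  # "Council's OUTRAGEOUS new zoning grab"
)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
# If p is tiny, the angrier headline "wins" and ships. Run this loop on
# every story, every day, and outrage gets selected for automatically.
```

The unsettling part isn't the statistics, which are a century old; it's that this loop costs almost nothing to run on every story, every day, at scale.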

The problem isn't so much the manipulation itself. Isn't that what all marketing has been, forever: a mix of creativity and manipulation of attention and desire?

A long time ago someone realized we respond to color: we eat more when we see red, we feel calmer when we see blue. Were they being manipulative? Yes. Is it industry knowledge now? Yes. Maybe the first person just felt like making it blue for no reason, but does everyone do it now because it works? Yes.

That's the nature of it. But now the SPEED at which manipulative techniques can be researched, fine-tuned, learned, used, and scaled up is unheard of.

There's no time for people, or the field of psychology, to keep up. I think it's a public health risk, and a risk to our democracy, that we aren't receiving more public education on how to handle it.

When subliminal advertising had its run back in the 1950s, the US eventually cracked down on it for being shady asf. Since then, we haven't really done a lot else. New manipulation techniques develop too fast and too frequently now. And they're often black boxes.

Now the solutions for the problems tech causes are usually folk knowledge, disseminated long before education, psychology, or policy catch up. We should be bloody faster.

Instead the younger generation grows up with the stuff and absorbs it. Gets it all mixed up in their identity. And then has to spend years reverse-engineering themselves to get it back out.

Didn't we all do that? Sure, but at a slower pace. What about Gen Alpha? Are they ever going to get to rest? Will they ever be able to separate themselves from the algorithms that raised them? Great questions to ask!

Frankly, Gen Z is already smarter and faster at navigating this new world than we are. That's scary, because it means we're often helpless to help them.

Some of it we can't even conduct relevant research on, because ethics review boards consider the treatment condition too unethical. (See: porn addiction studies.)

Knowledge is power. But power is also knowledge, and it's tightly guarded. Watch how people high up in tech regulate technology use with their own children.

The general resistance to addressing the core of the issue, and the features that keep the car driving in this direction... that's valuable information in itself. How do we balance this with the economy as a whole, and the fact that the machine seems to eat the weak to keep spinning? I don't know! Someone else please figure out that answer, thank you.

But one of the most helpful things I think we can do is provide education. Behaviorism and emotion are powerful, and you can use them on yourself, too. You are your own Pavlov and your own dog. Sometimes other people will be Pavlov. It's best if you're consciously aware of it when that happens, and OK with it.

The other thing is preserving the right to live low-tech. (I hope unions are up on this already.) Biometric tracking is nice and helpful sometimes. And sometimes it's not. As always, if you can't outrun them, confuse them.

  • If something in this comment is incorrect, please correct me. I was freeballing it.

_self_2y6-3

I'm one of the new readers; I found this forum through a Twitter thread that was critiquing it. I have a psychology background, then switched to ML, and I've been following AI ethics for over 15 years, hoping for a long time that the discussion would leak across industries and academic fields.

Since AI (however you define it) is a permanent fixture in the world, I'm happy to find a forum focused on critical thinking either way, and I enjoy seeing these discussions on the front page. I hope it's SEO'd well too.

I think newcomers and non-technical contributors are awesome. 8 years ago I was so desperate to see that people in the AI space were thinking about and critically evaluating their own decisions from a moral perspective, since I had started seeing unmistakable effects of this stuff in my own field, with my own clients.

But if the forum starts attracting a ton of this, you might want to consider splitting off a secondary forum, since this stuff is needed but may dilute the original purpose of this one.

My concerns about AI lie firmly in the chasm between "best practices" and what actually occurs in practice.

Optimizing for the bottom line with no checks and balances, a learned blindness to common sense (see: Robert McNamara), and blindness toward our own actions. "What we do to get by."

It's not overblown. But instead of philosophizing about an AI doomsday, I think there are QUITE enough bad practices going on in industry right now, affecting tons of people, that deserve attention.

Focusing on preventing a theoretical AI takeover isn't entirely a conspiracy thing; I'm sure it could happen. But it is not as helpful as:

  • getting involved with policy
  • education initiatives for the general public
  • diversity initiatives in tech and leadership
  • business/startup initiatives in underprivileged communities
  • formal research on the common-sense things that are leading to shitty outcomes for underprivileged people
  • encouraging collaboration, communication, and transfer of knowledge between different fields and across economic lines
  • teaching people who care about this stuff good marketing and business practices
  • a commitment to seeing beyond bullshit in general, to stop pretending, and to push toward understanding power dynamics
  • cybersecurity as it relates to human psychology, propaganda, and national security (I hope some people in that space are worried)

Also, consider how delving into the depths of humanity affects your own mental health and perspective. I've found myself to be much more effective when focusing on grassroots, hands-on stuff.

Stuff from academia trickles down to reality far too slowly to keep up with the progression of tech, which is why I removed myself from it. But I still love the concept here, and I'm glad that people outside of AI are thinking critically about it.