henophilia

Comments

The Toxicity of Metamodernism: A Public Service Announcement
henophilia · 5mo · 10

Metamodernism has actively hurt me. It pretends to be meaningful, but it isn't. As mentioned, this is more of a public service announcement, so that other people won't fall into the same trap.

The Toxicity of Metamodernism: A Public Service Announcement
henophilia · 5mo · -3-4

I appreciate you confirming my post :) Just to go through it again:

  • I agree that I should have defined that more clearly. I define metamodernism as "A movement composed of the people identifying themselves as metamodernists".
  • Exactly, because it's my personal emotional experience.
  • Fair enough, that's also more an observation.
  • I see, I should also have been clearer here. I don't mean that EA as an organization is objectively fake; it's just that all the EA people I have personally met so far weren't interested in actually doing altruism effectively in the slightest; all they wanted to do was talk about it.
  • It's fascinating how you're simply invalidating my experience. That's exactly what I mean by the emotionless, cold, joyless, sometimes downright humiliating attitude of people in this forum, with a complete lack of empathy. Why are you this way? Are emotions not rational?
AI Alignment and the Financial War Against Narcissistic Manipulation
henophilia · 8mo · 10

Nice, thanks for the feedback! Absolutely, for me it was more of a stream of consciousness, just to get it out of my system, so I'll work on refining it soon! It's really fascinating how much overlap AI alignment has with mental illnesses in humans :)

henophilia's Shortform
henophilia · 8mo · 10

Oh wow, I didn't even know about that! I had always only met EA people in real life (who always suggested that I participate in the EA forums), but didn't know about this program. Thanks so much for the tip, I'll apply immediately!

henophilia's Shortform
henophilia · 8mo · 21

Exactly! And if we can make AI earn money autonomously instead of greedy humans, then it can give all of it to philanthropy (including more AI alignment research)!

And of course! I've been trying to post in the EA forums repeatedly, but even though my goals are obviously altruistic, I feel like I'm just expressing myself badly. My posts there were always just downvoted, and I honestly don't know why, because no one there ever gives me good feedback. So I feel like EA should be my home turf, but I don't know how to get people engaged. I know that I have many unconventional ways of formulating things, and looking back, maybe some of them were a bit "out there" initially. But I'm just trying to make clear to people that I'm thinking with you, not against you, and somehow I'm really failing at that 😅

henophilia's Shortform
henophilia · 8mo · 10

Oh, you need to look at the full presentation :) The way this approaches alignment is that the profits don't go into my own pocket, but into philanthropy. That's the point of this entire endeavor: we, as the (at least subjectively) "more responsible" people, see the inevitability of AI-run businesses, but channel the profits into the common good instead.

henophilia's Shortform
henophilia · 8mo · 10

Just look at this ChatGPT output. Doesn't this make you concerned? https://chatgpt.com/share/67a7bc09-6744-8003-b620-d404251e0c1d

henophilia's Shortform
henophilia · 8mo · 10

No, it's not hard, because doing business is not really hard.

OpenAI is just fooling us into believing that powerful AI costs a lot of money, because they want to maximize shareholder value. They have no interest in telling us the truth, namely that with the LLMs that already exist, it'll be very cheap.

As mentioned, the point is that AI can run its own businesses. It can literally earn money on its own. And all it takes is a few well-written emails and very basic business and sales skills.

Then it earns more and more money, buys existing businesses and creates monopolies. It just does what every ordinary businessman would do, but on steroids. And just like for any basic businessman, it doesn't take much: instead of cocaine, it has a GPU to run its inference on. And instead of writing just a single intimidating, manipulative email per hour, it writes thousands per second, easily destroying every kind of competition within days.

This doesn't take big engineering. It just takes a bit of training on the most ruthless sales books, some manipulative rhetoric mixed in, and API access to a bank account and to eGovernment in a country like Estonia, where you can form a business with a few mouse clicks.

Powerful AI will not be powerful because it'll be smart, it'll be powerful because it'll be rich. And getting rich doesn't require being smart, as we all know.

henophilia's Shortform
henophilia · 8mo · 1-1

I'm not saying that I know how to do it well.

I just see it as a technological fact that it is very possible to build an AI which exerts economic dominance by just assembling existing puzzle pieces. With just a little bit of development effort, AI will be able to run an entire business, make money and then do stuff with that money. And this AI can then easily spiral into becoming autonomous, and god knows what it'll do with all the money (i.e. power) it will then have.

Be realistic: shutting down all AI research will never happen. You can advocate for it as much as you want, but Pandora's box has been opened. We don't have time to wait until "humanity figures out alignment", because by then we'll all be enslaved by AGI. If we don't take the first step in building it, someone else will.

henophilia's Shortform
henophilia · 8mo · 1-1

Well, I'd say that each individual has to make this judgement for themselves. No human is objectively good or bad, because we can't look into each other's heads.

I know that we may also die even if the first people building super-AIs are the most ethical organization on Earth. But if we, as part of the people who want ethical AI, don't start building it immediately, those who are the exact opposite of ethical will do it first. And then our probability of dying is even higher.

So why this all-or-nothing mentality? What about reducing the chances of dying from AGI by building it first, because otherwise others who are much less aware of AI alignment (e.g. Elon, Kim and the like) will build it first?

Posts

-19 · The Toxicity of Metamodernism: A Public Service Announcement · 5mo · 4
-22 · Reconsidering Money: The Case for Freigeld in the Digital Age and a Networked Future · 5mo · 0
1 · AI Safety Policy Won't Go On Like This – AI Safety Advocacy Is Failing Because Nobody Cares. · 7mo · 1
-17 · AI Alignment and the Financial War Against Narcissistic Manipulation · 8mo · 2
1 · henophilia's Shortform · 8mo · 14
1 · The Capitalist Agent · 8mo · 10
-11 · Debunking the myth of safe AI · 10mo · 8