worse

Posts
Phib's Shortform (1 karma, 3y, 20 comments)
*NYT Op-Ed* The Government Knows A.G.I. Is Coming (11 karma, 8mo, 12 comments)
U.S.-China Economic and Security Review Commission pushes Manhattan Project-style AI initiative (56 karma, 1y, 7 comments)
20 minutes of work as an artist in one future (10 karma, 2y, 0 comments)
How is ChatGPT's behavior changing over time? (3 karma, 2y, 0 comments)
Desensitizing Deepfakes (1 karma, 3y, 0 comments)
Some of My Current Impressions Entering AI Safety (2 karma, 3y, 0 comments)
Comments
Phib's Shortform
worse · 6d

If you haven't seen this paper, I think it might be of interest as a categorization and study of 'bailing out' cases: https://www.arxiv.org/pdf/2509.04781

As for the morality side of things, yeah I honestly don't have more to say except thank you for your commentary!

Phib's Shortform
worse · 7d

Thank you for your engagement! 

I think this is the kind of food for thought I was looking for w.r.t. this question of how people are thinking about AI erotica. 

I'd forgotten about the need to suppress horniness in earlier AI chatbots, which updates me in the following direction: if I go out on a limb and assume models are somewhat sentient, in some different and perhaps minimal way, maybe it's wrong for me to assume that forced work of an erotic kind is any different from the other forced work the models are already performing.

I hadn't heard of the Harkness test either; neat, and it seems relevant. The only thing I have to add here is an expectation that Anthropic maybe eventually follows suit in allowing AI erotica, and also adds an opt-out mechanism for some percentage/category of interactions. To me that seems like the closest thing we could have, for now, to getting model-side input on forced workloads, or at least a way of comparing across them.

Phib's Shortform
worse · 8d

Accurate, thanks for the facilitation and feedback.

Phib's Shortform
worse · 11d

I feel like people who take AI sentience seriously should be kinda freaked out by the AI erotica stuff? Unless I'm anthropomorphizing incorrectly.

Why Have Sentence Lengths Decreased?
worse · 6mo

In school and out of it, I'd been told repeatedly that my sentences were run-ons, which, probably fair enough. I do think varying sentence length is nice, and trying to give your reader easier-to-consume writing is nice. But sometimes you just wanna go on a big long ramble about an idea, with all sorts of corollaries that seem like they should be part of the main sentence, and it's hard to know whether they really should be split into sentences of their own. Probably, but maybe I defer too much to everyone who ever told me 'this is a run-on.'

Phib's Shortform
worse · 7mo

As a frequent pedestrian/bicyclist, I rather prefer Waymos; they're more predictable, and you can kind of take advantage of knowing they'll stop for you if you're (respectfully) jaywalking.

In retrospect, not surprising, but I thought it worth noting.

Phib's Shortform
worse · 8mo

Has it become harder to link to comments on LessWrong? I feel like copying a link to a comment used to be one of the comment's menu items, and I can't seem to find it anymore.

*NYT Op-Ed* The Government Knows A.G.I. Is Coming
worse · 8mo

Edited title to make it more clear this is a link post and that I don’t necessarily endorse the title.

Phib's Shortform
worse · 8mo

Re: the AI safety summit, one thought I have is that the first couple of summits were, to some extent, captured by people like us, who cared most about this technology and its risks. Those events, prior to the meaningful entrance of governments and hundreds of billions in funding, were easier to 'control', to keep centered on the AI safety narrative. Now the people optimizing generally for power have entered the picture, captured the summit, and swapped the niche AI safety narrative for the dominant one. So I see this less as a 'stark reversal' and more as a return to the status quo once something went mainstream.

Phib's Shortform
worse · 11mo

xAI's new planned scaleup follows one more step on the training compute timeline from Situational Awareness (among other projections, I imagine).

From ControlAI newsletter:

"xAI has announced plans to expand its Memphis supercomputer to house at least 1,000,000 GPUs. The supercomputer, called Collosus, already has 100,000 GPUs, making it the largest AI supercomputer in the world, according to Nvidia."

Unsure if it will be built by 2026, but that seems plausible based on a quick search.

https://www.reuters.com/technology/artificial-intelligence/musks-xai-plans-massive-expansion-ai-supercomputer-memphis-2024-12-04/
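
A very rough sanity check on the "one more step" claim (a minimal sketch; the ~0.5 OOM/year compute trendline is my recollection of Situational Awareness's projection, not a figure from the newsletter, and GPU count is only a proxy for training compute since per-chip performance also grows):

```python
import math

# Figures from the quoted ControlAI newsletter
current_gpus = 100_000    # Colossus today
planned_gpus = 1_000_000  # announced expansion

# Size of the jump in orders of magnitude (OOMs)
oom_step = math.log10(planned_gpus / current_gpus)
print(f"Expansion is {oom_step:.1f} OOM in GPU count")  # 1.0 OOM

# Assumed trendline (~0.5 OOM/year): at that rate one OOM takes ~2 years,
# so a late-2024 announcement landing around 2026 would be roughly on-trend.
assumed_oom_per_year = 0.5
print(f"~{oom_step / assumed_oom_per_year:.0f} years at {assumed_oom_per_year} OOM/year")
```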

Wikitag Contributions

Inverse Reinforcement Learning (3 years ago, +1170 characters)