momom2

AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
"The only good description is a self-referential description, just like this one."

Comments

momom2

I can imagine plausible mechanisms for how the first four backlash examples were a consequence of perceived power-seeking from AI safetyists, but I don't see one for e/acc. Does anyone have one?

Alternatively, what reason do I have to expect that there is a causal relationship between safetyist power-seeking and e/acc even if I can't see one?

momom2

That's not interesting to read unless you say what your reasons are and how they differ from other critics'. Perhaps don't spell it all out in a comment, but at least link to a post.

momom2

Interestingly, I think that one of the examples of proving too much on Wikipedia can itself be demolished by a proving-too-much argument, but I'm not going to say which one it is because I want to see if other people independently come to the same conclusion.

For those interested in the puzzle, here is the page Scott was linking to at the time: https://en.wikipedia.org/w/index.php?title=Proving_too_much&oldid=542064614
The article was edited a few hours later, and subsequent conversation showed that Wikipedia editors came to the conclusion Scott hinted at, though the suspicious timing indicates that they probably did so on reading Scott's article rather than independently.

momom2

Another way to avoid the mistake is to notice that the implication itself is false, regardless of the premises.
In practice, people's beliefs are not deductively closed, and (in the context of a natural-language argument) we treat propositional formulas as tools for computing truth values rather than as timeless statements.

momom2

it can double as a method for creating jelly donuts on demand

For those reading this years later, here's the comic that shows how to make ontologically necessary donuts.

momom2

I'd appreciate examples of the sticker shortcut fallacy with in-depth analysis of why they're wrong and how the information should have been communicated instead.

momom2

"Anyone thinks they're a reckless idiot" is far too easy a bar to reach for any public figure.
I do not know of major anti-Altman currents in my country, but considering surveys consistently show a majority of people worried about AI risk, a normal distribution of extremeness of opinion on the subject ensures there'll be many who do consider Sam Altman a reckless idiot (for good or bad reason - I expect a majority of them to consider Sam Altman to have any negative trait that comes to their attention because it is just that easy to have a narrow hateful opinion on a subject for a large portion of the population).

momom2

I have cancelled my subscription as well. I don't have much to add to the discussion, but I think signalling participation in the boycott will help conditional on the boycott having positive value.

momom2

Thanks for the information.
Consider, though, that for many people the subscription price is justified by the convenience of access and use.

It took me a second to see how your comment relates to the post, so here it is spelled out for others:
Given this information, using the API preserves most of the benefits of access to SOTA AI (assuming away the convenience value) while destroying most of the value for OpenAI, which makes this a very effective intervention compared to cancelling the subscription entirely.

momom2

When I vote, I basically know the full effect this has on what is shown to other users or to myself. 

Mind-blowing moment: it has been a private pet peeve of mine that it was very unclear what policy I should follow when voting.

In practice, I vote mostly on vibes (and expect most people to), but given my own practices for browsing LW, I also considered alternative approaches.
- Voting in order to assign a specific score (weighted for inflation by time and author) to the post. Related uses: comparing karma of articles, finding desirable articles on a given topic.
- Voting so that the post's karma matches that of an equivalent-value article. Related uses: same; perhaps effective as a community norm, but more effortful.
- Voting up if the article is good, down if it's bad (after memetic/community/bias considerations) (regardless of current karma). Related uses: karma as indicator of community opinion.

In the end, making my votes consistent under any of these policies turned out to require too much calculation, so I came back to vibes, amended by implicit consideration of what a consistent voting policy would do.

I was trying to figure out ways to vote that would put me in a class of voters whose votes marginally improved my own browsing experience.
It never occurred to me to model the impact it would have on others and to optimize for their experience.
This sounds like an obviously better way to vote.

So for anyone who was in the same situation as me: please optimize directly for others' browsing experience (or your own) rather than overcalculating decision-theoretic whatevers.
