Comments

Assuming you're the first to explicitly point out that lemon-market feature of 'random social interaction': kudos. I think it's a great way to express some extremely common dynamics.

An anecdote from my country, where people ride trains all the time, fits your description, although it always takes a weird kind of extra 'excuse' in this case: it would often feel weird to randomly talk to your seat neighbor, but ANY slightest excuse (a sudden bump in the ride, an info-speaker malfunction, a grumpy ticket collector, one weird word from a random person in the wagon... any smallest thing) will extremely frequently get the silent ones to start a conversation, which then easily lasts for hours if the ride does. And I think some sort of social lemon-market dynamic may indeed help explain it.

Funny is not the only adjective this anecdote deserves. Thanks for sharing this great wisdom/reminder!

I would not search for smart ways to detect it. Instead, look at it from the outside, and from there I don't see why we should have much hope for it to be detectable:

Imagine you create your own simulation. Imagine you are much more powerful than you actually are, so you can make the simulation as complex as you want. Imagine that in your coolest run, your little simulatees start wondering: how could we trick Suzie so that her simulation reveals the reset?!

I think you'll agree their question is futile; once you reset your simulation, surely they won't be able to detect it: while setting up the simulation might be complex, reinitializing it at a given state, with no traces left within the simulated system, seems like the simplest task of all.

And so, I'd argue, we might well expect the same to hold for our (potential) simulation, however smart your reset-detection design might be.
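
(To make that concrete, here is a minimal toy sketch in Python, with entirely made-up state and names, of why a restore leaves nothing behind to detect: the restored state is just a copy of the checkpoint, so there is no place inside it where a 'we were reset' trace could live.)

```python
# Toy sketch (purely illustrative): restoring a checkpoint is trivial compared
# to running the simulation, and the restored state is bit-for-bit the saved
# state, so nothing inside it can record that a restore happened.
import copy
import random

def step(state):
    """Advance the toy 'world' by one tick."""
    state["tick"] += 1
    state["world"].append(random.random())
    return state

state = {"tick": 0, "world": []}
for _ in range(1000):
    state = step(state)

snapshot = copy.deepcopy(state)   # the simulator saves a checkpoint...

for _ in range(500):
    state = step(state)           # ...lets things run on...

state = copy.deepcopy(snapshot)   # ...and resets. The easy part.

# From the inside, the restored state is indistinguishable from the original
# checkpoint: there is simply no variable left in which to detect the reset.
assert state == snapshot
```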

My impression is that what you propose to supersede utilitarianism with is rather naturally already encompassed by utilitarianism. For example, when you write

If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions.

I disagree that typical conceptions of utilitarianism - not strawmen thereof - are in any way "stuck" here at all: "Of course," a classical utilitarian might well tell you, "we'll have to trade off between the candy bar and the fatness it brings; that is exactly what utilitarianism is about." And you can extend that to the other nuances you bring up: whatever we ultimately desire or prefer or what-have-you most, as classical utilitarians we'd aim exactly at that, quasi by definition.
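
To spell that out with a toy example (all numbers below are of course made up), the supposedly 'stuck' situation is just a trade-off the classical utilitarian happily computes: each option gets the sum of the utilities it yields, and we pick the option with the highest total, including any 'frontier' option that partly satisfies both desires.

```python
# Toy sketch of the classical utilitarian trade-off, with made-up utilities.
options = {
    "eat the candy bar":        {"taste": 5, "staying_fit": -3},
    "skip the candy bar":       {"taste": 0, "staying_fit": 2},
    "eat half / exercise more": {"taste": 3, "staying_fit": 1},  # a 'frontier' option
}

# Pick the option maximizing total utility across all the agent's desires.
best = max(options, key=lambda o: sum(options[o].values()))
print(best)  # -> "eat half / exercise more" with these made-up numbers
```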

Thanks for the link to the interesting article!

Answer by FlorianH, Apr 09, 2024

If I understand you correctly, what you describe does indeed seem a bit atypical, or at least not shared by everyone.

Fwiw, pure speculation: Maybe you learned a great deal from working on and examining advanced types of code, so you learned to understand advanced concepts etc. But you mostly learned to code on the basis of already existing code/solutions.

Often, instead, when we systematically learn to code, we learn bit by bit from the simplest examples, and we don't just learn to understand them: a bit like when starting to learn basic math, we are constantly challenged to put the next element learned directly into practice, on our own. This ensures we master all that knowledge in a highly active way, rather than only passively.

This seems to suggest there's a mechanistically simple yet potentially tedious path for you to learn to create solutions from scratch more actively: force yourself to start with the simplest things to code actively, from scratch, without looking at a solution first. Just start with a simple problem that 'needs a solution' and implement it (an example of what I mean is sketched below). Gradually increase the complexity. I guess it might require a lot of such training. No clue whether there's anything better.
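
For illustration, a (hypothetical, Python) example of the kind of tiny 'needs a solution' exercise I mean, to be written from scratch without peeking at an existing solution, before moving on to something slightly harder:

```python
# Tiny from-scratch exercise: count how often each word occurs in a text.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the quick brown fox jumps over the lazy dog the end"))
# {'the': 3, 'quick': 1, 'brown': 1, ...}
```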

The irony in Wenar's piece is that in all he does, he just outs himself as... an EA himself :-). He clearly thinks it's important to think through net impact and to do the things that have great overall impact. Sad that he caricatures the existing EA ecosystem in such an uncompelling and disrespectful way.

Fully agree with your take that he is "absurdly" unconvincing here. I guess nothing is too blatant to be printed in this world, as long as the writer makes bold and enraging enough claims about a popular scapegoat and has a professor title from a famous university.

I can only imagine (or hope) that the traction the article got, which you mention (though I have not seen it myself), is mainly limited to the usual suspects for whom EA is, quasi by definition, simply all stupid anyway, if not outright evil.

Unconvinced. Bottom line seems to be an equation of Personal Care with Moral Worth.

But I don't see how the text really supports that: that we feel more attached to entities we interact with doesn't inherently elevate their sentience, i.e. their objective moral worth.

Example: Our lesser emotional attachment or physical distance to chickens in factory farms does not diminish their sentience or moral worth, I'd think. Same for (future) AIs too.

At best I could see this equation more or less working out in a perfectly illusionist reality, where there is no objective moral relevance. But then I'd rather not invoke the concept of moral relevance at all - instead we'd have to remain with mere subjective care as the only thing there might be.

This page also provides a neat summary of Zulip's advantages, mostly pointing in a similar direction as here: https://stackshare.io/stackups/slack-vs-zulip

Interesting and good to hear, as I was thinking of using it for a class too (also surprised; I don't remember the slightest hint of counter-intuitiveness when I personally used Zulip with its threads).
