Comments

All is fair in love and war, on Zero-sum games in life

I wrote an article on this subject (i.e. why do we play zero-sum games while praising positive-sum games?)

https://native-wonder.blogspot.com/2020/12/things-people-want.html

European Master's Programs in Machine Learning, Artificial Intelligence, and related fields

Thank you, this is very useful. Lately I've been interested in programs that are fully online and could be completed in a year. Would you have any recommendations for that?

All Lesswrong Posts by Yudkowsky in one .epub

Strongly upvoted. As a Kindle-dependent newcomer who's delving into the classics, I find this invaluable.

I have read RAZ. Does this file include it? I would actually need only the posts that are not there.

Do you plan to do this for other authors?

Upside decay - why some people never get lucky

I had trouble understanding how the different facts and judgments in your post connect to each other and to the concept of upside decay.

But I want to say that I really appreciate the concept, because something very similar occurred to me once, though at the time I didn't give it a name. I was studying the careers of creative artists, and there is a lot of discrimination in these fields: against women, against people who start out in less prestigious institutions, and so on.

My idea was that because many people were excluded and diversity was stifled, this reduced the probability of "hitting the jackpot": finding an extremely brilliant artist at the far right of the "artistic potential" curve who would end up being the next Picasso. I wanted to model this intuition and check it against the data, but eventually my project changed and I moved on. The idea, anyway, is that exclusion reduces your chance of getting outliers (or even black swans) in the tails, and here you only care about the positive outliers.
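For what it's worth, the intuition is easy to check with a toy simulation. The sketch below is purely illustrative and not from the original project: it assumes a heavy-tailed (lognormal) distribution of "artistic potential" and an arbitrary "next Picasso" threshold, both of which are my own assumptions. Shrinking the candidate pool leaves each individual's talent distribution untouched, but it visibly cuts the probability that the pool contains an extreme positive outlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_extreme_outlier(pool_size, threshold, trials=10_000):
    """Probability that the best candidate in a random pool exceeds the threshold."""
    # Draw `trials` independent pools of `pool_size` candidates each, with
    # heavy-tailed "artistic potential" (lognormal is an illustrative assumption).
    pools = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, pool_size))
    return (pools.max(axis=1) > threshold).mean()

FULL_POOL = 1_000      # everyone gets a fair shot
REDUCED_POOL = 500     # half the pool excluded by discrimination
THRESHOLD = 30.0       # arbitrary "next Picasso" cutoff (top ~0.03% of this curve)

print("full pool   :", p_extreme_outlier(FULL_POOL, THRESHOLD))
print("reduced pool:", p_extreme_outlier(REDUCED_POOL, THRESHOLD))
# Under these made-up parameters, the reduced pool is roughly half as likely
# to contain a once-in-a-generation outlier.
```

The loss is concentrated entirely in the right tail of the best candidate, which matches the point above: average quality barely moves, but the positive outliers are exactly what you care about.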

Numeracy neglect - A personal postmortem

Thank you for your questions, they're proving very useful.

But it is interesting to understand what's happening to the other children, who actually do math. Suddenly you realize that "solving problems" is less energy-demanding for them, which is awkward!

I'm not sure this is the case. We're human; maths is hard for everyone. I imagine it's more about developing a work ethic early on and being willing to delay gratification and endure unpleasant sensations for the sake of learning something valuable. Though of course it takes a basic level of intelligence to find motivation in intellectual work. And there needs to be some specific motivation as well, e.g. that math is beautiful, or that math is useful.

As for the other questions... you may be getting closer to the target than I am here. I think the comparison between GPT-3 talk, where nothing is wrong, and "manipulation" is central.

But "manipulation" isn't like pattern-after-pattern, it is something different. What is it?

I think the whole thing revolves around mental models. Programming "clicks" when the stuff you do with the code suddenly turns into a coherent mental model, so that you can even predict the result of an operation you haven't tried before. I became better at programming after watching a few theoretical computer science lectures, because I got more proficient at building mental models of how the different systems worked. Likewise, maths clicks when you move from applying syntactic rules to building mental models of mathematical objects.

It's easier to build mental models with programming, because the models you're working with are instantiated on a physical substrate that you can interact with, and because it's harder to fool yourself and easier to get feedback. If you screw up, the computer will stop working and tell you; if you screw up with pen and paper, you might not even realize it.

This is not the whole story, but it's a bit closer to what I meant to say.

Forcing Freedom

Your position is consistent, though to me somewhat troubling.

I wouldn't equate "unable to have different preferences or to envision a better situation" with "happy". Perhaps Plato's cave applies here. Or consider a child who is born in an underground prison, Banelike, and never sees the light of sun. Who is then offered the opportunity of freedom on the surface and refuses out of fear or ignorance. Would you think they are "happy"? Perhaps, but they could be happier. Or at least they could experience a richer level of existence, given that humans evolved to enjoy fresh air and nature landscapes and the feeling of the sun on their skin, something they can't even imagine at the moment.

Imagine writing a sort of will for altered-mind situations. If you fell under hypnosis that turned you into a slave, would you want to be liberated? Or would you want people to always stop at your currently expressed preferences?

Doesn't this mean that you would plug yourself into Nozick's experience machine, since it would be easier to be "happy" in a state of brainwashed slavery than in the complex life of a free agent?

Forcing Freedom

I agree with you, though I don't think the linked account expects an "eternal old age"; what made you think that? As I see it, it's actually an argument about the inner experience of humans and why the author thinks we wouldn't be happy with a very long lifespan. I don't agree with the author, but I linked the post as anecdotal evidence that some people who are no longer young may reject the idea of a very long lifespan out of a general feeling of life-weariness (to what extent this feeling is connected to the biological phenomenon of aging remains to be ascertained).

Are you sure? That seems like a question of physics, and the accessible energy reserves and computational capacity of our light cone (the latter of which may be infinite even if the former is not).

How would computational capacity be infinite in the presence of finite energy?

Forcing Freedom

Personally, I am strongly inclined towards non-interference. I have little trouble accepting that people choose wrongly, knowing how fallible I am myself. I also think that, given how complex the universe is for us, it will always be easier to find arguments for inaction than for action.

And this is precisely why I am interested in arguments for interference. Most of the time, the option of non-interference is the easiest for me; which makes me at least a bit suspicious. It makes me wonder: have I carefully considered all the opposing arguments?

"Moralising" implies that I am considering intervention in defense of my own values. I was thinking more of situations in which the other person is in danger or evidently suffering, the kind that evoke an empathic response.

If you saw someone who had fallen onto the train tracks, you wouldn't shrug and say: "It's a feature of agency, let evolution work." You would try to save them. This is the kind of experience I was trying to convey.

Can we hold intellectuals to similar public standards as athletes?
Maybe some kind of social app inspired by liquid democracy/quadratic voting might work?

Do you think it's wise to entrust the collective with judging the worth of intellectuals? I can think of a lot of ways this could go wrong: cognitive biases, emotional reasoning, ignorance, the Dunning–Kruger effect, politically driven decisions... Just look at what's happening now with cancel culture.

In general this connects to the problem of expertise. If even intellectuals have trouble working out who among them deserves trust and respect, how could people outside their field fare better?

If the rating were done among intellectuals, don't you think the whole thing would be prone to conflicts of interest, with individuals tending to support their tribe, those who can benefit them, or those whose power tempts or scares them?

I am not against the idea of rating intellectual work. I'm just wary of having the rating done by other humans, with biases and agendas of their own. I would be more inclined to support objective forms of rating; forecasts are a good example.
