Comments

This seems like a great idea -- I haven't tried the GPTs yet, so I will just comment on the rest of the article.

In addition to the AI tutoring the student 1:1, I wonder whether it would be useful to match students with each other based on their current interests and skill level. For humans, it is often motivating to talk to other humans, and the problem is that kids in the same classroom are often not interested in the same topic, or they know much less than you, or they know so much more that it is boring for them to talk to you. But if this system were used by many kids across the country, there would be a chance to find someone at your level who is trying to learn the same thing as you, so perhaps the AI could put you in the same temporary chat room and let you talk to each other. Basically, create temporary classrooms.
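A minimal sketch of what that matching could look like, assuming each student is described by a current topic of interest and a rough skill level (the names here, like `Student` and `match_students`, are hypothetical and only illustrate the pairing idea, not any existing system):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    topic: str   # what the student currently wants to learn
    skill: int   # rough level, e.g. 1 (beginner) .. 5 (advanced)

def match_students(students, max_skill_gap=1):
    """Group students by topic, then pair those whose skill levels are close."""
    by_topic = defaultdict(list)
    for s in students:
        by_topic[s.topic].append(s)

    rooms = []  # each entry is a temporary "classroom": (topic, [names])
    for topic, group in by_topic.items():
        group.sort(key=lambda s: s.skill)
        i = 0
        while i + 1 < len(group):
            a, b = group[i], group[i + 1]
            if b.skill - a.skill <= max_skill_gap:
                rooms.append((topic, [a.name, b.name]))
                i += 2
            else:
                i += 1  # no close-enough partner yet; leave unmatched for now
    return rooms

if __name__ == "__main__":
    students = [
        Student("Alice", "recursion", 2),
        Student("Bob", "recursion", 2),
        Student("Carol", "recursion", 5),
        Student("Dan", "fractions", 3),
    ]
    print(match_students(students))
    # [('recursion', ['Alice', 'Bob'])] -- Carol and Dan stay unmatched
```

A real system would of course also have to consider time zones, age, safety, and group size, but the core idea is just "same topic, similar level".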

I have mixed opinions on using the workplace as an opportunity to learn, or even to determine what is worth learning. On one hand, yes, the job can help you find your blind spots, or the blind spots of your school; and then it would be great to get one more opportunity to learn all of that at school. On the other hand, there are also companies that use obsolete technologies, or use technologies the wrong way; you can meet overconfident colleagues who teach you various "anti-patterns" as their idea of best practices, and there is no one there to provide a balancing perspective... so I imagine some jobs could actually hurt your education a lot. Also, school education is usually more general: it teaches you the general principles of programming rather than the technical details of how to use FooLibrary-2.4.3, and it is the latter kind of knowledge that becomes obsolete sooner.

Answer by Viliam, Feb 25, 2024

I think you could apply rationality even in a universe with a random number generator, as long as most things are causal.

Even if the universe is causal, we still need a good strategy to think about it (i.e. rationality).

Curiosity killed the cat by exposing it to various "black swan" risks.

Does this actually have some point, even as a wrong metaphor, or is it just mathematical-looking word salad? I am too tired to figure it out.

I will just note that if this worked, it would be an argument for the impossibility of aligning anything, since the "anthropocentric" part does not play any role in the proof. So even if all we had in the universe were two paperclip maximizers, it would be impossible to create an AI aligned to them both... or something like that.

We need research on whether atheists are more likely to suffer from akrasia.

If we take Julian Jaynes seriously, the human brain has a rational hemisphere and a motivating hemisphere. Religion connects these hemispheres, allowing them to work in synergy. Skepticism seems to split them.

Effective atheists are probably the ones who, despite being atheists, still believe in some kind of "higher power", such as fate or destiny or the spirit of history or some bullshit like that. That probably still activates the motivating hemisphere to some degree, only now instead of hearing a clear voice, you get only some nonverbal guidance. Deep atheism probably silences the motivating hemisphere completely.

The question is how to harness the power of the religious hemisphere without being religious (or believing some nominally non-religious bullshit). How to be fully rational and fully motivated at the same time.

Can we say something like "I know this is pure bullshit, but God please give me the power to accomplish my goals and smite my enemies!" and actually mean it? Is this what will unleash the true era of rationalist world optimization?

It’s not that consumers ask for one thing and get another, it’s that they get what they want but we think what they want is bad for them.

I think it makes sense to distinguish three situations:

  • the consumer wants X, the company sells Y in a bottle labeled "X"
  • the company sells X, telling everyone via advertising and bribed experts that "science proves that X makes you healthy, and lack of X makes you sick", when that is a lie
  • the company sells X, everyone knows that X is bad for you, but the customers buy it anyway

The first is clearly a problem; most people would agree with that. The last probably cannot be avoided -- if you don't allow the customers to buy the product legally, they will buy it illegally -- plus there is a chance that the government is wrong.

It is the second case that bothers me. I don't think it is completely fair to say "customers want it", even if they kinda do, because they only want it because they are lied to. I wouldn't want the government to stop me from getting what I want, but I would want to be told clearly when someone is lying to me. (And yes, there is also a risk that the government would be wrong. But I don't think it is a good solution to leave the lies unaddressed, or to let various people -- scientists and scammers alike -- say different things and expect the average person to sort it out without any more hints.)

So, I would like to see some sort of "scientific authority" that would have a monopoly on providing official medical recommendations, which would be clearly displayed on health-related products, or whose absence would be obvious to everyone. Something like: each actual medicine carries a red rectangle with a logo saying "this is actual medicine", and no one is allowed to put anything similar on their product unless the FDA allows them. You are allowed to buy and sell stuff without the red rectangle, but everyone is told, repeatedly and unambiguously, by the media: "if it claims to have medical benefits but doesn't have the red rectangle, it's fraud -- always check for the red rectangle". (The test criterion for "repeatedly and unambiguously" is that an average person with an IQ of 80 can tell you what the red rectangle means.)

I am not blaming you personally, but the Overton window contains population growth and not much else.

market size, better matching, more niches

Improving the population (genetically or by education) would have some effect here, too. Not literally more niches or bigger market size overall, but more niches for smart-people-related things, and more market demand for the stuff smart people buy.

you should understand how the foundations of math work before doing advanced math

Is this merely something that set theoreticians believe, or do mathematicians that are experts at other branches of math actually find set theory useful for their work?

Can you, in practice, use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?

Many statements are undecidable in ZFC; what impact does that have on using set theory as a foundation for other branches of math?
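To make "undecidable" concrete, the standard example is the Continuum Hypothesis: assuming ZFC is consistent, Gödel and Cohen showed that neither it nor its negation is provable from the ZFC axioms:

$$\mathrm{ZFC} \nvdash \mathrm{CH} \quad\text{and}\quad \mathrm{ZFC} \nvdash \neg\mathrm{CH}, \qquad \text{where } \mathrm{CH}:\; 2^{\aleph_0} = \aleph_1.$$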

Yes, I think the reasonable objection is that "population growth" is only one way to achieve the (selfishly) desired outcome, and that it would be bad to focus on it to the exclusion of everything else.

For example, you could also get more research by increasing average human IQ, whether by genetic engineering or some form of eugenics. (The eugenics doesn't have to be coercive; we haven't even picked the lowest-hanging fruit of encouraging healthy young men with high IQ to donate sperm more often.)

The existing smart humans probably also could be used much better. Education sucks; special education for gifted kids is taboo in many places. Scientists waste a lot of time doing paperwork. Scientific articles are paywalled. Many people do bullshit jobs, because those pay well and sometimes you don't have the skills necessary to start your own company. (Or maybe we could just open the borders for people with IQ over 150.)

Basically, seeing all this inefficiency makes "we need to increase the population" sound like motivated reasoning.

*

That all said, maybe it is a sad truth that all these things are politically difficult to fix, and population growth is, after all, the most likely way to actually get more research done.

Sorry for making this personal -- I had only 3 examples in mind and couldn't leave one out.

Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles?

Because if that is a fair description, I see it as a huge problem. (Not exactly as "you doing the wrong thing" but rather "the voting algorithm of LW users providing you a weird incentive landscape".) Because the object level is where the ball is! The meta level is ultimately there only to make us more efficient at the object level by indirect means. If you succeed at the meta level, then you should also succeed at the object level; otherwise, what exactly was the point?

(Yours is a different situation from Roko's, who got lots of karma for an object-level article, and then wrote a few negative-karma comments, which was what triggered the censorship engine.)

The thing I am wondering about is basically this: If you write an article effectively saying "Yudkowsky is silly for denying X", and you get hundreds of upvotes, what would happen if you subsequently abandoned the meta level entirely and just wrote an article saying "X" directly? Would it also get hundreds of upvotes? What is your guess?

Because if it is the case that the article saying "X" would also get hundreds of upvotes, then my annoyance is with you. Why don't you write the damned article and bask in the warmth of rationalist social approval? Sounds like win/win to everyone concerned (perhaps except for Yudkowsky, but I doubt that he is happy about the meta articles either, so this still doesn't make it worse for him, I guess). Then the situation gets resolved and we all can move on to something else.

On the other hand, if it is the case that the article saying "X" would not get so many upvotes, then my annoyance is with the voters. I mean, what is the point of blaming someone for not supporting X, if you do not support X yourself? In that case, I suspect the actual algorithm behind the votes was something like "ooh, this is so edgy, and I identify as edgy, have my upvote, brother" without actually having a specific opinion on X. Contrarianism for contrarianism's sake.

(My guess is that the article saying "X" would indeed get much less karma, and that you are aware of that, which is why you didn't write it. If that is right, I blame the voters for pouring gasoline on the fire, supporting you in fighting for something they don't themselves believe in, just because watching you fight is fun.)

Of course, as is usual when psychologising, this is all merely my guess and could be horribly wrong.
