Comments

Tiuto · 2mo · 10

At the moment, a post is marked as "read" after just opening it. I understand it is useful not to have to mark every post as "I read this", but it means that if I just look at a post for 10 seconds to see whether it interests me, it gets marked as read. I would prefer if one could change the settings to an "I have to mark posts as read manually" mode, with a small box at the bottom of each post that one can check.

Answer by Tiuto · Dec 13, 2023 · 96

I think it's mostly that people complain when something gets worse but don't praise an update that improves the UX.


If a website or app gets worse, people get upset and complain, but if the UI improves, people generally don't go out of their way to praise the designers. A couple of people will probably comment on an improved design, but not nearly as many as when the UI gets worse. So whenever someone mentions a change, it is almost always to complain.

If I just look at the software I am using right now:

  • Windows 11 (seems better than e.g. Windows Vista)
  • Spotify (Don't remember updates so probably slight improvements)
  • Discord (Don't remember updates so probably slight improvements)
  • Same goes for Anki, Gmail, my notes app, my bank app etc.

All of those apps have probably had UI updates, but I don't remember ever seeing people complain about any of those updates. I use most of those apps every day and I hear less about their UI changes than about reddit's, a website I almost never use. People just like to complain.

How good was the Spotify UI 10 years ago? I have no idea, but I suspect it was worse than it is now and has slowly been getting better over the years.

I also looked up the old logo and it's clearly much worse, but people just don't celebrate logos improving the way they make fun of terrible new logos.

Tiuto · 7mo · 41

Don't look at the comments of the article if you want to stay positive.

Tiuto · 1y · 10

I think this might play a really big role. I'm a teenager, and I and all the people I knew during school were very political. At parties people would occasionally talk about politics, in school talking about politics was very common, people occasionally went to demonstrations together, and during the EU Parliament election we had a school-wide election to see how our school would have voted. Basically, I think 95% of students, starting at about age 14, had some sort of idea about politics, and most probably had one party they preferred.

We were probably most concerned about climate change, inequality, and Trump, Erdogan, Putin, all that kind of stuff.

The young people I know who are depressed are almost all very left-wing and basically think capitalism and climate change will kill everyone except the very rich. But I don't know if they are depressed because of that (and my sample size is very small).

Tiuto · 1y · -1 -3

Deutsch has also written elsewhere about why he thinks AI doom is unlikely, and I think his other arguments on this subject are more convincing. For me personally, he is the person who gives me the greatest sense of optimism for the future. Some of his strongest arguments are:

  1. The creation of knowledge is fundamentally unpredictable, so having strong probabilistic beliefs about the future is misguided, at least when the time horizon is long enough that new knowledge can be created (of course you can have predictions about the next 5 minutes). People are prone to extrapolate negative trends into the future and forget about the unpredictable creation of knowledge. Deutsch might call AI doom a kind of Malthusianism, arguing that LWers are just extrapolating AI growth and the current lack of alignment out into the future, but are forgetting about the knowledge that is going to be created in the coming years and decades.
  2. He thinks that if some dangerous technology is invented, the way forward is never to halt progress, but to always advance the creation of knowledge and wealth. Deutsch argues that knowledge, the creation of wealth and our unique ability to be creative will let us humans overcome every problem that arises. He argues that the laws of physics allow any interesting problem to be solved.
  3. Deutsch makes a clear distinction between persons and non-persons. For him, a person is a universal explainer and a being that is creative. That makes humans fundamentally different from other animals. He argues that to create digital persons we will have to solve the philosophical problem of what personhood is and how human creativity arises. If an AI is not a person/creative universal explainer, it won't be creative, and so humanity won't have a hard time stopping it from doing something dangerous. He is certain that current ML technology won't lead to creativity, and so won't lead to superintelligence.
  4. Once we manage to create AIs that are persons/creative universal explainers, he thinks we will be able to reason with them and convince them not to do anything evil. Deutsch is a moral realist and thinks any AI cleverer than humans will also be intelligent enough to come up with better ethics, so even if it could kill us, it won't. For him, all evil arises from a lack of knowledge. So a superintelligence would, by definition, be super moral.

I find some of these arguments convincing, and some not so much. But for now I find his specific kind of optimism to be the strongest argument against AI doom. These arguments are mostly taken from his second book. If you want to learn more about his views on AI, this video might be a good place to start (although I haven't yet watched it).

[This comment is no longer endorsed by its author]

Tiuto · 2y · 20

10 years later and people are still writing funny comments here.

Tiuto · 2y · 36

Isn't making your own bread really easy? You just need a bread maker: put a bunch of ingredients in, press the button, and wait. Seems like it might be worth a try. But obviously you know more about your situation than I do.

Tiuto · 2y · 10

Hi, thanks for the advice.

Do you, or other people, know why your comment is getting downvoted? Right now it's at -5, so I have to assume the general LW audience disagrees with your advice. Presumably people think it is really hard to become an ML researcher? Or do they think we already have enough people in ML, so we don't need more?

Tiuto · 2y · 70

I am interested in working on AI alignment but doubt I'm clever enough to make any meaningful contribution, so how hard is it to be able to work on AI alignment? I'm currently a high school student, so I could basically plan my whole life so that I end up as a researcher or software engineer or something else. With alignment being very difficult and very intelligent people already working on it, it seems like I would almost have to be some kind of math/computer/ML genius to help at all. I'm definitely above average; my IQ is like 121 (I know the limitations of IQ as a measurement and that it's not that important), and in school I'm pretty good at maths and other sciences, but not even the best in my class of 25. So my question is basically: how clever does one have to be to contribute to AGI alignment?