LESSWRONG
Valentin2026

Comments
Hyperbolic trend with upcoming singularity fits METR capabilities estimates.
Valentin2026 · 1mo

Thank you very much for catching the mistake! I checked, and you are completely right.
How AI researchers define AI sentience? Participate in the poll
Valentin2026 · 3mo

I don't think they passed it in the full sense. Before LLMs, there was a 5-minute Turing test, and some chatbots passed it. I think 5 minutes is not enough. I bet that given 10 hours, any currently existing LLM, and a human, communicating only via text, I would be able to figure out who is who (if both tried hard to argue for their humanity). I don't think an LLM can yet come up with a consistent, non-contradictory life story. It would be an interesting experiment :)
How AI researchers define AI sentience? Participate in the poll
Valentin2026 · 3mo

Do you mean similarity at the outer level (e.g. the Turing test) or at the inner level (e.g. the neural network structure should resemble brain structure)?
If the first: would that mean that when an AI passes the Turing test, it is sentient?
If the second: what are the criteria for similarity? Full brain emulation, or something less demanding?
Foom & Doom 1: “Brain in a box in a basement”
Valentin2026 · 3mo

  1. Are you working with a SOTA model? Here, mathematicians report a quite different story: https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
  2. I guess "good at" was improper wording. I did not mean that they never produce nonsense; I meant that they can sometimes produce a correct solution. It is like a person who may not be able to run 100 meters in 10 seconds every day, but even doing it in 5% of attempts is already impressive and shows that it is possible in principle. And I guess "Ph.D. level" sounded like they can write a Ph.D. thesis from scratch. I just meant that there are short, well-formulated problems that would take a Ph.D. student a few hours, if not a few days, which current LLMs can solve in a non-negligible fraction of cases.
Foom & Doom 1: “Brain in a box in a basement”
Valentin2026 · 3mo

Can you expand your argument for why LLMs will not reach AGI? What exactly is the fundamental obstacle they will never pass? So far they are successfully doing longer and longer (for humans) tasks: https://benjamintodd.substack.com/p/the-most-important-graph-in-ai-right
I also can't see why, in a few generations, LLMs won't be able to run a company, as you suggested. Moreover, I don't see why it is necessary to get to AGI. LLMs are already good at solving complicated, Ph.D.-level mathematical problems, and this keeps improving. Essentially, we just need an LLM version of an AI researcher. To create ASI you don't need a billion Sam Altmans; you need a billion Ilya Sutskevers. Is there any reason to assume an LLM will never be able to become an excellent AI researcher?
Universal Basic Income and Poverty
Valentin2026 · 1y

I agree, they have a really bad life, but Eliezer seems to be talking here about those who work 60 hours/week to ensure their kids will go to a good school. A slightly different problem.

As for homeless people, there are different cases. In some, UBI will indeed help. But, unfortunately, in many cases the person has mental health problems or an addiction, and simply giving them money may not help.
Universal Basic Income and Poverty
Valentin2026 · 1y

I feel that one of the key elements of the problem is misplaced anxiety. If an ancient farmer stopped working hard, he would not get enough food, so his whole family would die. In modern Western society, the risk of dying from not working is nearly zero. (You are far more likely to die from exhausting yourself by working too hard.) When someone works too hard, it is usually not out of fear of dying too early, or that their kids will die. It is fear of failure, of being the underdog, of not doing what you are supposed to, and plenty of other constructs that ancient people simply never reached: first they needed to survive. In this sense, we are far better off than even one hundred years ago.
Can UBI eliminate this fear? Maybe it can partially help, but people will still likely work hard to secure their future and the future of their children. Maybe making psychotherapy (to address the fear itself) more available to those with low income is a better solution. I understand that it would require training far more specialists than we have now. However, some people report a benefit from talking with GPT as a therapist (https://x.com/Kat__Woods/status/1644021980948201473), so maybe that can help.
LessWrong Community Weekend 2024 [applications closed]
Valentin2026 · 1y

What is the application deadline? I did not find it in the post. Thank you!
Mentorship in AGI Safety (MAGIS) call for mentors
Valentin2026 · 1y

Yes, absolutely! We will open the application for mentees later.
Impactful data science projects
Valentin2026 · 2y

So far nothing; I was distracted by other things in my life. Yes, let's chat! frombranestobrains@gmail.com
Posts

13 · Hyperbolic trend with upcoming singularity fits METR capabilities estimates. · 1mo · 6
7 · How AI researchers define AI sentience? Participate in the poll · 3mo · 4
8 · Mechanistic Interpretability Via Learning Differential Equations: AI Safety Camp Project Intermediate Report. · 5mo · 0
5 · Mentorship in AGI Safety: Applications for mentorship are open! · 1y · 0
32 · Mentorship in AGI Safety (MAGIS) call for mentors · 1y · 3
13 · What to do if a nuclear weapon is used in Ukraine? · 3y · 9
5 · Would "Manhattan Project" style be beneficial or deleterious for AI Alignment? [Question] · 3y · 1
5 · Impactful data science projects [Question] · 3y · 2
4 · A proposed system for ideas jumpstart · 4y · 2
7 · Grading scheme from Valentine Smith [Question] · 4y · 3