
I would say that not interviewing people from psychology and religion may be a weak point in your admittedly informal research. I wouldn't be able to speak of consciousness without speaking of how we see ourselves through one another; in fact, without that we fall into ill health. The very basis of most contemporary psychology is how humans maintain health via healthy relationships...therapy is an unfortunate replacement for a lack of original healthy relationships. If this is required for human minds to thrive, it must be essential. Because I'm mostly not STEM or Rationalist, I refer to this as love and consider it essential to humanness. The ignoring of this so-obvious thing is what I consider one of the main deficits of rat culture, and if it continues to be ignored as consciousness work weaves its way into AI alignment, I think it will be detrimental.

Here is, I think, the seminal quote of the piece: "There is no future scenario where 2/3's of all humanity are not significant."

Yes, as I mention, I view it as culture, which, as you say, is similar to an artform...certainly it creates cultural norms...so we're on a similar page there. And I can see how it might not seem relevant to AI alignment to those deeply involved in training work or other direct aspects. But what I'm hoping to consider is this: since 6 billion humans find religion significant in their lives, they, as a giant force, may help or come against AI development. The simple point is that team humanity is making AI, and a bunch of our team members are definitely going to have an influence on the team's winning or losing record. If I'm the coach, I want to engage all the team members toward our goals. Right now AI dev is kind of under the radar for a lot of society; it's a small gang insulated by the general lack of awareness that they are there, and this might make it seem like religion has nothing to do with it. But the time will come when AI blows up to a bigger worldwide audience who will become more interested and potentially opposed to it. I'm not only interested in this angle; as a philosophical/theological thinker and activist, I'm also very interested in how, down the line, art, culture, religion, and other content of civilization will be important to the very inner workings of AI dev, and thus AI alignment. If you don't see it yet, I understand, but I'm pretty sure that day will come.

Haha, meh...I don't think you're thinking big enough. There will always be ethicists and philosophers surrounding any great human endeavor who are not themselves technically proficient. Certainly they should educate themselves lifelong, but if you're not good at coding or math, you're just never going to understand certain technical issues. So saying their effectiveness is nil without that understanding is just not understanding how humanity progresses on big issues. It's always a balance of abstract and concrete thinkers...they must work together. The ones who dismiss the other side are most definitely going to be the losers, because they're dismissing about half of what you actually need to succeed. We need to respect those who think differently from us; we must literally feel desperately in need of them.

I was happy to read your Golden Rule idea...I just posted my own version of the Golden AI Rule a few days ago. 

The Golden AI Rule: AI will only be as good to us as we are to each other. 

Well, that's going to a level I wouldn't have been able to imagine before. Microfiber cloths were a great advancement over using a towel or paper towel, which didn't wick up the water as well and could sometimes scratch the lens. I have seen differences in quality between different microfibers...but going next-level to needing no cloth at all sounds great.

These are a lot of good ideas. As I comment above, I think a good approach is to truly represent that we are a bunch of younger people who fear for the future...this would appeal to a lot of folks at his level: knowing the kids are scared and need his help.

I agree with TekhneMakre...it comes across like an average-looking, unconfident person asking out a gorgeous celeb. A friend approaching him is probably best, but an email can't hurt. I would get a few people together to work on it. My approach would be to truly represent who we are as a motivated group of people with the desire to write this email to him, by saying something like: "There's a great forum of AI-interested and concerned folks that we are a part of, many of us on the younger side, and we fear for the future of humanity from misaligned AI. We look to people like you, Dr. Tao, as the kind of gifted person we hope could become involved early in helping guide AI in the right directions that would keep us all safe. We are younger and up-and-coming, so we don't know how to appeal to what interests you; we're just laying it out there so you can know there are thousands of us, and we're hoping to create a conversation with you and your high-level peers to drive some energy in this direction, and maybe your direct involvement. Thanks."

I haven't seen this hydrophobic coating yet, but some of my glasses are afraid of water, hehe...jk. I disagree with letting the dust stay and letting your brain adjust...the lens just gets increasingly opaque, and the clear light is good. Here's my best practice. I think the main problem is getting enough soapy water on the lenses to lift all the gunk off without scratching them in the cleaning. So I rinse them first under fast-running water, then use dishwashing detergent: foam it up in my hands, clean the lenses and the frame with my foamy fingers, and do it twice. Rinse well, then dry with a good microfiber cloth like the ones they give you at opticians' shops. I feel happy when I see nice and clear.

I really, really hope you get into AI work. I'm a big advocate for the arts and other human qualities being in AI dev. Of course, it isn't really understood yet how they will integrate, but if we get folks like you in there early, you'll be able to help guide the good human stuff in when it becomes clearer how. Viliam's comment below that AI lacks such human instincts is exactly the point...it needs to get them ASAP, before things start going down the wrong road. I would guess that eventually we will evaluate progress by how much an AI shows these qualities. Of course, it's still early now.
