gaspardbos

I've been fascinated by technology since an early age, and my way of looking at the world must have been influenced by dinner-table conversations with my parents, who are physicists trying to understand the smallest parts that make the universe work. Before starting school I was drawing airplanes. In primary school I wrote essays about magnetism, robots, and hovercraft. In high school I got distracted and wrote about synesthesia, the social interactions of first graders, and Ovid's Metamorphoses. I went on to study Industrial Design Engineering and found a mission in sustainability engineering, creating weather stations for the North Pole and plastic recycling machines for 3D printers. When I realized that making hardware is at a disadvantage compared to making software, I became interested in the platforms and digital organization of our non-circular economy. This eventually led me to the study of AI and to experimenting with its application to a circular economy. When I realized that I wasn't making any money, I did what any good self-preserving agent would do: I became more pragmatic, separating the work I did for money from the political engagement I did for the betterment of society and humankind. After a few attempts at a PhD position, struggling to fit the mold of academic contexts and put off by the poor pay, I decided that my intellectual pursuits must take root elsewhere. I hope that engaging on this forum can be productive to that end.


Dear Richard,

I stumbled upon this particular post during my initial explorations of LessWrong, while researching the community's best current knowledge on "drives, intentions, and the pursuit of goals" as it relates to the non-human agency of risky AI.

Thanks for creating these sequences on fear, which look to be a well-thought-out thesis with an interesting proposal for a fear-reducing strategy. The reason I'm making this my first comment on the forum is that the topic also resonates with my personal experience, as I see it has for others in the comments section. Therapy has helped me revisit some childhood memories and has been productive in reshaping some of my thinking and behavior for the better.

I also think it is necessary to understand where our own goals come from if we want to align AI with them.

I think what could improve the writing and reasoning in this post (though you may balance this out in the other posts, which I have not yet read) is to distinguish more clearly between your doubt about the instrumental value of the strategy and your doubt about the ontology of the fear. You can rightly posit that these behaviors emerge from the child's need to adapt, and you could reference (even) more sources that point to this. Also, therapy that takes a person back to these memories in order to transform them has been shown to work.

What I am missing is an investigation into how the fear, undesirable in situations that might cognitively or intelligently be recognized as non-threatening, becomes part of the story people tell themselves about themselves: their self-perceived identity. As you say, any given goal could have at its base any given negative experience. Some pairings may be more easily correlated by observing their features (fear of intimacy stemming from abandonment or abuse, etc.), but a person could construct any story that helps them cope, at the expense of rationality.

So while you do touch on these things, and probably in other posts as well, I think that by revisiting your writing and making up your mind about some of them, you could remove the caveats and make the post a bit more authoritative, especially from the paragraph that starts with "I want to flag..." onwards.

Looking forward to your response.