This post points at an interesting fact: some people, communities, or organizations already called themselves "rationalists" before the current rationalist movement. It suggests that the rationalist movement may be anchored in a longer history than one might gather from the LessWrong/Overcoming Bias/Eliezer lineage alone.
However, this post reads more like a Wikipedia article, or a historical overview. It does not read like it has a goal. Is this post arguing that the current rationalist community is descended from those earlier groups? Is it poking at the consensus history of how the rationalist community ended up choosing "rationalist" as an identifier? I can't tell whether either of those things is argued in this post.
This feels like an interesting bag of facts, full of promising threads of inquiry which could develop into new historical insights and make great posts. I am looking forward to reading those follow-ups, but for now this feels incomplete and lacking a purpose.
TIL that the path a new LW user is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly within 3-6 months, and comfortable with posting regularly within 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated like a personal blog, Medium-style?
As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (\*right-click, inspect\*)
> Write your thoughts here! What have you been thinking about?
> Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.
I'm definitely rambling! Look! I'm following the instructions!
I feel like a "guided tour of LW" is missing when joining the website? Some sort of premade path to get up to speed on "what am I supposed and allowed to do as a user of LW, besides reading posts?". It could take some inspiration from Duolingo, Brilliant, or any other app that tries to get a user past the initial step of interacting with the content.
I vehemently disagree here, based on my personal (and maybe not generalizable) history. I will illustrate with the three turning points of my recent life.
First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was, in a sense, angrily pointing at me, who worked more like one of the NPCs than like Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, of being the gifted kid, of being smarter than everyone, are just delusions. You're so out of touch with reality on so many levels, where to even start?
This attitude made me embark on a journey to improve myself, read the Sequences, take the Giving What We Can pledge after having known about EA for many years, and overall reassess whether I was striving towards my goal of helping people (spoiler: I was not).
Second step: The April Fools' post also shocked me on so many levels. I was once again deeply struck by the sheer pessimism of this figure I respected so much. After months of reading articles on LessWrong, and so many about AI alignment, this was the one that made me terrified in the face of the horrors to come.
Somehow this article, maybe by not trying to avoid hurting people, made me join an AI alignment research group in Berlin. I started investing myself in the problem, working on it regularly, and diverting my donations towards effective organizations in the field. It even led me to publish my first bit of research on preference learning.
Third step: Today, this post, by not hiding any part of the issue and striking down a lot of ideas I was relying on for hope, made me realize I was becoming complacent. Doing a bit of research on weekends is a way to be able to say “Yeah, I participated in solving the issue” once it's solved, not a way of making sure it actually gets solved.
Therefore, based on my experience, not a lot of works have made me significantly alter my life decisions. And those that did are all, strangely, ranting, smack-in-your-face works written by Eliezer.
Maybe I'm not the audience to optimize for to solve the problem, but for my part, I need even more smacks in the face, more break-your-fantasy style posts.
Regarding the schedule, when does the event start on Friday and end on Monday? I would like to book my trip now to take advantage of low prices.
I would love to go, and was pondering quite hard whether to try to find other people in Berlin interested in this endeavour. Sadly I am not available this weekend. Can I join on Saturday the 30th without having gone to the first one?
Thank you for the reply. I know that worry is unnecessary; I was rather asking what you would do if you didn't know for a fact that it was indeed based on GPT-3, or that humans were effectively overseeing it. How would you determine whether it is an unsafe AGI trying to manipulate the humans using it?
I know that no one could detect a superintelligent AGI trying to manipulate them, but I think it can be non-obvious that a sub-human AGI is trying to manipulate you if you don't look for it.
Primarily, I think that currently no one uses AI systems with the expectation that they could try to deceive them, so people don't apply the basic level of doubt they put in every human whose intentions they don't know.
Thank you for the heads-up! I joined the meetup group and I am looking forward to new events :)
Hello everyone! My name is Lucie, and I am studying computer science. I'm fascinated by this website and started binge-reading the Sequences after finishing HPMOR. With all the information I was exposed to on this website during the last week, I am hyped and thinking frantically about how all of this can change my life goals.
However, I know that just reading more and more posts and getting more and more information will only sustain me for a while. When my hype dies down, I don't think I will be as motivated as I am right now to read posts unless I find a way to tie it to my life beyond pure curiosity.
I think I need to feel at least a bit part of a community, and tie it into my social life, to keep my interest long enough. Therefore, I'm making this comment and asking you how to meet some people from this community, either online or offline.
Right now, I'm a bit lost as to what the next step in this journey is for me. Is the lack of an explicit way of getting into the community an intentional filter for people with enough intrinsic motivation to keep learning on their own for a long time? Is there a desire for new active members, whatever that means?
So anyway, if you want to help me, chat, or meet in Berlin, feel free to reply or send me a message!
Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research," according to their website: https://www.conjecture.dev/
They publish regularly on the Alignment Forum and LessWrong: https://www.lesswrong.com/tag/conjecture-org
I also searched their website, and it does not look like Bonsai is publicly accessible. I guess it must be some internal tool they developed.