toonalfrink

Posts

[Event] Pair debugging (Amsterdam-Zuidoost, Nov 6th)
[Event] Post-meetup social (Carolina MacGillavrylaan 3198, Amsterdam, 2020 Feb 15th)
[Event] Meetup #45 - Implementation Intentions (Science Park 904, Amsterdam, 2020 Feb 15th)
[Event] Meetup #44 - Murphyjitsu (Science Park 904, Amsterdam, 2020 Feb 1st)
[Event] Meetup #43 - New techniques (Amsterdam, Netherlands, 2020 Jan 18th)

Comments

I'm thinking of postmodernism and modernism not as incompatible, but as the two components of a babble-and-prune system at the societal level.

How To Get Into Independent Research On Alignment/Agency

Based on your comment, I'm more motivated to just sit down and (actually) try to solve AI Safety for X weeks, write up my results, and apply. What is your 95% confidence interval for how large X needs to be to reduce the odds of a false negative (i.e. my grant gets rejected but shouldn't have been) to a single-digit percentage?

I'm thinking of doing maybe 8 weeks. Maybe more if I can fall back on research engineering so that I haven't wasted my time completely.

How To Get Into Independent Research On Alignment/Agency

Hi John, thanks a lot.

Your posts are coming at the perfect time. I just gave notice at my current job, and I have about 3 years of runway ahead of me in which I can do whatever I want. I should definitely at least evaluate AI Safety research. My background is a bachelor's in AI (that's a thing in the Netherlands). The little bits of research I did try got good feedback.

Even though I'm in a great position to try this, it still feels like a huge gamble. I'm aware that a lot of AI Safety research is already of questionable quality. So my question is: how can I determine as quickly as possible whether I'm cut out for this?

I'm not just asking to reduce financial risk; I also feel my learning trajectory would be quite different if I already knew it was going to work out in the long run. I'd be able to study the fundamentals a lot more before trying research.

Meetup #50 - Pair Debugging and applied rationality

Some updates:

- I'm going to be handing out corona tests on entry to the meetup. Still thinking about different ways to approach this, but they're currently not required.

- The topic will be Nonviolent Communication (NVC)! This is a precursor to many other techniques and almost a panacea for good collaboration and conflict resolution.

- I have some long-term plans for building this into a larger, more active community. See the announcement here (don't forget to read the comments): https://www.facebook.com/toon.alfrink/posts/4787599687950855

- As part of those plans, I want to gradually build up to being able to sustain my living expenses (€1500/mo) by organising. I will start by charging a €10 entry fee for the meetups. This will also cover food and corona tests. However, if this entry fee keeps you from coming, you can pass on it!

Self-Integrity and the Drowning Child

You cannot truly dissolve an urge by creating another one. Now there are two urges at odds with each other, consuming precious cognitive resources while achieving nothing.

You can only dissolve it by becoming conscious of it and seeing clearly that it is not helping. Perhaps internal double crux would be a tool for this. I'd expect meditation to help, too.

Discussion with Eliezer Yudkowsky on AGI interventions

Fwiw, I don't think someone's openness to thinking about an idea necessarily goes down as more people contact them about it. I'd expect it to go up.
Although this might not be true for our target group.

It doesn't, but I tend to go with the assumption that if one person voices an objection, there are 100 more with the same objection who don't voice it.
I put this on my to-do list; it might take a few weeks for me to come back to it, but I will come back to it.

Appreciate the criticism.

I agree with you that we need to separate the good stuff from the bad stuff and that there is a risk here that I end up diluting the brand of rationality by not doing this well enough.

My intuition is that I'm perfectly capable of doing this, but perhaps I'm not the best person to make that call, and I'm reminded that you've personally called me out in the past on being too lax in my thinking.

I feel like you have recently written a lot about the particular ways in which you think people on LW might be going off the rails, so I could spend some time reading your stuff and trying to pass your ITT.

Does that sound like a good plan to you?

I decided to quit my job.

Still have various options for what to do next, but most likely I will spend at least a year trying to build a large rationality community in Amsterdam. I'm talking 5+ events a week, a dedicated space, membership program, website, retreats, etc.

The emphasis will be on developing applied rationality. My approach will be to cover many different paradigms of self-improvement. My hunch is that one will start noticing patterns that these paradigms have in common.

I'm thinking authentic relating, radical honesty, CFAR-style applied rationality, shadow work, yoga/meditation, psychedelic therapy, street epistemology, tantra, body work, nonviolent communication, etc. If you know anything that would fit in this list, please comment!

This would be one pillar of the organisation, and the other one would be explicitly teaching an Effective Altruist ethos to justify working on rationality in the first place.

If this goes really well, I'm hoping this will develop into something like "the CFAR of Europe" at some point.


Study Guide

I'm more interested in the time this would take if one wasn't constrained by being in college. My intuition is that you can go 2x faster on your own if the topic and the pace aren't being imposed on you, but maybe college just matched your natural learning style.

Thanks for the data point in any case.
