Linda Linsefors

Hi, I am a physicist, an Effective Altruist, and an AI Safety student/researcher.

Linda Linsefors's Comments

I'm leaving AI alignment – you better stay

Ok, thanks. I have changed it ...

... I mean, that is what I wrote all along, can't you see? :P

I'm leaving AI alignment – you better stay

Hm, I did not think about the tax part.

What country do you live in?

Maybe BERI would be willing to act as a middleman. They have non-profit status in the US.

[Meta] Do you want AIS Webinars?

Would you be interested in just participating? I read your post about leaving AIS. It seems like you have enough experience to be able to contribute to the discussion.

I'm leaving AI alignment – you better stay

Nice diagram.

I'm currently doing interviews with early-career and aspiring AIS researchers to learn how to better support this group, since I know a lot of us are struggling. Even though you left, I think there is valuable information in your experience. You can answer here publicly or contact me via your preferred method.

(linda.linsefors@gmail.com, https://calendly.com/linda-linsefors/ais)

What would have needed to be different about the world for you to succeed in building a sustainable AI Safety research career?

What if you got more funding?

What if you got some sort of productivity coaching?

What if you had a collaboration partner?

-------------------------------------------------------------------------------------------------------------------------------


Random suggestion

Would you be interested in being a research sponsor? I'm guessing wildly here, but maybe you can earn enough to live the fun life you want while also supporting an AI Safety researcher? Given that you have been in the field, you have some ability to evaluate their work. You could give someone not just money but also a discussion partner and some amount of feedback.

If you can help someone else succeed, that creates as much good as doing the work yourself.

I just started doing these interviews, so I don't know for sure. But if my current model is more or less right, there will be lots of people in the situation you just left behind. And if I may make some wild guesses again, I would say that most of them will quit after a few years, like you, unless we can create better support.

This is just something that came to my mind late at night. I have not thought long and hard about this idea. But maybe check if something like this feels right for you?

[Meta] Do you want AIS Webinars?

Let's do it!

If you pick a time and date and write up an abstract, then I will sort out the logistics. Worst case, it's just you and me having a conversation, but most likely some more people will show up.

TAISU - Technical AI Safety Unconference

COVID-19 Update!

TAISU in its planned form is cancelled. But there will be a Web-TAISU around the same theme, and around the same time. I will make an announcement, and probably open up for more applications, when this is a bit more planned out.

TAISU - Technical AI Safety Unconference

Hi Jarson.

Due to the current pandemic, TAISU will take a very different form than originally planned. I will organise some sort of online event on the same theme around the same time, but I don't know much more yet. I don't want to take on board more participants until I know what I'm organising. But as soon as I know a bit more, I will make a new announcement and open up applications again. I expect this will happen within a week or two.

Regarding your project, I'd be happy to take a look at your Google Doc. Please share it.

TAISU - Technical AI Safety Unconference

The official application deadline has now passed. Those of you who have applied to participate will get an email soon.

However, since TAISU is not full yet, I will now accept qualified applicants on a first-come, first-served basis.

Linda Linsefors's Shortform

I'm basically ready to announce the next Technical AI Safety Unconference (TAISU). But I have hit a bit of decision paralysis as to what dates it should be.

If you are reasonably interested in attending, please help me by filling in this Doodle.

If you don't know what this is about, have a look at the information for the last one.

The venue will be the EA Hotel in Blackpool, UK, again.

“embedded self-justification,” or something like that

The way I understand your division of floors and ceilings, the ceiling is simply the highest level of meta there is, and the agent *typically* has no way of questioning it. The ceiling is just "what the algorithm is programmed to do". AlphaGo is programmed to update the network weights in a certain way in response to the training data.

What you call the floor for AlphaGo, i.e. the move evaluations, is not even a boundary (in the sense nostalgebraist defines it); that would just be the object-level (no meta at all) policy.
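To make the split concrete, here is a minimal toy sketch (my own illustration, not AlphaGo's actual architecture; all names are made up): the hard-coded update rule plays the role of the ceiling, which the agent cannot question, while the learned move evaluation is just the object-level policy.

```python
# Toy sketch only: an "AlphaGo-like" learner, not the real system.
# Ceiling: the hard-coded update rule the agent never reasons about.
# Object level: the move evaluation produced by the learned weights.

import random

weights = [random.random() for _ in range(10)]  # toy policy parameters

def evaluate_move(board_features):
    """Object-level policy: score a move from the learned weights."""
    return sum(w * f for w, f in zip(weights, board_features))

def training_step(board_features, observed_outcome, lr=0.01):
    """Ceiling: a fixed gradient-style update. The agent just runs it;
    it has no way to ask whether this is a good way to learn."""
    error = observed_outcome - evaluate_move(board_features)
    for i, f in enumerate(board_features):
        weights[i] += lr * error * f

# The training data and loop structure are likewise fixed from outside.
for features, outcome in [([1.0] * 10, 1.0), ([0.5] * 10, 0.0)]:
    training_step(features, outcome)
```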

I think this structure will be the same for any known agent algorithm, where by "known" I mean "we know how it works" rather than "we know that it exists". However, humans seem to be different. When I try to introspect, it all seems to be mixed up, with object-level heuristics influencing meta-level updates. The ceiling and the floor are all mixed together. Or maybe not? Maybe we are just the same, i.e. having a definite top-level, hard-coded, highest level of meta. Some evidence for this is that sometimes I just notice emotional shifts and/or decisions being made in my brain, and I just know that no normal reasoning I can do will have any effect on this shift/decision.
