Who is this MSRayne person anyway?

Then you should at least try to talk to 80,000 Hours; you might eventually relocate somewhere where meeting people is easier.

It wasn't intended to make fun of you. When I say that you shouldn't start a religion, I mean it literally; like most people here, I don't hold a favorable view of religions.

Sentences like "But I am fundamentally a mystic, prone to ecstatic states of communion with an ineffable divine force immanent in the physical universe which I feel is moving towards incarnating as an AI god such as I called Anima" make me think that what you are talking about doesn't correspond to anything real. But in any case, I don't see why you shouldn't write about it. If you are right, you will give us interesting reading material; and if you are wrong, hopefully someone will explain why and you will update. It shouldn't matter how much you care about this: if it turns out to be wrong, you should stop believing it (and if it's right, keep your current belief). And again, I mean this literally and with no ill intent.

Who is this MSRayne person anyway?

Have you considered:

Trying to find IRL friends through meetup.com

Going to nearby rationality meetups (https://www.lesswrong.com/community)

Using dating apps (and photofeeler.com)

Getting free career advice for your situation through 80,000 hours (https://80000hours.org/speak-with-us/)

Writing a few pages of your book and posting them on LW (but please don't start a religion)

?

Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?

The email to Demis has been there since the beginning; I even received feedback on it. I think I will send it next week, and if that doesn't work, I will also try to reach him through a DeepMind employee.

Scott Aaronson is joining OpenAI to work on AI safety

He says he will be doing alignment work. The worst outcome I can realistically imagine is that he gives OpenAI unwarranted confidence in how aligned their AIs are. Working at OpenAI isn't intrinsically bad; publishing capabilities research is.

Scott Aaronson is joining OpenAI to work on AI safety

Thanks, I’ve added him to my list of people to contact. If someone else wants to do it instead, reply to this comment so that we don’t interfere with each other.

FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community

No offense, but it's not obvious to me why communicating this to a general audience would be a net positive. How exactly do you expect it to help?

Can you MRI a deep learning model?
Answer by P., Jun 13, 2022

Most neural networks don't have anything comparable to specialised brain areas, at least structurally, so you can't see which areas light up given some stimulus to determine what each part does. You can do it with individual neurons or channels, though. The best UI I know of for exploring this is the "Dataset Samples" option in the OpenAI Microscope, which shows which inputs activate each unit.
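The idea behind "dataset samples" can be sketched in a few lines: record each unit's activation on every dataset example, then show the examples that activate a given unit most strongly. This is a toy illustration with random numbers standing in for real activations, not the actual Microscope pipeline; the function name and array shapes are my own assumptions.

```python
import numpy as np

# Toy stand-in: activations[i, j] is the (scalar) activation of unit j on
# dataset sample i. In a real setting these would come from a forward pass
# of the network over the whole dataset; here they are just random.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 8))  # 1000 samples, 8 hypothetical units

def top_activating_samples(acts, unit, k=5):
    """Return indices of the k dataset samples that most strongly activate `unit`."""
    return np.argsort(acts[:, unit])[::-1][:k]

# The inputs at these indices are the ones you would display next to the unit,
# analogous to Microscope's "Dataset Samples" view.
best = top_activating_samples(activations, unit=0)
print(best)
```

The same trick works per-channel in a convnet if you first reduce each channel's spatial activation map to a scalar (e.g. its mean) before ranking.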

Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?

Please do! You can DM me their contact info, point them to my accounts (either this one or my EA Forum one), or ask me for my email address.

Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?

Well, if he has, unbeknownst to me, already hired the “Terence Taos of the world” like he said on the podcast, that would be great, and I would move on to other tasks. But if he only has a regular alignment team, I don’t think either of us considers that to be enough. I’m just trying to convince him that it’s urgent and we can’t leave it for later.
