Registration and access to the lessons are completely free. Where do you see a paywall?
Hi, full-time content developer at RAISE here.
The overview page you are referring to (is it this one?) contains just some examples of subjects that we are working on.
1. One of the main goals is making a complete map of what is out there regarding AI Safety, and then recursively creating explanations for the concepts it contains. That could fit multiple audiences, depending on how deep we are able to go. We have started doing that with IRL and IDA. We are also trying a bottom-up approach with the prerequisite course, because why not.
2. Almost the same as readin... (read more)
After reading this I feel that how one should deal with anthropics strictly depends on goals. I'm not sure exactly which cognitive algorithm does the correct thing in general, but it seems that sometimes it reduces to "standard" probabilities and sometimes not. May I ask what UDT says about all of this, exactly?
Suppose you're rushing an urgent message back to the general of your army, and you fall into a deep hole. Down here, conveniently, there's a lever that can create a duplicate of you outside the hole. You can also break
But suppose that we were discussing something of which there were both sensible and crazy interpretations - held by different people. So:
group A consistently makes and defends sensible claim A1
group B consistently makes and defends crazy claim B1
and maybe even:
group C consistently makes crazy claim B1, but when challenged on it, consistently retreats to defending A1
I may be missing something but it seems to me that:
I have only read a small fraction of Yudkowsky's sequences (I printed the 1800 pages two days ago and have only read about 50), so maybe I think I am discussing interesting stuff where in reality EY has already discussed it at length.
Mostly this. Other things too, but they are all mostly caused by this one. I am one of the few who commented on one of your posts with links to some of his writings for exactly this reason. While I'm guilty of not having given you any elaborate feedback and of downvoting that post, I still think you need to catch up w... (read more)
Fake Selfishness and Fake Morality
Ah! I independently invented this strategy some months ago, and amazingly it doesn't work for me, simply because I'm somehow capable of remaining in the "do nothing" state for literally days. However, I thought it was a brilliant idea when I came up with it, and I still think it is; I would be surprised if it didn't work for a lot of people.
This post made a lot of things click for me. Also it made me realize I am one of those with an "overdeveloped" Prune filter compared to the Babble filter. How could I not notice this? I knew something was wrong all along, but I couldn't pin down what, because I wasn't Babbling enough. I've gotta Babble more. Noted.
Extremely important post in my opinion. The central idea seems true to me. I would like to see if someone has (even anecdotal) evidence for the opposite.
Probably you should have simply said something similar to "increasing portions of physical space have diminishing marginal returns to humans".
Uhm. That makes sense. I guess I was operating under the definition of risk aversion that makes people give up risky bets just because the alternative is a less risky bet, even if that actually translates into less absolute expected utility compared to the risky one. As far as I know, that's the most common meaning of risk aversion. Isn't there another term to disambiguate between concave utility functions and straightforward irrationality?
I'm not sure it can be assumed that the deal is profitable for both parties. The way I understand risk aversion is that it's a bug, not a feature; humans would be better off if they weren't risk averse (they should self-modify to be risk neutral if and when possible, in order to be better at fulfilling their own values).
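The distinction being discussed here can be sketched numerically. A minimal example, with hypothetical numbers: a risk-neutral agent (linear utility) takes any positive-expected-value bet, while an agent with a concave utility function (log utility is the standard example) can decline the same bet if it is large relative to their wealth.

```python
import math

# Hypothetical bet: 50% chance of winning 100, 50% chance of losing 90.
# Expected value is +5, so the bet is profitable in expectation.
p_win, gain, loss = 0.5, 100.0, 90.0
wealth = 100.0

# Risk-neutral agent: compares expected wealth directly.
ev_take = p_win * (wealth + gain) + (1 - p_win) * (wealth - loss)
ev_skip = wealth
risk_neutral_takes = ev_take > ev_skip  # takes the bet

# Concave (log) utility: compares expected *utility* of wealth instead.
eu_take = p_win * math.log(wealth + gain) + (1 - p_win) * math.log(wealth - loss)
eu_skip = math.log(wealth)
log_utility_takes = eu_take > eu_skip  # declines the bet

print(risk_neutral_takes, log_utility_takes)
```

Whether declining here counts as a "bug" depends on whether the concave utility is taken as the agent's true values or as an artifact the agent would self-modify away.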
I'm not sure how to put this. One reason that comes to mind for having it weekly is that threads seem to get "old" very quickly now. For example, it seems to me that a good percentage of the unanswered questions in the Stupid Questions thread go unanswered because people don't see them, not because people don't know the answers. (Speaking of which, I haven't seen that thread get reposted in some months, or am I missing something?)
May I suggest a period of 15 days?
Something about getting social feedback feels a lot more powerful (to me) and helps me move past it quicker than just writing it down.
I second this.
I really like the idea, but what are the limits? Can one just spit out random, speculative opinions? Can one come and just unironically state "I think Trump being president is evidence that the aliens are among us" as long as they sincerely suspect the correlation?