danbmil99

Putting aside the fact that OpenAI drama seems to always happen in a world-is-watching fishbowl, this feels very much like the pedestrian trope of genius CTO getting sidelined as the product succeeds and business people pushing business interests take control. On his own, Ilya can raise money for anything he wants, hire anyone he wants, and basically just have way more freedom than he does at OpenAI.
I do think there is a basic p/doom vs e/acc divide which has probably been there all along, but as the tech keeps accelerating it becomes more and more of a sticking point.
I suspect in the depths of their souls, SA and Brock and the rest...
Given the Zeitgeist of the moment, if he weren't a bit confrontational he would have a lot fewer readers.
I pretty much agree with your hypothesis. Each 'moment' of conscious experience is a distinct, unique -- something. Our subjective stream of consciousness is simply the most likely path through all possible spacetime states that lead to the 'present' -- sort of like a Feynman sum-of-paths integral.
Not sure how to fit quantum mechanics in there...
It sounds a lot like what we do when we write (as opposed to talk). I recall Kurt Vonnegut once saying something like (can't find a citation, sorry):
'The reason an author can sound intelligent is because they have the advantage of time. My brain is so slow, people have thought me stupid. But as a writer, I can think at my own speed.'
Think of it this way: how would it feel to chat with someone whose perception of time is 10X slower? Or 100X, or 1000X? Or imagine playing chess where your clock was running orders of magnitude faster than your opponent's.
Violent agreement! I was using the pronoun 'you' rhetorically.
Even if the chance of ASI apocalypse is only 5%, that is 5% multiplied by all possible human goodness, which is a big deal to our species in expectation.
The problem is that if you really believe (because EY and others are shouting it from the rooftops) that there is a ~100% chance we're all gonna die shortly, you are not going to be motivated to plan for the 50/50 or 10/90 scenarios. Once you acknowledge that you can't make a confident prediction on this matter, it is illogical to plan only for the minimal and maximal cases (we all die / everything is great). Those outcomes need no planning, so spending energy focusing on them is not optimal.
Absent hard data, as a Bayesian, shouldn't one start with a balanced set of priors over all the possible outcomes, then focus on the ones one may actually be able to influence?
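The argument above can be made concrete with a toy calculation. This is purely an illustrative sketch: the scenario names, priors, and "influence" weights below are made-up numbers, not estimates of anything. The point is only structural: under a spread-out prior, the expected payoff of planning effort concentrates on the middle scenarios, because the extreme outcomes have zero leverage by assumption.

```python
# Toy sketch (all numbers are made up for illustration):
# spread a prior over outcome scenarios, then weight each by how much
# planning could plausibly shift it. Extremes get zero leverage.

scenarios = {
    "everyone dies":       {"prior": 0.25, "influence": 0.0},
    "bad but survivable":  {"prior": 0.25, "influence": 0.3},
    "mixed outcome":       {"prior": 0.25, "influence": 0.5},
    "everything is great": {"prior": 0.25, "influence": 0.0},
}

# Expected value of planning effort per scenario = prior * leverage.
value_of_planning = {
    name: s["prior"] * s["influence"] for name, s in scenarios.items()
}

best = max(value_of_planning, key=value_of_planning.get)
print(best)  # the influenceable middle scenarios dominate
```

Whatever numbers you plug in, any scenario with zero leverage contributes zero expected value to planning, which is the whole point of the comment above.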
Beautiful piece. I am reminded of Jane Goodall's experience in the Gombe forest with chimpanzees. Early in her work she leaned towards idolizing the chimps' relatively peaceful coexistence, both within and between tribes. Then (spoiler) she witnessed a war for territory. She was shocked and dismayed that the creatures she had been living with, and had learned to appreciate and in some cases to love, were capable of such depraved, heartless infliction of suffering on fellow members of their own species. Worth a read, or TL;DR: https://en.wikipedia.org/wiki/Gombe_Chimpanzee_War
One thing I think we sometimes seem inclined to ignore if not forget, is that humans themselves exist along an axis of...