Unreal


Unreal20

Hm, you know I do buy that also. 

The task is much harder now, due to changing material circumstances, as you say. Modern culture has, in some sense, vaccinated itself against certain forms of wisdom and insight. 

We acknowledge this problem and are still making an effort to address it, using modern technology. I cannot claim we're 'anywhere close' to resolving this? We're just firmly GOING to try, and we believe we in particular have a comparative advantage, due to a very solid community of spiritual practitioners. We have AT LEAST managed to take a group of modern millennials + Gen-Zers (with all the foibles of this group, with their mental hang-ups and all -- I am one of them)... and successfully put them through a training system that 'unschools' their basic assumptions and provides them with the tools to personally investigate and answer questions like 'what is good' or 'how do I live' or 'what is going on here'. 

There's more to say, but I appreciate your engagement. This is helpful to hear. 

Unreal20

no anyone can visit! we have guests all the time. feel free to DM me if you want to ask more. or you can just go on the website and schedule a visit. 

Alex Flint is still here too, although he lives on neighboring land now. 

'directly addressing suffering' is a good description of what we're up to? 

Unreal20

if you have any interest in visiting MAPLE, lmk. (monasticacademy.org) 

Unreal10

wow thanks for trying to make this distinction here on LessWrong. admirable. 

i don't seem to have the patience to do this kind of thing here, but i'm glad someone is trying. 

Unreal-2-4

We have a significant comparative advantage over pretty much all of Western philosophy. I know this is a 'bold claim'. If you're curious, you can come visit the Monastic Academy in Vermont, since it seems best 'shown' rather than 'told'. But we also plan on releasing online content in the near future to communicate our worldview. 

We do see that all the previous efforts have perhaps never quite consistently and reliably succeeded, in both hemispheres. (Because, hell, we're here now.) But it is not fair to say they have never succeeded to any degree. There have been a number of significant successes in both hemispheres. We believe we're in a specific moment in history where there's more leverage than usual, and so there's opportunity. We understand that chances are slim and dim. 

We have been losing the thread to 'what is good' over the millennia. We don't need to reinvent the wheel on this; the answers have been around. The question now is whether the answers can be taught to technology, or whether technology can somehow be yoked to the good / ethical, in a way that scales sufficiently. 

Unreal-30

Thank you for pointing this out, as it is very important. 

The morality / ethics of the human beings matters a lot. But it seems to matter more than just a lot. If we get even a little thing wrong here, ...

But we're getting more than just a little wrong here, imo. Afaict most modern humans are terribly confused about morality / ethics. As you say, "what is even good?"

I've spoken with serious mathematicians who believe they might have a promising direction on the AI alignment problem. But they're also confused about what's good. That is not their realm of expertise. And math is not constrained by ethics; you can express a lot of things in math, wholesome and unwholesome. And the same is true of social dynamics, as you point out. 

This is why MAPLE exists, to help answer the question of what is good, and help people describe that in math. 

But in order to answer the question for REAL, we can't merely develop better social models. Because again, your point applies to THAT as well. Developing better social models does not guarantee 'better for humanity/the planet'. That is itself a technology, and it can be used either way. 

We start by answering "what is good" directly and "how to act in accord with what is good" directly. We find the true 'constraints' on our behaviors, physical and mental. The entanglement between reality/truth and ethics/goodness is a real thing, but 'intelligence' on its own has never realized it. 

Unreal10

Rationality seems to be missing an entire curriculum on "Eros" or True Desire.

I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught. 

I think maybe Rationality should be more open to sending people to different places for different trainings and stop trying to do everything on its own terms. 

It has been way better for me to learn how to enter/exit different frames and worldviews than to try to make everything fit into one worldview / frame. I think some Rationalists believe everything is supposed to fit into one frame, but Frames != The Truth. 

The world is extremely complex, and if we want to be good at meeting the world, we should be able to pick up and drop frames as needed, at will. 

Anyway, there are three main curricula: 

  1. Eros (Embodied Desire) 
  2. Intelligence (Rationality)
  3. Wisdom (Awakening) 

Maybe you guys should work on 2, but I don't think you are advantaged at 1 or 3. But you could give intros to 1 and 3. CFAR opened me up by introducing me to Focusing and Circling, but I took non-rationalist trainings for both of those. As well as many other things that ended up being important. 

Unreal9-14

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object-level claims, or to talk about the meta-presentation as though you and I were collaborating on figuring things out and how to communicate them.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going), then it would be kind of sick to ignore that, and we should actually treat this like a triage situation. Or at least a call to some kind of intervention. And it would be sick to treat this as though everything is normal, and you are sane, and I am sane, and we're just chatting about stuff and oh, isn't the weather nice today. 

LessWrong is the wrong place for this to happen. This kind of "prioritization" sanity does not flourish here. 

Not-sane people get stuck on LessWrong in order to stay not-sane because LW actually reinforces a kind of mental unwellness and does not provide good escape routes. 

If you're going to write stuff on LW, it might be better to write a journal about the various personal, lifestyle interventions you are making to get out of the personal, unwell hole you are in. A kind of way to track your progress, get accountability, and celebrate wins. 

Unreal44

Musings: 

COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment. 

I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas." 

Unreal30

I don't interpret that statement in the same way. 

You interpreted it as 'lied to the board about something material'. But to me, it might also mean 'wasn't forthcoming enough for us to trust him' or 'speaks in misleading ways (but not necessarily on purpose)', or it might even just be somewhat coded language for 'difficult to work with + we're tired of trying to work with him'. 

I don't know why you latch onto the interpretation that he definitely lied about something specific. 
