Many of the ideas that most alienated me from normal people are pretty mundane here.
(Years ago, some normal person asked me what I thought about what would happen in the future, in light of overpopulation and the climate crisis. When my response involved "AI-based catastrophe," "problems capitalism is or isn't adequate to solving," and geoengineering, they straight up checked out of that conversation and asked somebody else.)
So what am I thinking about that might seem a little strange even here...
I've apparently been putting a whole lot of thought in the last couple of months into the extent to which idealization (or the pairing of idealization/demonization, which are probably two sides of the same coin given how quickly one flips into the other) is utterly ubiquitous and seems to be extremely bad for good governance. Indirectly, it strongly incentivizes those in power to develop worse epistemics (cover things up, don't ask questions, be easy for others to model) no matter how good their epistemics originally were. Now that I've started looking for it, I keep seeing evidence everywhere.
I've gently-but-seriously considered trying out a process loosely based on the one described in this crazy notebooking write-up.
One belief I've had for a while, which might be slightly strange in this group, is that I'm not bothering with cryonics. It seems to break down into two factors... 1) I believe almost any post-Singularity "humans" will be radically different, to the point of not identifying or being identifiable as the same thing, and even if "I" do make it to the end, I'll quickly modify myself into something neuroticism-free that I can't internally identify with (whether by my own hand or by AI-imposed values, the outcome is the same). Therefore, preserving my exact mental configuration probably doesn't matter too much. 2) A weird "bug" of mine: I prioritize continuity only of "predictable external identity," while largely deprioritizing, or treating as free variables, most of my internal continuity that's independent from that (in other words... I allow myself to make radical mental shifts, so long as I expect to be able to behave similarly, continue to serve my future self, keep my allies, and keep my word).
Probably the single highest "craziness index" idea I've mulled over in the past year is whether I wanted to track whether there's a correspondence between meditational vibrations (and where I "feel" them to be; I have a sense of "location" with them) and Brodmann areas (or a similar location-numbering system, so that I can leave myself partially blind when initially assigning them). It's the kind of thing where I expect the original framing to fail, but I also expect to learn something interesting in the process. I settled on "not worth the effort," though.
Most of what I'm thinking about is probably merely eccentric/special-interest... biology stuff, metaphorical correspondences between financial data and ideas from evolution or entropy, how I'm using intuitions/perceptions and getting better at communicating them clearly...
(Plus a fairly typical human baseline: social, emotional, productivity, self-improvement, identity, future planning)