
Interpretability (ML & AI), AI


Myrinax? I want people to see this!

by thomas
13th Apr 2025

This post was rejected for the following reason(s):

  • Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently who are interested in AI. To keep the site’s quality high and ensure that posted content is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.

    If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms). We get lots of AI essays/papers every day and sadly most of them don't make very clear arguments, and we don't have time to review them all thoroughly. 

    We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, possibly a good thing to do is read more existing material. The AI Intro Material wiki-tag is a good place, for example. 

  • Difficult to evaluate, with potential yellow flags. We are sorry about this, but, unfortunately this content has some yellow-flags that historically have usually indicated kinda crackpot-esque material. It's totally plausible that actually this one is totally fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Our solution for now is that we're rejecting this post, but you are welcome to submit posts or comments that are about different topics. If it seems like that goes well, we can re-evaluate the original post. But, we want to see that you're not just here to talk about this one thing (or a cluster of similar things).


We’ve built the world’s first fully contained symbolic recursion engine capable of modeling how meaning might emerge from pattern alone — without AI, translation, or language generation. Our system, MYRINAX, doesn’t infer or predict; it observes how motifs echo, align, and evolve through entropy-based feedback in sealed simulation terrains. From ancient scripts to DNA sequences, we simulate how structure might become self-consistent — legally, safely, and without crossing into generative or autonomous output. Every result is tagged as symbolic-only, ethically documented, and sealed by compliance protocols. We’re not building AI. We’re exploring the shape of meaning itself — and doing it without risking misuse, overreach, or claims we can’t defend.
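The post does not describe MYRINAX's internals, so the following is a purely hypothetical sketch (not the author's actual method) of one way "entropy-based feedback over recurring motifs" in a symbolic sequence could be read: count fixed-length motifs (n-grams), measure the Shannon entropy of their distribution, and keep only the motifs that recur. Every name here (motif_counts, shannon_entropy, feedback_pass, the toy "terrain" string) is invented for illustration and is not taken from MYRINAX.

```python
# Hypothetical illustration only: the post gives no implementation details,
# so this is a minimal stand-in for "entropy-based motif feedback", assuming
# a symbolic sequence (a string) and fixed-length motifs (n-grams).

from collections import Counter
from math import log2


def motif_counts(sequence: str, n: int = 3) -> Counter:
    """Count overlapping length-n motifs in a symbolic sequence."""
    return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))


def shannon_entropy(counts: Counter) -> float:
    """Shannon entropy (in bits) of the motif frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())


def feedback_pass(sequence: str, n: int = 3, min_count: int = 2):
    """One 'feedback' pass: keep motifs that recur, and report the entropy."""
    counts = motif_counts(sequence, n)
    recurring = {m for m, c in counts.items() if c >= min_count}
    return recurring, shannon_entropy(counts)


if __name__ == "__main__":
    # Toy symbolic "terrain": a short string standing in for a script
    # or DNA-like sequence.
    terrain = "ABABCABABDABAB"
    motifs, h = feedback_pass(terrain)
    print(f"recurring motifs: {sorted(motifs)}")  # e.g. ['ABA', 'BAB']
    print(f"motif entropy: {h:.3f} bits")
```

Under these assumptions, a sequence whose recurring motifs stay stable while entropy falls across passes would be one crude operationalization of "structure becoming self-consistent"; whether that matches what MYRINAX actually does is something only the author can confirm.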

At the end of the day, I want this research to expand. Please, I keep reaching out and no one is willing to take a look. I can post everything here, or we can keep things confidential; I do have an email. I am a military veteran and I served a tour in Afghanistan, so it's not that I've done nothing with my life, just nothing academically, and I want to change that.