I used to implicitly believe that when I have a new idea for a creative(/creative-adjacent) project, all else being equal, I should add it to the end of my to-do list (FIFO). I now explicitly believe the opposite: that the fresher an idea is, the sooner I should get started making it a reality (LIFO). This way:
The downsides are:
Thoughts on this would be appreciated.
Something low-complexity, repeatable, non-addictive, non-time-consuming.
I think a lot of people have a habit like that, and it's different things for different people.
I think it's meditation.
(Like, from what I hear you're not wrong, but . . . y'know, Beware.)
If anyone reading this wants me to build them a custom D&D.Sci scenario (or something similar) to use as a test task, they should DM me. If the relevant org is doing something I judge to be Interesting and Important and Not Actively Harmful - and if they're okay with me releasing it to the public internet after they're done using it to eval candidates - I'll make it for free.
There are many games already, made for many reasons: insofar as this could work, it almost certainly already has.
Strengthens neural circuits involved in developing/maintaining positive habits
That's any game where grinding makes your character(s) stronger, or where progression is gated behind the player learning new skills. (I'm pretty sure Pokemon did exactly this for me as a child.)
Build any sort of positive habits that transfer to real life decision making
That's any strategy game. (I'm thinking particularly of XCOM:EU, with its famously 'unfair' - i.e. not-rigged-in-the-player's-favor - hit probabilities.)
I do think that there are untapped possibilities in this space - I wouldn't have made all those educational games if I didn't - but what you're describing as possibly-impossible seems pretty mundane and familiar to me. (Kudos for considering the possibility in the first place, though.)
I think you can address >95% of this problem >95% of the time with the strategy "spoiler-tag and content-warn appropriately, then just say whatever".
Is there value in seeking out and confronting these limits,
Yes.
or should we exercise caution in our pursuit of knowledge?
Yes.
. . . to be less flippant: I think there's an awkward kind of balance to be struck around the facts that
A) Most ideas which feel like they 'should' be dangerous aren't[1].
B) "This information is dangerous" is a tell for would-be tyrants (and/or people just making kinda bad decisions out of intellectual laziness and fear of awkwardness).
but C) Basilisks aren't not real, and people who grok A) and B) then have to work around the temptation to round it off to "knowledge isn't dangerous, ever, under any circumstance" or at least "we should all pretend super hard that knowledge can't be dangerous".
D) Some information - "here's a step-by-step guide to engineering the next pandemic!" - is legitimately bad to have spread around even if it doesn't harm the individual who knows it. (LWers distinguish between harmful-to-holder vs harmful-to-society with "infohazard" vs "exfohazard".)
and E) It's super difficult to predict what ideas will end up being a random person's kryptonite. (Learning about factory farming as a child was not good for my mental health.)
I shouldn't be trusted with language right now.
I might be reading too much into this, but it sounds like you're going through some stuff right now. The sensible/responsible/socially-scripted thing to say is "you should get some professional counseling about this". The thing I actually want to say is "you should post about whatever's bothering you on the appropriate 4chan board; being on that site is implicit consent for exposure to potential basilisks; I guarantee they've seen worse and weirder". On reflection I tentatively endorse both of these suggestions, though I recognize they both have drawbacks.
For what it's worth, I'd bet large sums at long odds that whatever you're currently thinking about falls into this category.
In Thrawn's experience there are three ingredients to greatness
I think the way tenses are handled in the early part of this section is distractingly weird. (I can't tell how petty I'm being here.) (I'd be inclined to fix the problem by italicizing the parts Thrawn is thinking, and changing "Thrawn wasn't" to "I'm not".)
If you go up to someone powerful and ask for something, then there's 60% chance you lose nothing and a 1% chance you win big.
. . . what happens in the remaining 39%?
Also (outrageously pedantic stylistic point even by my standards incoming) it's strange to follow up "60% chance" with "a 1% chance": it should either be "n% chance" both times or "an n% chance" both times.
succomb
succumb
"Tatooine. They're on Tatooine,"
Was this deviation from canon intentional? (I remember in the movie she picks a different planet with a similar-sounding name.)
brainsteam
Can't tell if this is a typo'd "brainstem".
I really wish I could simultaneously strong-upvote and strong-downvote the "agree" thing for this reply. I think most of the description is horoscope-y flattery, but it doesn't have zero correlation with reality: I do think lsusr's writing is ninety-something-th percentile for
and at least eighty-something-th percentile for
while afaict there's nothing in the description that's the opposite of the truth.
(I also think there's something interesting about how the most meaningful/distinguishing lines in the description are the ones which could be most easily rephrased as criticisms. Does "describe X as neutrally as possible" or "describe the upsides and downsides of X" produce better LLM results than "describe X"?)