If your network activity becomes more critical, then the attractors disappear.
These words feel bad; would "moves toward criticality" work better?
I also did the sorts of worldbuilding exercises that I usually do when writing a novel. I spent time looking at maps of China, and using street-view to spend time going down roads.[10] (The township of Maxi, where much of the book is set, is a real place.) I generated random dates and checked the weather. I looked at budgets, salaries, import/export flows (especially GPUs), population densities, consumption trends, and other statistics, running the numbers to get a feel for how fast and how big various things are or would be.
Have you written elsewhere about this process?
Tokens cost money; it'd be a lot cheaper to post-train on the document, wouldn't it? How strongly would they want to keep this document private (if it's real)?
To an ML layman, it seems plausible that post-training on this document could improve the model's moral constitution. I'm thinking of prompt inoculation and emergent misalignment. But is that silly?
I am mostly uninterested in whether or not it's pejorative. I think it's descriptively accurate.
This discussion has implications for the validity of rationalism on its own terms, and for how others should relate to rationalism.
The question is about what-is-true, but the reason we're interested is what-is-good. This means we all have to be extra careful to keep our what-is-good boxes separate from our what-is-true boxes (I'm not accusing you of failing to do so).
I think that's what you're implying above: you're saying "I'm not calling you names, I'm actually thinking about this!", which is good. But what you said is dishonest.
It does have implications, and you are interested in them (for good reason).
Nevertheless, a worldview centered on preventing an imminent apocalypse is extremely easy to weaponize.
[...]
Cults are just religious sects that are new, horrible, or both.
My people have something called the Litany of Tarski, for just these situations. It is from one of our most ancient texts.
If [rationalism is a cult], I want to believe that [rationalism is a cult]. If [rationalism is not a cult], I want to believe that [rationalism is not a cult]. Let me not become attached to beliefs I do not want.
Should we look for a crux? I think I've got one.
How does rationalism affect one's values? If you really wanted me to be a rationalist, what might cause the most friction in converting me?
A large confounding factor in observations of rats is that the modal LW user is a libertarian, tech-adjacent American. So it might be difficult to distinguish rat from libertech.
Do you think those clusters of traits are distinguishable from each other? Or is libertarianism (for example) a rationalist value?
How would the values and behaviour of a 35-year-old Brazilian schoolteacher change, compared with those of a 22-year-old English CS major, if they both started reading the Sequences and found them compelling?
What I'm pointing at:
Take a group of people with similar demographics and they'll already share a chunk of values to start with. And if you hang out with a bunch of people long enough, you'll converge on similar beliefs: by sharing sources of information, you'll end up with pretty similar perspectives on the world.
You think (?) that the movement prescribes a narrow set of values.
It does prescribe being effective (instrumental rationality), for which having accurate beliefs (epistemic rationality) is useful. The convergence of beliefs and perspectives is just what happens when any number of people associate closely.
The crux being: my "rationalist" draws a circle around epistemic and instrumental rationality, whereas your "rationalist" also includes a larger chunk of the common values and beliefs of rationalist people.
Thanks. "Dissolution" appears twice, once before and once after "integration" and "vipassana sickness". Which definition is better?
Puzzle for you: Who thinks the latest ads for Gemini are good marketing and why?
AI-generated meditating capybara: "Breathe in (email summarisations)... breathe out (smart contextual replies)"
It summarises emails. It's not exciting, it's not technically impressive, and it isn't super useful. It's also pretty tone-deaf: a lot of people feel antipathy toward AI, and inserting it into human communication is the perfect way to aggravate that feeling.
"Create some crazy fruit creatures!"
Yes? And? I can only see this as directed at children. If so, where's the... fun part? There's nothing to engage with, no game loop. They'd get bored of it within minutes.
You want to show off how impressive your product is. People are saying there's an AI bubble. So you REALLY want interesting, fun, novel, or useful applications for your tech.
It's Google! They know about ads! They've got lots of money! They CAN come up with interesting, fun, novel, or useful applications for their tech.
Why didn't they?!
Think of problems the way Lean does: a problem state consists of some hypotheses/assumptions, a goal, and tactics we can apply to the hypotheses to infer new statements. We seek to infer a statement with the type of the goal.
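A minimal Lean 4 sketch of that picture (the propositions and hypothesis names are invented for illustration):

```lean
-- Problem state: hypotheses h1, h2, hp in context; goal r.
-- Each tactic transforms the state until the goal is closed.
example (p q r : Prop) (h1 : p → q) (h2 : q → r) (hp : p) : r := by
  apply h2   -- goal becomes q
  apply h1   -- goal becomes p
  exact hp   -- hp has exactly the goal's type; done
```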
Some problems only require making the right local step at each successive problem state; that's what makes them easy, in some sense. Hard problems require determining (something about) the path before useful progress can be made. I think this is intuitive; if not, the sketch below gives a minimal pair of examples.
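A minimal Lean 4 illustration (relying only on core `Nat`, where addition recurses on its second argument):

```lean
-- Easy: one obvious local step (definitional equality) closes it.
example (n : Nat) : n + 0 = n := rfl

-- Harder: no local step helps until you commit to a path,
-- here induction on n, because addition recurses on the right.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```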
Complication: I have variable mental clarity and energy levels.
Completing a task well first requires understanding how the task breaks down into specific actions; the follow-through then only requires executing the local steps on that path. The first part is "solve a hard problem" and requires good mental clarity. The second requires cognitive work.
Any concrete action I take ends up being just a local step in the immediate context's problem state: it has no persistent effect on my ability to assess and resolve problem states, and it diminishes my reserves of energy. Feel the difference between completing a task and practising a technique: I want persistent effects that help me respond to challenges, and the work capacity to benefit from that ability.
Challenges like "Learn to use a new mode of public transport in an unfamiliar city", "Prove Cauchy's theorem for finite groups", and "How to pass this exam" are all difficult for the same reasons.
How to solve problems (read: do anything substantial) when clarity and capacity are variable/limited?
I have variable levels of cognitive function that I can't predict. How can I learn/study, maintain routine, and make plans?
How do I improve my cognitive work capacity?
Blindsight - Peter Watts