I grew up in Arizona and live here again now. It has had a good system of open enrollment for schools for a long time, meaning that you can enroll your kid in a school in another district if it has space (though you'd need to drive them, at least to a nearby school bus stop). And there are lots of charter schools here, for which district boundaries don't matter. So I would expect the impact on housing prices to be minimal.
Godzilla strategies now in action: https://simonwillison.net/2022/Sep/12/prompt-injection/#more-ai :)
No super detailed references that touch on exactly what you mention here, but https://transformer-circuits.pub/2021/framework/index.html does deal with some similar concepts with slightly different terminology. I'm sure you've seen it, though.
Is the ordering intended to reflect your personal opinions, or the opinions of people around you/society as a whole, or some objective view? Because I'm having a hard time correlating the order with anything in my world model.
This is the trippiest thing I've read here in a while: congratulations!
If you'd like to get some more concrete feedback from the community here, I'd recommend phrasing your ideas more precisely by using some common mathematical terminology, e.g. talking about sets, sequences, etc. Working out a small example with numbers (rather than just words) will make things easier to understand for other people as well.
My mental model here is something like the following:
Slightly rewritten, your point above is that:
The training data is all written by authors in Context X. What we want is text written by someone who is from Context Y. Not the text which someone in Context X imagines someone in Context Y would write but the text which someone in Context Y would actually write.
After all, those of us writing in Context X don't actually know what someone in Context Y would write; that's why simulating/predicting someone in Context Y is useful in the first place.
If I understand the above correctly, the difference you're referring to is the difference between:
Similar things could be done re: the "stable, research-friendly environment".
The internal interpretation is not something we can specify directly, but I believe sufficient prompting would be able to get close enough. Is that the part you disagree with?
Alas, querying counterfactual worlds is fundamentally not a thing one can do simply by prompting GPT.
Citation needed? There's plenty of fiction to train on, and those works are set in counterfactual worlds. Similarly, historical, mistaken, etc. texts will not be talking about the Current True World. Sure, right now the prompting required is a little janky, e.g.:
But this should improve with model size, improved prompting approaches, or other techniques like creating optimized virtual prompt tokens.
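For concreteness, "optimized virtual prompt tokens" refers to learning continuous vectors that are prepended to the input embeddings and tuned by gradient steps, rather than searching over discrete prompt words. Below is a toy, self-contained sketch of that idea; the 2-d embeddings, the scoring function standing in for the frozen model, and the finite-difference update are all illustrative stand-ins, not a real LM or any particular library's API.

```python
# Toy sketch of soft prompt tuning: learn a "virtual token" vector,
# prepended to frozen token embeddings, to maximize a fixed score.
import random

random.seed(0)
DIM = 2

# Frozen "model": token embeddings plus a scorer we pretend measures
# how well the model produces the behaviour we want.
embeddings = {"hello": [1.0, 0.0], "world": [0.0, 1.0]}
target = [0.5, 0.5]  # direction we want the averaged input to point in

def loss(virtual, tokens):
    # Squared distance between the mean input vector and the target;
    # lower is better. Only `virtual` is trainable.
    vectors = [virtual] + tokens
    mean = [sum(v[i] for v in vectors) / len(vectors) for i in range(DIM)]
    return sum((mean[i] - target[i]) ** 2 for i in range(DIM))

tokens = [embeddings["hello"], embeddings["world"]]  # frozen
virtual = [random.uniform(-1, 1) for _ in range(DIM)]  # learnable

# Crude finite-difference gradient descent on the virtual token only.
lr, eps = 0.5, 1e-4
for _ in range(200):
    grad = []
    for i in range(DIM):
        bumped = virtual[:]
        bumped[i] += eps
        grad.append((loss(bumped, tokens) - loss(virtual, tokens)) / eps)
    virtual = [virtual[i] - lr * grad[i] for i in range(DIM)]

print(round(loss(virtual, tokens), 4))  # loss should be near 0 after tuning
```

The point of the sketch: no discrete prompt word needs to exist that produces the desired behaviour; the optimized vector can sit anywhere in embedding space, which is why this can outperform hand-written prompts.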
And also, if you're going to be asking the model for something far outside its training distribution like "a post from a researcher in 2050", why not instead ask for "a post from a researcher who's been working in a stable, research-friendly environment for 30 years"?
Please consider aggregating these into a sequence, so it's easier to find the 1/2 post from this one, and vice versa.
Sounds similar to what this book claimed about some mental illnesses being memetic in certain ways: https://astralcodexten.substack.com/p/book-review-crazy-like-us