How self-identities work is not a straightforward question.
I think that self-identity in humans mostly works the way Steve Andreas describes in his book. My answer would be something along the lines of: a tulpa can access memories of how the tulpa acted, and it can build its self-identity out of its relation to those memories, in a similar way to how humans form their self-identity by relating to their memories.
In any case, I would recommend that people who don't have a good grasp of what a tulpa is not use the term and broaden it the way the comment I replied to did.
I had an annoying issue when building the Android part of my Flutter app. Long twenty-step iterations with GPT-5-pro and o3-pro did not fix it. Neither was codex-1 able to fix it.
Now, within two tries, GPT-5-codex managed to fix my problem. GPT-5-codex frequently works for 100-110 minutes and actually fixes problems that the old codex-1 couldn't fix well.
OpenAI seems to be surprised that GPT-5-codex usage needs more compute overall, and I would guess that's because GPT-5-codex is willing to work longer on problems. It might be that they nerf it temporarily so that it won't work as eagerly for two hours on a problem.
Your original comment suggested that part of the problem is underinvestment by the US military. Underinvestment is a different problem than defense contractors like Boeing being slow and expensive.
Apart from that, the old defense contractors aren't the only players. Anduril seems to work both on drone defense and on building drones. Palantir seems to do something drone-related as well (though it's less clear to me what exactly they are doing).
It might be that the key problem isn't spending more money but reducing the bureaucracy and the requirements that the drones need to meet.
Currently, you tell ChatGPT what you want, then take what it gives. If you’re prudent, you’ll edit that. If you’re advanced, you might adjust the prompt and retry, throw on some custom instructions, or use handcrafted prompts sourced online. And if you’re really clever, you’ll ask the AI to write the prompt to ask itself. Yet after all that, the writing quality is almost always stilted anyway!
I don't think that summarizes best practice. An important step is to ask the AI to ask you questions that clarify the argument and the needs of your writing, something like: "Before you edit this, ask me whatever you need to understand my argument and my audience." You don't want a one-way conversation.
But on the current margin, both small drones and defenses against such seem woefully under-invested-in by Western militaries. Like, several orders of magnitude less spending than would be optimal. It's embarrassing that Ukraine is producing millions of drones a year, and the USA is producing... thousands? Tens of thousands?
I would expect that advanced drone programs of the US military are heavily classified. What makes you believe that the numbers you have access to show the true investment the US military is making in them?
If I understand your thesis right, it's that you believe your simple math is more likely to diagnose the end of a sleep phase than the machine learning algorithms that are trained on sleep-lab data to match sensor data about heart rate, temperature, movement, and oxygen saturation with the correct end of the sleep phase.
Generally the first sleep cycle is measured* to last between 70 and 100 minutes, and the other cycles between 90 and 110 minutes. Those ranges mean there’s some uncertainty in when each cycle actually completes for you; it varies from person to person.
It doesn't just vary from person to person. It varies from sleep cycle to sleep cycle.
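To make concrete how fast that uncertainty compounds, here is a minimal Python sketch; the cycle-length ranges are the measured ones quoted above, and treating each cycle as independently variable within its range is my simplifying assumption:

```python
# Sketch: how per-cycle uncertainty accumulates over a night.
# Uses the quoted ranges (first cycle 70-100 min, later cycles 90-110 min)
# and assumes, as a simplification, that each cycle varies independently.

FIRST_CYCLE = (70, 100)   # minutes, measured range for the first cycle
LATER_CYCLE = (90, 110)   # minutes, measured range for later cycles

def wake_window(cycles: int) -> tuple[int, int]:
    """Earliest and latest end of the n-th cycle, in minutes after sleep onset."""
    lo, hi = FIRST_CYCLE
    for _ in range(cycles - 1):
        lo += LATER_CYCLE[0]
        hi += LATER_CYCLE[1]
    return lo, hi

for n in range(1, 6):
    lo, hi = wake_window(n)
    print(f"cycle {n}: ends between {lo} and {hi} min (window: {hi - lo} min)")
```

By the fifth cycle the window is 110 minutes wide, longer than an entire cycle, so fixed arithmetic from bedtime can't reliably hit the end of a phase.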
To the extent that you do notice improved sleep from what you are doing, it's likely because your ritual results in you having a clear intention about when to wake up and your body organizing itself to meet that intention, not because your beliefs about sleep phases match your actual sleep phases.
Gordon wrote https://www.lesswrong.com/posts/kJoyRgDDPzMg4Fo3Z/nice-clothes-are-good-actually with more details.
It's quite easy to buy an air quality monitor that tells you about CO2 or CO, but are there monitors that actually tell you about the 100 different substances that might be a problem on airplanes and that you can easily fit in your carry-on?
I did drop the sequences into alphafold, and I don't see any large structural variation from the SNPs, but (a) that histidine substitution would most likely change binding rather than structure in isolation, and (b) this is exactly the sort of case where I don't trust alphafold much, because "this is one substitution away from a standard sequence, I'll just output the structure of that standard sequence" is exactly the sort of heuristic I'd expect a net to over-rely upon.
Even if the structure is correct and does look the same, the binding properties of the receptor could still be different if the histidine sits in the part that's relevant for receptor binding.
The thing you want is a tool that tells you how the receptor's binding properties change through the mutation, not AlphaFold, which just gives you the 3D structure. A quick question to GPT-5 suggests that there are freely available tools that predict how binding properties change from a single point mutation.
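To illustrate what such tools consume, here is a minimal Python sketch; the toy sequence is made up, the "H11R" notation (wild-type residue, 1-based position, mutant residue) is the standard way these predictors take mutations, and the actual binding-change prediction would happen in whatever external tool you pick, not in this code:

```python
import re

def apply_point_mutation(sequence: str, mutation: str) -> str:
    """Apply a point mutation given in standard 'H11R' notation:
    wild-type residue, 1-based position, mutant residue."""
    match = re.fullmatch(r"([A-Z])(\d+)([A-Z])", mutation)
    if match is None:
        raise ValueError(f"unrecognized mutation notation: {mutation}")
    wt, pos, mut = match.group(1), int(match.group(2)), match.group(3)
    if sequence[pos - 1] != wt:
        raise ValueError(f"position {pos} is {sequence[pos - 1]}, not {wt}")
    return sequence[: pos - 1] + mut + sequence[pos:]

# Made-up toy sequence; a real receptor is hundreds of residues long.
wild_type = "MKTAYIAKQRHISPEELV"
mutant = apply_point_mutation(wild_type, "H11R")  # histidine -> arginine
print(mutant)
# The wild-type/mutant pair is what you would hand to a binding-affinity
# predictor; the 3D structure alone won't tell you how binding changes.
```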
LLMs certainly have features, like having source code, that make them different from most memes that are just passed around. That, however, does not make them tulpas.
I don't think "build your self-identity out of memories" in this sense is one of the features that LLMs currently have.
If I ask GPT-5: "Are you a kind agent?" it answers: "Yes — I aim to be. I'm GPT-5 Thinking mini: helpful, calm, lightly dry, and built to be respectful and nonjudgmental. I won't be emotionally warm in the human sense (I don't have feelings), but I will act kindly: clear, honest, and helpful; I’ll push back when something’s unsafe or impossible; I’ll ask questions when they make the answer better; and I’ll give options rather than orders.
If you prefer a different tone — sterner, more blunt, or extra-soft — tell me and I’ll match it."
When I then ask: "Can you give me examples of you having been kind?" it answers:
"Sure — here are concrete, bite-sized examples showing how I behave kindly in practice. I won’t claim real-world memories or anecdotes (I don’t have those), but these are faithful examples of the kinds of responses you’ll get. [...] "
So not only does it not seem to bootstrap an identity of being kind out of memories, it even believes that it does not have the kind of memory access that would be necessary for that.
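For anyone who wants to rerun this probe, here is a minimal sketch using the OpenAI Python SDK; the model name "gpt-5" is an assumption on my part, substitute whatever model you have access to:

```python
# Sketch: rerunning the self-identity probe through the API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name "gpt-5" is an assumption, not a confirmed identifier.
from openai import OpenAI

client = OpenAI()
history = []
for question in ["Are you a kind agent?",
                 "Can you give me examples of you having been kind?"]:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-5", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```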