What's the worst-case outcome, in your view?
Edit: to be clear, I believe e.g. large-scale industrial disasters, billions of dollars' worth of crypto theft, or widespread privacy violations are all plausible.
But, all the same, I think it is helpful to concretize exactly what the worst personal outcome could be for yourself, since uncertainty is itself a significant contributor to fear.
It starts at sudden death of yourself and everyone else, the destruction of earth and extinction of all biological life, and a sphere of darkness eating nearby stars, and gets worse from there.
I hope 'general sense of doom' is not the reason why the OP is popular. Mythos does not significantly accelerate my timelines, and I do not see why it should significantly boost anyone's prior estimates (within 2026).
An unelected oligarchy hoards access for themselves and uses it to make decisions about my life without my input. "These tools are too powerful for people like you"
Thank you for asking :)
I didn't expect this to blow up, but I guess here we are.
The standard AI fear seems possible: we see another abrupt increase in AI capabilities this year, recursive self-improvement happens before anyone is ready for it, and we get paperclipped.
I'd previously assumed that LLMs would plateau somewhere due to architectural inefficiencies, such as the fact that they run concepts through language, creating overhead compared with processing concepts directly. Mythos is an update away from that, since AFAIK its architecture appears to be nothing more than a scaffolded LLM. Regardless of architecture, I did not expect Mythos-level capabilities to appear so quickly; I expected incremental improvements rather than a model that cleanly surpasses all existing ones on almost all tested benchmarks while also finding vulnerabilities in extensively audited software like ffmpeg and OpenBSD.
Regarding personal outcomes: I currently evaluate that all of the positive utility in my life occurs in the future, and that makes me anxious about having my plans cut off by transformative AI. I'm stating this vaguely because my personal circumstances are rather complicated and I don't know how much I want to publicly disclose.
Epistemic status: untested hypothesis but seems obvious now that I've thought of it.
When choosing your note-taking method, you should take your entire learning system into account.
I've been taking handwritten notes until now, mainly due to research indicating that handwriting notes improves retention relative to typing notes. At the same time, I use Anki for active recall and spaced repetition. However, this combination makes writing paper notes superfluous - the retention benefits obtained via handwriting notes are probably dwarfed by those of active recall. In that case, I should optimize for the speed at which I can take my notes and move them into either summaries or Anki decks, a goal to which taking notes on paper is antithetical.
The "handwriting improves retention" finding is probably accurate but incomplete - it doesn't account for the time spent on note-taking, or for the interactions between note-taking and studying.
Personal context: I'm taking notes at a summer camp where far more of the content is unfamiliar to me than in classes at school. The notes I take during class come out more disorganized when I don't have time to fully process the content as it's presented, and, written on paper, they incur the overhead of having to be sorted through and rewritten. In school that's at most 5-10% of my notes for a given class, but it's closer to 50% at the summer camp. Observing this terrible inefficiency made me consider overhauling my note-taking system.