If you get an email from aisafetyresearch@gmail.com, it is most likely from me. I also read that inbox weekly, so you can pass a message into my mind that way.
Other ~personal contacts: https://linktr.ee/uhuge
You mean the chevron notation is non-standard, even sub-standard, although it has the neat property of representing >Speaker one< and >>Speaker two<<? I agree the typography of those here is meh at best.
I'm excited to find your comment, osten; it reads as a pretty insightful view to me.
Let me restate what I understood your light (and welcome) critique to be: I put "human civilization" forward as an actor that has lasted/endured a long time, which heuristically suggests high resilience and robustness, and thus it deserves respect and a hold on control. You say it has not endured as a single structure one could test with the Lindy effect, since it changed significantly and many times, so perhaps we should split it into "feudal civilization", "democratic civilization", etc.
The other interpretation I see: yes, it is one structure, and ASI will keep the structure but lead (within) it. I enjoy that argument, but it would not fully work unless AIs get the status of a natural person; it would somewhat work if an ASI can gather human proxies whenever needed.
There's the thing where Gemini 2.5 Pro surprisingly isn't very good at geoguessing; a woman's tweet is to be linked <here>.
I'd bet the webpage parser ignored the images and their contents.
Snapshot of a local (Czech) discussion detailing motivations and decision paths of GAI actors, mainly the big developers:
Contributor A, initial points:
For those not closely following AI progress, two key observations:
Analogy to Exponential Growth: The COVID-19 pandemic demonstrated how poorly humans perceive and react to exponential phenomena (e.g., ignoring low initial numbers despite a high reproduction rate). AI development is also progressing exponentially. This means it might appear that little is happening from a human perspective, until a period of rapid change occurs over just a few months, potentially causing socio-technical shifts equivalent to a century of normal development. This scenario underpins the discussion.
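The "little seems to happen, then everything happens at once" property of exponential growth can be sketched numerically. This is a toy illustration of my own (the growth rate of 1.4 per step is an arbitrary, hypothetical number, not from the discussion):

```python
# Toy illustration: exponential growth looks negligible for a long time,
# then overwhelms within comparatively few further steps.
def grow(start: float, rate: float, steps: int) -> list[float]:
    """Return the trajectory of `start` compounding at `rate` per step."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * rate)
    return values

trajectory = grow(start=1.0, rate=1.4, steps=30)
# After 10 steps the value is still ~29; after 30 steps it is ~24,000.
print(trajectory[10], trajectory[30])
```

The same mechanism explains the COVID analogy: low absolute numbers early on hide a high reproduction rate, and by the time the numbers look alarming, most of the doubling is already locked in.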
Contributor C:
Contributor A:
Contributor C:
Contributor A, clarifying reasoning and premises:
the lab has limited compute
the lab has sufficient funds
the lab wants to maximize long-term profit
AI development is exponential and its pace depends on the amount of compute dedicated to AI development
winner takes all
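A's premises can be combined into a toy race model (entirely my own construction with hypothetical numbers, not something stated in the discussion): each lab's capability compounds at a rate scaled by the compute it dedicates, and under the winner-takes-all premise, even a modest compute edge decides the outcome.

```python
# Hypothetical toy model of A's premises: two labs whose capability
# compounds at a rate proportional to dedicated compute; the first to
# cross a capability threshold "takes all".
def race(compute_a: float, compute_b: float, threshold: float = 1000.0):
    """Simulate the race; returns (winner, number of steps taken)."""
    cap_a = cap_b = 1.0
    step = 0
    while cap_a < threshold and cap_b < threshold:
        cap_a *= 1.0 + 0.1 * compute_a  # growth rate scales with compute
        cap_b *= 1.0 + 0.1 * compute_b
        step += 1
    return ("A" if cap_a >= threshold else "B", step)

# A 20% compute edge: lab A reaches the threshold first, every time.
winner, steps = race(compute_a=1.2, compute_b=1.0)
print(winner, steps)
```

Under these premises, a profit-maximizing lab with sufficient funds but limited compute rationally pours everything into acquiring compute, since trailing by even a little means losing everything.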
Contributor C, response to A's premises:
hopefully you will learn
Part 2 seems to be missing.
??
Yeah, I encountered the concept during my studies and was rather teasing for a great popular, easy-to-grasp explanation that would also fit the definition.
TBH it's not easy to find a fitting visual analogy, which I'd find generally useful, as I hold that the concept enhances general thinking.
No matter how I stretch or compress the digit 0, I can never achieve the two loops that are present in the digit 8.
Doesn't 0, when deformed by left and right pressure so that the sides meet, seem to contradict that?
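A note on why the pinching move in the question above doesn't count (standard topology, my own framing): a continuous deformation in this sense must be a homeomorphism, i.e. a continuous bijection with continuous inverse, and pinching the sides of 0 together maps two distinct points to one, so it is not injective. The invariant separating the two shapes is the number of independent loops (the first Betti number):

```latex
% b_1 counts independent loops and is preserved by homeomorphism,
% so the digit 0 cannot be deformed into the digit 8:
b_1(\text{``0''}) = 1 \neq 2 = b_1(\text{``8''})
```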
Comparing only to Gemma 1, classic BigTech 😅
And I seem to be missing info on the effective context length?
Hello @habryka, could you please adjust the text on the page to include the year the applications closed, so that it confuses people (like me) less and they don't waste their time reading the whole thing?
THANKS!