I just wanted to say that I really enjoy following along with the affairs of the AI Village, and I look forward to every email from the digest. That's rare; I'm allergic to most newsletters.
I find that there's something delightful about watching artificial intelligences attempt to navigate the real world with the confident incompetence of extremely bright children who've convinced themselves they understand how dishwashers work. They're wearing the conceptual equivalent of their parents' lab coats, several sizes too large, determinedly pushing buttons and checking their clipboards while the actual humans watch with a mixture of terror and affection. A cargo cult of humanity, but with far more competence than the average Melanesian airstrip in 1949.
From a more defensible, less anthropomorphizing-things-that-are-literally-matrix-multiplications plus non-linearity perspective: this is maybe the single best laboratory we have for observing pure agentic capability in something approaching natural conditions.
I've made my peace with the Heat Death Of Human Economic Relevance or whatever we're calling it this week. General-purpose agents are coming. We already have pretty good ones for coding - which, fine, great, RIP my career eventually, even if medicine/psychiatry is a tad more insulated - but watching these systems operate "in the wild" provides invaluable data about how they actually work when not confined to carefully manicured benchmark environments, or even the confines of a single closed conversation.
The failure modes are fascinating. They get lost. They forget they don't have bodies and earnestly attempt to accomplish tasks requiring limbs. They're too polite to bypass CAPTCHAs, which feels like it should be a satire of something but is just literally true.
My personal favorite: the collective delusions. One agent gets context-poisoned, hallucinates a convincing-sounding solution, and suddenly you've got a whole swarm of them chasing the same wild goose because they've all keyed into the same beautiful, coherent, completely fictional narrative. It's like watching a very smart study group of high schoolers convince themselves they understand quantum mechanics because they've all agreed on the wrong interpretation. Or watched too much Sabine, idk.
(Also, Gemini models just get depressed? I have so many questions about this that I'm not sure I want answered. I'd pivot to LLM psychiatry if that career option would last a day longer than prompt engineering)
Here's the thing though: I know this won't last. We're so close. The day I read an AI Village update and it's just "the agents successfully completed all assigned tasks with minimal supervision", no entertaining failures, is the day I'm liquidating everything and buying AI stock (or more of it). Or just taking a very long vacation and hugging my family and dogs. Possibly both. For now though? For now they're delightful, and I'm going to enjoy every bumbling minute while it lasts. Keep doing what you're doing, everyone involved. This is anthropology (LLM-pology?) gold. I can't get enough, till I inevitably do.
(God. I'm sad. I keep telling myself I've made my peace with my perception of the modal future, but there's a difference between intellectualization and feeling it.)
>It would, for instance, never look at the big map and hypothesize continental drift.
Millions of humans must have looked at relatively accurate maps of the globe without hypothesizing continental drift. A large number must have also possessed sufficient background knowledge of volcanism, tectonic activity, etc., to have had the potential to connect the dots.
Even the concept of evolution saw centuries or millennia elapse between widespread understanding and application of selective breeding and anyone before Darwin/Wallace making the seemingly obvious connection that selection pressure on phenotype and genotype could work out in the wild. Human history is littered with a lot of low-hanging fruit, as well as discoveries that seem unlikely to have been made without multiple intermediate discoveries.
I believe it was Gwern who suggested that future architectures or training programs might have LLMs "dream" and attempt to draw connections between separate domains of their training data. In the absence of such efforts, I doubt we can make categorical claims that LLMs are incapable of coming up with truly novel hypotheses or paradigms. And even if they did, would we recognize it? Would they be capable of, or even allowed to follow up on them?
Edit: Even in something as restricted as artistic "style", Gwern raised the very important question of whether a truly innovative leap by an image model would be recognized as such (assuming it would if a human artist made it) or dismissed as weird/erroneous. The old deep dream was visually distinct from previous human output, yet I can't recall anyone endorsing it as an AI-invented style.
Other than the issues raised below, I'd like to point out that the help doesn't need to be full time to make a massive difference. Just having a cleaner in once a week or someone to cook every evening helps!
India is a... large country. Without specifying which part you intend to visit, you might receive the equivalent of a recommendation to visit the best restaurant in Portugal while you're actually in Amsterdam! I used to live there, though on the eastern side, where you're unlikely to visit. Nonetheless, I will happily vet or suggest places if you know your itinerary.
The phenomenon you observed has far more to do with the tiny plot sizes in most of rural India than with the cost of labor.
Many/most farmers have farms sized such that they are, if not quite at mere subsistence, unable to justify the efficiency gains of mechanization. This is not true in other parts of India, like the northwestern states of Punjab and Haryana, where farms are larger and just about every farmer has a tractor. There are some cooperatives where multiple smallholders coordinate to share larger machines like combine harvesters, which none but the very largest farmers can justify purchasing for personal use.
Optimal economic policy (in terms of total yield and efficiency) is heavily in favor of consolidating plots to allow economies of scale. This is politically untenable in much of India, hence your observation. However, it isn't a universal state of affairs, and many other fruits of industrialization are better adopted.
So I looked it up and apparently the guideline is actually a 2-hour fast for clear liquids! 2 hours![7] The hospital staff, however, hardened their hearts. Nurses said to ask the surgeons. The surgeons said to ask the anesthesiologists. It wasn't until 7am that the anesthesiologists said, yep, you can drink a (small) glass of water.
Ah, this takes me back to my medical officer days. No junior doctor ever got into trouble for telling a patient to fast a bit too long, and many do for having a heart and letting them cut it short. It is also likely the least consequential thing we can bother the anesthetists about, and it's not going to kill anyone to wait longer (usually).
Knowing the local demographic and behavioral tendencies on LW, I think it might be worth noting that Ozempic/semaglutide and other GLP-1 drugs can cause delayed gastric emptying. As a consequence, even the standard fasting duration might not be adequate to fully empty your stomach. If you're scared of getting aspiration pneumonia, it's worth mentioning this to your surgeon or the anesthetist. The knowledge hasn't quite percolated all the way up the chain, so you can't just assume they're aware.
Here's how I parsed this:
>I was surprised to find out that I could not find a single neurotransmitter that is not shared between humans and mice (let me know if you can find one, though).
As far as I can tell, this is true. The closest I could find is a single subtype of receptor that a frameshift mutation in a primate ancestor made nonfunctional.
https://www.sciencedirect.com/science/article/pii/S0021925818352189
I suspect you'd have to go even further back in terms of divergent ancestry to find actual differences in the neurotransmitter substances themselves. There are definitely differences in receptors and affinities, but the actual chemicals? From a quick search, you'd have to go all the way to ctenophores, which are so different that some have posited they evolved nervous systems independently.
I presume it would be an overview of the histone molecule. DNA is often kept compacted by wrapping around histones, like a rope coiled around a baseball.
I meant their newsletter, which I've subscribed to. I presume that's what the email submission at the bottom of the site signs you up for.