I've noticed that the more high-level and complex the work you're doing, the sillier your bugs get. Perhaps it's because you focus so much on the complex parts, since they're difficult to get right, and gloss over the more basic ones.

I don't think your pyramid is a good conceptual framework to understand programming expertise. Expertise comes mostly from seeing common/overarching patterns (which would be all over the place on your pyramid) and from understanding the entire stack - having at least some sense of how each level functions, from the high-level abstraction of RoR's ORM to object lifetime and memory concerns to how database queries happen to how the db executes it (e.g. db indexes are also relevant to your example), down to at least having played around a little with assembly language.

I don't even know Ruby or RoR, but if I had to use it for your example, my first thought would be "ok, how do I do a WHERE query in their ORM", because every db abstraction in every language and framework has to solve this problem and I've seen a lot of those solutions. And I'd know to consider eager vs lazy evaluation (what if a campaign has 1M records after filtering; maybe I want to iterate over results instead of getting a plain list), and whether campaign_id has an index, because all of those are very common concerns that crop up.

So the expertise isn't knowing a factoid "don't use x.all.filter() in RoR", it's knowing that anything that queries a database has to deal with the concerns above somehow.
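To make the eager-vs-lazy concern concrete, here's a minimal plain-Ruby sketch (the struct, numbers, and campaign id 42 are invented for illustration; no Rails required). In ActiveRecord the real fix is pushing the filter into SQL with something like `Model.where(campaign_id: ...)` rather than loading everything and filtering in Ruby:

```ruby
# Stand-in for a table with 100_000 rows spread over 100 campaigns.
Record = Struct.new(:campaign_id, :value)
records = Array.new(100_000) { |i| Record.new(i % 100, i) }

# Eager: builds the entire filtered array in memory at once.
eager_matches = records.select { |r| r.campaign_id == 42 }

# Lazy: streams rows one at a time and stops after 10 matches,
# never materializing the full result set.
first_ten = records.lazy.select { |r| r.campaign_id == 42 }.first(10)

puts eager_matches.size  # => 1000
puts first_ten.size      # => 10
```

And whether the database-side WHERE is actually fast then hinges on things like an index on campaign_id, which is exactly the kind of cross-cutting concern that transfers between stacks.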

Fully agree on the bias part, although specialists being incompetent isn't a thread in my article? There's an entire aside about why some research doesn't get done, and incompetence isn't among the reasons.

  1. I've read the Slate article you linked, and I think it's good. I don't see anything in there that I disagree with. The article is from 2019, when the amount of evidence (and, importantly, the number of people who had successfully replicated it) was just one Instagram dog. Even back then, the scientists in the article are cautious but lukewarm and want a more rigorous study. Now we have a more rigorous study running.

    All this stuff has been addressed in the comments and in the updated article. I'm quite adamant that misinterpreting dog output is the primary danger and I don't claim confidence in specific abilities, precisely because we need more study to determine what's real and what's confirmation bias/misinterpretation.

  2. That wasn't a point about dog research, it was a point about dynamics of what kind of discoveries and research gets made more often.

  3. "in the 60s" for social/cognitive/psychology-adjacent research has to be a bit like "in mice" for medicine. Either way, people try to do something and fail, 60 years later someone comes up with an approach that works. That's a completely normal story.

I thought about taking you up on the bet at 3:1 but I don't like the "vast majority" part. I think it's too much work to specify the rules precisely enough and I've spent enough time on this already.

word order is effectively random, length of sentence does not correlate with information content

That seems to be the case with dogs, and it wouldn't surprise me if they never progress much further than that.

I've updated the article to include a more in-depth explanation of the study design and philosophy instead of just two links (I suspect almost nobody clicked them). Also added responses to common criticism and titles and short explanations for video links (I suspect a lot of people didn't click on most of the videos). Also removed the revolution part.

If you've already read the article, I suggest you read the research and criticism parts under Bunny and watch the new Stella video I added, which is more representative of the kinds of videos that led me to watch the dog space more closely. All of the good Stella stuff is on Instagram, but not on YouTube.

I think we're mostly in agreement, and I'm not disputing that it pays to be careful when it comes to animal cognition. I'd say again that I think it's a meta-rational skill to see the patterns of what is likely to work and what isn't, and this kind of stuff is near-impossible to communicate well.

I've read about the car-nutcracker thing somewhere, but without the null result from research. If you had asked me to bet, I'd have said it was unlikely to work. But it's illustrative that we both still agree that corvids are smart and there's a ton of evidence for it. We just don't know the exact ways and forms, and that's how I feel about the dog thing. There's something there, but we need to actually study it to know its exact shape and form.

I predict FluentPet is at best going to become a niche hobby down the road, with less than 1% of dog owners having trained their pet in 10 years.

I don't think it will be niche because it's already not niche, considering the massive viewership. But your 1% figure sounds about right as an upper bound, given the sheer number of dog owners, the amount of work required, and people's low desire to train their pets. A cursory google search says 4% of US dog owners take a training class, so serious button use will have to be a fraction of that.

I think you hit the nail on the head here. When I was writing the article I definitely had someone with a high prior in mind, to the point where I expected people to say "so what, why wouldn't dogs do that if you trained them".

Sometimes people seem to put dogs closer to reflexive automatons like insects than to fellow mammals. My prior is that the basic affects we feel aren't fundamentally different between us and dogs (and most higher mammals). I'm talking about stuff like fear, excitement, generalized negative or positive affect, tiredness, sexual arousal. Even something like craving a specific food: I don't see why it should be unique to us, given that dogs are often picky eaters and have favorite foods.

People with strong priors against dog intelligence seem to ascribe everything to anthropomorphism, and there's often an undertone of "these people are too soft and weak, they call themselves ridiculous things like 'dog parents', they'll project human baby characteristics onto a Furby if you gave them the chance". FWIW I don't have a dog and don't plan to, and in my experience most dogs are fairly dumb. But to me they're clearly a bit more than simple automatons blindly reacting to stimuli.

  • I agree I should've summarized the study methodology in the article. For some reason I expected people to click the links and actually read and watch everything (this is not a knock on anyone, one shouldn't expect that when writing articles).
  • There is a lot of evidence, it's just weak and easy to misinterpret, and it's in the form of youtube vids, which goes against aesthetic sensibilities of what "evidence" looks like. If you want to have a holistic picture, you'll have to actually watch a lot of them, I'm sorry.
  • I think it's quite obvious that the evidence here has a rather different shape from 19th-century medium reports or grainy VHS tapes with a blob that is claimed to be the Loch Ness monster.
  • Being able to tell which "messing around" is likely to be fruitful is a meta-rational skill, but it can be done. Somewhat. To me this has a "shape" of something that could be fruitful, but I can't transfer the pattern matching inside my brain to yours.
  • Vids being done in good faith doesn't preclude Clever Hans effects or overinterpretation. It only precludes the vids being fake or staged.
  • I and others have addressed multiple times the dangers of seeing phantoms in noise and of operant conditioning. I don't see anyone here who doesn't acknowledge that. Classic Clever Hans is unlikely when the owner doesn't know the right answer, or when the pet is supposed to make a decision or alert the owner to something they're unaware of.
  • A girl has a cat and trained it to respond "yes food" to "food now hm?". Cool, that's just conditioning. But then the cat refuses food repeatedly despite getting railroaded. If it was conditioned to press "yes" after hearing "food", without understanding what "yes" means, why did this happen?
  • The danger IMO isn't Clever Hans as much as misinterpreting and anthropomorphizing button "speech". Which, counter-intuitively, may be an important part of language acquisition (in humans, at least): Kaj_Sotala wrote such an interesting comment, sadly few people read it. I'll summarize the idea: the first time a toddler raises his arms, it's random, but his mom misinterprets it, thinks he wants to be held, and so picks him up. The toddler learns the association, and the next time he holds his arms up, it's not random; now it's a deliberate signal.
  • "looks like language" != "language". Personally I don't care about philosophical arguments about the exact border between non-language communication and "true language". It's enough for me to see some elements of human language use to make this interesting, even if it doesn't check every box.
  • The pets are trained by reinforcement learning by design. Just saying "it's all reinforcement learning" isn't saying anything, the question is what exactly you reinforce, and if you manage to reinforce not just simple associations like "food" or "toy", but also states of affect and more abstract concepts.
  • Don't overextend "anthropomorphizing". Claiming that the dog pressing "mad" is feeling a human-like emotion and is going to hold a grudge, instead of just feeling negative affect, is anthropomorphizing. Claiming that dogs can feel fear, excitement or positive/negative affect is not.
  • Claiming people here are "fooled" is a bit derogatory to others, especially with the reasons you gave for that.

Plenty of concern was raised in the comments, have you gone through all of them and all the replies?

I'm aware of comparative cognition; the people posting the pet videos are participating in ongoing research at the Comparative Cognition Lab at the University of California, San Diego. They give a description of their methodology, but the status updates appear to be hidden to ensure the integrity of the data.

Short recap of the comments: This is a very new thing, early-stage science often looks like messing around, so don't expect lots of rigor so early. If they had a paper, I would post that. On the balance of evidence, the videos seem to be made in good faith, I don't think it's some staged viral crap. Don't discount evidence just because it's normie YouTube vids. The main claim is that there's something interesting going on that makes me suspect dogs could possibly produce something that looks like language. I'm not claiming certainty on that or on the level of dogs' supposed language ability, it's research in progress, but I think it's exciting and worth studying.

I don't think it's fair to say my dismissal of concerns is "cursory" if you include my comments under the post. Maybe the article itself didn't go deep enough, partly I wanted it to scan well, partly I wanted to see good criticism so I could update/come up with good responses, because it's not easy to preempt every criticism.

As for cursory evidence, yes it's mostly that, but cursory evidence can still be good Bayesian evidence. I think there's enough to conclude there's something interesting going on.

Are the vids even real?

For starters, all of this hinges on videos being done in good faith. If it's all creative editing of pets' random walks (heh) over the board, then of course we should dismiss everything out of the gate.

  • For Stella IIRC all of the interesting stuff is on Instagram @hunger4words, so I only had those two YouTube vids. I agree they're not the best to lead with as evidence.
  • Please watch this video even if you have time constraints (it works fine at 1.5x speed).
  • She shows (excessive, IMO) humility and defers to people she considers experts.
  • Considers herself a "hopeful skeptic", when Bunny does something unexpectedly smart, she still wonders if it's just coincidence (at 4:05). Also she namedrops Skinner and Chomsky 😄
  • Makes a point to put "Talking" in scare quotes in her video titles.
  • Tells a realistic story, where it took "a few weeks" for Bunny to learn just a single "Outside" button, takes "a thousand tiny reinforcements" to keep learning. And shows many examples of how she does the training.
  • Explains how she teaches abstract concepts like "Love you" and acknowledges it's not the same concept to the dog, but it has "an affectionate meaning".
  • In many videos we see Bunny take a looong time to respond. The dog goes away from the board (to "think" presumably) and later comes back with an answer. Those parts are sometimes cut out but usually just sped up. If it's all fake, why include that?
  • In some Bunny videos we see random "tantrums", which show what a truly random walk sounds like. If she's selling us a bridge, why include that in the videos?
  • "Conversations" are mostly very mundane and doglike and she doesn't show any truly amazing feats of intellect. She doesn't even try to teach the dog to count!
  • Claims to have cameras constantly pointed at the board for research, and indeed in many of her clips there are lower quality parts shot from a constant angle. That is consistent with "something interesting happened but she wasn't filming at the time". A big tell of fake/staged videos is that someone just happened to be filming at that exact moment despite nothing seeming to prompt that.

On the balance of evidence, Alexis doesn't look like someone who's trying very hard to convince you of her magic talking dog to sell you $250 online dog communication courses. And don't say "Amazon affiliate links", even Scott has done that.

But a bigger part of why I updated towards "there's something there" is that there are several people who recreated this. Of course it's possible that every one of them is also fake, but that would be a bigger reach. Or it could be that it's easy to delude yourself and overinterpret pet output, but then the videos are still in good faith and that's what we're determining here.

  • Here's a video of Billi the cat where she repeatedly and consistently refuses food. Which is the opposite of what usually happens, the owner tries to railroad her and she still says no.
  • As with Bunny, you can see that the cat takes forever to respond.
  • If it was just clever training to always respond yes to food with no understanding, why did this happen?

Ok if vids are real, it's still all Clever Hans

I'll just link to a few comments of mine on that:

Simple button use is expected by induction, danger of over-interpreting

It can't be classic Clever Hans if owner doesn't know the right answer

It smells like Buzzfeed and I'm disappointed in LW

It kind of does, but that wasn't the model. What I had in the back of my mind is "if Eliezer gets to do it, then I get to do it too". I think the community simply likes boldly stated (and especially contrarian) claims, as long as it doesn't go too far off-balance.

I didn't consciously go for any "maneuvers" to misrepresent things. IMO the only actually iffy part is the revolution line (steelman: even if your pet can tell you what they actually want to do instead of your having to guess, that's a revolution in communication).

And I think I hedged my claims pretty well. This stuff is highly suggestive, my position is "hey, despite the trappings of looking like fake viral videos, you should look at this because it's more interesting than it looks at first glance". I expect that we'll learn something interesting, but I don't have any certainty about how much. Maybe after rigorous analysis we'll see that dogs do only rudimentary communication and the rest is confirmation bias. Maybe we'll learn something more surprising.

Normie bias

To me, this doesn't feel too dissimilar from something my cousin-who-is-into-pyramid-schemes would send me. This article in particular feels not too dissimilar from something I could imagine on e.g. Buzzfeed; it just says some big things with very little substantive evidence and some maneuvers that seem most commonly used to weakly mask the lack of credibility of the argument.

I expected more complaints of this kind, so I was pleasantly surprised. I can easily imagine structurally similar arguments from someone who thinks AI alignment or cryonics are weird "nerd woo". If we're to be good rationalists, we have to recognize that most evidence isn't neatly packaged for us in papers (or gwern articles) with hard numbers and rigorous analysis. We can't just exclude the messy parts of the world and expect to arrive at a useful worldview. Sometimes interesting things happen on Instagram and Tiktok.

Minor complaints

To be honest, a few of the reasons you decide the evidence is "not compelling" are pretty weird. Why does it matter if the dog uses one paw or both paws? Why is it weird that a dog has a "bed time"? What is "seeming disinterest" from the dog and what makes you think you can see that? Why do you expect dogs to strive for brevity and being "more clear"?

I like the idea of interacting with the dog normally as you would a 2-year-old human, while having the cameras running 24/7 so less biased people can look over the data.

Yeah it's an important point that some phenomena (perhaps most phenomena) are impractical to recreate under a strict research protocol. If you tried to teach your dog with a very formal approach, you'd probably "lose the magic" that makes it happen. Kaj Sotala posted a comment that suggests that "incorrect" overinterpretation of babies' behavior is actually an important mechanism by which the learning happens! It's a slow, messy, iterative process to get that "meeting of minds".

I also really like the research setup, and I'm glad they're sourcing from several pet households. Most of the attention is on Bunny, because she's the most coherent and is actively posted on YouTube, but I believe there are quite a few more people participating, they just don't post it publicly.

And even though the learning process isn't under strict protocol, you can still design more rigorous experiments. AFAIK the mirror near Bunny's buttons was placed at the suggestion of the researchers specifically to see if she'll recognize herself in the mirror.
