This is a linkpost for https://www.biorxiv.org/content/10.1101/2023.06.27.546708v1. 

Effective communication hinges on a mutual understanding of word meaning in different contexts. The embedding space learned by large language models can serve as an explicit model of the shared, context-rich meaning space humans use to communicate their thoughts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We demonstrate that the linguistic embedding space can capture the linguistic content of word-by-word neural alignment between speaker and listener. Linguistic content emerged in the speaker's brain before word articulation, and the same linguistic content rapidly reemerged in the listener's brain after word articulation. These findings establish a computational framework to study how human brains transmit their thoughts to one another in real-world contexts.
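For readers who want a concrete sense of what it means to capture word-by-word neural alignment in a linguistic embedding space, here is a minimal, hypothetical sketch of an embedding-based encoding analysis. It is not the authors' pipeline: the embeddings, neural responses, ridge penalty, and cross-validation settings below are synthetic or assumed. Repeating the same fit at each lag relative to word onset, separately for speaker and listener electrodes, is the kind of procedure that would yield the pre-articulation (speaker) and post-articulation (listener) timecourses the abstract describes.

```python
# Hypothetical sketch of an embedding-based encoding analysis (not the authors' code).
# Synthetic data stand in for contextual word embeddings and per-word neural responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 500, 50, 20

# Contextual embedding for each spoken word (would come from a language model in practice).
embeddings = rng.standard_normal((n_words, emb_dim))

# Simulated neural activity per word at one lag, weakly driven by the embeddings.
true_weights = rng.standard_normal((emb_dim, n_electrodes)) * 0.3
neural = embeddings @ true_weights + rng.standard_normal((n_words, n_electrodes))

def encoding_score(X, Y, alpha=10.0, n_splits=5):
    """Cross-validated correlation between predicted and observed activity, per electrode."""
    scores = np.zeros(Y.shape[1])
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        for e in range(Y.shape[1]):
            scores[e] += np.corrcoef(pred[:, e], Y[test][:, e])[0, 1] / n_splits
    return scores

print("mean encoding correlation:", encoding_score(embeddings, neural).mean())
```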

2 comments
Ilio (10 months ago):

Congrats on this wonderful data collection!

I understand that you can’t share the data for ethical reasons. Do you think you could easily redo the analysis for each frequency band? It’s orthogonal to your question, but I’d love to test the following hypothesis: speaker/listener alignment increases with frequency band (from peak alpha to 2, 4, 8, and 16 times this frequency). That’s the prediction from imagining a simpler brain where alpha means « ready for sensory input », beta means « working on that », and gamma means « I found it! », with higher gamma indicating more confidence in the solution (say at p < .05, .01, .001).
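To be concrete, the band-by-band comparison I have in mind would look something like the sketch below. Everything in it is illustrative rather than taken from the paper: the synthetic signals, the 512 Hz sampling rate, the 10 Hz peak-alpha, the band edges built from its multiples, and the use of a plain correlation between band-passed speaker and listener traces as the alignment measure are all assumptions.

```python
# Illustrative sketch only: band-limited speaker/listener alignment on synthetic signals.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 512.0                      # assumed ECoG sampling rate (Hz)
t = np.arange(0, 60 * fs) / fs  # one minute of signal
peak_alpha = 10.0               # hypothetical individual peak-alpha frequency

# Synthetic speaker/listener traces sharing a weak common component.
shared = rng.standard_normal(t.size)
speaker = shared + rng.standard_normal(t.size)
listener = shared + rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Bands anchored at peak alpha and its multiples (2x, 4x, 8x, 16x), per the hypothesis.
edges = [peak_alpha * m for m in (1, 2, 4, 8, 16)]
bands = list(zip([peak_alpha / 2] + edges[:-1], edges))

for lo, hi in bands:
    s = bandpass(speaker, lo, hi, fs)
    l = bandpass(listener, lo, hi, fs)
    r = np.corrcoef(s, l)[0, 1]
    print(f"{lo:5.1f}-{hi:5.1f} Hz  alignment r = {r:+.3f}")
```

Under the hypothesis above, the printed correlations should climb from the alpha band toward the higher gamma bands.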

Reply: Just to clarify, I am not one of the authors of the linked study.