Comments

These are interesting considerations! I haven't put much thought into this yet, but I have some preliminary ideas.

Semantic features are intended to capture meaning-preserving variations of structures. In that sense the "next word" problem seems ill-posed, as some permutations of words preserve meaning; it's hardly a natural problem from the human perspective either.

The question I'd ask here is "what are the basic semantic building blocks of text for us humans?" and then try to model these blocks using the machinery of semantic features, i.e. model the invariants of these semantic blocks. Only then would I think about adequate formulations of useful problems regarding text understanding.

So I'd say that these semantic atoms of text are actually thoughts (encoded by certain sequences of words/sentences that enjoy a certain permutation-invariance). Thus semantic features would aim to capture thoughts-at-locations by finding these sequences (up to their specific permutations), and deeper layers would capture higher-level thoughts-at-locations composed of the former. This could potentially uncover some Euclidean structure in the text (which makes sense, as humans arguably think within the space-time framework, after Kant's famous "Space and time are the framework within which the mind is constrained to construct its experience of reality").
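To make the permutation-invariance idea concrete, here's a toy sketch of my own (not from the post, and the embeddings, window size, and pooling choice are all assumptions for illustration): a "semantic feature at a location" computed from a window of token vectors through a symmetric pooling, so that reordering the words inside the window doesn't change the feature's value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and random embeddings, purely for illustration.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
EMB_DIM = 8
EMBED = {w: rng.normal(size=EMB_DIM) for w in VOCAB}

def semantic_feature(window, weights):
    """Score a window of words; invariant to word order within the window,
    because the token vectors are combined by a symmetric (sum) pooling."""
    pooled = np.sum([EMBED[w] for w in window], axis=0)
    return float(weights @ pooled)

weights = rng.normal(size=EMB_DIM)

a = semantic_feature(["the", "cat", "sat"], weights)
b = semantic_feature(["sat", "the", "cat"], weights)  # permuted window
print(np.isclose(a, b))  # True: the feature ignores order inside the window
```

Of course real "thoughts" would only be invariant under *some* permutations, so a less crude version would replace the plain sum with a pooling that is symmetric only over the meaning-preserving rearrangements.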

That being said, the problems I'd consider would be some forms of translation (to another language or another modality) rather than the artificial next-word prediction.

The MVD for this problem could very well consist of 0's and 1's, provided that they encode some simple yet sensible semantics. I'd have to think more about a specific example; it's a nice question :)
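Just to gesture at what I have in mind (this is my own made-up example, not a worked-out proposal): bit-strings whose "meaning" is simply their majority bit. The semantics is invariant under any permutation of the string, so the meaning-preserving variations are exactly the permutations, and a semantic feature only has to recover the count of 1's.

```python
import itertools

def meaning(bits):
    # The "semantics" of a bit-string: its majority bit (permutation-invariant).
    return int(sum(bits) > len(bits) / 2)

# Enumerate all length-5 bit-strings with their labels.
dataset = [(bits, meaning(bits)) for bits in itertools.product([0, 1], repeat=5)]

for bits, label in dataset[:4]:
    print(bits, "->", label)
```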

 

Thank you! The quote you picked is on point; I added an extended summary based on it. Thanks for the suggestion!