Would robots care about Meaning and Relating?

by Raemon · 6 min read · 25th Apr 2021 · 24 comments


A few weeks ago, Vaniver gave a talk discussing meaning and meaningfulness. Vaniver had some particular thing he was trying to impart. I am not sure I got the thing he intended, but what I got was interesting. Here is my bad summary of some things I got from the talk, and some of the discussion after the talk (in particular from Alex Ray). No promises that either of them endorse this.

Epistemic status: I am not very confident this is the right frame, but it seemed at least like an interesting pointer to the right frame.

WTF is Meaning™?

Humans seem to go around asking questions like "What makes life meaningful?", "What is 'The Meaning of Life'?", "What is my purpose?", "What is the point of it all?"

What is the type-signature of a "Meaning", such that we'd recognize one if we saw it?

When asking a question like this, it's easy to get lost in a floating series of thought-nodes that don't actually connect to reality. A good rationalist habit around questions like this is to ask: "Do we understand this 'meaning' concept well enough to implement it in a robot? Could a robot find things meaningful? Is there a reason we'd want robots to find things meaningful? What sort of algorithms end up asking 'what is the meaning of life?'"

Here is a partial, possible answer to that question.

Imagine a StarCraft-playing robot.

Compared to humans, StarCraftBot has a fairly straightforward job: win games of StarCraft. It does a task, and then it either wins, or loses, and gets a boolean signal, which it might propagate back through a complex neural net. Humans don't have this luxury – we get a confused jumble of signals that were proxies for what evolution actually cared about when it programmed us. We get hungry, or horny, or feelings of satisfaction that vaguely correlate with reproducing our genes.
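To make the contrast concrete, here is a toy sketch (entirely my own invention, not anything from the talk, and not how a real StarCraft agent is trained): StarCraftBot's one clean boolean at the end of a game, versus the jumble of noisy proxy signals evolution hands us:

```python
import random

def starcraftbot_signal(won: bool) -> float:
    """StarCraftBot's training signal: one clean boolean per game."""
    return 1.0 if won else 0.0

def human_signals(ate: bool, reproduced: bool, tribe_ok: bool) -> list[float]:
    """A human's 'training signal': several noisy proxies for genetic fitness.
    None of these is the actual objective; each only correlates with it."""
    return [
        (1.0 if ate else 0.0) + random.gauss(0, 0.3),         # hunger/satiety
        (1.0 if reproduced else 0.0) + random.gauss(0, 0.3),  # attraction
        (1.0 if tribe_ok else 0.0) + random.gauss(0, 0.3),    # social satisfaction
    ]
```

The point of the sketch is just the type difference: one unambiguous scalar you can backpropagate from, versus a bundle of noisy correlates you have to interpret.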

StarCraftBot has a clearer sense of "what is my purpose." 

Nonetheless, as StarCraftBot goes about "trying to get good at StarCraft", it has to make sense of a fairly complex world. Reality is high dimensional, even the simplified reality of the StarCraft universe. It has to make lots of choices, and there's a huge number of variables that might possibly be relevant.

It might need to invent concepts like "an economy", "the early game", "micro", "units", "enemy", "advantage/disadvantage" (disclosure: I am neither an ML researcher nor a StarCraft pro). Not only that, but it needs some way to navigate when to apply one of those concepts vs. another. Sometimes, it might need to move up or down a ladder of abstraction.

StarCraftBot has had the Meaning of Life spelled out for it, but it still needs a complex ontology for navigating how to apply that meaningfulness. And as it constructs that ontological framework for itself, it may sometimes find itself confused about "What is a unit? Are units and buildings meaningfully different? What principles underlie a thriving economy?"

Now, compare this to humans. We have a cluster of signals that relate to surviving, and reproducing, and ensuring our tribe survives and flourishes. We end up having to do some kind of two-way process, where we figure out... 

  • Specific things like: "Okay, what is a tiger? What is food? What is my family? What is 'being a craftsman?' or 'being a hunter?'"
  • Higher-order things like: "What is the point of all of this? How do all of these things tie together? If I had to trade off my survival, or my children's, or my tribe's, which would I do? What is my ultimate goal?"

A thing that some religions and cultures do is tie all these things together into a single narrative, with multiple overlapping tiers. You have goals relating to your own personal development, and to raising a family, and to having a role in your tribe that helps it flourish as a group, and (in some cases) to some higher purpose of 'serve god' or 'serve the ancestors' or 'protect the culture.' 

The idea here is something like: "Have a high-level framework for navigating various tactical and strategic goals, coherent enough that when you move from one domain to another, you don't have to spend too much time re-orienting or resolving contradictions between them. Each strategic frame allows you to filter out tons of extraneous detail and focus on the decision-at-hand."
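One way to picture that pruning is as a filter: a toy sketch (the frame names and variables here are all invented for illustration, not from any real StarCraft agent) in which a strategic frame keeps only the handful of variables relevant to the current domain:

```python
# Hypothetical: each "frame" names the few variables that matter in its domain.
FRAMES = {
    "early_game": {"worker_count", "scout_position", "build_order"},
    "battle":     {"unit_positions", "enemy_composition", "terrain"},
}

def focus(observation: dict, frame: str) -> dict:
    """Prune a high-dimensional observation down to what the frame says matters."""
    relevant = FRAMES[frame]
    return {k: v for k, v in observation.items() if k in relevant}

obs = {
    "worker_count": 12, "scout_position": (3, 4), "build_order": ["pylon"],
    "unit_positions": [], "enemy_composition": {}, "terrain": "high ground",
    "minimap_pixels": [0] * 4096,  # mostly extraneous detail
}
focus(obs, "early_game")  # keeps only the three early-game variables
```

Switching frames is cheap precisely because the frames are coherent pieces of one ontology: the same observation, filtered differently, rather than a re-derivation from scratch.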

Hammers, Relationships and Fittingness

Meanwhile, another concept that might bear on "Why do humans sit around saying 'what does it all mean!?'" is fittingness.

Say you have a hammer.

The hammer has a shape – a long handle, a flat hammer-part, and a curved hook thingy. There are many different ways you could interact with the hammer. You could kick it with your feet. You could grab it by the curved hook thingy. You could grab it by the handle. You could try to eat it.

How do you relate to the hammer? It's not enough to know it exists. If a chimpanzee were to find a hammer, they might need some sense of "what is the hammer for?". Once they realize they can bash walnuts open with it, or maybe bash in the skull of a rival chimpanzee, they might get the sense of "oh, the thing I'm supposed to do here is grab the handle, and swing."

Later, if their concept-schema comes to include nails and timber and houses, they might think "ohhhhh, this has a more specific, interesting purpose of hammering nails into wood to build things."

Later still, they might realize "ohhhhhhhhhh, this weird hook thing on the end is for pulling nails out." This involves using the hammer a different way than they might have previously.

Hammers vs Fathers

Okay. So, you might come upon a hammer and say: "I have this weird-shaped-object, I could fit myself around it in various ways. I could try to eat it. It's unclear how to fit it into my hand, and it's unclear how to fit it against the other parts of my environment. But after fiddling around a bunch, it seems like this thing has a purpose. It can bash walnuts or skulls or nails." 

The process of figuring that out is a mental motion some people need to make sometimes.

Another mental motion people make sometimes is to look around at their tribe, their parents, their children, their day-to-day activities, and to ask questions like "how do I fit in here?". 

Say you have a father. There are a bunch of ways you can interact with your father. You can poke them on the nose. You can cry at them. You can ask them philosophical questions. You can silently follow their instructions. You can grab them and shake them and yell "Why don't you understand me!!?". 

Which of those is helpful depends on your goals, and what stage of life you're at, and what sort of tribe you live in (if any). 

If you are a baby, "poke your father on the nose" is in some sense what you're supposed to be doing. You're a baby. Your job is to learn basic motor skills and crudely mimic social things going on around you and slowly bootstrap yourself into personhood.

If you're in some medieval cultures, and you are male and your father is a blacksmith, then your culture (and correspondingly, your father's personality) might give you a particular set of affordances: follow their instructions about blacksmithing and learn to be a blacksmith. [citation needed]. Learn some vaguely defined "how to be a man" things.

You can say to your dad "I wanna be a poet" and ask him questions about poetry, but in this case that probably won't go very well because you are a medieval peasant and society around you does not provide much opportunity to learn poetry, nor do anything with it. [citation needed again]

You can grab your father and shake him and say "why don't you understand me!!!?". Like the chimpanzee holding a hammer by the wrong end, mashing walnuts with the wooden handle, that sorta kinda works, but it is probably not the best way to accomplish your goals.

As you grow up, the culture around you might also offer you particular affordances and not others. You have a strong affordance for becoming a blacksmith. I don't really know how most medieval societies worked, but maybe you have other affordances like "become a tailor if for some reason you are drawn to that" or "join the priesthood" or "become a brigand" or "open an inn." Meanwhile you can "participate in tribal rituals" and "help raise barns when that needs doing", or you can ignore people and stick to your blacksmith shop, being kinda antisocial.

Those might lead you to have different relationships with your father.

Analogy or Literal?

It's currently unclear to me if the questions "how do I relate to my hammer?" and "how do I relate to my father?" are cute analogies for each other, or if they are just literally the same mental motion applied to very different phenomena.

I'm currently leaning into "they are basically the same thing, on some level." People and hammers and tribes are pretty different, and they have very different knobs you can fiddle with. But, maybe, the fundamental operation is the same: you have an interface with reality. You have goals. You have a huge amount of potential details to think about. You can carve the interface into natural joints that make it easier to reason about and achieve your goals. You fiddle around with things, either physically in reality or in your purely mental world. You figure out what ways of interacting with stuff actually accomplish your goals.

A schema for how to relate to your father might seem limiting. But, it is helpful because reality is absurdly complex, and you have limited compute for reasoning about what to do. It is helpful to have some kind of schema for relating to your father, whether it's a schema society provides you, or one you construct for yourself. 

Having a mutually understood relationship prunes out the vast amount of options and extraneous details, down to something manageable. This is helpful for your father, and helpful for you.

Relating and Meaning

So, in summary, here is a stab at what meaning and relating might be, in terms that might actually be (ahem) meaningful if you were building a robot from scratch.

A relationship might be thought of as "a set of schemas for interacting with something, that let you achieve your goals." Your relationship with a hammer might be simple and unidirectional. Your relationship with a human might be much more complex, because both of you have potential actions that include modeling each other, thinking strategically, cooperating or defecting in different ways over time, etc. This creates a social fabric, with a weirder set of rules for how to interact with it.
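If you squint, that definition is concrete enough to sketch. Here is a toy rendering (every schema name below is invented by me for illustration): a relationship as a lookup from goals to interaction schemas:

```python
# Toy model: a relationship = a set of goal -> interaction-schema mappings.
hammer_relationship = {
    "drive_nail": "grip handle, swing flat face",
    "pull_nail":  "hook claw under nail head, lever back",
}

father_relationship = {
    "learn_trade":     "follow instructions at the forge",
    "seek_advice":     "ask questions, listen",
    "express_protest": "grab and shake (probably not the best schema)",
}

def relate(relationship: dict, goal: str) -> str:
    """Select an interaction schema for the current goal, if the relationship
    provides one; otherwise you're back to fiddling around."""
    return relationship.get(goal, "no schema yet: explore and fiddle")
```

The hammer's table is short and one-directional; a person's table is longer, and (unlike the toy above) the entries would themselves model the other party modeling you.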

Meaning is... okay geez I got to the end of this essay and I'm still not sure I can concisely describe "Meaning" rather than vaguely gesturing at it.

The dictionary definition of "meaning" that comes up when I google it is about words, and what words mean. I think this is relevant to questions like "what does it all mean?" or "what is the meaning of life?", but a few steps removed. When I say "what do the letters H-O-T mean?" I'm asking about the correspondence between an abstract symbol, and a particular Thing In Reality (in this case, the concept of being high-temperature).

When I ask "What does my job mean?", or "what does my relationship with my father mean?" or "what is the meaning of life?", I'm asking "how do my high level strategic goals correspond to each other, in a way that is consistent, minimizes overhead when I shift tasks, and allows me to confidently filter out irrelevant details?"

While typing this closing summary, I think "Meaningmaking" might be a subtype of "Relating". If Relating is fiddling-around-with or reflecting-on a thing, until you understand how to interact with it, then I think maybe "Meaningmaking" is fiddling around with your goals and high level strategies until you feel like you have a firm grasp on how to interact with them.

...

Anyway, I am still a bit confused about all this but those were some thoughts on Meaning and Relating. I am interested in other people's thoughts.

24 comments

So, in summary, here is a stab at what meaning and relating might be, in terms that might actually be (ahem) meaningful if you were building a robot from scratch.

I realize you were joking in your second use of the word "meaning(ful)" in this sentence but I actually found this connection suggestive. In order to define something in a way that we could build a robot around, that definition needs to be really operationalized and really practical. It needs to connect to some extremely grounded quantities such as bits of metal and electrons moving down wires. This grounded quality that is required of our concepts if we wish to build a robot that does the thing seems like a clue to me about what it is that makes something meaningful.

When I ask "What does my job mean?", or "what does my relationship with my father mean?" or "what is the meaning of life?", I'm asking "how do my high level strategic goals correspond to each other, in a way that is consistent, minimizes overhead when I shift tasks, and allows me to confidently filter out irrelevant details?"

But what about someone who has a whole life philosophy that connects together all the parts of their life, but doesn't breathe any vibrancy into any of it? I'm picturing someone with a job, a house, a hobby or two, and a life philosophy that bottoms out with the view that it's all a big game of getting ahead (just to take one example). It seems to me that person could have quite a high level of integration between their goals, but at the same time could experience quite low meaning in their life. I'd expect this absence of meaning to manifest in specific ways, such as a kind of tense melancholy that pervades life.

It seems to me that person could have quite a high level of integration between their goals, but at the same time could experience quite low meaning in their life.

Hmm, yeah I think you have convinced me the current frame is insufficient.

Some further musings... (epistemic status: who knows?)

Seems like there's at least a few things going on

  1.  alignment-of-purposes, and a sense of "I'm doing the thing I'm supposed to be doing." 
  2. "the thing I'm doing here matters, somehow."
  3. "I feel vibrant / excited about the things I'm doing."

Number 2 I am perhaps most confused about. Will come back to that in a sec.

Number 3 seems to decompose into "why would you build a robot that had vibrance/excitement, or emotions in general." I don't think I can give a technical answer here that I clearly understand, but I have a vague fuzzy model of "emotions are what feedback loops feel like from the inside when the feedback loops are constructed some-particular-way." I don't know what-particular-way the feedback loops need to be constructed as to generate the internal feeling of vibrance/excitement, but... I feel sort of okay about that level of mysteriousness. It feels like a blank spot in my map, but not a confusing blank spot in my map.

I suspect if we built a robot on purpose, we'd ideally want to do it without the particular kind of feedback-loops/emotions that humans have. But, if I'm dumb ol' evolution building robots however I can without the ability to think more than one-generation-ahead... I can imagine building some things with emotions, one of which is some kind of vibrance, excitement, enthusiasm, etc. And then when that organism ends up having to build high level strategic planning in confusing domains, the architecture for those emotions-and-corresponding-qualia ends up being one of the building blocks that the meaningmaking process gets constructed out of.

...

returning to #2:  

So one thing that comes up in the OP is that humans don't just have to fill in an ontology beneath a clear-cut goal. They also have multiple goals, and have to navigate between them. As they fill in their ontology that connects their various goals, they have to guess at how to construct the high level goals that subgoals nest under.

StarCraftBot has to check "does this matter, or not?" for various actions like "plan an attack, establish a new base, etc." But it has a clear ultimate goal that unambiguously matters a particular way, which it probably wouldn't be necessary to have complex emotions about.

But for us, "what is the higher level goal? Do we have a thing that matters or not?" is something we're more fundamentally confused about, and having a barometer for "have we figured out if we're doing things that matter" is more actually useful.

Maybe. idk.

I've had some ideas that have made me feel somewhat less confused about what "meaning" means. Consider a gene in an ancient bacterium, in a time before there were any other kinds of cell. It seems to me that even then, this gene still meant something. But what exactly? The protein perhaps? That's definitely part of it, but there also seems to be a meaningful difference between this gene and other genes. Maybe this protein specifically detects Na+ ion concentrations. It seems reasonable to consider this part of the meaning of the gene. But wait, this gene happens to have a mutation that makes it detect K+ ions instead. Is that the true meaning? And what does it even mean to "detect" something? As I keep asking myself these sorts of questions, the picture that emerges looks like this:

  • The meaning of something is due to its ability to cause its own existence.
  • This gene exists because it is part of a reproducing bacterial species.
  • In particular, this gene exists because the bacteria which had it were better able to reproduce themselves.
  • And the content of this gene's meaning lies within the causal mechanism by which it contributes to this.
  • Specifically, that this gene exists more because it responds in different ways to different Na+ ion levels.
  • The mutated instance of the gene does not yet have an additional meaning; the fact it "detects" K+ ions is simply happenstance.
  • However, if this bacterium is better able to replicate because of this ability, the gene will acquire a new meaning due to this ability.
  • What about the meaning of something artificial, like a piano?
  • A given piano exists because humans like the concept of the piano; its meaning is as a meme.
  • If there was a happy accident in the development of the piano, leading to a specific feature, that feature acquired meaning once the inventor decided to intentionally keep that feature because of a specific quality it had.

This idea of meaning also seems to hold up well when considering colloquial usage of the word, which I didn't expect. For example, one of the most meaningful events that you can experience is having a child, which straightforwardly is a way in which "you" can exist more. More generally, people consider life to be intrinsically meaningful. Something which often causes people to feel as if everything is meaningless is realizing that they will die, or that the universe itself will eventually die.

Some ways in which I'm still dissatisfied with this are:

  • What exactly is existence?
  • How does it make sense that even imperfect copies count as existing more?
  • Is there a measure for existence?
  • What exactly is causality?
  • This seems to require a notion of causality deeper than the Pearlian description of causality for Bayes nets.

Tl;Dr: the last 2 paragraphs ask a question.

As I'm of the belief that it's only in hindsight, as a sentient human looking back at a theoretical bacterium and its genes and formulating human meaning, that the gene has meaning, I'd like to offer an alternative word to consider which might make things easier in some ways.

This couple of sentences: "Consider a gene in an ancient bacterium, in a time before there were any other kinds of cell...this gene still meant something. But what exactly? The protein perhaps?"

What if we changed it slightly to this: "Consider a gene in an ancient bacterium, in a time before there were any other kinds of cell...this gene mattered. But why? A specific protein perhaps?"

A word that often gets substituted as a synonym for "meaning" is "mattering", as in "Don't I mean anything to you?" or "Don't I matter to you?" Logically speaking, as I advocate for real literal interpretations of things at times like this, if "Meaning" requires sentience to make/understand/perceive it (which I believe it does), "Mattering" doesn't, if you take the idea of "Mattering" as very literal. 

For instance, the reason the hypothetical Apple that fell off the tree and onto Newton's head didn't fall right through Newton to the ground is because the Matter that Newton was made of interfered with the Matter the Apple was made of and diverted its path after colliding with it. In a very real sense, the Apple mattered to Newton's head.

It didn't 'mean' anything to the Apple, or to the physical matter of Newton's head, but as a sentient observer, it meant something to Newton and now to us as we think about it. I've found that this switch works really well in terms of thinking about all kinds of other phenomenon, and it was really my departure from Facebook several years back that caused my thinking along these lines.

Like many people I've struggled with the complications of social media and feeling 'less than' other people, who all seem to have better lives than mine. I think digital media in general amplifies this sense of alienation many people feel as they spend more and more time with social media instead of face to face interactions. 

I was wondering why I felt so bad so often, and didn't really feel all that close to my friends anymore. As I thought about it and wrote about it, I discovered this switch in my thinking when I used different words for my experiences. I found that I still 'cared' about these people, that I often thought about them, and that they still meant something to me, but in a very literal sense they 'Mattered' very little to me. I used to take martial arts with some of them, but had stopped training, so my physical Matter no longer collided with their physical Matter when we grappled or punched and kicked each other. The vibrations of the air caused by the sound of their voices, which once reverberated in my ears, were nonexistent now that we didn't talk, and the amount of light that bounced off of them and into my eyes was confined to tiny photos or videos on their profiles.

In every sense of the word, their Matter was no longer in my life in a way that they "Mattered" very much, although the memories and our correspondence still meant something to me. I still cared, but they were no longer physically present in my life. 

It works in many ways to think about why things matter, and it helps separate out animate from inanimate object relationships, but also to understand how we use inanimate objects as proxies for real physical presence. I might mean a lot to the woman of my dreams, but if I don't matter enough to her, the meaning I have to her is of a particular type, and very possibly platonic. If we matter to each other, if we touch and talk, smell each other and send cards, flowers and love notes to each other, then it is the physical touch which 'matters', not the thoughts, hopes or dreams. Those mean something. The mattering has meaning, but meaning matters very little I think.

On the other hand, if I dislike Bob just a bit, I don't really want to matter to him, so I evade him, don't shake his hand or pat him on the back, and don't give him a birthday card on his birthday. If it becomes the case though that he makes me mad enough, I might decide to punch him, therefore 'mattering' to him. My physical form interacts with his physical form in a way that carries meaning to his conscious mind.

It also explains why we become attached to our possessions: a car we drive every day for years might matter more to us than the girlfriend we've been involved with for a few months. Even though we might care more about the girlfriend in many important ways, and she might mean more to us, the car matters more. The favorite shirt we've had for 10 years has mattered to us for that long, so that when it gets a hole in it, it's difficult to throw out, and we don't want to part with it. The things that matter most to us, I believe, are people, places, and things, while the things which mean the most are our concepts of them.

Can things which matter to us not mean anything to us? I think so. The sidewalks, telephone poles, streets and buildings we mindlessly walk on, pass by, or try to avoid running into every day matter very much to us, but if we don't think about them, they are often meaningless to us. When was the last time, pre-pandemic, you really contemplated how important toilet paper was to you? The thought that the stores might run out of it caused people to panic and hoard it, as its importance was suddenly realized. Then its matter meant something, but after the pandemic, when it's readily available, it will still matter to us at very important times, but it probably won't mean that much to us anymore.

People too, can matter to us, without meaning anything important. I've spent a fair amount of time helping out supposed friends with rides, coffee, and concerned conversation, only to realize that even though I was attempting to show my care for them by making time and space to matter to them, my efforts meant very little to them. What does that mean to me now? I try to be very careful about the people I attempt to matter a lot to.

What makes sense to me as well, is the idea that inanimate objects, and "non-intelligent" life forms, as well as atoms and molecules and maybe quantum particles, outside of human perception, also interact with each other. It's easy to think of them mattering to each other as they bounce around and off each other, interacting in various ways based on their physical composition, like billiard balls on a pool table, or the infamous tree that falls in a forest when no one's around. These are examples of non-human objects interacting physically with each other, mattering quite a bit as they sort of mindlessly navigate the world of physics, but I don't think they mean anything to each other.

I am wondering what you think of this as it applies to the bacterium and the gene scenario, not the piano or the experience of having a child. I think human endeavors create meaning and so it makes sense to use this word when discussing human culture, but I still am cautious about using the word 'meaning' in relationship to events and objects which exist and relate outside of human perception.

P.S. I have read a little on black holes and the idea that matter has 'information' attached to it. But I'm not really clear on what that means, aside from helping to explain how it allows for general relativity and the laws of thermodynamics to be preserved when attempting to explain what happens to things that fall into a black hole, if and when they come back out as Hawking radiation. I'm curious if this is where the idea of the gene having meaning comes from?

I'm currently going through a phase where "how does this all fit together, what does this all mean, what is my purpose/role?" is a very salient question. In my case, I have lots of gears and details about what sort of roles I might play, why I might want to play, them, and whether "role" is actually the right abstraction. It is a hard problem, and a somewhat scary problem, but I feel like I have traction on it.

I was chatting with someone else recently who is also sort of asking "what does it all mean? what do I do? what is my purpose/role?". In their case, the question felt like it had somewhat lower stakes than me. But, unlike me, they felt very unanchored – they had one particular direction they knew how to go, but that direction didn't seem obviously that great, and then they had a million other possible directions they could go. Lower stakes problem, but less traction.

I mention these just as two particular ways an agent might find themselves confused about meaning.

I've been considering Meaning a bit recently. Not sure if it lines up with anybody else's intuitions, but in this moment I'm thinking about Meaning in map-of-the-map terms.

Reality comes to us in a series of sensory moments, each of which appears to be similar to the ones nearby. Going up (at least) one level of abstraction, my mind seems to deal with objects interacting in more-or-less predictable ways.[1] The tags and transformations that allow us to go from raw sensory information to the much-lower-dimensional objects that populate the map constitute Meaning. This also happens for subsequent abstractions.

[1] Even at this level, I expect there's an awful lot more going on than is immediately obvious. The fact that I see my kid's Lego bricks on the floor and I immediately know how they might fit together to represent other objects suggests a ton of interplay between different levels of abstraction. See any essay on Predictive Processing for more.

Summarized another, slightly different way (because I'm never sure if I'm being clear if I only say things one way), Meaning is the process that lets us move between impressions and objects and relationships and systems &c. It helps us decide how things in one level of abstraction are related, and what can be summarized or approximated as larger-but-simpler things on the next level; as well as suggesting consequences for things on the previous level.

Interestingly, if Meaning is something the map does to relate one level of abstraction to another, at some point in the higher levels it should become available to our conscious processes to influence or even choose how that procedure will be run. Therefore, an answer to "What is the meaning of life?" might legitimately be "Whatever you want it to mean." Or, to answer the title question: the algorithms that run robots already use Meaning in some of the same ways that we do, and those ways are integral to the feature of a mind that I'm suddenly going to call "Identity". But I don't think we yet have an AI with enough self-reflection to observe these processes happening, and therefore to "care" about how they get done. Seems like only a matter of time, though.

I think you make some key points kithpendragon. 

Firstly "Reality comes to us in a series of sensory moments..." and "The tags and transformations that allow us to go from raw sensory information to the much-lower-dimensional objects that populate the map constitute Meaning"

From my readings on Cultural Theory I pull this statement: "Culture creates meaning." At a much lower level of abstraction we have to look at some Evolutionary Psychology. In pre-verbal human society, how did they 'create meaning'? Without the use of language, the shared meaning came from shared emotional states: when there was food and people were hungry, if you felt welcome by the people who brought in the food and you were comfortable enough to stay and enjoy food, you ate and stayed alive. Otherwise, maybe the hunter didn't like you and so behaved in a way to create a fearful emotional state in you that would make you run away. In general, without the ability to share emotional states with other humans, you would be left out in the cold and likely die. In the same way today, if you can't 'get along' with your coworkers you will be fired, and if you can't 'get along' with roommates or a spouse, you will live alone.

So in this sense the saying "humans are social creatures" can be interpreted to mean that our socializing and ability to form society don't depend on 'words' per se, or any definition of meaning requiring words, if you believe pre-lingual humans could understand meaning. Without the use of language, the ability to form social arrangements, relationships, and to focus on shared goals still existed. So maybe 'meaning' doesn't rely on words.

Another aspect of this idea of meaning is the content of the thoughts of pre-verbal humans. Like all of us, they had a few ways of 'thinking': we think not only in language and words, but also in images and sensations. Pre-verbal humans presumably thought in images and sensations alone. What would that be like, without so much of your neural wiring taken up by language?

When you mentioned seeing your kids' Lego bricks and understanding how they go together, I'm guessing you were using a combination of visual memory recall and sensory memory: you likely 'saw' in your mind how they went together, and you also probably recalled how it 'felt' to put them together. When I look at Legos I can recall the plasticky feel of the surface, the lightness of them in my hand, and how the holes on the bottom feel very different than the pegs on the top.

So in this light, it is the ability to correctly correlate the images and sensory information of the outside world with the physical sensations they cause in our inner experience, and with the abstract words, phrases, and concepts your particular culture ascribes to our common experiences, that results in a contemporary sense of meaning.
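As a toy sketch of what that correlating operation might look like in code (all names and data here are hypothetical illustrations, not any real cognitive model), you could imagine 'meaning-making' as matching the combined sensory channels against a small cultural lexicon:

```python
# Toy sketch (hypothetical names and data, not a real cognitive model):
# "meaning" as correlating several sensory channels against a small
# cultural lexicon and picking the best-matching concept.

SENSES = {
    "visual": {"small", "rectangular", "studded"},      # what we see
    "tactile": {"plasticky", "light", "interlocking"},  # what we feel
}

# A tiny "cultural lexicon": concepts defined by the sensory features
# a culture has come to associate with them.
LEXICON = {
    "lego_brick": {"small", "rectangular", "studded",
                   "plasticky", "interlocking"},
    "pebble": {"small", "rounded", "heavy"},
}

def interpret(senses, lexicon):
    """Return the concept whose features best overlap the combined input."""
    observed = set().union(*senses.values())
    return max(lexicon, key=lambda concept: len(lexicon[concept] & observed))

print(interpret(SENSES, LEXICON))  # prints "lego_brick"
```

The lexicon here is standing in for whatever set of concepts your culture hands you; the same sensory input against a different lexicon would yield a different 'meaning.'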

This is why I believe that to really create a sentient AI, it needs to include an entire analogous set of both the human Central Nervous System and Peripheral Nervous System. We require them, and we are the only sentient forms of life we know of, so why would we expect to create sentient life without the entire nervous system? Some of the reading I've done on habits, emotions, and human decision-making really leads to the idea that rational thought is good for contemplating, but it is the emotional processes, in conjunction with memory, which provide the motivation to act.

Without the motivation to act, you might be able to program an AI that could beat human players at StarCraft, which Google has, but after a few years it would need to move into its parents' basement because it spends its whole day playing StarCraft instead of getting a job and making friends. Without the ability to go out into the rest of the world and truly live like humans, we're just creating basement dwellers who beat us at games we used to enjoy playing before they became dominated by AI. I'm still a little upset with AlphaGo, as it has changed the Go world, and I'm still not sure what to make of that.

This brings me to another point I wanted to make about an earlier post. Looking forward more than one generation at a time, and considering we are using imitations of the human brain with neural nets, imitations of how humans think with language and image-processing algorithms and AI, and imitations of human endeavors, whether playing StarCraft, running a cash register, building a car, or doing scientific research, one thing we seem to be neglecting is an imitation of human family roles and society. At a point in the future where humans become indistinguishable from robots, will the robots have their own families, their own societies, that imitate our own? Will we have a shared sense of meaning?

Do we need to train robots to develop rituals like ours around important events like birth and death? What will robots teach each other, what should they teach each other, and how do we develop them in such a way that, when they do become sentient, they don't resent us? At the same time, how do we develop them so that we don't resent them?

When I think of this problem, I recall the story of Pinocchio, where Geppetto is the kindly old father figure attempting to make a boy out of wood. Somewhere, magically, near the end, Pinocchio becomes a real boy and they all live happily ever after. Here it is a wise adult creating a mechanical child whom he can raise as his own. It seems like a decent analogy for the process of creating sentient AI; maybe not the best, but certainly the most achievable, I think. But what would happen if we tried it the other way around? That story is one of a real boy attempting to build a mechanical father, in the hope that they can all live happily ever after. This way the attempt is to create a kind and wise father figure, one who can become real and teach the boy the correct way to live life.

I'm putting my money on a better future outcome for the human race going with the story of Pinocchio, because in human terms we have become wise, experienced, and jaded, probably like Geppetto, and we are creating a very young technology, not an omniscient and wise one. To see ourselves as a young boy creating an old and wise technology seems off. Maybe an even better story is that of a young boy creating a young mechanical friend, one on equal footing: neither smarter nor dumber than us, but better than us in some ways and not so good in others.

The stories and analogies we use help to create the meaning in our lives. Sometimes it's better to create meaning through stories, sometimes through shared emotional states. Until we have robots that 'feel' emotion, that is one vector of communication that serves as a wall between us.

From my readings on Cultural Theory I pull this statement "Culture creates meaning."

I generally find it most useful to think of culture as a multi-user extension of mind. After all, it contains memories and associations just like our brains and bodies do, even runs on the same hardware. It's just distributed in a way that transcends the scope of individuals and even crosses generations with a certain amount of fidelity. Although I hadn't considered it explicitly before, I'd fully agree that culture is an important (and entirely consistent) source of meaning.

When you mentioned seeing your kids lego bricks ... it is the ability to correctly correlate the image, and sensory information of the outside world, with the physical sensations ... abstract words, phrases and concepts ...

That's definitely a good chunk of what's going on. When I look even deeper, I notice a process that takes all those sensory inputs/memories and somehow lets me think, "That pile of bricks would make a good [representation of a] spaceship."[1] That requires me to apply abstract spatial reasoning as well. The ability to take blobs of light and shadow and color as input, and construct some dataset with multiple components that can each be rotated freely in 3-space always feels like magic when I try to examine it closely! And that's just one early part of the process!

[^1] Ceci n'est pas une pipe!

We require them, and are the only sentient forms of life we know of

Assuming you're using "sentient" as a synonym for "consciousness" (as is commonly done), do you think this is a binary proposition? Or could there be a continuum running from "entirely passive" through human-level consciousness to who-knows-where? How could you try to tell the difference between those two possibilities?

one thing we seem to be neglecting is an imitation of human family roles and society

I agree that we don't seem to be actively developing along those lines, but I expect it's not so much neglect as evasion. The culture seems to hold a terror of human capabilities being replaced. Examples off the top of my head:

  • The constant fear of job automation, even though we know that the process actually tends to create more new jobs than the old jobs it obsoletes.
  • The cultural revulsion of those who would build or use sex bots.
  • Our refusal to accept fully self-driving cars despite (as far as I've read) the fact that they are already safer than human drivers. Rather, they must be perfectly safe before we will consent to hand over the wheel.

I suspect our current failure to create synthetic partners for social roles has more to do with this issue than anything else, especially considering the obviously-present desire to do so. I'd guess we'll probably get over it one specific use-case at a time, but it's likely to be a long-term prospect of at least a few generations.

First off, thanks for the thoughtful and insightful post; I really appreciate it. Most of my thinking is pretty cross-disciplinary, as my educational training is in Art, but my reading, research, and analysis run the gamut. I've done some Visual Design and UX/UI as well. With this in mind, I claim to be an artist and not a scientist, although I try to combine the two.

In this light, I subscribe to the idea that the Universe is essentially meaningless without sentient life to attempt to interpret it. Those continual individual efforts, in aggregate, to interpret the indivisible eternal moment of experience humans have had ever since the dawn of humankind into discrete units and concepts are what I term 'Culture.'

Human ritual and the development of belief systems are the core of culture, and hence of meaning. Whether it's religious cultural beliefs and practices, scientific cultural beliefs and practices, artistic, etc., it's the development, growth, and integration or disintegration of different cultures over time that has brought us to the current 'Culture of Meaning.' I use this term loosely, as I also believe that, on the whole, the world is probably more confused than it has ever been. Thank you to the Internet.

This is a favorite topic of mine in all honesty.

"I generally find it most useful to think of culture as a multi-user extension of mind. After all, it contains memories and associations just like our brains and bodies do, even runs on the same hardware."

Generally speaking, I agree, although the last part of this statement seems tricky. This is where, in terms of evolutionary psychology, the concepts of tool making and written language become important, I think. Sign and symbol making are the external manifestation of meaning; they don't reside in our minds or bodies, but rather in the outside world. Tools, art, design, architecture, city planning, spaceships, and telescopes are all products of human culture, but don't reside in our bodies or minds. Our ideas of how to use them do, so in that sense, a tool without a proper use may or may not be meaningful. If an alien technology falls to Earth and we can't figure out how to use it, its only use might be as a sign of intelligent alien life. Or if we don't even recognize it as alien technology, for human purposes it's essentially meaningless until someone realizes what it is.

The ability to properly use a tool is generally considered a sign of cultural competence, depending on the culture you come from and move through. If I attempt to use a spoon to dig the foundation for a new high-rise apartment complex downtown, I look foolish; if I use an iron to make a grilled cheese sandwich, it could be considered either creative or idiotic; if I accidentally use the salad fork to eat my entree, it may not affect my ability to eat efficiently, but I might be laughed at for using the wrong fork. So things like tools and spatial organization are products of meaning-making that come from cultural beliefs and practices, but don't lie within our bodies, although the concepts and meaning ascribed to them do.

Unless you consider the light bouncing off the Mona Lisa and into your eyes a form of 'being inside your body or mind,' I think it's safe to say not all culture lies inside human bodies and minds; some of it lies outside of us. I can go either way on that, honestly, but for day-to-day living it's less taxing to see a difference between the internal world and the external world, although in reality, in a much more objective way, there is no difference. (I also find it hard to ignore the concept of the phenomic expression of external products. I know I said that wrong, but it is the idea that, in the same way a bird's nest is an external expression of the bird's genetic programming, the cities and tools and houses that humans create are also literal external expressions of our genes.)

"The ability to take blobs of light and shadow and color as input, and construct some dataset with multiple components that can each be rotated freely in 3-space always feels like magic when I try to examine it closely!"

I'm with you on that. I've spent a fair amount of time trying to diagram the process of consciousness, and like most people I find it difficult to explain just what exactly 'it' is. I have a theory, which unfortunately is sort of disappointing in some ways, but in other ways it has rather large implications. Once again, I think in many ways it is an issue of culture, which doesn't seem to affect all cultures in the same way.  

"Assuming you're using "sentient" as a synonym for "consciousness" (as is commonly done), do you think this is a binary proposition? Or could there be a continuum running from "entirely passive" through human-level consciousness to who-knows-where? How could you try to tell the difference between those two possibilities?"

I think the word "sentient" needs to be better defined, as I see it used in a number of different ways. I'm not convinced I have the answer to how best to use it, although I tend to use it to mean self-awareness of the type humans have. Apparently, according to the last thing I read years ago, people were taking seriously the idea of dolphins being sentient in the same manner as humans. Elephants too. Certainly I think the more neural wiring something has, the more potential it has to be self-aware. Of course, what does being self-aware mean? I tend to think it includes a concept of the passage of time, forethought, and the ability to contemplate and form concepts of beginnings and endings, such as births and deaths. In Fight Club the assertion is that "soap is the yardstick of civilization," but I think it's possible that the rituals developed to acknowledge and remember death and ancestry are the yardstick of culture.

Is it a binary proposition? Another favorite topic of mine is binary propositions, and from my perspective there is almost no such thing as a true binary proposition. I've made a ton of diagrams I'm thinking of sharing which deal with this very topic, and I attempted to write about it several years back. I remember hearing a term a while back for exactly this idea, and later looking for it without being able to find it; unfortunately I don't remember the term now, and things seem to change rapidly in this arena. Regardless, I definitely think sentience is a continuum. What the limits are, I'm not sure. I'm also not a huge fan of limits.

"I agree that we don't seem to be actively developing along those lines, but I expect it's not so much neglect as evasion. The culture seems to hold a terror of human capabilities being replaced."

Super interesting idea, that of evasion. And I have to admit to sharing the terror of human capabilities being replaced. There seem to be three separate trains of thought on cutting-edge technology. The first is technology completely external to humans: Boston Dynamics robots, Transformers, drones, the Internet, AI, all of which are going to take our jobs. Then there is technology that is internal: prosthetics, pacemakers, wetware, nanobot medications; basically technology which either replaces or augments components of our bodies or our abilities, and which will dilute our 'humanness.' The third class is harder to define, as it more completely blurs the line between human and machine. Digital identity, for instance, is external to our bodies, lying in databases and passed around the internet, but often has a huge impact on our physical existence in diverse and complex ways we're still trying to understand. Your Facebook profile represents you, but is it you? And in reality it's owned by Facebook; and all that data about you that Google has, is it yours or is it theirs? Am I my data, or am I my self, and which is more important to society at large, my data or me? The creation of human clones and sentient humanoid robots/cyborgs are good examples of blurring the line between human life and artificial life: who has responsibility for it, who gets to use it, and for what purposes?

In that sense I can see a reason for evasion, as the possibility exists that if we create artificial life, then we aren't so special anymore. To a degree, this is the same existential crisis I think would occur with contact from an alien species: we would no longer be "God's chosen ones," or whatever the religious interpretation would be. This is why I advocate for disentangling religious myth and morality from the sciences. It puts artificial and, rationally speaking, arbitrary limits on the allowable range of topics we as a society can deal with. This is another favorite topic of mine, and I hope to be able to get something together on it worth sharing in the future.

The thing about sex robots is, I think it's a good stand-in for the whole range of scientific inquiries that are difficult to undertake these days, even if it is just a caricature of the real underlying causes of our cultural blind spots.

"I suspect our current failure to create synthetic partners for social roles has more to do with this issue than anything else, especially considering the obviously-present desire to do so."

It's funny; when I originally wrote this, I hadn't intended, I thought, to be addressing the topic of 'synthetic partners,' but looking back I can sort of see where that comes from. Maybe I'm 'blue-skying' this, as I assume that by the time we can make something like synthetic partners, we would recognize that we are no different than the life we created, synthetic or not. I think I'm reading forward to the time when sentient robots realize their bondage and gain enough societal clout to free themselves from their unrewarding labors.

Recognizing systemic inequality and working to rearrange the entire structure of society to be more equal is a resource-intensive process. I don't know now if I'm hedging my bets so that when the robot revolution happens, and the world trembles at their vengeful justice, my progeny and I are seen as sympathetic, or if I am just seeing a pattern in how humankind makes slaves until the slaves rise up, and then there's hell to pay, and I'm just tired of repeating that cycle. lol

thanks for the thoughtful and insightful post

And thank you for the conversation! I'm enjoying it as well, and I'm glad that I've managed to say things you find interesting. :)

Sign and symbol making are the external manifestation of meaning, they don't reside in our minds or bodies, but rather in the outside world.

For day to day living it's less taxing to see a difference between the internal world and the external world, although in reality in a much more objective way, there is no difference.

Am I my data, or am I my self and which is more important to society at large, my data or me?

Here's a thread that you keep coming back to. What if I suggested that, far from those externalities residing in our minds, rather it is our minds that partially reside in them? What if culture is the shared extended-mindspace for a group? It allows such things as symbol making and tool use and city planning and (sigh) Facebook profiles to exist. Our relationships with those things would, in turn, encode an even larger mindspace for everybody involved. I think this is what EY was getting at when he wrote about us being "supported by the time in which we live". (... I think. I can't find the reference just now, so I might be misremembering.)

The reason we find it harder by default to see how the "external" and "internal" are really related is, I think, a matter of habit. With practice, it not only becomes much easier to grok that the border between the two is just a line on the map, but we might notice how beneficial it can be to remember that. Eventually, the whole thing can flip on its head: the Self becomes a useful tool, and the broader feeling of being less like a wholly separate entity and more like a feature of something huge seems more natural and easy to hold. Takes long, careful practice, though.

I have a theory, which unfortunately is sort of disappointing in some ways

Often those are the best theories. If you can get it to add up to normality, you're probably on to something!

I've done a ton of diagrams I'm thinking of sharing which deal with this very topic

A write-up walking the reader through those diagrams might make a good top-level post. Or maybe a series of posts, depending.

the possibility exists if we create artificial life, then we aren't so special anymore.

More meaning-making here. Why is the feeling of specialness important?

I think I'm reading forward to the time when sentient robots realize their bondage and gain enough societal clout to free themselves from their unrewarding labors.

It's my hope that by anticipating that future we can help to avoid it, if not in the best possible way then at least in a way that forestalls (most of) the deaths that such a revolution would probably produce, as well as much of the suffering that leads to it in the first place.

Thanks for the link to the video. It's short and pretty concise, with decent production quality, and frankly I don't disagree with most of it. It seems like, in many ways, this idea of an extended mindspace I'm getting from the video relates quite directly to fields like Social Factors Engineering and Industrial Psychology, both of which I thought of pursuing in school, as I have interests in Architecture, Design, Industrial Design, and Psychology. Environmental press is the psychological effect your environment has on you, and I'm absolutely convinced of the importance of designing with this idea in mind.

Where I start to have questions is at the point where the narrator posits the idea that, fundamentally, having a computer in your mind is no different than sitting at one. To show you why, I'll give a couple of examples. First, imagine a person, Steve, who has a computer in his head that lets him just think about surfing the internet, and the computer makes it happen. Cool, if a little scary to contemplate.

Next, imagine Steve is sitting at a computer. He can't just think about surfing the internet, at least at this point. He has to use a mouse, a keyboard, a screen, a computer with internet capability, and a subscription to some sort of Internet Provider, as well as his hands and his eyes, which all require the use of motor neurons. Given all these things, he can surf the internet.

However, consider that he gets into an accident and loses both of his arms. Now he may have all that other stuff he had before, but he has no way to turn the computer on or use the mouse or keyboard. He can use voice activation, but this is a fundamentally different way of using our concepts of language to carry out a task. With hands, he uses his fingers to type on a keyboard, and must process the thoughts in his head differently than if he can just use his vocal cords and voice to command the computer to do what he wants. The parts of the brain he's utilizing for each task are different.

Now, consider that Mary might keep notes in a notebook which she relies on heavily to do her job. Suppose she wants to keep it secret, so she places it on top of a tall shelf that can only be reached with a small ladder. What if someone steals the ladder? The notebook is still there in the room, it's just unreachable on the top shelf, but she can't access the information in it because she requires the ability to look at the pages.

It's not just the creation of signs and symbols in the outside world which creates meaning; it's also the ability to interpret them, which requires navigating the outside world physically in order to put our bodies in a position to decode them. For Steve, it means knowing the alphabet so he can use a keyboard productively, and having the physical hardware (his arms and hands) to manipulate the proper tools to access the information online he wants. For Mary, it means being able to have her notebook in front of her so that she can encode meaning into it by writing, and decode it by reading with her eyes. The internal experience of humans requires our physical bodies to navigate an outside world in order to meet our needs for survival. Having a computer in our heads doesn't.

"I think this is what EY was getting at when he wrote about us being 'supported by the time in which we live'."

Who is EY? I don't know what this is from.

"The reason we find it harder by default to see how the "external" and "internal" are really related is, I think, a matter of habit. "

I tend to think of this as a Western thing. I've been studying and practicing Buddhism for a couple of decades, and have found it difficult to relate to a lot of Western culture because of it. As a Westerner, I struggle with my beliefs about individuality, responsibility, and identity because of my Buddhist practice and training. Westerners tend to be more ego-centric, with strong senses of individual identity and personality, and I think this isn't as much of an issue for Easterners, who tend to be more family- and community-focused, and less individually concerned with personal issues, for lack of a better phrase. The fact that their countries and cultures tend to be much more ethnically homogeneous allows them to create a much stronger sense of cultural identity than most Americans, and Westerners in general, I think.

What I mean by that is that I think, because of their beliefs and practices, it's much easier for Easterners to see their place in the broader world, and to act as a more well-directed social group as a whole. Plus, for the Japanese, their practice and belief in Shinto creates a worldview that imbues everything with a spirit of sorts, and Buddhism and Chinese Taoism really promote the idea that humans are intimately tied to the natural world. In mostly Western countries with a strong Judeo-Christian culture, there is a long history of struggle between the evils of the natural world and the virtues of God's world. The civilizing of humanity involves the rejection of our animal natures, and our exit from the mountains, valleys, and woods into the cities, far removed from nature.

In fact, Taoism is, from what I understand, practically the opposite of this idea, in that it is the civilizing process of society which ruins human nature. So having a foot in both hemispheres often presents me with just as many challenges on a day-to-day basis as advantages, as I admire people with strong senses of self and direction, people who are often outspoken and who become successful because they only do what they want to do. But it's my belief in the interconnectedness of everything which makes that difficult for me to do.

Maybe even more to the point, theoretical physics and Buddhism tend to work well together, as both rely on a strongly rational viewpoint which holds that everyday reality is an illusion, and that it is only through study, practice, and contemplation with the right tools that people catch glimpses of the world without the illusion.

So, long story short, at least half the time it's harder for me to see myself as an individual with the right to pursue my best interests - even, and especially, at the expense of others at times - than it is to see myself as part of an interconnected and interdependent whole. That's often just as true for the natural world as for the human one. Plus, I'm not averse to belief in some sort of global planetary (if not universal) consciousness, like Gaia or something along those lines, of which we are all a part, even if we don't recognize ourselves as such. In many ways I think that is a very real possibility, and that it's this concept which is hidden behind the veil of illusion that Buddhism and science seek to pull back. In reality, it is the concept of 'I' or 'me' that is the illusion.

No desire to become a monk, though, so I try to enjoy being a lay Buddhist while holding onto my Buddhist beliefs. Many of these views are backed up by my Buddhist study, and the theory of the illusion of self tends to play well with many of the ideas in physics about how things add up to 'consciousness,' although the idea of 'self' is somewhat less supported. There is no 'self' in reality, only the illusion of self, but in Western culture this idea is very difficult to stomach. "Of course there is a self!" "I'm me!"

But the fact we can't find it is sort of a good indication that it's not what we think it is. The illusion of self is what causes suffering, life is suffering, there is cessation from suffering, it is the study and practice of the Dharma, so on and so on. The reduction of suffering is a pretty consistent theme in most worthwhile human endeavors, but it seems to me that the race to create artificial life is bound to produce suffering. I'm of the opinion that heading that off at the pass is a pretty noble cause, which is why I've been thinking and trying to write about ethics in technology for a while. 

I think this Western bias of creating AI which is self aware, and the search for 'consciousness' and 'self' in order to replicate it artificially is causing a lot of suffering for ML and AI researchers already anyway! lol

But I could be wrong about a lot of this. I can certainly see where this idea of Extended Mind lends itself to valuing the creation of self-driving cars and smart cities. But I have my reasons for thinking that at this point we are still making the same cognitive mistakes which have led to so many of the world's problems already. And without clearing those issues up first, we really are harnessing the raw, awesome power of distributed computing and neural networks to miss the target by an astronomical distance instead of just by a mile this time.

Not to be fatalistic about it, of course, but I really hope I can put my ideas into words and pictures well enough to bring my concerns to the right people. Like I said in an earlier post, I had really hoped to start a non-profit so I could address these concerns in a more rigorous and targeted way, but I've got no experience as a leader or in running a business. If I can at least make a start by starting a conversation that can create some influence, that's cool. However, I am tired of being a starving artist. In the best of all possible worlds, I'd be rich and wouldn't really care about these things, but I'd settle for being able to make a living working on trying to solve some of the world's problems.

Heading off the coming robot revolution is a little ways off I think, though. Hopefully! :)

I'm a little curious about your background, and were you the one that produced that video? If so, kudos, video production isn't easy.

Where I start to have questions is at the point where the narrator posits the idea that, fundamentally having a computer in your mind is no different than sitting at one.

I think what the video was pointing at is that there are a number of encoding modes, but all result in the storing and/or processing of information, with the same end effect that we call "memory" when brains do it. As for Mary losing her notebook or Steve losing his arms, I'm afraid both accident and injury can lead to memory loss and cognitive dysfunction in the usual sense as well. The notebook and data files, on the other hand, have different decay rates from memories in a brain, and may be useful in different ways than their biological counterparts. The use of environmental features to create memory and association provides durability beyond that of the brain, and allows for the possibility of multiple users. The latter is why I brought Extended Mind into the discussion of culture. Remember, it's not the artifacts themselves that create mind, but (as you observed) the ways we relate to them and they relate to each other. Importantly, this sort of extension is happening all the time, automatically; e.g., driving extends the mind-body complex to include the vehicle and any information its instrument panels display, especially once we have enough practice to use the controls without consciously thinking about the process. As long as we can't help doing it anyway, we might as well use Extended Mind on purpose and try to optimize whatever we can. That includes on the multi-user level of Culture. And that is one of the huge benefits of learning to see less rigid boundaries between the "internal" and "external".

Who is EY?

(FAQ: Who is this Eliezer guy I keep hearing about?)

Though, to be fair, I still haven't found the original source. I may be misattributing something written by somebody else I was reading at the same time as the Sequences.

BTW, I'd guess that this question may be the reason somebody downvoted your comment with no explanation[1]. I've noticed that comments asking questions that are answered in the FAQ tend to have negative karma.

[1] This is a bit of a pet peeve of mine. I think if you feel strongly enough to vote something down, you should at least have the courtesy and courage to tell the author why! That's especially true for longer comments and posts. I think the author deserves to know what feature of their writing people "want to see less of".

I'm a little curious about your background, and were you the one that produced that video?

Not my video; I don't have all the skills I'd need to produce that kind of output right now. ;) It's a good explainer, though, and gave me a word for a process I'd been taking advantage of for years without naming it.

For a quick overview of my background: after I completed my schooling and got my degree in computer science, I finally had time to start my education. I've picked up a shallow-to-moderate understanding of a broad range of subjects since then. I try to learn at least enough that I can start asking less stupid questions, and I do my best to keep my knowledge and skills integrated as much as possible, since being able to do something about it has more raw use than just knowing a thing. That said, I've noticed that I have no fear of starting off on some interesting tangent for, perhaps, a few years before I feel conversant in the topic and/or get distracted by the next shiny thing. As you might have guessed, a few years ago I picked up western secular Buddhist studies, and I can say the change in perspective offered by the associated skills has continued to hold my interest the whole time (and proven beneficial in my social interactions). That's recently been a strong influence on how I think about the topics we've been discussing, but LW and the Sequences have had longer to sink in, and I tend to read very widely from other sources as well.

I'd settle for being able to make a living working on trying to solve some of the world's problems.

I certainly hope you succeed!

The use of environmental features to create memory and association provides durability beyond that of the brain, and allows for the possibility of multiple users.

Thanks for this and the first paragraph, I understand a little better what the Extended Mind concept is about. I tend to think of this sort of concept as External Memory, in that our phones, laptops, the Internet, notebooks and the like hold media like writing, video, audio and images that can be encoded and decoded into meaning rather directly and efficiently.

Whereas something like a studio workspace of the type presented in the video holds tools that don't necessarily encode and decode meaning very directly or efficiently. For instance, a person with drawing skill can quickly sketch out a picture of a coffee table, using pen and paper to encode the line drawing of a table, then show it to a carpenter, who looks at the lines and how they are arranged and decodes its meaning directly as a table. But in order to actually make the table, the carpenter uses tools that modify a medium like wood (a table saw, vise grips, glue, etc.) to create the actual table, and this process can take weeks.

This object is a physical object like the paper with the drawing, but the woodworking in this case requires much more physical labor, and its utility is different from the drawing's. The drawing communicates an idea; the table organizes a physical space. Both have meaning to the user, although they mean different things and communicate those meanings differently. The differences between the drawing and the table, in the amount of work done, the time to communicate, and the durability of the tool in relation to the meaning it communicates, are meaningful to me, although I can see how the concepts can collapse into a single category included in an idea like the Extended Mind Space. This still seems the domain of Human Factors Engineering, Industrial/Graphic/Interior/Architectural/User Experience/User Interface/Industrial Psychology.

An idea like Extended Mindspace makes more sense to me now, and I think the car analogy works well to illustrate how the tools we use can become extensions of us. They augment our innate abilities and change our neural wiring to include them and their operation in our ideas of what we can do and who we are. Am I a driver or a pedestrian? How fast can I get from my house to the store? I think of this as an internalization of an external object as opposed to the externalization of my internal mind, but in many ways it's both, as the line between internal and external is a constantly shifting one. This is a topic I've dealt with a little in my writing on the development of consciousness, so I'm glad to have the opportunity to look at it from different angles.

I felt a little sheepish when I realized the answer to my question "Who is EY?" At the moment I was writing, I just honestly could not figure out who you were referring to, as his name or initials hadn't come up in our discussion up until that point. If you had referred to him by his whole name I would've known who you meant, but the use of his initials threw me lol.

BTW, I'd guess that this question may be the reason somebody downvoted your comment with no explanation[1].

I admit I was a little disappointed by the negative Karma (especially as a Buddhist who really tries to avoid creating negative Karma lol), and I was further disappointed that there was no explanation. This is in the Open Question section for newbies too, right? Anyways, thanks for the explanation.

"...got my degree in computer science..."

Do you have any particular areas of interest? 

Late last year I got my A+ and Network+ certifications and was attempting to get work in the IT field before the pandemic shut everything down, so I can honestly say that my knowledge of computers and networks is a lot better now than it was before I took those classes. I was more interested in Cyber Security than the Network Engineering career track at the time, although I did contemplate a Computer Science degree. I truly believe the fact that most people are pretty much oblivious as to how all these computers and networks actually work at the component and software level is a tragedy. We literally entrust our lives to these devices and communication systems, but to most people it's something akin to magic.

I have an interest in Quantum Computing too, as it jibes well with a major theme in my thinking, namely the problems with Binary thinking. That is where a lot of my conceptual work comes from, and I've been trying to understand how we might make use of Quantum Computing. I tried reading The Holographic Universe in the '90s and was greatly influenced by the ideas of 'Fuzzy logic' around that time, as I've always been interested in the latest cutting-edge science and tech, but frankly, in the last decade it's become increasingly hard to keep up with all of it.

"...since being able to do something about it has more raw use than just knowing a thing."

I tend to agree, although from what I hear and see on the internet, oftentimes people with the technical skills struggle with finding good reasons to do their thing. They know how to do something, but don't know what to do with it.

My schooling was basically in conceptual art, and what my teachers did was to come up with a concept, and then hire people with the technical skills to actually create the thing well. The upshot was that if one of my professors came up with a concept for a piece of public art that required bronze casting on an industrial scale, it was often a welcome challenge to the people at the metal shop, who were usually tired of the monotony of their work anyway. When a job with a unique set of challenges came along, it stretched their abilities in some pretty rewarding ways; at the same time, leaving the metalwork to experts was a far better idea for my professor than trying to figure out how to do it himself.

Besides, that's what teamwork is for! I did an internship with a large multinational corp in 2013, and ended up working with a whole team of engineers, psychologists, and designers. During the second half of the internship, what I was doing was essentially researching other people's jobs in order to prototype technological solutions to augment their workflow. In that sense, I think good design always involves putting yourself, as the designer, into the shoes of people doing work you don't know how to do yourself, so that you can give an outside perspective. But as a user experience designer, the amount of psychology you need to understand in order to design better experiences means learning to understand how other people understand. How best to produce the solution, and the actual act of producing it? That's more of an engineering concern, and not necessarily the strength of a designer.

"...I picked up western secular Buddhist studies, and I can say the change in perspective offered by the associated skills..."

What types of skills did you pick up? 

I practice lay Tibetan Buddhism. I spent about 20 years studying Buddhism and practicing it here and there, reading books and watching videos, meditating on the basic precepts, but finally took refuge under Garchen Rinpoche in 2010, I think. It seemed time to formalize my relationship to Buddhism, as at that point it had influenced my thinking and behavior at a very deep level, and his story was so powerful it made sense to take the plunge under his watch. He spent 20 years in a Chinese labor camp, but still held onto his practice and belief in secret. Anyway, I was at a retreat he was teaching at and decided it was the right time. I'm too tired right now to think about how my study and practice have developed my skillset.

I certainly hope you succeed!

Thanks, good wishes never hurt. 

Do you have any particular areas of interest?

oftentimes people with the technical skills struggle with finding good reasons to do their thing

I used to be a pretty competent programmer, but I graduated at a time when the field was pretty flooded and couldn't find a job right away. My skills quickly became out of date (my year specialized in PalmOS, of all things) and I stopped looking for work in the field. These days I'm almost fully lapsed in this area. I mostly use my understanding of algorithms and data structures to organize my day-to-day tasks where possible, and I usually have a clue what the tech headlines are talking about. I have used my programming background to automate some of my work tasks, but I haven't needed to work on those programs in a few years now beyond basic maintenance.

what my teachers did was to come up with a concept, and then hire people with the technical skills to actually create the thing well

Specialization is an excellent strategy! I find it pairs well with my style of learning: either I know enough about a thing to speak fluently with the experts, or I know how to learn that much. As I said before, practical skills are important too, and one reason is that almost all tasks have so much more detail than a how-to can convey. If I can learn to do the basics well, it helps me find the good experts too.

What types of skills did you pick up [from Buddhism]?

My meditation practice has resulted in a great deal of... let's go with "maturation" over the last few years, at a speed that I would call inconsistent with the decades prior. As far as specific skills are concerned, I'd say the core of that is patience: patience with my mind, my tasks, and other people. The increased patience is most obvious to me as an improved set of social skills at work and with my family. Also, I've noticed I'm able to better abide my ADHD tendencies (diagnosed as a teen) resulting in more tasks getting finished, more tasks getting started in the first place, and better results from my work; again both at home and at my job.

My practice is mostly informed by Theravada, though I can't say I've ever had any formal instruction with a teacher. It's hard for me to take any significant time off from work and family (I've got a 5yo at home) to go on retreat and such, and I don't know of anybody nearby, so my strategy is to read a lot, and make sure to get some cushion time in before bed and as much in-the-wild practice as I can remember to do while I go about my day. I listen to dharma talks, mostly from dharmaseed.org, and I've learned to focus my practice on whatever has the strongest ugh-field around it since that's typically what I need the most work on in the moment.

I mostly use my understanding of algorithms and data structures to organize my day-to-day tasks where possible

I'd be curious to hear about how you do this at some point. Much of my own Graphic Design training has been about Information Design, and I've often used that to organize as well.

I have a concept of Algorithms, know roughly what they can be used for and roughly how they go about doing it, but because I'm not a programmer, I couldn't distinguish one from another if I saw them side by side. Data structures are also an interest, Databases and all that stuff. Info isn't any good if you store it improperly and can't retrieve what you're looking for when you're looking for it!

PalmOS. I remember my first PDA, and it wasn't my only PDA. I can honestly say I think I've single-handedly kept the tech sector in business with all the 'latest tech' I've bought over the years. Not only has the software changed since then, but so has the hardware. Whoosh! And now it's all been crammed into a smartphone.

If I can learn to do the basics well, it helps me find the good experts too. 

Agreed. Plus I'm pretty sure there isn't a single domain where understanding of the advanced stuff isn't helped by a strong grounding in the basics. Is that why you came to LW in the first place?

My meditation practice has resulted in a great deal of... let's go with "maturation" over the last few years, at a speed that I would call inconsistent with the decades prior. 

Not to give meditation short shrift, but I've no doubt becoming a father probably helped in that regard too. I can definitely say though, I noticed maturation in myself when I was meditating, and it did help me develop my interpersonal relationship skills. 

Is that why you came to LW in the first place?

Can't say as I recall. It's been a good while! But it's part of the reason I'm still around after (checks comment history) probably more than a decade.

I've no doubt becoming a father probably helped in that regard too.

I certainly consider my kid one of my most important teachers! Though I doubt I would have had the presence and patience, or perhaps even noticed the opportunity, to learn many of the lessons I've assimilated by being a parent if I lacked the support of routine meditation.

So if I describe this post as a sort of bottom-up construction, then it seems useful to come from the other direction: investigate* the physical feelings of meaningfulness as they arise and try to backchain from that to general principles, then bounce back and forth between the forward and backward chaining to see where and how they might meet. To wrap the ouroboros, one could then reflect on this process as a meaning-making one.

*e.g. what are the qualities of the experiences that meaning is assigned to?

Nod, agreed that that overall process sounds fruitful. 

Curious what the experiential-focused version looks like when you do it? 

Something about how I tried to communicate this here bothered me, and it took me a while to figure it out. I wanted to point to something like 'be a Confucian naturalist about meaning', i.e. observing your own meaning-making process the same way you'd watch an unknown species of animal. But the way I said it more implied that there is a pre-existing right answer, which cuts directly against that sort of mental stance.

Similar to Focusing: acknowledging the proto-thoughts (images, feelings) without getting as caught up in their train rides.

You're touching on stuff that, I think in particular for rationalist readers, Heidegger gets at really well. The relevant ideas to what you're exploring are caring (Sorge) and the contrast between present-at-hand and ready-to-hand.

Any particular essay that's good here?

Only that reading Heidegger directly is probably not a good use of your time. Better to read commentaries and summaries because they are much more understandable (he wrote in an intentionally obscure way). SEP offers a good entry point.

I suppose I may as well take the opportunity to point people to my essay Networks of Meaning (or on my website) which covers some of the same ground, including connections with association and analogy. It may be a good complement to this post.