Is there a database listing, say, article, date, link, and tags?  That would give you the ability to find trending tags.  It would also allow cluster analysis and a way to find articles that are similar to a given article, "similar" meaning "nearest within the same cluster".
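A minimal sketch of what I have in mind, assuming a hypothetical articles.csv with title, date, link, and tags columns — the column names, the TF-IDF tag vectors, and the KMeans/cosine-similarity choices are all my own illustration, not a description of any actual Less Wrong export:

```python
# Toy sketch: given a table of articles with tags, cluster them and find
# "similar" articles, meaning nearest neighbours within the same cluster.
# Column names and parameters are illustrative assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

articles = pd.read_csv("articles.csv")  # hypothetical columns: title, date, link, tags

# Vectorize each article's comma-separated tag list.
vectorizer = TfidfVectorizer(token_pattern=r"[^,]+")
tag_vectors = vectorizer.fit_transform(articles["tags"].fillna(""))

# Cluster on the tag vectors; the number of clusters is a knob to tune.
articles["cluster"] = KMeans(n_clusters=10, random_state=0).fit_predict(tag_vectors)

def similar_articles(i, top_n=5):
    """Articles most similar to article i, restricted to i's own cluster."""
    same = articles.index[articles["cluster"] == articles.loc[i, "cluster"]].to_numpy()
    sims = cosine_similarity(tag_vectors[i], tag_vectors[same]).ravel()
    ranked = same[sims.argsort()[::-1]]
    return articles.loc[ranked, "title"].drop(i, errors="ignore").head(top_n)

# Trending tags: tag frequency over the last 30 days.
articles["date"] = pd.to_datetime(articles["date"])
recent = articles[articles["date"] > articles["date"].max() - pd.Timedelta(days=30)]
print(recent["tags"].str.split(",").explode().str.strip().value_counts().head(10))
```

Once the clusters exist you could also look at which tags dominate each cluster, which is close to the "trending tags" idea from the other direction.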

I agree with the separation, but offer a different reason.  Exploratory writing can be uncensored; public writing invites consideration of how the audience will react.

As an analogy, sometimes I see something on the internet that is just so hilarious... my immediate impulse is to share it, but then I realize that there is no upside to sharing, because I pretend to be the type of person who wouldn't even find it funny.  Similarly, on more philosophical subjects, sometimes I will have an insight that is better kept private.

You see what I did there?  If I were writing this in my journal, I'd include a concrete example.  However, this is a public comment, and it's smarter not to.

I copied our discussion into my PKM, and I'm wondering how to tag it... it's certainly meta, but we're discussing multiple levels of abstraction.  We're not at level N discussing level N-1, we're looking at the hierarchy of levels from outside the hierarchy.  Outside, not necessarily above.  This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.

I was looking for real-life examples with clear, useful distinctions between levels.  

The distinction between "books about books" and "books about books about books" seems less useful to me.  However, if you want infinite levels of books, go for it.  Again, I see this as a practical question rather than a theoretical one.  What is useful to me may not be useful to you.

I don't see this as a theoretical question that has a definite answer, one way or the other.  I see it as a practical question: how many levels of abstraction are useful in a particular situation?  I'm inclined to keep my options open, and the idea of a theoretical infinite regress doesn't bother me.

I did come up with a simple example where 3 levels of abstraction are useful:

  • Level 1: books
  • Level 2: book reviews
  • Level 3: articles about how to write book reviews

We're using language to have a discussion.  The fact that the Less Wrong data center stores our words in a way that is nothing like how our human brains store them doesn't prevent us from thinking together.

Similarly, using a PKM is like having an extended discussion with myself.  The discussion is what matters, not the implementation details.

I view my PKM as an extension of my brain.  I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory.  You can make the distinction if you like, but I find it more useful to focus on the similarities.

As for meta-meta-thoughts, I'm content to let those emerge... or not.  It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.

I don't see your distinction between thoughts and notes.  To me, a note is a thought that has been written down, or captured in the PKM.

No, I don't have an example of thinking meta-meta-rationally, and if I did, you'd just ask for an example of thinking meta-meta-meta-rationally.  I do think that if I got to a place where I needed another level of abstraction, I'd "know it when I see it", and act accordingly, perhaps inventing new words to help manage what I was doing.

I am a fan of PKM (Personal Knowledge Management) systems.  Here the unit at the bottom level is the "note".  I find that once I have enough notes, I start to see patterns, which I capture in notes about notes.  I tag these notes as "meta".  Now I have enough meta notes that I'm starting to see patterns among them... I'm not quite there yet, but I'm thinking about making a few "meta meta notes".
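For what it's worth, here is a toy illustration of what I mean by levels emerging rather than being fixed in advance.  The Note class and meta_level function are just my own sketch, not how any particular PKM tool works:

```python
# Hypothetical sketch: notes that reference other notes; the "meta level" of a
# note is not declared up front, it is computed from what the note points at.
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    about: list["Note"] = field(default_factory=list)  # notes this note is about, if any

def meta_level(note: Note) -> int:
    """Level 0 = a plain note; level n+1 = a note about level-n notes."""
    if not note.about:
        return 0
    return 1 + max(meta_level(n) for n in note.about)

books = [Note("Reading notes on Book A"), Note("Reading notes on Book B")]
pattern = Note("Pattern I keep seeing across my reading notes", about=books)
meta_meta = Note("How my pattern-notes themselves cluster", about=[pattern])

print(meta_level(meta_meta))  # 2 -- the level emerges from the links, not from a preset schema
```

A third level only exists once some note actually points at meta notes, which is the sense in which the structure emerges from the content.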

Whether we're talking about notes, memes or rationality, I think the usefulness of higher levels of abstraction is an emergent property.  Standing at the base level, it's hard to anticipate how many levels of abstraction would eventually be useful, but standing at abstraction level n, one might have a better idea of whether to go to level n+1.  I wouldn't set a limit in advance.

What's the problem with infinite regress?  It's turtles all the way up.
