Are you doing this from within Obsidian with one of the AI plugins?  Or are you doing this with the ChatGPT browser interface and copy/pasting the final product over to Obsidian?

Thank you for sharing this.  FYI, when I run it, it hangs on "Preparing explanation...".  I have an OpenAI account, where I use the gpt-3.5-turbo model on the per-1K-tokens plan.  I copied a sentence from your text and your prompt from the source code, and got an explanation quickly, using the same API key.  I don't actually have the ChatGPT Plus subscription, so maybe that's the problem.

ChatGPT has changed the way I read content, as well.  I have a browser extension that downloads an article into a Markdown file.  I open the Markdown file in Obsidian, where I have a plugin that interacts with OpenAI.  I can summarize sections or ask for explanations of unfamiliar terms.

On the server side, Less Wrong has a lot of really good content.  If that content could be used to fine-tune a large language model... it would be like talking to ChatLW instead of ChatGPT.

Things like explanations and poems are better done on the user side, as you have done.

Is there a database listing, say, article title, date, link, and tags?  That would give you the ability to find trending tags.  It would also allow a cluster analysis and a way to find articles that are similar to a given article, "similar" meaning "nearest within the same cluster".
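To make the idea concrete, here is a minimal sketch of what that could look like, assuming each article record carries a link and a set of tags.  All names and sample data are invented for illustration; a real cluster analysis would first group the tag vectors (e.g., with k-means) and then restrict the nearest-neighbor search to the target's cluster, but the distance machinery is the same.

```python
# Hypothetical sketch: find trending tags and the most similar article,
# using one-hot tag vectors and Euclidean distance.
from collections import Counter

def tag_vector(tags, vocab):
    """One-hot encode an article's tags over the tag vocabulary."""
    return [1 if t in tags else 0 for t in vocab]

def distance(a, b):
    """Squared Euclidean distance between two tag vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def trending_tags(articles, n=3):
    """Count tag frequency across all articles to surface trending tags."""
    counts = Counter(t for a in articles for t in a["tags"])
    return [t for t, _ in counts.most_common(n)]

def most_similar(target, articles):
    """Return the article whose tag vector is nearest to the target's."""
    vocab = sorted({t for a in articles for t in a["tags"]} | set(target["tags"]))
    tv = tag_vector(target["tags"], vocab)
    return min(
        (a for a in articles if a["link"] != target["link"]),
        key=lambda a: distance(tv, tag_vector(a["tags"], vocab)),
    )
```

For example, given articles tagged `{"ai", "alignment"}`, `{"ai", "forecasting"}`, and `{"rationality"}`, an article tagged `{"ai", "alignment"}` would match the first, and `"ai"` would be the top trending tag.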

I agree with the separation, but offer a different reason.  Exploratory writing can be uncensored; public writing invites consideration of the reaction of the audience.

As an analogy, sometimes I see something on the internet that is just so hilarious... my immediate impulse is to share it, then I realize that there is no upside to sharing because I pretend to be the type of person who wouldn't even think that was funny.  Similarly, on more philosophical subjects, sometimes I will have an insight that is better kept private.

You see what I did there?  If I were writing this in my journal, I'd include a concrete example.  However, this is a public comment, and it's smarter not to.

I copied our discussion into my PKM, and I'm wondering how to tag it... it's certainly meta, but we're discussing multiple levels of abstraction.  We're not at level N discussing level N-1, we're looking at the hierarchy of levels from outside the hierarchy.  Outside, not necessarily above.  This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.

I was looking for real-life examples with clear, useful distinctions between levels.  

The distinction between "books about books" and "books about books about books" seems less useful to me.  However, if you want infinite levels of books, go for it.  Again, I see this as a practical question rather than a theoretical one.  What is useful to me may not be useful to you.

I don't see this as a theoretical question that has a definite answer, one way or the other.  I see it as a practical question, like how many levels of abstraction are useful in a particular situation.  I'm inclined to keep my options open, and the idea of a theoretical infinite regress doesn't bother me.

I did come up with a simple example where 3 levels of abstraction are useful:

  • Level 1: books
  • Level 2: book reviews
  • Level 3: articles about how to write book reviews

We're using language to have a discussion.  The fact that the Less Wrong data center stores our words in a way that is unlike our human brains doesn't prevent us from thinking together.

Similarly, using a PKM is like having an extended discussion with myself.  The discussion is what matters, not the implementation details.

I view my PKM as an extension of my brain.  I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory.  You can make the distinction if you like, but I find it more useful to focus on the similarities.

As for meta-meta-thoughts, I'm content to let those emerge... or not.  It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.

I don't see your distinction between thoughts and notes.  To me, a note is a thought that has been written down, or captured in the PKM.

No, I don't have an example of thinking meta-meta-rationally, and if I did, you'd just ask for an example of thinking meta-meta-meta-rationally.  I do think that if I got to a place where I needed another level of abstraction, I'd "know it when I see it", and act accordingly, perhaps inventing new words to help manage what I was doing.
