PickleBrine
Is intelligent induction even possible?
In the course of doing some research into A(G)I models I've found myself stuck on one conundrum. One of the core features of general intelligence seems to be something like domain-independent pattern finding - a truly *general* intelligence would be able to "suss things out," so to speak, in a...
Intro & Proposal for AGI Model
Hello all, this is my first time posting in this forum, so I look forward to your feedback. First, some brief background info about myself: I have an undergraduate background in Computer Science & Philosophy and am considering pursuing a Master's in AI. Apart from my formal education I have...
It seems to me people are still anthropomorphizing (or maybe "phrenomorphizing" might be more apt) the chain-of-thought "reasoning". With respect to AI alignment, I don't think it matters much that they do this: LLMs don't have egos, nor the potential to set goals or hold motives and intents. They just learn a kind of mess of alien abstractions that optimizes their text-producing behavior for some training data. The real issue is that these abstractions probably often do not correlate with the actual ideas represented by word tokens, and so you get hiccups like hallucinations, or these chain-of-thought snippets that make it look like the model is actually thinking...