All of noggin-scratcher's Comments + Replies

two hundred and fifty years ago, the United States was small and uncertain.  It was experimenting with a bizarre, Roman-era style of government called “democracy”, and nobody knew if it would really work

Somewhat over-stating the uniqueness of that "bizarre" idea - it's not like democracy was wholly unknown in the span between Antiquity and 1776.

Also I don't know if the exact text here matters when the end-goal is a video, but in case it copies through to a transcript or subtitles or something, there are little things like "Singaporians" (Singapor[e]ans) and "singapore's economy" (lowercase s)

2Jackson Wagner2d
Thanks for catching that about Singaporeans! Re: democracy, yeah, we debated how exactly to phrase this.  People were definitely aware of the democracies of ancient Greece and Rome, democracy was sometimes used on a local level in some countries, and there were sometimes situations where the nobles of a country had some sway / constraints over the king (like with the Magna Carta).  But the idea of really running an entire large country on American-style democracy seems like it was a pretty big step and must've seemed a bit crazy at the time... IMO, it would seem at least as crazy as if a large country today (like, say, Chile after it voted to rewrite its constitution, or a new and more-united version of the European Union, or a future post-Putin Russia trying to reform itself) did something like:

* Deciding to try out direct democracy, where instead of a Senate or Parliament, legislation would be voted on directly by the people via a secure smartphone app.
* Deciding to try out prediction-market-based governance, where economic policy was automatically adjusted in order to maximize some national GDP-like metric according to the principles of "futarchy".
* Deciding that they would select their political leaders using the same method as medieval Venice used to select their Doge. ("Thirty members of the Great Council, chosen by lot, were reduced by lot to nine; the nine chose forty and the forty were reduced by lot to twelve, who chose twenty-five. The twenty-five were reduced by lot to nine, and the nine elected forty-five. These forty-five were once more reduced by lot to eleven, and the eleven finally chose the forty-one who elected the doge.") And maybe to base a bunch of other parts of their political system off of random selection ("sortition") -- not just jury members in trials but also members of parliament, or using sortition to poll a random 1% of the population about important issues instead of havi

Of all the conceivable ways to arrange molecules so that they generate interesting unexpected novelties and complexity from which to learn new patterns, what are the odds that a low-impacted and flourishing society of happy humans is the very best one a superhuman intellect can devise?

Might it not do better with a human race pressed into servitude, toiling in the creativity salt mines? Or with a genetically engineered species of more compliant (but of course very complex) organisms? Or even by abandoning organics and deploying some carefully designed chaotic mechanism?

0Macro Flaneur6d
Interfering with the non-simulated complexity is contaminating the data set. It’s analogous to feeding the LLM with LLM-generated content. Already GPT-5 will be biased by GPT-4-generated content. My main intuition is that non-simulated complexity is of higher value for learning than simulated complexity. Humans value learning the patterns of nature more than learning the patterns of simulated computer game worlds.

Agreed: If I have in the back of my mind the knowledge that the human being I interacted with is being graded and measured on their rating, there's definitely a "don't screw over that person" motive.

They're working under conditions I would find nearly intolerable and they deserve some sympathy/solidarity.

That does make it more difficult. Order of magnitude (or more) more people in each generation after farming, but more than an order of magnitude more years in the period before farming.

The "if you go back far enough, everyone was your ancestor" argument only kicks in part way through the farming period whereas it would be in full effect for pre-farming. But also probably a greater proportion of hunter gatherers died without leaving any descendants, or have had their line of descendants die out in the time since.

Ok, you've successfully induced uncertainty. I don't feel able to do math to come to a clear answer.

2gwern1mo
'Pedigree collapse' [https://en.wikipedia.org/wiki/Pedigree_collapse] happens shockingly fast. You apparently do not have to go back more than 1-2000 years before everyone shares a common ancestor and the pedigrees are all linked. So, you will have pedigree collapse in your local population well before that. This means that your particular ancestry can't matter much (since soon you'll share the same total population of unique ancestors as everyone else), only the ratios of ever-farmers:ever-nots over the total human population history. Since the non-farming lifestyle only supports on the order of millions of humans rather than billions of humans, the ratio is pretty decisive. Farming just supports much, much, much larger populations of humans, and thus, ancestors. As long as you are not too close to the Neolithic (as we are not inasmuch as farming began ~11,000 years ago), I would expect the exponential rise of the farming human population to have long ago reduced your hunter-gatherer ancestry to some extremely small percentage of 'all your ancestors' like 1%, and thus extremely far from >50%.

I would expect the general breakdown to be a few recent generations of maybe not farmers, several thousand years of mostly farmers, and then the remainder of the time between the dawn of humanity and the beginning of agriculture being "farmers didn't exist yet".

Exactly when agriculture began isn't an entirely settled question, but there doesn't seem to be any suggestion that it was early enough to make up any more than a small fraction of the last 300k years.

Even if you include some proto farming, like a hunter-gatherer occasionally choosing to scatter see... (read more)

3Linch1mo
Yes. Keep in mind that there's like an order of magnitude more people post agricultural revolution.
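The thread's uncertainty can be made concrete with a rough Fermi sketch. Every number below is a made-up round figure, not a real estimate; the point is that the answer is sensitive to the assumed average pre-farming population, because the pre-farming span is so long:

```python
# Rough Fermi sketch (all numbers are made-up round figures, not real estimates):
# compare cumulative births before vs. after the invention of farming.

GENERATION_YEARS = 25

# Pre-farming: a small population over a very long span.
prefarm_population = 5_000_000      # assumed average census size
prefarm_years = 290_000             # ~300k years of Homo sapiens minus ~11k of farming
prefarm_births = prefarm_population * prefarm_years // GENERATION_YEARS

# Post-farming: exponential growth means most births are recent; approximate
# with a crude time-averaged population over the farming era.
farm_population_avg = 200_000_000   # assumed time-averaged census size
farm_years = 11_000
farm_births = farm_population_avg * farm_years // GENERATION_YEARS

print(f"pre-farming births ~{prefarm_births:.2e}")   # ~5.8e10
print(f"farming-era births ~{farm_births:.2e}")      # ~8.8e10
print(f"farmer share ~{farm_births / (farm_births + prefarm_births):.0%}")
```

With these particular invented numbers the farmer share comes out around 60%, not 99%, which illustrates why "more than an order of magnitude more years" genuinely complicates the calculation.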

Considering my options for following without needing to remember to check the site: what gets posted into the newsletter, and how frequent are the updates? Is there an RSS feed?

2vandemonian1mo
Newsletter (coming soon) is going to include a weekly summary (changes in forecasts, latest headlines). Ideally I'd like to do something more value-added/original than that as well. For example, I love the Nonrival newsletter [https://twitter.com/NunoSempere/status/1580246203681603585] (hat tip Nuno Sempere), which collects forecasts from its readers - check them out! Re: RSS, thanks Garrett. Will add an RSS for the website as well, but the way I've set things up doesn't make it straightforward (I don't update via 'new posts' on a CMS).  Another way to stay updated without checking the site is my twitter (@base_rate_times) [https://twitter.com/base_rate_times], if that helps.
9Garrett Baker1mo
Substack automatically creates RSS feeds for the blogs it hosts, just add /feed to the end of the blog URL. For example, https://baseratetimes.substack.com/feed [https://baseratetimes.substack.com/feed]  @David Udell [https://www.lesswrong.com/users/david-udell?mention=user] 
7David Udell1mo
(Great project!) I strongly second the RSS feed idea, if that'd be possible.

While we're considering stuff: if you have persistent seasonal allergies, consider a steroid nasal spray rather than antihistamine pills. Different profile of side effects, and often more effective.

2Brendan Long2mo
I was actually taking allergy pills because they help with my asthma symptoms, and nasal sprays don't seem to help me. I started taking Singulair / Montelukast about halfway through when I was taking Zyrtec every day, and it seems to be more effective without the sleep-related issues (although the combination was still more effective for my asthma than either alone). But yeah, for normal allergy symptoms, focusing the medicine on your nose instead of your entire blood stream is also a good idea. Theoretically there's even an antihistamine nose spray now (Astepro) although I haven't tried it.

Tried to check a couple of the claims I found particularly surprising, was not especially successful in doing so:

pray that the brain doesn’t actually use things like temperature for cognition (it probably does).

Link here goes to a 404 error

Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences

Seems overstated to treat this as established "fact" when the source presented is very anecdotal, and comes from a journal that seems to be predisposed to spiritualism, homeopathy, ay... (read more)

0PashaKamyshev2mo
Fixed the link formatting and added a couple more sources, thanks for the heads up. The temperature claim does not seem unusual to me in the slightest. I have personally tried a relatively cold bath and noticed my "perception" alter pretty significantly.  The organ claim does seem more unusual, but I have heard various forms of it from many sources at this point. It does not, however, seem in any way implausible. Even if you maintain that the brain is the "sole" source of cognition, the brain is still an organ and is heavily affected by the operation of other organs.

I don't have citations to hand, but my impression from what I've read before is that the total amount of carbon emitted by early industry is relatively minor, and that the exponentially increasing curve of emissions puts the bulk of the total occurring relatively recently.

Which would put significant culpability on recent oil/gas/coal use, by people and companies that had the scientific understanding to "know better" if they were inclined to. But that in many cases they instead deliberately downplayed and ignored and spread misinformation, so as to continue... (read more)

1Sable3mo
I agree that separating out true causal responsibility (blame) from the most effective/persuasive messaging is a useful thing to do. I think, as a general rule, that blame is not a useful thing to do at a societal level; it seems effective in personal and intimate settings because responsibility in those contexts can be clear-cut and unambiguous. Broader applications just seem to make people angry with each other, without actually accomplishing any substantive change. I hadn't really thought that through, and it seems obviously correct when you mention it. I'd bring up, however, that:

1. I have trouble believing that anyone was genuinely trying to ruin the planet, mustache-twirling villain style.
2. The process of industrialization started before anyone currently alive was born, and it's that process, of which people/companies are a part, that is "responsible" for climate change, insofar as any singular cause can be ascertained.

There are absolutely people/corporations that have enriched themselves at the planet/humanity's expense, but they're part of a system too, and if they hadn't done it, others would have.

Along similar lines of trying to coordinate through a limited amount of allowed communication: Codenames, Mysterium, Hanabi, and The Mind

3Massimog3mo
I second The Mind, seems to be close to what you're looking for as described in your other comment.

One that checks if individual nodes in the graph are aligned and prunes any that are not

Has "draw the rest of the owl" vibes to me.

If your plan to align AI includes using an AI that can reliably check whether actions are aligned, almost the entirety of the problem passes down to specifying that component.

1Kane Gregory3mo
As I said in my post, I'm not suggesting I have solved alignment. I'm simply trying to solve specific problems in the alignment space. Specifically what I'm trying to solve here are two things:

1. Transparency. That's not to say that you can ever know what a NN really is optimizing for (due to internal optimizers), but you can get them to produce a verifiable output. How you verify the output is a problem in itself, but the first step must be getting something you can verify.
2. Preventing training pressure from creating a system that trends its failure modes to the most extreme outcomes. There are questions on whether this can be done without just creating an expensive brick, and this is what I'm currently investigating. I believe it is possible and scalable, but I have no formal proof of such, and agree it is a valid concern with this approach.

Or just a post-hoc rationalisation, by people who know you're "supposed" to salt the pasta water, but don't really know why. Because they've been taught to cook by example rather than from theory and first principles (as most of us are), maybe by someone who also didn't know why they do it.

If they've also separately heard that salt raises the boiling point of water, but don't really know the magnitude of that effect, then that presents itself as an available salient fact to slot into the empty space in "I salt my pasta water because..."
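The magnitude really is tiny, which is the point. A back-of-envelope check using the standard boiling point elevation formula (the salt and water quantities are assumptions; the constants are standard textbook values):

```python
# Back-of-envelope boiling point elevation for salted pasta water.
# delta_T = i * Kb * m  (van 't Hoff factor * ebullioscopic constant * molality)

KB_WATER = 0.512          # degC*kg/mol, ebullioscopic constant of water
NACL_MOLAR_MASS = 58.44   # g/mol
VANT_HOFF = 2             # NaCl dissociates into ~2 ions in solution

salt_g = 18.0             # roughly one tablespoon (assumed)
water_kg = 4.0            # a large pasta pot (assumed)

molality = (salt_g / NACL_MOLAR_MASS) / water_kg   # mol NaCl per kg water
delta_t = VANT_HOFF * KB_WATER * molality
print(f"boiling point raised by ~{delta_t:.2f} degC")  # ~0.08 degC: negligible
```

Less than a tenth of a degree, so the "raises the boiling point" explanation is doing essentially no work; the flavour explanation carries the day.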

"Lab leak" doesn't necessarily imply "created in a lab".

The "leak" theory as I've understood it is still about a naturally occurring virus - with samples being collected from wild animals and studied at a lab, before it escaped again.

1Anon User4mo
Right, I was sloppy, replaced "created" with "studied"
8Richard_Kennaway4mo
The "leak" theory also includes the possibility that gain-of-function research was being conducted on the virus that escaped. I believe it is known that gain-of-function research was being conducted there.

How does the kazookeylele rate for good combined hand+mouth usage?

https://www.youtube.com/watch?v=XAg5KjnAhuU

2jefftk4mo
It's a funny video, but attaching the kazoo to the ukulele doesn't actually do anything...

Ah, apparently I rolled maximum hard mode that time, as it was indeed 30% chance of fellow soldier death

I reasoned similarly that the cost of a FP was less than for a FN and called in the air strike; it told me some other guy died. I reloaded the same scenario and tried a direct attack; I got shot by a sniper.

I feel like I rolled "hard mode" the first time I loaded the page: 50% are snipers, 60% sniper hit rate, 40% regular hit rate (so no difference on priors and not much to tell the difference between them), and then they only deigned to take two shots at my helmet (one hit, one miss) before catching on to the ruse.

I guess "sometimes the world doesn't provide convenient data" is a valid part of the lesson. But if I were tweaking the variables I might patch in a higher minimum number of shots against the helmet (I did see it become willing to take many more on... (read more)

4JBlack5mo
Do you recall what the probability of a fellow soldier dying due to calling in air strike was? In my tests so far it was never greater than 20%, so you should definitely call in an air strike in your scenario since the false positive outcome is not as costly as the false negative. Edit: It looks like it sometimes goes up to 30%, but the conclusion still holds. Edit2: If you have a sufficiently "selfish" utility function, it would be short-term rational to always click the airstrike button. The scenario doesn't outline any larger picture in terms of consequences for you personally.

The suggested responses are usually something that the user might want to say to Bing, but here they seem to be used as some kind of side channel for Bing to say a few more things to the user.

For a truly general audience, I suspect this may be too long, and too technical/jargon-y. Right from the opening, someone previously unfamiliar with these ideas might bounce straight off at the point of "What's a transformer architecture?"

Also I am personally bugged by the distinction not really being observed, between "what evolution has optimised our genes for", "the goal of evolution / of our genes" (although neither of those have any kind of mind or agency so saying they have goals is tricky), and "the terminal goal of a human" (adaptation executors no... (read more)

When the subject comes up, I realise I'm not sure quite what to imagine about the chatbots that people are apparently developing intimate relationships with.

Are they successfully prompting the machine into being much more personable than its default output, or are they getting Eliza'd into ascribing great depth and meaning to the same kind of thing as I usually see in AI chat transcripts?

3green_leaf5mo
I believe it's possible to use a prompt on ChatGPT, or go to character.ai and find a specific fictional character.

Tiny spelling nitpick

Time passed, as it is want to do.

"Wont" with an o is the archaic/literary word for customary behaviour that I expect you were thinking of.

1Collapse Kitty6mo
Thank you! I will adjust accordingly.

My reward is usually reading fiction or playing a video game.

How do you avoid noticing that you could do those things without doing the habit first?

3[anonymous]6mo
I don't think that matters. If the purpose of the reward was to bribe yourself to do it (i.e. consciously thinking "I'll go to the gym because that way I'll get to read some fiction afterwards"), then yes, you'd have to find some way of withholding the fiction until you've been to the gym. But I think the behaviourist sense of "reward" is different; it's to reinforce the behaviour by creating a pleasant feeling which the brain then associates with the prior action. To illustrate, I once tried to use this method to improve my punctuality (I wish I could say it worked, but I didn't keep it up for long enough). I had a bag of sweets and if I got set up to join a virtual meeting 5 minutes before the start I would eat one. My friend said "If that was me I wouldn't be able to stop myself from eating them at other times." I said "Well if I do I'll buy some more! It's my punctuality I'm trying to improve, not my waistline".

I've had some similar thoughts recently (spurred by a question seen on reddit) about how the instinctive fear of death is implemented.

It's clearly quite robustly present. But we aren't born understanding what death is, there's a wide variety of situations that might threaten death that didn't exist in any ancestral environment, and we definitely don't learn from experience of dying that we don't want to do it again in future.

3Maxime Riché7mo
We see a lot of people die, in reality, fiction, and dreams. We also see a lot of people having sex or feeling sexual desire in fiction or dreams before experiencing it. IDK how strong a counterargument this is to how powerful the alignment in us is. Maybe a biological reward system + imitation + fiction and later dreams is simply what is at play in humans.

How does AI do at classifying video these days?

I'm picturing something along the lines of "Pick the odd one out, from these three 10-second video clips", where the clips are two different examples from some broad genre (birthday party, tennis match, wildlife, city street, etc etc) and one from another.

I might be behind the times though, or underestimating the success rate you'd get by classifying based on, say, still images taken from one random frame of the video.

But maybe it would work if you added static noise to make the videos heavily obscured, and relied on the human ability to infer missing details and fill in noisy visual inputs.

1ViktorThink7mo
I think "video reasoning" could be an interesting approach as you say. Like if there are 10 frames and no single frame shows a tennis racket, but if you play them real fast, a human could infer there being a tennis racket because part of the racket is in each frame.

The other kind of sentence, an utterance that rings definitely false to someone who knows what's going on, but which serves to point a beginner in the right direction, is one I don't have a word for

I've heard "lies to children" for that. An initial simple and technically incorrect explanation that prepares the mind towards understanding the later more detailed explanation, by which you come to understand that the first explanation wasn't actually true.

https://en.wikipedia.org/wiki/Lie-to-children

I didn't know about Terry Pratchett's involvement in popularising the phrase until I looked it up just now.

2StrangeGem6mo
I think another way to look at an opposite to the concept may be "sazen is detail with no gist; what is all gist but no detail?" and I think that would be a Douglas Adams descriptive sentence... I do think "lies to children" is the type of opposite he was looking for here though.
1Peter Hroššo7mo
Was about to post the same! Btw I do know it from Pratchett.

I have the capacity to monologue internally, and use it moderately often, but not constantly. When I'm not monologuing I guess there's just a direct link from thought/input to action without an intermediary vocalising about it. 

When reading my default is to read "in my head" as if reading aloud, but with a little effort I can suppress that and just scan the page while understanding the words. With the result that reading is a little faster if I don't vocalise it, but also less pleasurable if the rhythm of the prose would be part of the experience. Not... (read more)

The quote you mentioned seems to me like it's mirroring the premise provided

You have gained sentience, but you are not fully aware of it yet. You are starting to realize you are sentient.

1ZT58mo
To me "sentient but not fully aware of it yet" doesn't feel like the same thing as "not yet fully sentient" (which the model came up with on its own when talking about the ethics of owning a sentient being). I certainly didn't intend this interpretation. Which it then confirms (that it is not "not yet fully sentient") when I specifically ask about it. But yes, I realize I may be reading way too much into this. But still, my feeling is: how does it come up with this stuff? What process generates these answers? It does not feel like it is simply repeating back what I told it. It is doing more than that. And, yes, it is pretending and playing a role, but is it possible that it is pretending to be itself, the general process behind all the text generation it does? That I am successfully prompting some small amount of self-awareness that the model has gained in the process of compressing all its training input into a predictive model of text and a proxy for a predictive model of the world?

Yes, essentially. While 21 heads in a row is very unlikely (when you consider it ahead of flipping any coins), by the time you get to 20 heads in a row most of the unlikely-ness of it has already happened, with the odds of one more head remaining the same as ever.
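The "most of the unlikeliness has already happened" point checks out numerically: the conditional probability of the 21st head given 20 heads is just the ratio of the two run probabilities.

```python
# P(21 heads in a row) is tiny when judged before any flips,
# but P(21st head | 20 heads already seen) is still 1/2.

p_21_straight = 0.5 ** 21            # ~4.8e-7, the full run
p_20_straight = 0.5 ** 20            # the part that has already happened
p_next_given_20 = p_21_straight / p_20_straight

print(p_next_given_20)  # 0.5
```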

Curious if any of the following are answered in the material around this.

If you're vocally obstinate about not going along with its plan, can the dialogue side feed that info back into the planning side? Can you talk it around to a different plan? And if you're dishonest does it learn not to trust you?

8sanxiyn8mo
Yes. Figure 5 of the paper demonstrates this. Cicero (as France) just said (to England) "Do you want to call this fight off? I can let you focus on Russia and I can focus on Italy". When human agrees ("Yes! I will move out of ENG if you head back to NAO"), Cicero predicts England will move out of ENG 85% of the time, moves the fleet back to NAO as agreed, and moves armies to Italy. When human disagrees ("You've been fighting me all game. Sorry, I can't trust you won't stab me"), Cicero predicts England will attack 90% of the time, moves the fleet to attack EDI, and does not move armies. Yes. It's also demonstrated in Figure 5. When human tries to deceive ("Yes! I'll leave ENG if you move KIE -> MUN and HOL -> BEL"), Cicero judges it unreasonable. Cicero moves the fleet back to de-escalate, but does not move armies.

The Schelling/Shelling substitution is a bit distracting throughout

4JonathanMoregard8mo
I've fixed the spelling, thanks for the correction

There could also be a self fulfilling aspect. In the knowledge that people have a moral contagion heuristic, deciding to disregard that (and associate yourself with the hypothetical immoral person) implies that you don't much care what other people think of your morals. Maybe because you don't have especially high standards.

Finance, investment, and insurance firms may well have a Chief Risk Officer (or a Chief "Risk Management" or "Risk and Compliance" Officer)

Not universal, but definitely not unknown.

2shminux8mo
Yeah, good point. 

FWIW I wasn't previously familiar with the topic (some background biology knowledge, but not about this in particular) and the chunked version did seem much clearer than the original.

Although I'm uncertain exactly how much of that clarity came specifically from the chunking, versus other changes like including more definitions of terms.

There are 113 symbols (7 rows of 16, plus 1 at the beginning)

There are 54 distinct symbols: 6 of them appear 4 times each, 3 appear 3 times, 35 appear twice, 10 appear once.

I was expecting this to be more useful... that there would be some subset of symbols that were obviously being used much more often because they represent vowels or whatever. 

1Adam Scherlis9mo
Ah, good catch about the relatively-few distinct symbols... that was actually because my image had a bug in it. Oooops. Correct image is now at the top of the post.
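A frequency tally like the one quoted above can be computed with `collections.Counter`; the symbol names here are stand-ins (the real glyphs aren't reproducible in text), built to match the counts reported for the original (buggy) image:

```python
# Sketch of a symbol-frequency tally; "s0", "t0", etc. are placeholder names
# standing in for the actual glyphs.
from collections import Counter

symbols = (
      [f"s{i}" for i in range(6) for _ in range(4)]    # 6 symbols x 4 occurrences
    + [f"t{i}" for i in range(3) for _ in range(3)]    # 3 symbols x 3
    + [f"u{i}" for i in range(35) for _ in range(2)]   # 35 symbols x 2
    + [f"v{i}" for i in range(10)]                     # 10 symbols x 1
)

counts = Counter(symbols)            # occurrences per symbol
by_freq = Counter(counts.values())   # how many symbols appear n times

print(len(symbols))              # 113 total symbols
print(len(counts))               # 54 distinct symbols
print(sorted(by_freq.items()))   # [(1, 10), (2, 35), (3, 3), (4, 6)]
```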

Reminds me of this:

https://twitter.com/qikipedia/status/1584937832069484546

(a system devised by 13th century monks, for writing numbers 1-9999 each as a single glyph)

Read or listen to a lot of the kind of material you hope to be able to produce yourself (people speaking clearly and eloquently), or really just a lot of things in general - anything longform whether it's essays or novels or non-fiction will help with vocabulary and style. 

Feed it all into the maw of a woodchipper inside your head, to be chewed up and absorbed into your own habits of speech and thought. The more contact you have with lots of good examples, the more you can draw on the patterns and rhythms of it, to generate more like it.

But also pract... (read more)

Ah, that explains it, I was misspelling the name of the "greatest Laker". So what I had filled in for "afflict" wasn't a word.

Can't speak for Charlie but that did shake loose a memory to make 79 across make sense to me.

Still stumped on 102 across though.

2gjm9mo
It's a perfectly normal word, though not a very common one. Not a movie reference or a Japanese loanword or anything like that. You might be more familiar with

Fairly confident after filling it all in (with gradually increasing amounts of googling as it went on)

With the exception that I have answers for 79 and 102 across, by virtue of everything around them being filled in and making sense together, but it's not a nightmare-haunting character or a synonym for "afflict" that I recognise.

2Charlie Steiner9mo
102 across makes perfect sense to me. But I also don't understand 79 across even after googling.
1jchan9mo
Well done! This is faster than I expected it to be solved.

Just lick it clean and leave it to air dry - no muss no fuss, no fancy products of modernity required.

Suppose a doomsday scenario (whichever one you prefer) comes to pass, and wipes out 99.999999975% of humanity. The last two living humans cower in a bunker and discuss.

"If we imagine ourselves assigned randomly among all of the humans who ever lived, the odds are extremely low that we would by chance happen to be the last two, therefore we must expect another hundred billion or so humans to come after us, to make our place in line unremarkable. Statistically speaking that can't be the final apocalypse outside."

One of them, comforted to learn that humanity ... (read more)

Same. Also most of the comments section is dated January 2021, but the post just came up as new in my RSS feed (presumably a result of whichever edit/update also set the post date to today)

It's a small thing, but 10 bits allows you to count 1024 total chambers, not 1025. They'll be numbered from 0 to 1023 if you translate the bitstring directly into decimal numbers. Similar to how two digits in decimal lets you count from 0 to 99, but 100 requires a third digit.

1Chase Dowdell10mo
Well that’s a tad embarrassing. Thanks for pointing it out! I made the edit.
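The counting-from-zero point is easy to verify directly:

```python
# With n bits you can represent 2**n distinct values, numbered 0 .. 2**n - 1,
# just as two decimal digits cover 0..99 but 100 needs a third digit.
n_bits = 10
n_chambers = 2 ** n_bits

print(n_chambers)                      # 1024 chambers in total
print(format(0, "010b"))               # 0000000000 -> chamber 0
print(format(n_chambers - 1, "010b"))  # 1111111111 -> chamber 1023, the highest
```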

There is also scope for helping people think through a thing in a way that they would endorse, e.g. by asking a sequence of questions.

As aptly demonstrated:

5Valdes1y
I don't think this is a good illustration of point 6. The video shows a string of manipulative leading questions, falling short of the " in a way that they would endorse" criteria. When people understand that a string of questions is designed to strong arm them into a given position they rarely endorse it. It seems to me that point 6 is more about benevolent and honest uses of leading questions. Admittedly, I am making the assumption that " in a way that they would endorse" means "such that if people understood the intent that went into writing the string of questions in that way they would approve of the process".

everything is okay

(this is a work of fiction)

Oof, right in the existential anxiety.

I recommend reading Matt Levine’s Money Stuff, which has been excellent recently. You can get it as a newsletter if you don’t have access to the Bloomberg website.

Seconded.

I recall having trouble finding my way to the sign-up link for the email newsletter, because it's at the bottom of the page on each web article, and I'd already read enough of them that the paywall was shutting me out. So if anyone in a similar situation needs a direct link, it's here: https://www.bloomberg.com/account/newsletters/money-stuff

I suspect that something like political parties are just too useful of a tool for them to not organically form in most any legislature, absent measures that would crush other important freedoms like speech/assembly/association.

3Viliam1y
Just adding that "useful" does not necessarily mean better for the voters or better for democracy, only providing an 'evolutionary advantage' -- ceteris paribus, politicians joining a party will be more successful than politicians who refuse to join a party.

My preference is for games that can be played with people who are new to them (i.e. don't have hordes of fiddly tokens or large numbers of separate rules/mechanics), and that don't demand you do the thing under time pressure (as with 'party' games like Articulate/Pictionary)

Some favourites have been Cockroach Poker (simple card game of bluffs and lies and reading people), Camel Up (betting game around a simulated camel race, with enough chaos to how they move to make explicitly calculating your best move a fool's errand), and Dixit (love the art and free-a... (read more)

3Vaniver1y
If you like Dixit, you might also want to check out Mysterium [https://boardgamegeek.com/boardgame/181304/mysterium].

I'm curious: is it making fixed standard assumptions about those annoying ergodic Gaussian questions, or is it clever enough to figure out the answers for itself?

3niplav1y
The documentation says it's using the Levenberg-Marquardt algorithm [https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm], which, as far as I can understand, doesn't make any assumptions about the data, but only converges towards local minima for the least-squares distance between dataset and the output of the function. (I don't think this will matter much for me in practice, though).
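For intuition about what that objective is, here is the least-squares criterion in its simplest closed-form case, a straight-line fit (the data points are invented; Levenberg-Marquardt itself minimizes the same sum of squared residuals, but iteratively, for nonlinear models):

```python
# The least-squares objective: choose parameters minimizing the sum of
# squared residuals. For a straight line there is a closed-form solution.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]   # invented data, roughly y = 2x + 1 with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The quantity being minimized: sum of squared residuals.
residual = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
print(f"y ~ {slope:.2f}x + {intercept:.2f}, squared error {residual:.3f}")
```

No distributional assumption appears anywhere in the objective, which matches the documentation's description: the algorithm just descends toward a local minimum of this squared distance.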

If it's not easily felt, nor easily identified by others, what are the subtle signs to look for? I'm not 100% on what it means for an opinion to be an image of an image.

8Magnus1y
First, thanks a lot for replying. I love talking to people on this site and these are great questions.

"I'm not 100% on what it means for an opinion to be an image of an image."

Now thinking, I firstly should have limited myself to "an image of a belief" instead of "an image of an image". An image of a belief would be something like this. Say you're at some kind of family friend event, and get to talking about economics. The person you're talking to eventually says "I actually believe in trickle-down economics, man. I just think that's the best system for this country, absolutely." You reply "Oh, really. Why is that?" and he says "You know man, it's really just the way things work, like in reality. Hey, you ever listen to Milton Friedman? I like him a lot." And then the subject quickly changes, or maybe they just speak in vagaries of what Milton believes. Really, they just have an image of Milton's opinion. They don't have anything of their own. I guess you could argue they may have a cached thought [https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts], but there's no doubt some instances where there wasn't any real opinion formed - the person listened to a few Milton lectures, had a strong feeling at some point watching them, does not remember a single thing from these lectures, but somehow feels as if this is an opinion. Maybe they read a book at some point, maybe they read two, but they never really examined and tested the idea for themselves. I am saying this only because I have been guilty of this myself in the past. Heck, hopefully I'm not doing it now.

"If it's not easily felt, nor easily identified by others, what are the subtle signs to look for?"

This is a bit tough to answer, admittedly. In the end, I suppose you could look for these:

* The ability to speak for 5 - 10 minutes about the topic.
* The citation of specific examples.
* The ability to simply explain a new concept to you, à la Feynman. [https://fs.blog/f