How / when / where did you learn to write so well?

"It is shameful that we did.
Like, by what standard is building gigantic forty-story-high indoor replicas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the middle of the most inhospitable desert in North America, a remotely sane use of our civilization’s limited resources?
And it occurred to me that maybe there is no philosophy on Earth that would endorse the existence of Las Vegas. Even Objectivism, which is usually my go-to philosophy for justifying the excesses of capitalism, at least grounds it in the belief that capitalism improves people’s lives. Henry Ford was virtuous because he allowed lots of otherwise car-less people to obtain cars and so made them better off. What does Vegas do? Promise a bunch of shmucks free money and not give it to them."

I'm still reading through this awesome contribution, but off the top I can say that you aren't fully accounting for the fact that this game is played iteratively. Long-term outcomes are determined by maximizing gains and avoiding annihilation on any given "turn", rather than by planning for some specific long-term outcome.

It is perfectly possible, and even likely, for a move taken on turn one to look really poor from the vantage point of turn ten.

The world isn't an equation f(x) whose limit you can evaluate and plan around. You have to optimize within some frame of reference that will almost certainly be suboptimal in the next frame of reference.
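To make that concrete, here's a toy sketch in Python with made-up payoffs: a player who takes the move that is best on every single turn ends up behind a player whose early moves looked poor at the time.

```python
# Toy iterated game, hypothetical payoffs. Each turn: action "a" pays
# 2 now; action "b" pays nothing now but raises every later payoff
# by 1 (a stand-in for moves that look poor in the current frame).

def play(policy, turns=10):
    total, bonus = 0, 0
    for t in range(turns):
        if policy(t) == "a":
            total += 2 + bonus
        else:  # "b": invest; no immediate payoff
            bonus += 1
    return total

greedy = lambda t: "a"                     # the best move on any single turn
patient = lambda t: "b" if t < 3 else "a"  # looks poor on turns 1-3

print(play(greedy))   # 20
print(play(patient))  # 35: the "bad" early moves win by turn ten
```

The greedy move is genuinely optimal within each turn's frame of reference; it only looks like a mistake from the later frame.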

"This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions"

The reason monarchy is maligned is that the monarch has never actually won all competitions. A monarch must keep acting to maintain their mandate and/or fend off those looking to dethrone them.

I didn't learn this until I was 24. Up until that point I always thought "abc is only happening because bad/stupid actors are doing ijk instead of xyz".

Trump getting elected was actually the precise moment that I realized placing expectations on reality is futile, and you have to attempt to perceive reality in the most unadulterated way possible.

I had a sense I was kind of overstepping when I wrote that...

Do those AI frameworks tend to be very discerning, though? I imagine they have high recall and low precision on valid test cases too dissimilar from the single training case.
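To pin down what I mean by that, with made-up counts rather than numbers from any real framework:

```python
# Hypothetical: 10 genuinely valid cases exist, the system flags 49 items.
tp, fp, fn = 9, 40, 1

recall = tp / (tp + fn)      # 0.90: it misses almost nothing...
precision = tp / (tp + fp)   # ~0.18: ...but most of its flags are wrong

print(f"recall={recall:.2f}, precision={precision:.2f}")
```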

"You need the right architecture.  You need, maybe, just maybe, an architecture that can tell us a thing or two about the human brain."

I liked this article. I don't think GPT-2 can tell us anything about how the human brain works, though.

Regardless of how well GPT-2 writes, it does not remotely understand language. I've taken an excerpt from https://openai.com/blog/better-language-models/:

"GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text"

Comparing AI to human neurology is off the mark, in my estimation, because AIs don't really learn rules. They can predict outcomes (within a narrow context), but the AI has no awareness of the actual "rules" that lead to those outcomes - all it knows is weights and likelihoods.

This reality drives one of the key distinctions between human neurology and AI: humans often need only one record in their "training set" to begin making wildly accurate predictions, because humans turn that record into a rule that can be immediately generalized, while an AI would be positively useless with so little data.

A good example:

Imagine a human being is introduced to a deer for the first time. They are told that the animal they are looking at is called a "deer". From then on, every single time they see a deer, they will know it's a deer, without needing any further data.

In contrast, building an AI that could begin correctly identifying images of deer after being exposed to just one training record, without sacrificing the AI's ability to be as discerning as it needs to be (is that a bird in that brush?), is extraordinarily out of reach at the moment.
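The nearest thing I'm aware of is matching new images against the single exemplar in some pretrained embedding space. A rough sketch of the idea follows; embed() here is a fake stand-in just so the code runs, and the threshold is exactly where the discernment trade-off lives.

```python
import numpy as np

def embed(image_name):
    """Stand-in for a pretrained feature extractor; deterministic fake
    64-d features keyed on the name, just so the sketch executes."""
    seed = sum(map(ord, image_name))
    return np.random.default_rng(seed).standard_normal(64)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

deer_prototype = embed("deer_photo_1")   # the entire "training set"

def looks_like_deer(image_name, threshold=0.5):
    # Everything hinges on the embedding already encoding "deer-ness"
    # and on the threshold: too low and every brown blur in the brush
    # matches; too high and a deer at a new angle doesn't.
    return cosine(embed(image_name), deer_prototype) >= threshold

print(looks_like_deer("deer_photo_1"))   # True: it matches itself
print(looks_like_deer("bird_in_brush"))  # almost surely False here
```

Note that all the generalization would have to come from the pretrained embedding; the single "deer" record contributes one vector, not a rule.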

EDIT: Also thought I'd point out that GPT-2's output quality declines substantially as the length of its writing increases (I forget the exact figure, but after 300 words or so it all turns to mumbo jumbo). That, to me, strongly indicates that GPT-2 is not imitating human neurology at all.