johnswentworth

Sequences

From Atoms To Agents
"Why Not Just..."
Basic Foundations for Agent Models
Framing Practicum
Gears Which Turn The World
Abstraction 2020
Gears of Aging
Model Comparison

Comments

But there are many counter examples of this not being a real concept. See here for many of them: https://www.thediff.co/archive/bullshit-jobs-is-a-terrible-curiosity-killing-concept/

That link has lots of arguments against Graeber's particular models and methodology, but doesn't actually seem to argue that much against bullshit jobs as a concept. Indeed, in various places it explicitly endorses, to some extent, the sort of model used in this post (e.g. explicitly calling out corporate empire-building as a thing which actually happens). For instance, this example:

The fake job in question is basically a contribution to that glamour: a receptionist who doesn't have much work to do. But this could end up being a money-saving proposition if the company is able to attract workers, and pay them less, by treating the presence of an assistant as a perk.

First things first:

My current working model of the essential "details AND limits" of human mental existence puts a lot of practical weight and interest on valproic acid because of the paper "Valproate reopens critical-period learning of absolute pitch".

This is fascinating and I would love to hear about anything else you know of a similar flavor.

As for the meat of the comment...

I think this comment didn't really get at the main claim of the post; the key distinction it's maybe missing is between:

  • Concepts which no humans have assigned words/phrases to, vs
  • Types of concepts which no humans have assigned a type of word/phrase to

So for instance, nemawashi is a concept which doesn't have a word in English, but it's of a type which is present in English - i.e. it's a pretty ordinary verb, it works pretty much like other verbs, and if imported into English it could be treated grammatically like a verb without any issues, etc.

I do like your hypothesis that there are concepts which humans motivatedly-avoid giving words to, but that hypothesis is largely orthogonal to the question of whether there are whole types of concepts which don't have corresponding word/phrase types, e.g. a concept which would require not just new words but whole new grammatical rules in order to use in language.

Ithkuil, on the other hand, sounds like it could maybe offer some evidence of whole different types of concepts.

... is that why this post has had unusually many downvotes? Goddammit, I was just trying to convey how and why I found the question interesting and the phenomenon confusing. Heck, I'm not even necessarily claiming the Wentworld equilibrium would be better overall.

The main testable-in-principle predictions are that economic profits causally drive hiring in large orgs (as opposed to hiring causing economic profits), and that orgs tend to expand until all the economic profit is eaten up (as opposed to expanding until marginal cost of a hire exceeds marginal revenue/savings from a hire). Actually checking those hypotheses statistically would be a pretty involved project; subtle details of accounting tend to end up relevant to this sort of thing, and the causality checks are nontrivial. But it's the sort of thing economists have tools to test.
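To make the flavor of such a test concrete, here is a minimal sketch of one piece of it: regressing hiring on lagged economic profit with firm and year fixed effects. This is purely illustrative; the panel file and column names (firm, year, net_hires, economic_profit) are hypothetical placeholders, and a real version would have to wrestle with the accounting subtleties and causal-identification issues mentioned above.

```python
# Minimal sketch: does lagged economic profit predict subsequent hiring,
# controlling for stable firm differences and macro conditions?
# The file and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_year.csv").sort_values(["firm", "year"])
panel["lag_profit"] = panel.groupby("firm")["economic_profit"].shift(1)
panel = panel.dropna(subset=["lag_profit"])

# Two-way fixed effects: C(firm) absorbs stable firm differences,
# C(year) absorbs macro conditions. A positive coefficient on lag_profit
# is (weak) evidence for the "profits drive hiring" direction; the mirror
# regression (lagged hiring predicting profit) probes the reverse direction.
model = smf.ols("net_hires ~ lag_profit + C(firm) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]}
)
print(model.params["lag_profit"], model.pvalues["lag_profit"])
```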

It did happen in Wentworld; the resulting corporate structure just doesn't look suspiciously hierarchical, and the corporate culture doesn't look suspiciously heavy on dominance/submission.

Hard to know the full story of what it would look like instead, but I'd guess the nominal duties of Earth!management would be replaced with a lot more reliance on people specialized in horizontal coordination/communication rather than hierarchical command & control, plus a lot more paying for results rather than flat salary (though that introduces its own set of problems, which Wentworlders would see as one of the usual main challenges of scaling a company).

A big problem with this post is that I don't have a clear idea of what "tanha" is/isn't, so I can't really tell how broad various claims are. With that in mind, I want to lay out the closest sane-sounding interpretation I see of that section, and hopefully get feedback on what that interpretation does/doesn't capture about the points you're trying to make.

Jaynes talks about the "mind projection fallacy", in which people interpret subjective aspects of their own models as properties of the world. An example: people interpret their own lack of knowledge/understanding about a phenomenon as the phenomenon itself being inherently mysterious or irreducibly complex. I think mind projection especially happens with value judgements - i.e. people treat "goodness" or "badness" as properties of things out in the world.

Cognitively speaking, treating value as a property of stuff in the world can be useful for planning: if I notice that e.g. one extra counterfactual gallon of milk would be high-value (where the counterfactual intuitively says "all else equal"), then I go look for plans which get me that extra gallon of milk, and I can factor that search apart from much of the rest of my planning-process. But the flip side of assigning value to counterfactuals over stuff-in-the-world is fabricated options: I do not actually have the ability to make a gallon of milk magically appear before me without doing anything else. That's a fabricated option, useful as an intermediate cognitive step in planning, but not a real option actually available to me. The only things a real plan can counterfact over are my own actions, and only insofar as those actions are within my realistic possibility space.
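As a toy sketch of that factoring (my own illustration, with made-up names): the planner temporarily scores counterfactual world-states as if value lived in the world, but the only things it can actually choose among are its own available actions.

```python
# Toy sketch of factored planning. All names here are made up for illustration.

# Step 1 (mind-projection-ish): score counterfactual world-states as if
# "value" were a property of the state itself.
def state_value(state: dict) -> float:
    return 10.0 if state.get("gallons_of_milk", 0) >= 1 else 0.0

# "Milk magically appears" is a fabricated option: a high-value state with
# no action attached - useful as an intermediate target, not a real choice.
fabricated_target = {"gallons_of_milk": 1}

# Step 2: the only real options are actions within my possibility space;
# search over them for one whose predicted outcome scores well.
predicted_outcomes = {
    "do_nothing": {"gallons_of_milk": 0},
    "walk_to_store_and_buy_milk": {"gallons_of_milk": 1},
}

def plan(outcomes: dict, value_fn) -> str:
    # Pick the available action whose predicted outcome scores highest.
    return max(outcomes, key=lambda action: value_fn(outcomes[action]))

print(plan(predicted_outcomes, state_value))  # -> walk_to_store_and_buy_milk
```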

Your section on "tanha" sounds roughly like projecting value into the world, and then mentally latching on to an attractive high-value fabricated option.

How well does that capture the thing you're trying to point to?

"Reward function" is a much more general term, which IMO has been overused to the point where it arguably doesn't even have a clear meaning. "Utility function" is less general: it always connotes an optimization objective, something which is being optimized for directly. And that basically matches the usage here.

  • If takeoff is slow-ish, a pivotal act (preventing more AGIs from being developed) will be difficult.
  • If no pivotal act is performed, RSI-capable AGI proliferates. This creates an n-way non-iterated Prisoner's Dilemma where the first to attack, wins.

These two points seem to be in direct conflict. The sorts of capabilities and underlying winner-take-all dynamics which would make "the first to attack wins" true are also exactly the sorts of capabilities and winner-take-all dynamics which would make a pivotal act tractable.

Or, to put it differently: the first "attack" (though it might not look very "attack"-like) is the pivotal act; if the first attack wins, that means the pivotal act worked, and therefore wasn't that difficult. Conversely, if a pivotal act is too hard, then even if an AI attacks first and wins, it has no ability to prevent new AI from being built and displacing it; if it did have that ability, then the attack would be a pivotal act.

Yeah, this is an open problem that's on my radar. I currently have two main potential threads on it.

First thread: treat each bit in the representation of quantities as a distinct random variable, so that e.g. the higher-order and lower-order bits are separate. Then presumably there will often be good approximate natural latents (and higher-level abstract structures) over the higher-order bits, more so than the lower-order bits. I would say this is the most obvious starting point, but it also has a major drawback: "bits" of a binary number representation are an extremely artificial ontological choice for purposes of this problem. I'd strongly prefer an approach in which magnitudes drop out more naturally.
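As a toy illustration of the first thread (not a worked-out result): several noisy measurements of the same underlying quantity agree on their higher-order bits much more often than on their lower-order bits, so the redundant information a natural latent would summarize lives mostly in the high-order bits.

```python
# Toy illustration: noisy sensor readings of one underlying quantity agree
# on high-order bits far more often than on low-order bits, so approximate
# natural latents over the bit-variables would mostly capture the high-order
# bits. Purely illustrative.
import random

random.seed(0)
n_trials, n_sensors, n_bits = 1000, 3, 8
agreement = [0] * n_bits  # agreement[k]: trials where all sensors share bit k

for _ in range(n_trials):
    true_value = random.randrange(2 ** n_bits)
    readings = [(true_value + random.randint(-3, 3)) % 2 ** n_bits
                for _ in range(n_sensors)]
    for k in range(n_bits):  # k = 0 is the lowest-order bit
        if len({(r >> k) & 1 for r in readings}) == 1:
            agreement[k] += 1

for k in reversed(range(n_bits)):
    print(f"bit {k}: sensors agree in {agreement[k] / n_trials:.0%} of trials")
```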

Thus the second thread: maxent. It continues to seem like there's probably a natural way to view natural latents in a maxent form, which would involve numerically-valued natural "features" that get added together. That would provide a much less artificial notion of magnitude. However, it requires figuring out the maxent thing for natural latents, which I've tried and failed at several times now (though with progress each time).
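For reference, the rough target is just the standard maxent / exponential-family shape, with the hoped-for twist that the additive real-valued features would themselves be natural in the relevant sense - a sketch of the form, not a result:

```latex
% Sketch of the hoped-for maxent form (standard exponential-family shape):
% conditional on the latent \Lambda, the distribution over x is the
% maximum-entropy distribution subject to constraints on the expectations
% of additive, real-valued features f_i.
P(x \mid \Lambda) \;=\; \frac{1}{Z(\Lambda)}
  \exp\!\Big( \textstyle\sum_i \lambda_i(\Lambda)\, f_i(x) \Big),
\qquad
Z(\Lambda) \;=\; \sum_{x} \exp\!\Big( \textstyle\sum_i \lambda_i(\Lambda)\, f_i(x) \Big)
```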
