DanArmak

Nuclear power has the highest chance of The People suddenly demanding it be turned off twenty years later for no good reason. Baseload shouldn't be hostage to popular whim.

Thanks for pointing this out!

A few corollaries and alternative conclusions to the same premises:

  1. There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
  2. There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
  3. Since the learning is transferable to other domains, it's not language-specific. Large language models are just where we happened to first build good enough models. You quote discussion of the special properties of natural-language statistics, but by assumption there are similar statistical properties in other domains. The more a property is specific to language, or necessary because of the special properties of language, the less likely it is to be a universal property that transfers to other domains.

Thanks! This, together with gjm's comment, is very informative.

How is the base or fundamental frequency chosen? What is special about the standard ones?

the sinking of the Muscovy

Is this some complicated socio-political ploy denying the name Moskva / Moscow and going back to the medieval state of Muscovy?

I'm a moral anti-realist; it seems to me to be a direct inescapable consequence of materialism.

I tried looking at definitions of moral relativism, and it seems more confused than moral realism vs. anti-realism. (To be sure, there are even more confused stances out there, like error theory...)

Should I take it that Peterson and Harris are both moral realists and interpret their words in that light? Note that this wouldn't be reasoning about what they're saying; for me, it would be literally interpreting their words, because people are rarely precise, and moral realists and anti-realists often use the same words to mean different things. (In part because they're confused and are arguing over the "true" meaning of words.)

So, if they're moral realists, then "not throwing away the concept of good" means not throwing away moral realism; I think I understand what that means in this context.

When Peterson argues religion is a useful cultural memeplex, he is presumably arguing for all of (Western monotheistic) religion. This includes a great variety of beliefs, rituals, and practices over space and time - I don't think any of these have really stayed constant across the major branches of Judaism, Christianity, and Islam over the last two thousand years. If we discard all these incidental, mutable characteristics, what is left as "religion"?

One possible answer (I have no idea if Peterson would agree): the structure of having shared community beliefs and rituals remains, but not the specific beliefs, or the specific (claimed) reasons for holding them; the distinction of sacred vs. profane remains, and of priests vs. laymen, and of religious law vs. freedom of action in other areas, but no specifics of what is sacred or what priests do; the idea of a single, omniscient, omnipotent God, but not that God's attributes, other than being male; that God judges and rewards or punishes people, but no particulars of what is punished or rewarded, or what punishments or rewards might be.

ETA: it occurs to me that marriage-as-a-sacrament, patriarchy, and autocracy have all been stable common features of these religions. I'm not sure if they should count as features of the religion, or of a bigger cultural package which has conserved these and other features.

Atheists reject the second part of the package, the one that's about a God. But they (like everyone) still have the first part: shared beliefs and rituals and heresies, shared morals and ethics, sources of authority, etc. (As an example, people sometimes say that "Science" often functions as a religion for non-scientists; I think that's what's meant; Science-the-religion has priests and rituals and dogmas and is entangled with law and government, but it has no God and doesn't really judge people.)

But that's just what I generated when faced with this prompt. What does Peterson think is the common basis of "Western religion over the last two thousand years" that functions as a memeplex and ignores the incidentals that accrue like specific religious beliefs?

They are both pro free speech and pro good where "good" is what a reasonable person would think of as "good".

I have trouble parsing that definition. You're defining "good" by pointing at "reasonable". But people who disagree on what is good will not think each other reasonable.

I have no idea what actual object-level concept of "good" you meant. Can you please clarify?

For example, you go on to say:

They both agree that religion has value.

I'm not sure whether religion has (significant, positive) value. Does that make me unreasonable?

Amazon using an (unknown, secret) algorithm to hire or fire Flex drivers is not an instance of "AI", not even in the buzzword sense of AI = ML. For all we know it's doing something trivially simple, like combining a few measured properties (how often they're on time, etc.) with a few manually assigned weights and thresholds. Even if it's using ML, it's going to be something much more like a bog-standard Random Forest model trained on 100k rows with no tuning than a scary, powerful language model with a runaway growth trend.
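
For concreteness, here's a minimal sketch of the kind of weighted-sum-and-threshold rule I mean; every metric name, weight, and threshold below is invented for illustration, not taken from anything Amazon is known to do:

```python
# Hypothetical sketch: a "driver scoring" rule this simple would explain the
# observed hiring/firing behavior without any ML at all. All metric names,
# weights, and thresholds are made up for illustration.

WEIGHTS = {
    "on_time_rate": 0.5,          # fraction of deliveries completed on time
    "customer_rating": 0.3,       # average customer rating, rescaled to 0..1
    "blocks_dropped_rate": -0.2,  # fraction of accepted shifts later dropped
}
DEACTIVATION_THRESHOLD = 0.6      # below this score, the driver is deactivated


def driver_score(metrics: dict) -> float:
    """Weighted sum of a handful of measured properties."""
    return sum(weight * metrics[name] for name, weight in WEIGHTS.items())


def should_deactivate(metrics: dict) -> bool:
    return driver_score(metrics) < DEACTIVATION_THRESHOLD


# Example: score = 0.5*0.9 + 0.3*0.8 - 0.2*0.3 = 0.63, so no deactivation.
print(should_deactivate({
    "on_time_rate": 0.9,
    "customer_rating": 0.8,
    "blocks_dropped_rate": 0.3,
}))
```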

Even if some laws are passed about this, they'd be expandable in two directions: "Bezos is literally an evil overlord [a quote from the linked article], our readers/voters love to hate him, so we should hurt him some more"; and "we already have laws establishing protected characteristics in hiring/firing/housing/etc.; if black-box ML models can't prove they're not violating those laws, then they're not allowed". The latter has a very narrow domain of applicability, so it would not affect AI risk.

What possible law or regulation, now or in the future, would differentially impede dangerous AI (on the research path leading to AGI) as opposed to all other software, or even all other ML? A law that equally impedes all ML would never get enough support to pass; a law that could be passed would have to use some narrow discriminating wording that programmers could work around most of the time, and so would accomplish very little.

Epistemic status: wild guessing.

  1. If the US has submarine locators (or even a theory or a work-in-progress), it has to keep them secret. The DoD or Navy might not want to reveal them to any Representatives. This would prevent them from explaining to those Representatives why submarine budgets should be lowered in favor of something else.

  2. A submarine locator doesn't stop submarines by itself; you still presumably need to bring ships and/or planes to where the submarines are. If you do this ahead of time and just keep following the enemy subs around, they are likely to notice, and you will lose strategic surprise. The US has a lot of fleet elements and air bases around the world (and allies), so it plausibly has an advantage over its rivals in terms of being able to take out widely dispersed enemy submarines all at once.

  3. Even if others also secretly have submarine locators, there may be an additional anti-sub-locator technology or strategy that the US has developed and hopes its rivals have not, which would keep US submarines relevant. Building a sub-locator might be necessary but not sufficient for building an anti-sub-locator.
