Дмитрий Зеленский

"But there are problems of principal here." - principle?

You (or, rather, Dr. Hanson) should definitely rename futarchy... I can't stop thinking of it as meaning the rule of futanaris :D

On a more serious note, I think the allegory fails to disentangle the faults of having to follow Bayes's Law from the faults of having to maintain the corresponding bureaucracy.

While admitting ignorance of most of the current evidence, I have to note that my priors are currently strongly in favor of criminalization (at least for traders and for free use; use in hospitals and in research may or may not be different). Marijuana use, from what I know, lowers IQ by several integer points for some time; causing this in someone (by being a trader, or as in the next sentence) is a crime in itself, arguably worse than, say, breaking someone's toe, which is clearly criminal. Decriminalization would cause a temporary spike in use, and for that see above. De-decriminalization is likely to cause turmoil because people are opposed to change. And the possibly-strawman argument against criminalization that I just adapted from somewhere in my head (that someone could plant "trading quantities" of a drug on you to frame you) does not work: they could likewise plant any substance which is forbidden to keep freely, such as, say, an explosive.

"The Duplicator (not yet posted on LW)" - now posted, n'est-ce pas?

Unfortunately, it is quite difficult to taboo a term when discussing how (mis)interpretation of said term influenced a survey.

Moreover, even if my understanding is ultimately not what the survey-makers had in mind, the responding researchers having the same understanding as me would be enough to get the results in the OP.

I would say that, in an ideal world, the relevant skill/task is "given the analysis already at hand, write a paper that conveys it well" (and it is alarming that this skill becomes much more valuable than the analysis itself, so people get credit for others' analyses even when they clearly state that they are merely retelling them). And I fully believe that both the task of scientific analysis (outputting the results of the analysis, not its procedure, because that's what's needed for non-meta purposes!) and the task outlined above will be achieved earlier than an AI that can actually combine them to write a paper from scratch. AND that each new simple task added to the line toward the full occupation pushes their combination further away, even after the simple task itself is achieved.

"In your mind's eye, it seems, you can see before you the many could-worlds that follow from one real world." Isn't that exactly what the many-worlds interpretation does to QM (to keep it deterministic, yada-yada-yada; to be fair, Brandon specifically stated he is not considering the QM sense, but I am not sure the sense he suggested himself is distinct)? There are worlds that are (with not-infinitesimally-low probability-mass) located in the future of the world we are in now (and they are multiple), and there are worlds that are not. The former are "realizable", and they "follow" - and whether they are reachable depends on how good the "forward search process that labels certain options as reachable before judging them and maximizing" is. My intuition says that "could" can mean the former, rather than "whatever my mind generated in the search as options" (and, moreover, that the latter is a heuristic of the mind for the former). (Unless, of course, the real bomb under this definition is in "probability-mass" hiding the same "could-ness", but if you are going to tell me that QM probability-mass is likewise reducible to labeling by a search process and this is the "correct answer", I will find this... well, only mildly surprising, because QM never ceases to amaze me, which influences my further evaluations, but at least I don't see how this obviously follows from the QM sequence.)

Moreover, this quotation from Possibility and Could-ness seems to hint at a similar (yet distinct, because probability is in the mind) problem.
> But you would have to be very careful to use a definition like that one consistently.  "Could" has another closely related meaning in which it refers to the provision of at least a small amount of probability. 
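For concreteness, here is a minimal toy sketch (my own construction, not anything from the post) of the "forward search process that labels certain options as reachable before judging them and maximizing"; the state names, transition table, and utilities are all hypothetical illustration:

```python
def reachable(start, transitions):
    """Label every state the forward search can reach from `start`."""
    seen = {start}
    frontier = [start]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Hypothetical tiny world: each state maps to its successor states.
transitions = {"now": ["a", "b"], "a": ["c"], "b": []}
utility = {"now": 0, "a": 1, "b": 5, "c": 3}

options = reachable("now", transitions)  # the "could"-worlds, on this account
best = max(options, key=utility.get)     # then judge and maximize
```

Here "could" is whatever the search labels reachable; the question in the comment above is whether that labeling is the definition or merely a heuristic tracking which worlds carry non-infinitesimal probability-mass forward of ours.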

Well, that's not quite true. Let's go back to the initial example: you need to write a linguistics paper. For this, you need at least two things: perform the linguistic analysis of some data, and actually put it in words. Yet the latter needs the internal structure of the former, not just the end result (as most currently-practical applications of a machine that does linguistic analysis would). The logic behind trees, for instance, not just a tree-parsed syntactic corpus. A neural network (an RNN or something) making better and quicker tree-parsed syntactic corpora than me would just shrug (metaphorically) if asked for the procedure of tree-making. I am near-certain other sciences would show the same pattern for their papers.

A managing AI would also have to manually handle information flow between other AIs more generally, which is kind of "automatic" for human minds (though with some important exceptions, leading to the whole idea of mental modules à la Fodor).

Well, it was specifically the mass-produced B1 droids that were made incredibly cheap, and thus with, let's say, not the best AI ever. A rare model like HK-47 was superior to ordinary (neither Force-amplified nor decades-of-training-behind-Mandalore) humans; and the latter case could also come down to a difference in available weaponry (if your weapon cannot penetrate amplified beskar armor and you only find this out at the moment of attack, you'd need to be very smart to immediately find a way to win, or to retreat before the Battle Reflexes guy shuts you down).

As for FTL - I wouldn't be so sure; the history of research sometimes makes strange jumps. The Romans were this close to going all steampunk, and a naive modern observer could say "having steam machines without gunpowder seems unlikely". Currently we don't know what, if anything, could provide FTL, and the solution could jump on us unexpectedly, unrelated to AI development.
