I think I mostly agree with this, but from my perspective it hints that you're framing the problem slightly wrong. Roughly, the problem with the outsourcing approaches is our inability to specify/verify solutions to the alignment problem, not that specifying is not in general easier than solving a problem yourself.
(Because of the difficulty of specifying the alignment problem, I restricted myself to speculating about pivotal acts in the post linked above.)
But you don't need to be able to code to recognize that a piece of software is slow and buggy!?
I agree a bit more about the terrible UI part, but even there one can think of relatively objective measures for checking usability without being able to speak Python.
In cases where outsourcing succeeds (to various degrees), I think the primary load-bearing mechanism of success in practice is usually not "it is easier to be confident that work has been done correctly than to actually do the work", at least for non-experts.
I find this statement very surprising. Isn't almost all of software development like this?
E.g., the client asks the developer for a certain feature and then clicks around the UI to check if it's implemented / works as expected.
"This is what it looks like in practice, by default, when someone tries to outsource some cognitive labor which they could not themselves perform."
This proves way too much.
I agree, I think this even proves P=NP.
Maybe a more reasonable statement would be: You can not outsource cognitive labor if you don't know how to verify the solution. But I think that's still not completely true, given that interactive proofs are a thing. (Plug: I wrote a post exploring the idea of applying interactive proofs to AI safety.)
No, that's not quite right. What you are describing is the NP-Oracle.
On the other hand, with the IP-Oracle we can (in principle, limited by the power of the prover/AI) solve all problems in the PSPACE complexity class.
Of course, PSPACE is again a class of decision problems, but using binary search it's straightforward to extract complete answers like the designs mentioned later in the article.
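To make the binary-search trick concrete, here is a minimal sketch (purely illustrative; the oracle and the integer encoding of the answer are made up): if the oracle answers yes/no questions of the form "is the encoded answer at least k?", we can recover the full answer with logarithmically many queries.

```python
def extract_answer(decision_oracle, max_value):
    """Recover a full integer-encoded answer from a yes/no oracle.

    decision_oracle(k) is assumed to answer "is the true answer >= k?".
    Binary search then needs only O(log max_value) queries.
    """
    lo, hi = 0, max_value
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decision_oracle(mid):  # "answer >= mid?"
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy usage: the "secret" stands in for the encoding of a complete design.
secret = 123456
assert extract_answer(lambda k: secret >= k, 10**9) == secret
```

Each individual query is itself just a decision problem, so the whole procedure stays within what the decision oracle can answer.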
Your reasoning here relies on the assumption that the learning mostly takes place during the individual organism's lifetime. But I think it's widely accepted that brains are not "blank slates" at birth, but contain a significant amount of information, akin to a pre-trained neural network. Thus, if we consider evolution as the training process, we might reach the opposite conclusion: data quantity and training compute are extremely high, while parameter count (~brain size) and brain compute are restricted and selected against.
Thank you for writing about this! A minor point: I don't think aerosolizing monkeypox suspensions using a nebulizer can be counted as gain-of-function research, not even "at least kind of". (Or do I lack reading comprehension and have misunderstood something?)
Hypothesis: If a part of the computation that you want your trained system to compute "factorizes", it might be easier to evolve a modular system for this computation. By factorization I just mean that (part of) the computation can be performed using mostly independent parts / modules.
Reasoning: Training independent parts to each perform some specific sub-calculation should be easier than training the whole system at once. E.g. training n neural networks of size N/n should be easier (in terms of compute or data needed) than training one of size N, given th...
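To pin down what I mean by "factorizes", here is a toy sketch (the function and the split into modules are made up for illustration): the target computation decomposes into parts that each depend only on a disjoint slice of the input, so each part could in principle be learned by its own small module.

```python
import numpy as np

def target(x):
    """A toy 'factorizable' computation on a 6-dimensional input."""
    part1 = np.sin(x[:3]).sum()   # module 1 only ever needs inputs 0-2
    part2 = (x[3:] ** 2).sum()    # module 2 only ever needs inputs 3-5
    return np.array([part1, part2])

# A modular learner can fit each part with its own small model of size ~N/n,
# instead of one monolithic model of size N that first has to discover
# the independence structure on its own.
x = np.random.randn(6)
print(target(x))
```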
Yes, this mask is more of a symbolic pic; perhaps Simon can briefly explain why he chose this one (copyright issues, I think).
Yep, it's simply the first one in the public domain that I found. I hope it's not too misleading; it should get the general idea across.
Well, if your chances of getting infected are drastically reduced, then so is the importance of the "protect others" effect of wearing the mask, so overall these masks are likely to be very useful.
That said, a slightly modified design that filters air on both the in-breath and the out-breath might be a good idea. This way, you keep your in-breath filters dry and have some "protect others" effect.
[...] P3 masks, worn properly, with appropriate eye protection while maintaining basic hand hygiene are efficient in preventing SARS-CoV-2 infection regardless of setting.
If this is true, then this is a great idea and it's somewhat surprising that these masks are not in widespread use already.
I suspect the plan is a bit less practical than stated, as I expect there to be problems with compliance, in particular because the masks are mildly unpleasant to wear for prolonged periods.
They have a copy at our university library. I would need to investigate how to scan it efficiently, but I'm up for it if there isn't an easier way and no one else finds a digital copy.
Definitely Main, I found your post (including the many references) and the discussion very interesting.
I still agree with Eli and think you're "really failing to clarify the issue", and claiming that xyz is not the issue does not resolve anything. Disengaging.
The paper had nothing to do with what you talked about in your opening paragraph
What? Your post starts with:
My goal in this essay is to analyze some widely discussed scenarios that predict dire and almost unavoidable negative behavior from future artificial general intelligences, even if they are programmed to be friendly to humans.
Eli's opening paragraph explains the "basic UFAI doomsday scenario". How is this not what you talked about?
It depends on the skill difference and the size of the board; on smaller boards the advantage is probably pretty large: Discussion on LittleGolem
Regarding the drop of unemployment in Germany, I've heard it claimed that it is mainly due to changing the way the unemployment statistics are done, e.g. people who are in temporary, 1€/h jobs and still receiving benefits are counted as employed. If this point is still important, I can look for more details and translate.
EDIT: Some details are here:
...It is possible to earn income from a job and receive Arbeitslosengeld II benefits at the same time. [...] There are criticisms that this defies competition and leads to a downward spiral in wages and the l
Isn't "exploring many unusual and controversial ideas" what scientists usually do? (Ok, maybe sometimes good scientist do it...) Don't you think that science could contribute to saving the world?
From 3.3
To do we would want to put the threatened agent
to do so(?) we would
From 3.4
an agent whose single goal is to stymie the plans and goals of single given agent
of a single given agent
From 4.1
then all self-improving or constructed superintelligence must fall prey to it, even if it were actively seeking to avoid it.
every, or change the rest of the sentence (superintelligences, they were)
From 4.5
There are goals G, such that an entity an entity with goal G
a superintelligence will goal G can exist.
You're right, but isn't this a needless distraction from the more important point, i.e. that it doesn't matter whether we humans find what the (unfriendly) AI does interesting or valuable?
I dunno, I think this is a pretty entertaining instance of anthropomorphizing + generalizing from oneself. At least in the future, I'll be able to say things like "for example, Goertzel - a genuine AI researcher who has produced stuff - actually thinks that an intelligent AI can't be designed to have an all-consuming interest in something like pi, despite all the real-world humans who are obsessed with pi!"
Some very small things that caught my attention:
On page 6, you mention "Kryder's law" as support for the accelerator of "massive datasets". Clearly larger disk space enables us to use larger datasets, but how will these datasets be created? Is it obvious that we can create useful, large datasets?
On page 10, you write (editability as an AI advantage) "Of course, such possibilities raise ethical concerns.". I'm not sure why this sentence is there; is editability the only thing that raises these concerns? If yes, what are the
The possibility of an intelligence explosion seems to be an extraordinary belief.
Extraordinary compared to what? We already know that most people are insane, so that belief not being shared by almost everybody doesn't make it unlikely a priori. In some ways the intelligence explosion is a straightforward extrapolation of what we know at the moment, so I don't think your criticism is valid here.
...What evidence justified a prior strong enough as to be updated on a single paragraph, written in natural language, to the extent that you would afterwards devote
Filled out the survey. The cryonics question could use an option "I would be signed up if it were possible where I live."
I'm through the whole text now; I did proofreading and changed quite a bit. Some terminological questions remain.
Same here. All in all, great job everybody!
My guess would be: If the integrity check gets corrupted, the mutated nanomachine could possibly "work", but if the decryption routine gets corrupted, the instructions can't get decrypted and the nanomachine wouldn't work.
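A toy sketch of the asymmetry I have in mind (purely illustrative; the key, checksum, and "instructions" are made up): corrupting only the integrity check still leaves working instructions behind, while corrupting the decryption step turns the instructions into garbage the machine can't act on.

```python
import hashlib

def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"secret-key"
INSTRUCTIONS = b"BUILD ARM; BUILD ARM; REPLICATE"
CIPHERTEXT = xor(INSTRUCTIONS, KEY)               # instructions are stored encrypted
CHECKSUM = hashlib.sha256(INSTRUCTIONS).digest()  # integrity check of the plaintext

def run(decrypt_key, integrity_check_intact=True):
    plain = xor(CIPHERTEXT, decrypt_key)
    if integrity_check_intact and hashlib.sha256(plain).digest() != CHECKSUM:
        return "halt: integrity check failed"
    # The machine can only act on instructions it can actually parse.
    return "works" if plain == INSTRUCTIONS else "inert: garbled instructions"

print(run(KEY))                                        # works
print(run(KEY, integrity_check_intact=False))          # mutated check -> still works
print(run(b"mutated!!", integrity_check_intact=False)) # mutated decryption -> inert
```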
Don't you believe in flying saucers, they ask me? Don't you believe in telepathy? — in ancient astronauts? — in the Bermuda triangle? — in life after death? No, I reply. No, no, no, no, and again no. One person recently, goaded into desperation by the litany of unrelieved negation, burst out "Don't you believe in anything?" "Yes", I said. "I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."
Isaac Asimov
I think to make it work we should add a third condition:
If this condition is not satisfied, and people have different priorities for the different dimensions/criteria, the existence of multiple alternatives needs no further explanation, and we can't derive any conclusion about "betterness".
There is so much advice for self-improvement here and on the rest of the Internet! I personally use the following strategy:
Being rational does not mean that you "improve" your arguments but never change the bottom line.
(Just saying, I'm not sure if you meant it that way.)
This may simply be because he is European; I have the feeling that she is not so well known/influential on this side of the Atlantic. (My only evidence is that I first heard about her on Scott Aaronson's blog, which is incidentally also where I first heard about Overcoming Bias.)
Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.
Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed"; this was already discussed in other comments.
The question is not whether "quantum computers can fundamentally be more efficient than classical computers", but whether quantum mechanical entanglement can be used by the brain, which seems to be improbable. I asked a professor of biophysics about this issue; he knew about the result concerning photosynthesis and was pretty sure that QM does not matter for simulating the brain.
I don't believe these "practical" problems ("can't try long enough") generalize enough to support your much more general initial statement. This doesn't feel like a true rejection to me, but maybe I'm misunderstanding your point.