All of Simon Fischer's Comments + Replies

I don't believe these "practical" problems ("can't try long enough") generalize enough to support your much more general initial statement. This doesn't feel like a true rejection to me, but maybe I'm misunderstanding your point.

2 · [comment deleted] · 3mo

I think I mostly agree with this, but from my perspective it hints that you're framing the problem slightly wrong. Roughly: the problem with the outsourcing approaches is our inability to specify/verify solutions to the alignment problem, not that specifying is not, in general, easier than solving the problem yourself.

(Because of the difficulty of specifying the alignment problem, I restricted myself to speculating about pivotal acts in the post linked above.)

2 · johnswentworth · 3mo
Fair. I am fairly confident that (1) the video at the start of the post is pointing to a real and ubiquitous phenomenon, and (2) attempts to outsource alignment research to AI look like an extremely central example of a situation where that phenomenon will occur. I'm less confident that my models here properly frame/capture the gears of the phenomenon.

But you don't need to be able to code to recognize that a piece of software is slow and buggy!?

I agree a bit more about the terrible-UI part, but even there one can think of relatively objective measures to check usability without being able to speak Python.

6 · johnswentworth · 3mo
True! And indeed my uncle has noticed that it's slow and buggy. But you do need to be able to code to distinguish competent developers, and my uncle did not have so many resources to throw at the problem that he could keep trying long enough to find a competent developer, while paying each one to build the whole app before finding out whether they're any good. (Also I don't think he's fully aware of how bad his app is relative to what a competent developer could produce.)

In cases where outsourcing succeeds (to various degrees), I think the primary load-bearing mechanism of success in practice is usually not "it is easier to be confident that work has been done correctly than to actually do the work", at least for non-experts.

I find this statement very surprising. Isn't almost all of software development like this?
E.g., the client asks the developer for a certain feature and then clicks around the UI to check if it's implemented / works as expected.

2 · johnswentworth · 3mo
At least in my personal experience, a client who couldn't have written the software themselves usually gets a slow, buggy product with a terrible UI. (My uncle is a good example here - he's in the septic business, hired someone to make a simple app for keeping track of his customers. It's a mess.) By contrast, at most of the places where I've worked or my friends have worked which produce noticeably good software, the bulk of the managers are themselves software engineers or former software engineers, and leadership always has at least some object-level software experience. The main outsourcing step which jumps between a non-expert and an expert, in that context, is usually between the customer and the company producing an app. And that's exactly where there's a standardized product. The bespoke products for non-expert customers - like e.g. my uncle's app for his business - tend to be a mess.

"This is what it looks like in practice, by default, when someone tries to outsource some cognitive labor which they could not themselves perform."
This proves way too much.

I agree, I think this even proves P=NP.

Maybe a more reasonable statement would be: you cannot outsource cognitive labor if you don't know how to verify the solution. But I think that's still not completely true, given that interactive proofs are a thing. (Plug: I wrote a post exploring the idea of applying interactive proofs to AI safety.)

2 · johnswentworth · 3mo
I think the standard setups in computational complexity theory assume away the problems which are most often the blockers to outsourcing in practice - i.e. in complexity theory the problem is always formally specified; there's no question of "does the spec actually match what we want?" or "has what we want been communicated successfully, or miscommunicated?".

No, that's not quite right. What you are describing is the NP-Oracle.

On the other hand, with the IP-Oracle we can (in principle, limited by the power of the prover/AI) solve all problems in the PSPACE complexity class.

Of course, PSPACE is again a class of decision problems, but using binary search it's straightforward to extract complete answers like the designs mentioned later in the article.
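To make the extraction step concrete, here is a minimal sketch, assuming a hypothetical decision oracle `oracle(prefix)` that answers "does a valid solution exist beginning with these bits?" - the standard self-reducibility trick that the binary-search remark points at. `oracle` and `n_bits` are illustrative names, not anything defined in the thread:

```python
def extract_solution(oracle, n_bits):
    """Recover a complete n-bit answer from a yes/no decision oracle."""
    prefix = []
    for _ in range(n_bits):
        # Tentatively fix the next bit to 1; keep it only if some valid
        # solution still starts with the extended prefix.
        if oracle(prefix + [1]):
            prefix.append(1)
        else:
            prefix.append(0)
    return prefix
```

One decision query per bit thus turns a decision oracle into a constructive designer.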

Your reasoning here relies on the assumption that the learning mostly takes place during the individual organism's lifetime. But I think it's widely accepted that brains are not "blank slates" at birth; they contain a significant amount of information, akin to a pre-trained neural network. Thus, if we consider evolution as the training process, we might reach the opposite conclusion: data quantity and training compute are extremely high, while parameter count (~brain size) and brain compute are restricted and selected against.

2 · jacob_cannell · 9mo
Much depends on what you mean by learning and mostly, but the evidence for some form of blank slate is overwhelming. Firstly, most of the bits in the genome must code for cellular machinery, and even then the total genome bits is absolutely tiny compared to brain synaptic bits. Then we have vast accumulating evidence from DL that nearly all the bits come from learning/experience, that optimal model bit complexity is proportional to dataset size (which, not coincidentally, is roughly on the order of 1e15 bits for humans - 1e9 seconds * 1e6 bits/s), and that the tiny, tiny number of bits needed to specify architecture and learning hyperparams are simply a prior which can be overcome with more data. And there is much more [https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine#:~:text=the%20universal%20learning%20machinery%2Falgorithm,result%20of%20continuous%20self%2Doptimization.].
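A back-of-envelope check of these numbers, as a hedged sketch - every constant below is a rough order-of-magnitude assumption (from the comment above or standard estimates), not a measurement:

```python
waking_seconds   = 1e9        # ~30 years of waking experience, in seconds
sensory_bits_s   = 1e6        # assumed effective sensory input rate, bits/s
dataset_bits     = waking_seconds * sensory_bits_s    # ~1e15 bits

genome_bits      = 3.2e9 * 2  # ~3.2e9 base pairs at 2 bits each ~ 6.4e9 bits
synapses         = 1e14       # rough human synapse count
bits_per_synapse = 5          # assumed effective precision per synapse
synaptic_bits    = synapses * bits_per_synapse        # ~5e14 bits

# The genome is ~5 orders of magnitude smaller than the synaptic store,
# which is itself on the order of the lifetime "dataset" - consistent
# with most bits coming from learning rather than from the genome.
print(f"dataset:  ~{dataset_bits:.0e} bits")
print(f"genome:   ~{genome_bits:.0e} bits")
print(f"synapses: ~{synaptic_bits:.0e} bits")
```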

Thank you for writing about this! A minor point: I don't think aerosolizing monkeypox suspensions using a nebulizer can be counted as gain-of-function research, not even "at least kind of". (Or do I lack reading comprehension and have misunderstood something?)

Hypothesis: If a part of the computation that you want your trained system to compute "factorizes", it might be easier to evolve a modular system for this computation. By factorization I just mean that (part of) the computation can be performed using mostly independent parts / modules.

Reasoning: Training independent parts to each perform some specific sub-calculation should be easier than training the whole system at once. E.g. training n neural networks of size N/n should be easier (in terms of compute or data needed) than training one of size N, given th...

1 · Lucius Bushnaq · 1y
To clarify, the main difficulty I see here is that this isn't actually like training n networks of size N/n, because you're still using the original loss function. Your optimiser doesn't get to see how well each module is performing individually, only their aggregate performance. So if module three is doing great, but module five is doing abysmally, and the answer depends on both being right, your loss is really bad. So the optimiser is going to happily modify three away from the optimum it doesn't know it's in.

Nevertheless, I think there could be something to the basic intuition of fine-tuning just getting more and more difficult for the optimiser as you increase the parameter count, and with it the number of interaction terms - until the only way to find anything good anymore is to just set a bunch of those interactions to zero.

This would predict that in 2005-style NNs with tiny parameter counts, you would have no modularity. In real biology, with far more interacting parts, you would have modularity. And in modern deep learning nets with billions of parameters, you would also have modularity. This matches what we observe. Really neatly and simply too.

It's also dead easy to test. Just make a CNN or something and see how modularity scales with parameter count. This is now definitely on our to-do list. Thanks a lot again, Simon!
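A hedged sketch of what that test could look like - nothing here is the authors' actual method; `train_dense_net` is a hypothetical training helper, and scoring modularity via community detection on the absolute-weight graph is just one reasonable choice among several:

```python
# Sketch: score one trained layer's modularity by community detection on
# its weight graph, then watch how the score varies with parameter count.
import networkx as nx
from networkx.algorithms import community

def layer_modularity(weights):
    """weights: 2D array for one layer (rows = inputs, cols = outputs)."""
    g = nx.Graph()
    rows, cols = weights.shape
    for i in range(rows):
        for j in range(cols):
            g.add_edge(f"in{i}", f"out{j}", weight=abs(weights[i, j]))
    parts = community.greedy_modularity_communities(g, weight="weight")
    return community.modularity(g, parts, weight="weight")

# for width in (8, 32, 128, 512):      # increasing parameter count
#     net = train_dense_net(width)     # hypothetical helper, not shown
#     print(width, layer_modularity(net.hidden_weights))
```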
Yes, this mask is more of a symbolic pic; perhaps Simon can briefly explain why he chose this one (copyright issues, I think).

Yep, it's simply the first public-domain one that I found. I hope it's not too misleading; it should get the general idea across.

3 · Decius · 3y
Using a picture of a product to illustrate a discussion about it would be fair use even if there were copyrightable elements of the picture.
1 · jmh · 3y
Responding more to the other post, but it seems more sensible here as this is more visible. The use of these masks has serious drawbacks for verbal communication. So, out of the box and off the shelf, these would not be a long-term solution - assuming the worst case, no cure/vaccine, and the virus staying around for years and years. However, we can use written communication if needed. Moreover, it would not be that hard to build communications into the mask some way: a cheap internal mic and an external speaker, say. You could also integrate it via Bluetooth with your smartphone, or pair dedicated Bluetooth earbuds/headset with the internal mic. Obviously some new protocols for dealing with a multi-device setting would be needed, but I cannot imagine that we don't already have solutions that are 80 to 90 percent ready to apply to this specific setting. The upside here is that such innovations to the mask may well have positive value in existing use cases as well.

Well, if your chances of getting infected are drastically reduced, then so is the value of the mask's "protect others" effect - so overall these masks are likely to be very useful.

That said, a slightly modified design that filters air on both the in- and the out-breath might be a good idea. This way, you keep your in-breath filters dry and retain some "protect others" effect.

[...] P3 masks, worn properly, with appropriate eye protection while maintaining basic hand hygiene are efficient in preventing SARS-CoV-2 infection regardless of setting.

If this is true, then this is a great idea, and it's somewhat surprising that these masks are not in widespread use already.

I suspect the plan is a bit less practical than stated, as I expect there to be problems with compliance, in particular because the masks are mildly unpleasant to wear for prolonged periods.

1 · Yandong Zhang · 3y
The sight of kids wearing respirators is kind of scary, as shown in HK. I did not see any other issue with this strategy.
3 · EGI · 3y
Thanks for pointing these things out, I probably should have addressed them more. I could think of several reasons for this:
* Many (most?) health care professionals do not know of these masks or do not think of them as "medical equipment".
* People do not realize that filters can be used multiple times, thus dismissing the idea as logistically impossible / even more expensive than FFP masks for everyone.
* People think that all masks do not work (well) to prevent transmission.
* People think that these masks are "overkill", not realizing that a well-fitting(!!!) reusable silicone mask is actually much less unpleasant to wear than an FFP mask.

"... problems with compliance ... unpleasant to wear for prolonged periods."

Yes, to a degree that is true. This should be addressed by:
* Well-fitting masks, at least 5 to 10 different types as described above, with state-of-the-art low-resistance filters
* Requiring people to wear masks only if there is actual risk of infection, as described above
* Rigorous enforcement, especially in places where there are lots of people around (public transit, dense workplaces, schools and so on)

They have a copy at our university library. I would need to investigate how to scan it efficiently, but I'm up for it if there isn't an easier way and no one else finds a digital copy.

Definitely Main, I found your post (including the many references) and the discussion very interesting.

I still agree with Eli and think you're "really failing to clarify the issue", and claiming that xyz is not the issue does not resolve anything. Disengaging.

"The paper had nothing to do with what you talked about in your opening paragraph."

What? Your post starts with:

My goal in this essay is to analyze some widely discussed scenarios that predict dire and almost unavoidable negative behavior from future artificial general intelligences, even if they are programmed to be friendly to humans.

Eli's opening paragraph explains the "basic UFAI doomsday scenario". How is this not what you talked about?

0 · [anonymous] · 8y
The paper's goal is not to discuss "basic UFAI doomsday scenarios" in the general sense, but to discuss the particular case where the AI goes all pear-shaped EVEN IF it is programmed to be friendly to humans. That last part (even if it is programmed to be friendly to humans) is the critical qualifier that narrows down the discussion to those particular doomsday scenarios in which the AI does claim to be trying to be friendly to humans - it claims to be maximizing human happiness - but in spite of that it does something insanely wicked.

So, Eli says: ... and this clearly says that the type of AI he has in mind is one that is not even trying to be friendly. Rather, he talks about how its ... And then he adds that ... which has nothing to do with the cases that the entire paper is about, namely the cases where the AI is trying really hard to be friendly, but doing it in a way that we did not intend.

If you read the paper, all of this is obvious pretty quickly, but perhaps if you only skim-read a few paragraphs you might get the wrong impression. I suspect that is what happened.
1 · Regex · 7y
After having read Worm I will say this much: it engages the creative thinking of the reader.

Awesome, a meetup in Cologne. I'll try to be there, too. :)

It depends on the skill difference and the size of the board; on smaller boards the advantage is probably pretty large: Discussion on LittleGolem

2 · lukeprog · 10y
Thanks!

Regarding the drop in unemployment in Germany, I've heard it claimed that it is mainly due to changes in the way the unemployment statistics are done, e.g. people who are in temporary 1€/h jobs and still receiving benefits are counted as employed. If this point is still important, I can look for more details and translate.

EDIT: Some details are here:

It is possible to earn income from a job and receive Arbeitslosengeld II benefits at the same time. [...] There are criticisms that this defies competition and leads to a downward spiral in wages and the l

...
4 · Kaj_Sotala · 10y
Damnit. Fixed again, hopefully for real this time.
0 · Stuart_Armstrong · 11y
Cheers!

Isn't "exploring many unusual and controversial ideas" what scientists usually do? (Ok, maybe sometimes good scientist do it...) Don't you think that science could contribute to saving the world?

4 · IlyaShpitser · 11y
What I am saying is "exploring unusual and controversial ideas" is the fun part of science (along with a whole lot of drudgery). You don't get points for doing fun things you would rather be doing anyways.
-1 · TimS · 11y
Some of the potentially useful soft sciences research is controversial. But essentially no hard sciences research is both (a) controversial and (b) likely to contribute massive improvement in human well-being. Even something like researching the next generation of nuclear power plants is controversial only in the sense that all funding of basic research is "controversial."

This is a basic strategy in (and may be practiced by playing) the game of Hex.

From 3.3

To do we would want to put the threatened agent

to do so(?) we would

From 3.4

an agent whose single goal is to stymie the plans and goals of single given agent

of a single given agent

From 4.1

then all self-improving or constructed superintelligence must fall prey to it, even if it were actively seeking to avoid it.

every, or change the rest of the sentence (superintelligences, they were)

From 4.5

There are goals G, such that an entity an entity with goal G

a superintelligence will goal G can exist.

You're right, but isn't this a needless distraction from the more important point, i.e. that it doesn't matter whether we humans find what the (unfriendly) AI does interesting or valuable?

I dunno, I think this is a pretty entertaining instance of anthropomorphizing + generalizing from oneself. At least in the future, I'll be able to say things like "for example, Goertzel - a genuine AI researcher who has produced stuff - actually thinks that an intelligent AI can't be designed to have an all-consuming interest in something like pi, despite all the real-world humans who are obsessed with pi!"

Thanks for making me find out what the Roko-thing was about :(

Some very small things that caught my attention:

  • On page 6, you mention "Kryder's law" as support for the accelerator of "massive datasets". Clearly, larger disk space enables us to use larger datasets, but how will these datasets be created? Is it obvious that we can create useful, large datasets?

  • On page 10, you write (editability as an AI advantage): "Of course, such possibilities raise ethical concerns." I'm not sure why this sentence is there; is editability the only thing that raises these concerns? If yes, what are the

...

The possibility of an intelligence explosion seems to be an extraordinary belief.

Extraordinary compared to what? We already know that most people are insane, so that belief not being shared by almost everybody doesn't make it unlikely a priori. In some ways the intelligence explosion is a straightforward extrapolation of what we know at the moment, so I don't think your criticism is valid here.

What evidence justified a prior strong enough as to be updated on a single paragraph, written in natural language, to the extent that you would afterwards devote

...
2 · XiXiDu · 12y
For me there are a lot of things that sound completely sane but might be just bunk: antimatter weapons, grey goo, the creation of planet-eating black holes by particle accelerators, aliens, or string theory. I don't have the necessary education to discern those ideas from an intelligence explosion. They all sound like possibilities to me that might or might not be true. All I can do is to recognize their extraordinary status and subsequently demand the peer review and assessment of those ideas by a larger and educated audience. Otherwise I run the risk of being swayed by the huge utility associated with those ideas - I run the risk of falling prey to a Pascalian mugger.

Is Luke Muehlhauser that competent when it comes to all the fields associated with artificial intelligence research?

It's not old, it becomes more relevant each day. Since I first voiced skepticism about the topic they have expanded to the point of having world-wide meetings. At least a few people playing devil's advocate is a healthy exercise in my opinion :-)

Ok, I'm glad you interpreted my comment as constructive criticism. Thanks for your efforts!

I found it incredibly annoying that he seems to think that uncertainty is in the territory.

2 · spencerg · 12y
Thank you for pointing that out, it would have been better if I had spoken more carefully. I definitely don't think that uncertainty is in the territory. Please interpret "there is great uncertainty in X" as "our models of X produce very uncertain predictions."

Filled out the survey. The cryonics question could use an option "I would be signed up if it was possible where I live."

6 · RomeoStevens · 12y
or I will be signing up as soon as I have a steady paycheck.

I'm through the whole text now; I did proofreading and changed quite a bit, but some terminological questions remain.

Same here. All in all, great job everybody!

My guess would be: If the integrity check gets corrupted, the mutated nanomachine could possibly "work", but if the decryption routine gets corrupted, the instructions can't get decrypted and the nanomachine wouldn't work.

1 · DSimon · 12y
Hm, makes sense. I suppose I was imagining that if the parent is already at the point where it's doing the assembly, then we already know from earlier that the parent is correct, and the verification issue now only applies to the child machine. However, I hadn't considered the possibility that the parent's data could get mutated after the parent's assembly, but that would certainly be possible, and create a single point of vulnerability at a simple integrity check's implementation.
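A toy sketch of the asymmetry under discussion - this models nothing about real nanomachine designs, and the XOR "cipher" is purely illustrative:

```python
import hashlib

def integrity_ok(blueprint: bytes, checksum: bytes) -> bool:
    # Single point of vulnerability: if a mutation flips this one
    # comparison (or the stored checksum), a mutated blueprint passes
    # and the child machine still gets built.
    return hashlib.sha256(blueprint).digest() == checksum

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # Toy XOR stream: corrupting even a single key byte scrambles every
    # len(key)-th byte of the decoded blueprint, so a mutated decryption
    # step yields garbage instructions and the child simply fails to work.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))
```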

Don't you believe in flying saucers, they ask me? Don't you believe in telepathy? — in ancient astronauts? — in the Bermuda triangle? — in life after death? No, I reply. No, no, no, no, and again no. One person recently, goaded into desperation by the litany of unrelieved negation, burst out "Don't you believe in anything?" "Yes", I said. "I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."

Isaac Asimov

Please consider posting your reply here, I would be interested in reading it!

0 · David Althaus · 12y
I wrote you a PM.

I think to make it work we should add a third condition:

  1. There is only one dimension on which the alternatives are compared

If this condition is not satisfied, and people have different priorities for the different dimensions/criteria, the existence of multiple alternatives needs no further explanation, and we can't derive any conclusion about "betterness".

There is so much advice for self-improvement here and in the rest of the Internet! I personally use the following strategy:

  1. Save/bookmark everything that might be/become important
  2. Prioritize what you want to improve upon first, improve this, and start again

Being rational does not mean that you "improve" your arguments but never change the bottom line.

(Just saying, I'm not sure if you meant it that way.)

0 · Alex_Altair · 12y
Completely understood. This was about internal honesty.

This may simply be because he is European; I have the feeling that she is not so well known/influential on this side of the Atlantic. (My only evidence is that I first heard about her on Scott Aaronson's blog, which is incidentally where I first heard about Overcoming Bias, too.)

5 · Paul Crowley · 12y
He's perfectly familiar with the works of Ayn Rand - as knb says, I guess he felt that the reference to libertarians suffices to ensure that the audience understand that singularitarians aren't the sort of people you want to be associated with.

Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.

0 · timtyler · 12y
To me it seems a lot like the question of whether general, computationally tractable methods of compression exist. Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".

I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)

0 · timtyler · 12y
Clear - but also clearly wrong. Robin Hanson says: ...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We do already have a powerful theory about that, which we discovered in the last 50 years. It doesn't immediately suggest implementation strategy - which is what we need. So: more discoveries relating to this seem likely.

Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed", this was already discussed in other comments.

1 · timtyler · 12y
IMO, it is best to think of power and breadth as being two orthogonal dimensions - like this:
* narrow <-> broad;
* weak <-> powerful.
The idea of general intelligence not being practical for resource-limited agents apparently mixes up these two dimensions, whereas it is best to see them as orthogonal. Or maybe there's the idea that if you are broad, you can't be very deep and be able to be computed quickly. I don't think that idea is correct. I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can. I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.

His argument seems much better to me; I tried(!) to make a point similar to "there is no grand unified theory of intelligence" here.

I tried the second virtue. I'm wondering what good translations for "belief" and "the Way" are.

The question is not whether "quantum computers can fundamentally be more efficient than classical computers", but whether quantum-mechanical entanglement can be used by the brain, which seems to be improbable. I asked a professor of biophysics about this issue; he knew about the result concerning photosynthesis and was pretty sure that QM does not matter for simulating the brain.

0 · Davorak · 12y
I was trying to express in my post that the extra efficiency gained from a switch to quantum computers only matters when it makes the simulation practical rather than impractical with the current resources. This transition would only happen if the brain used quantum algorithms with a fundamental advantage over classical computing, which I assigned a low probability to - meaning that a QM computer would probably not be necessary. It sounds like we agree in conclusion but are failing to communicate some details, or disagree on some details.