Silas: Look, as soon as you find a 1 bit, you've solved the problem with certainty. You have no remaining doubt about the answer. And the expected number of queries until you find a 1 bit is O(1) (or 2 to be exact). Why? Because with each query, your probability of finding a 1 bit is 1/2. Therefore, the probability that you need t or more queries is (1/2)^(t-1). So you get a geometric series that sums to 2.
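Spelling out the geometric series (keeping the success probability as a parameter p, since the exact constant gets settled later in the thread): if T is the number of queries until the first 1 is found, and each query independently succeeds with probability p, then

```latex
\mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} \Pr[T \ge t]
            \;=\; \sum_{t=1}^{\infty} (1-p)^{t-1}
            \;=\; \frac{1}{p}
```

so p = 1/2 gives an expectation of 2 queries, while p = 1/4 gives 4.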
(Note: I was careful to say you succeed with certainty after an expected number of queries independent of n -- not that there's a constant c such tha...
Silas: "Solve" = for a worst-case string (in both the deterministic and randomized cases). In the randomized case, just keep picking random bits and querying them. After O(1) queries, with high probability you'll have queried either a 1 in the left half or a 1 in the right half, at which point you're done.
As far as I know this problem doesn't have a name. But it's how (for example) you construct an oracle separating P from ZPP.
Don: When you fix the goalposts, make sure someone can't kick the ball straight in! :-) Suppose you're given an n-bit string, and you're promised that exactly n/4 of the bits are 1, and they're either all in the left half of the string or all in the right half. The problem is to decide which. It's clear that any deterministic algorithm needs to examine at least n/4 + 1 of the bits to solve this problem. On the other hand, a randomized sampling algorithm can solve the problem with certainty after looking at only O(1) bits on average.
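As a sanity check, the randomized strategy Silas describes can be simulated directly (a minimal sketch; the helper names are mine, not from the thread). With n/4 ones hidden in one half, each uniform query hits a 1 with probability 1/4, so the expected number of queries is 4, and the answer is always correct because the location of any 1 determines the half:

```python
import random

def make_instance(n, ones_in_left):
    """Build an n-bit string with exactly n//4 ones, all in one half."""
    half = range(0, n // 2) if ones_in_left else range(n // 2, n)
    one_positions = set(random.sample(list(half), n // 4))
    return [1 if i in one_positions else 0 for i in range(n)]

def decide(bits):
    """Query uniformly random positions until a 1 turns up; its half is the answer."""
    n = len(bits)
    queries = 0
    while True:
        i = random.randrange(n)
        queries += 1
        if bits[i] == 1:
            return ("left" if i < n // 2 else "right"), queries
```

Running this over many random instances, the empirical mean query count concentrates around 4 — the number of queries is geometric with success probability (n/4)/n = 1/4 — while the output is correct with certainty, exactly the zero-error behavior the thread is about.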
Eliezer: I often tel...
I am interested in what Scott Aaronson says to this.
I fear Eliezer will get annoyed with me again :), but R and Stephen basically nailed it. Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.
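Scott's convexity point fits in a two-line computation (a toy sketch with invented costs, not anything from the thread): the expected cost of any probability mix over deterministic strategies is a weighted average of their individual costs, so the best deterministic strategy in the support does at least as well as the mix.

```python
# Hypothetical expected costs of three deterministic strategies under a
# fixed input distribution (numbers invented for illustration).
det_costs = {"A": 3.0, "B": 5.0, "C": 4.0}

# A randomized strategy is a probability distribution over them.
mix = {"A": 0.2, "B": 0.5, "C": 0.3}

# Its expected cost is a convex combination of the deterministic costs...
randomized_cost = sum(p * det_costs[s] for s, p in mix.items())

# ...so the best deterministic strategy can only do at least as well.
assert min(det_costs.values()) <= randomized_cost
```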
On the other hand, if you care about the worst-case running time, then there are settings (such as query complexity) where randomness ...
(the superscripts didn't show up: that was N^googol and 2^N)
Um, except that we also don't know whether there are computations that can be checked in N time but only performed in N^googol time. The situation is qualitatively the same as for N versus 2^N.
Otherwise, of course a larger environment can outsmart you mathematically.
No, not of course. For example, suppose P were equal to PSPACE. Then even though a larger environment could fundamentally outsmart you mathematically (say by solving the halting problem), it couldn't prove to you that it was doing so. In other words, the situation with polynomial-time computation would be more-or-less the same as it is with unlimited computation: superintelligent machines could only prove their superintelligence to other superintelligent machines.
That the situatio...
In fact, it's just bloody hard to fundamentally increase your ability to solve math problems in a way that "no closed system can do" just by opening the system. So far as I can tell, it basically requires that the environment be magic and that you be born with faith in this fact.
As Wei mentioned, P≠NP is basically the conjecture that this isn't true: i.e., that you can exponentially increase your ability to solve math problems by your environment being magic and your not being born with faith in that fact. So for example, if your environment im...
Carl: I'm not sure, but I'd certainly try such a pill were the effects reversible.
I don't have a problem, my environment has a problem.
Eliezer, I'm in complete sympathy with that attitude. I've had only limited success so far at nerdifying the rest of the world, but I'll keep at it!
Lara: As far as I can tell, there are four basic problems.
First, if adults constantly praise and reward you for solving math problems, writing stories, and so on, then you aren't forced to develop interpersonal skills to the same extent most kids are. You have a separate source of self-worth, and by the time you realize that source isn't enough, it may be too late. (Incidentally, the sort of interpersonal skills I'm talking about often get conflated with caring for others' welfare, which then leads to moral condemnation of nerds as egotistical and aloof. But th...
how much money (or other utility-bearing fruit) would you demand (or pay Scott) to take a drug which lowered your IQ by x pts?
Here's the funny thing: given who I am now, I would not pay to have my IQ lowered, and indeed would pay good money to avoid having it lowered, or even to have it raised. But I would also pay to have been, since early childhood, the sort of person who didn't have such an intelligence-centric set of priorities. I'm not transitive in my preferences; I don't want to want what I want.
Eliezer, I don't think there's a necessary tradeoff between intelligence (the academic rather than interpersonal kind) and happiness at the far nerd end of the spectrum---just that the way society is currently organized, it seems to be both true and common knowledge that there is (cf. Lara Foster's comment). Though despite the temptation, I can't justify dwelling on this phenomenon for too long---any more than on physical appearance, parental wealth, or any other aspect of our lives that we might love to "choose wisely" but can't. Unlike many o...
We are the cards we are dealt, and intelligence is the unfairest of all those cards.
I completely agree with that statement, though my interpretation of it might be the opposite of Eliezer's. From The Simpsons:
Lisa: Dad, as intelligence goes up, happiness often goes down. In fact, I made a graph!
[She holds up a decreasing, concave-upward graph on axes marked "intelligence" and "happiness."]
Lisa: [sadly] I make a lot of graphs.
Apparently many people just don't have a mental bin for global risks to humanity, only counting up the casualties to their own tribe and country. Either that or they're just short-term thinkers.
Eliezer, I certainly worry about global risks to humanity, but I also worry about the "paradoxes" of utilitarian ethics. E.g., would you advocate killing an innocent person if long-term considerations convinced you it would have a 0.00001% chance of saving the human race? I'm pretty sure most people wouldn't, and if asked to give a reason, might say that they don't trust anyone to estimate such small probabilities correctly.
I would have exploded the first bomb over the ocean, and only then used it against cities if Japan still hadn't surrendered. No matter how many arguments I read about this, I still can't understand the downsides of that route, besides the cost of a 'wasted bomb.'
But what's just as tragic as the bomb having been used in anger is that it wasn't finished 2-3 years earlier -- in which case it could have saved tens of millions of lives.
What you seem to want is an intelligence that is non-human but still close enough to human that we can communicate with it. Although it's not clear what we'd have to talk about, once we get past the Pythagorean theorem.
How about P vs. NP? :-)
Can't we imagine the SF writers reasoning that they're never going to succeed anyway in creating "real aliens," so they might as well abandon that goal from the outset and concentrate on telling a good story? Absent actual knowledge of alien intelligences, perhaps the best one can ever hope to do is to write "hypothetical humans": beings that are postulated to differ from humans in just one or two important respects that the writer wants to explore. (A good example is the middle third of The Gods Themselves, which delves into the fami...
The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal ... The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs.
It seems like you need to talk about our "internal state space", not our internal algorithms -- since as you pointed out yourself, our internal algorithms might never enumerate many possibilities (jumping off a cliff while wearing a clow...
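The loop in the quoted passage can be sketched in a few lines (a toy model; the function names and the tiny example are mine): the algorithm assumes each candidate action to have been taken, extrapolates its consequences with an internal model, and outputs an action whose predicted consequences match the goal.

```python
def choose_action(candidate_actions, extrapolate, goal_satisfied):
    """Assume each action taken, predict its consequences, and return an
    action whose predicted consequences satisfy the goal."""
    for action in candidate_actions:
        if goal_satisfied(extrapolate(action)):
            return action
    return None  # no considered action achieves the goal

# Toy usage: an agent at position 0 wants to end up at position 3.
actions = [-1, +1, +2, +3]
result = choose_action(actions,
                       extrapolate=lambda step: 0 + step,  # trivial world model
                       goal_satisfied=lambda pos: pos == 3)
```

Note that the procedure cannot emit an output without first extrapolating the consequences of itself producing many different outputs -- which is exactly the property the quote highlights, and why the enumerated state space (rather than the enumeration order) is what matters.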
Will: Yeah, it's 1/4, thanks. I somehow have a blind spot when it comes to constants. ;-)