All of Thomas's Comments + Replies

AFAIK, Eliezer Yudkowsky is a proponent of Everett's Many-Worlds interpretation of QM. As such, he should combine the small, non-zero probability that everything goes well with AGI with this MWI view. So there will be some branches where all goes well, even if the majority of them are sterilized. Who cares about those! Thanks to Everett, everything will look just fine to the survivors.

I see this as a contradiction in his belief system, not necessarily that he is wrong about AGI. 

5Adele Lopez1mo
The alignment problem still has to get solved somehow in those branches, which almost all merely have slightly different versions of us doing mostly the same sorts of things. What might be different in these branches is that world-ending AGIs have anomalously bad luck in getting started. But the vast majority of anthropic weight, even after selecting for winning branches, will be on branches that are pretty ordinary, and where the alignment problem still had to get solved the hard way, by people who were basically just luckier versions of us. So even if we decide to stake our hope on those possibilities, it's pretty much the same as staking hope on luckier versions of ourselves who still did the hard work. It doesn't really change anything for us here and now; we still need to do the same sorts of things. It all adds up to normality.
I think this is a bad way to think about probabilities under the Everett interpretation, for two reasons.

First, it's a fully general argument against caring about the possibility of your own death. If this were a good way of thinking, then if you offer me $1 to play Russian roulette with bullets in 5 of the 6 chambers then I should take it -- because the only branches where I continue to exist are ones where I didn't get killed. That's obviously stupid: it cannot possibly be unreasonable to care whether or not one dies. If it were a necessary consequence of the Everett interpretation, then I might say "OK, this means that one can't coherently accept the Everett interpretation" or "hmm, seems like I have to completely rethink my preferences", but in fact it is not a necessary consequence of the Everett interpretation.

Second, it ignores the possibility of branches where we survive but horribly. In that Russian roulette game, there are cases where I do get shot through the head but survive with terrible brain damage. In the unfriendly-AI scenarios, there are cases where the human race survives but unhappily. In either case the probability is small, but maybe not so small as a fraction of survival cases.

I think the only reasonable attitude to one's future branches, if one accepts the Everett interpretation, is to care about all those branches, including those where one doesn't survive, with weight corresponding to |psi|^2. That is, to treat "quantum probabilities" the same way as "ordinary probabilities". (This attitude seems perfectly reasonable to me conditional on Everett.)
Some of it goes from the atmosphere into the oceans. Other parts go from the ocean into the atmosphere.  There are complex computer models that estimate all those flows. 

CO2 is rather quick to leave the atmosphere by dissolving in water. If that weren't so, mountain lakes would be lifeless, but they aren't. It's CO2 that enables photosynthesis there, nothing else. The same CO2 that was in the air not so long ago.

Dissolving CO2 in water is also a big thing in the Arctic and Antarctic oceans. The abundance of life there is witness to that.

Every cold raindrop has some CO2 captured.

So that story of "CO2 persisting in the atmosphere for centuries" is just wrong.

Ocean acidification is generally seen as one of the problems of climate change. While it might not be a problem in some bodies of water that otherwise would have little carbon, it's a problem in the major oceans.  It's a factor that's fully accounted for in the climate models. 

Upvoted for a fresh POV, not forced by an ultra-utilitarian framework. With this approach, p(Giga-doom) is also much lower, I guess.

A month has fewer than 31 days if and only if its Roman numeral has exactly two characters. No exceptions.

Save your knuckles!
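The mnemonic is easy to check in a couple of lines (a sketch using Python's standard `calendar` module; the two-character numerals II, IV, VI, IX, XI are exactly February, April, June, September, November):

```python
import calendar

# Roman numerals for months 1..12.
ROMAN = ["I", "II", "III", "IV", "V", "VI",
         "VII", "VIII", "IX", "X", "XI", "XII"]

for m in range(1, 13):
    two_chars = len(ROMAN[m - 1]) == 2
    short_month = calendar.monthrange(2023, m)[1] < 31  # 2023: a non-leap year
    assert two_chars == short_month, calendar.month_name[m]

print("mnemonic holds for all 12 months")
```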

4Charlie Steiner8mo
There's a poem I know: Thirty days has September, April, June, and November. All the rest have thirty-one. Except for February.
Come on, if you didn't click the button, you have some explaining to do!
1Lone Pine8mo
Also, only eat oysters in a month with an R in the name.

Sure, but "alpine villages" or something similar were called "astronomical waste" in MIRI's language from the old days, when the "fun space", as they called it, was nearly infinite. Now they say its volume is almost certainly zero.

I know that "right now no one knows how to maximize either paper clips...". I know. But paper clips have been the official currency of these debates for almost 20 years now. Suddenly they aren't, just because "right now no one knows how to"?

And then, you are telling me what is to be done first and how? 

Yes it’s an important insight that paper clips are a representative example of a much bigger and simpler space of optimization targets than alpine villages.

As I see it, nobody is afraid of "alpine village life maximization" the way some are afraid of "paper-clip maximization". Why is that? I wouldn't mind very much a rogue superintelligence which tiles the Universe with alpine villages. In past discussions, that would have been "astronomical waste"; now it's not even in the cards anymore? We are doomed to die, not to be "bored for billions of years in a nonoptimal scenario". Interesting.

Right now no one knows how to maximize either paper clips or alpine villages. The first thing we know how to do will probably be some poorly-understood recursively self-improving cycle of computer code interacting with other computer code. Then the resulting intelligence will start converging on some goal, and on the capabilities to optimize it extremely powerfully. The problem is that that emergent goal will be a lot more random and arbitrary than an alpine village. Most random things that this process can land on look like a paper clip in how devoid of human value they are, not like an alpine village, which has a very significant amount of human value in it.

Okay, I didn't know that. I find all his accounts quite interesting to read, and quite consistent with each other, too, despite being from different times.

On topic, he was quite wrong in this particular Ukraine-Russia case. But who wasn't?

3Ben Pace1y
Thanks for being understanding! I agree, reading Samo's writing is quite interesting :)

FYI, Samo Burja's username here is [removed by Ben Pace].

8Ben Pace1y
Hey Thomas, pardon my edit, and perhaps you have good reason to think otherwise, but I currently believe that Samo would prefer to not have his pseudonym be public. I'll check in with him to confirm and revert if not, I just want to take extra care to not accidentally dox someone, and I think it's good to lean on the side of caution with deanonymization. Added: his non-pseudonymous account is of course Samo Burja []. Added2: Samo confirmed he does not wish to be deanonymised.

After more than 300 years of failure to prove the so-called Fermat's Last Theorem, many were skeptical that it was even (humanly) possible. Some publicly so. Especially some of those who had themselves tried and failed for years.

Now, something even much more important is at stake. Namely, how to prevent AI from killing us all. More important, but maybe also even (much?) easier, after all.

It's not over yet.

A neural network observing itself, instead of some other input, could be intriguing, perhaps even conscious? The input layer is smaller than all those hidden layers and the synapses between them, but the input layer could hover across its own interior, just as it normally hovers over many cat pictures. Has anyone tried it already?

Given that neural network quines ( [] ) are a thing, it's plausible that a sufficiently complex neural network could indeed observe itself even without explicitly using itself as an input, assuming you mean 'performing calculations on its own weights' by observing itself. Although this would likely require training to do so. (This is mainly an excuse to tell others that neural network quines exist.)
What would be the goal?

It was a slippery slope with those neural networks. They were able to do more and more things previously unimagined to be possible for them. It was a big surprise to everyone how good they were at chess, 3600 or so Elo points. Leela Chess Zero invented some theoretical breakthroughs, soon exploited by more algorithmic, non-NN chess engines like Stockfish for their position evaluation functions. Even back then, I was baffled by people expecting this progress to soon stop, due to some unexpected effect, which never came. Not in chess, nor a... (read more)

There's nothing wrong with the Googling method. Besides, one could search for <<pizzeria at the end of the world papa mamma>>. That should work now, too.

Maybe, next year's solution to this problem will be "at least 13".

You are right. It's 12 or more different kinds of pizza. If only 1 kind of pizza were served, he could be certain that one kind of pizza was in the majority (in that case, in all but one) of the orders, since somebody could order a salad. But only one order without pizza, to avoid equal orders by pizza kind.

Even if there were only 11 different pizza kinds on the menu, Marco could be sure there is a majority pizza kind, since Frankl's conjecture has been proved for up to 11 by now.

But for 12 or more, no one really knows yet. Probably it's true, but who knows. Congratulations, you were rather quick, despite the problem formulation looking vague to you.
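For what it's worth, the conjecture (Frankl's union-closed sets conjecture: some element should lie in at least half the sets of any union-closed family other than {∅}) is easy to verify exhaustively on a tiny universe. A brute-force sketch over all families of subsets of a 3-element set, with subsets encoded as bitmasks:

```python
from itertools import combinations

U = 3                          # universe {0, 1, 2}
subsets = range(1 << U)        # each int is a bitmask subset

def union_closed(fam):
    return all((a | b) in fam for a in fam for b in fam)

checked = 0
for r in range(1, (1 << U) + 1):
    for combo in combinations(subsets, r):
        fam = frozenset(combo)
        if not union_closed(fam) or fam == {0}:
            continue  # skip non-union-closed families and the {empty set} family
        checked += 1
        # Frankl: some element lies in at least half of the member sets.
        assert any(2 * sum(1 for s in fam if s >> e & 1) >= len(fam)
                   for e in range(U))

print(f"conjecture verified for {checked} union-closed families")
```

This settles nothing about the general case, of course; it just confirms the statement on the smallest examples.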

I searched for <<family of subsets closed under union "more than half">> and the Wikipedia page about the conjecture was the first result :-).

After some time, a new math puzzle.

I'm having trouble figuring out what the question actually is.

On the face of it (taking, for now, only things that Martha and Marco and their computer actually affirm, and ignoring one thing that seems at first like obvious hyperbole), it seems that what we're told is: there's a (presumably finite) set P of pizza-types and a set O of subsets of P ("orders"); whenever two sets are in O, so is their union; and O contains no singleton sets. This obviously isn't enough to tell us anything interesting.

But then there's also this stuff about whether there's an element of P that's in more than half the elements of O. Marco thinks probably yes (are we supposed to infer something from this? it seems like we'd need to know a lot more than we're told about exactly how much Marco knows and how good he is at reasoning) and he says "Today's state of the art mathematics isn't powerful enough to ascertain that this is actually the case!" (are we supposed to take this literally? not powerful enough to ascertain this given what information?)

Is the idea that given |P| or |O| or both, along with the statements in the first paragraph but no other information, present-day mathematics is not able to determine whether or not there is necessarily an element of P in more than half the elements of O?

I don't see how "2.5 batteries" could mean "independence from fossil fuels". 

It's the grid-level storage problem. Solar panels don't produce much energy at night, wind doesn't blow all the time, etc., so you need to produce extra energy and store it for when production is less than demand.

Perhaps, it's not you who is missing something.

I recently discovered there's no closed-form formula for the circumference of an ellipse

Yes. I asked the computer to give me some simple approximation formula. This came out:

It's quite good when b >> a.
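Whatever approximation the computer produced, it can be sanity-checked against numerical integration of the exact arc length (which involves the complete elliptic integral of the second kind). A sketch, using Ramanujan's well-known approximation merely as an example stand-in:

```python
import math

def perimeter_numeric(a, b, n=100_000):
    """4a * E(e): midpoint-rule integral of the exact elliptic arc length."""
    a, b = max(a, b), min(a, b)
    e2 = 1 - (b / a) ** 2
    h = (math.pi / 2) / n
    s = sum(math.sqrt(1 - e2 * math.sin(h * (i + 0.5)) ** 2) for i in range(n))
    return 4 * a * h * s

def perimeter_ramanujan(a, b):
    """Ramanujan's first approximation, excellent at low eccentricity."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

assert abs(perimeter_numeric(1, 1) - 2 * math.pi) < 1e-9   # circle sanity check
assert abs(perimeter_numeric(3, 2) - perimeter_ramanujan(3, 2)) < 1e-3
```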

Well, Oracle, which question of under 1000 words would be answered with the most influential answer for our future? Which answer to which question would be the most earth-shattering?

How can a swarm of nuclear asbestos superintelligent nanobots be synthesised using common household items? (The rhetoric in the answer will keep your guard down for just long enough to publish it.)

CROSSPOST from my blog:

The R0 factor for this illness, which denotes the average number of people infected by a carrier, isn't a constant; it's a function of time, R0 = R0(time). In fact, it's a function of more parameters than just time. For example, under quarantine, R0 should be close to 0. There are many unknown factors here, of course, and some known ones. Some push this now well-known R0 term below 1, others above 1. It's all about reducing R0 below 1, and the ... (read more)
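A toy SIR model makes the mechanism concrete (the rates, population size, and seed count here are illustrative assumptions, not estimates for this illness): the effective reproduction number R_eff(t) = R0 * S(t)/N falls as the susceptible pool shrinks, and the epidemic turns over once it drops below 1:

```python
# Toy discrete-time SIR model; all parameters are made-up for illustration.
N = 1_000_000
beta, gamma = 0.4, 0.2        # infection / recovery rates; R0 = beta/gamma = 2
S, I, R = N - 100.0, 100.0, 0.0

r_eff = []
peak_I = 0.0
for day in range(365):
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    peak_I = max(peak_I, I)
    r_eff.append((beta / gamma) * S / N)

assert r_eff[0] > 1    # epidemic grows at first...
assert r_eff[-1] < 1   # ...but R_eff ends below 1 once S is depleted
assert I < peak_I      # infections are past their peak by day 365
```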

I did some benchmarking in 2018. I benchmarked an "AI software" we devised, using benchmarks mostly invented by me, too. Which doesn't look very good, I know, but bear with me!

For one, I gave an unsolved Sudoku puzzle to this software, which had two working names, "Spector" and/or "Profounder". It concluded that for every X and every Y: X==Y implies that column(X) != column(Y) and row(X) != row(Y). (Zero Sudoku topic knowledge on Spector's part is, of course, a necessary condition.)
With several unsolved Sudoku puzzles, S... (read more)

For Eve and her apple pieces: she may eat one piece per second and stay in Paradise forever, because at any given moment only a finite number of pieces has been eaten by her.

If her eating pace doubles every minute, she is still okay forever.

Only if she, for example, doubles her eating pace after every 100 pieces eaten is she in trouble. Then she supertasks.
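A quick check of the arithmetic (assuming a starting pace of one piece per second): doubling after every 100 pieces makes the k-th batch of 100 take 100/2^k seconds, so the total eating time converges to 200 seconds. All infinitely many pieces are gone in finite time, which is exactly the supertask:

```python
from fractions import Fraction

def time_for_batches(n):
    """Seconds spent on the first n batches of 100 pieces each."""
    return sum(Fraction(100, 2 ** k) for k in range(n))

assert time_for_batches(1) == 100   # first 100 pieces at 1 piece/s
assert time_for_batches(2) == 150   # next 100 at double pace: +50 s
assert all(time_for_batches(n) < 200 for n in range(1, 50))
assert 200 - time_for_batches(50) < Fraction(1, 10 ** 12)
```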

I tend to agree. I don't know if it's just habit or something else, like a conservative profile of myself and many others, but that doesn't really matter.

The new site isn't that much better. It should be substantially better than this one for a smooth transition.

Please, focus only on what has been said and not on how it has been said.

Now, there is a possibility that all of this is wrong on my side. Of course I think I am right, but everybody thinks that anyway. Including this Temple guy with his "don't code yet"! I wonder what people here think about that.

One more disagreement, perhaps. I do think this AlphaGo Zero piece of code is an astonishing example of AI programming, but I have some deep doubts about Watson. It was great back in 2011, but now they seem stuck to me.

Knowledge is information error-corrected (adapted) to a purpose (problem).

No. Knowledge is just information. If you have some information about how to solve a particular problem, it's still "just information".

There are no hard and fast rules about how error-corrected or to what

Those rules are just some information, some data. How "hard and fast" are they? Even perfect data about the fastest checking algorithm is still "just data".

The field started coding too early and is largely wasting its time.

Per... (read more)

Your post reads to me as unfriendly, uncurious, and not really trying to make progress in resolving our disagreements. If I've misinterpreted and you'd like to do Paths Forward, let me know. []
Well, the purpose of my comment was to clarify my views as the author of the link. Do I understand correctly that you disagree that the discussion format can influence the quality of the discussion?
If you fail to get your n flips in a row, your expected number of flips on that attempt is F = [the sum from i = 1 to n of i*2^-i], divided by (1-2^-n); the sum evaluates to 2 - (n+2)/2^n, so F = (2-(n+2)/2^n)/(1-2^-n). Let E be the expected number of flips needed in total. Then E = (1-2^-n)(F + E) + (2^-n)n, since with probability 1-2^-n you fail (spending F flips and starting over) and with probability 2^-n you succeed in exactly n flips. Hence (2^-n)E = (2^-n)n + 2 - (n+2)/2^n, so E = n + 2^(n+1) - (n+2) = 2^(n+1) - 2.
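The closed form also drops out of the simpler recurrence E_k = 2*E_{k-1} + 2: to get k in a row, first get k-1 (cost E_{k-1}), flip once more, and on the wrong outcome (probability 1/2) start over. A quick check:

```python
def expected_flips(n):
    """Expected fair-coin flips to see n identical results in a row (p = 1/2 each)."""
    e = 0
    for _ in range(n):
        e = 2 * e + 2   # E_k = 2 * E_{k-1} + 2
    return e

assert [expected_flips(n) for n in (1, 2, 3)] == [2, 6, 14]
assert all(expected_flips(n) == 2 ** (n + 1) - 2 for n in range(1, 30))
```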

There are 143 primes between 100 and 999. We can therefore make 2,924,207 different 3x3 squares with 3 horizontal primes. 50,621 of them have all three vertical numbers prime. About 1.7%.

There are 1061 primes between 1000 and 9999. We can therefore make 1,267,247,769,841 different 4x4 squares with 4 horizontal primes. 406,721,511 of them have all four vertical numbers prime. About 0.032%.

I strongly suspect that this goes to 0, quite rapidly.

How many Sudokus can you get with 9 digit primes horizontally and vertically?

Not a single one. Which is quite ob... (read more)

I'm sure you're right that the fraction of all-horizontals-prime grids that have all verticals prime tends to 0 as the size increases. But the number of such grids increases rapidly too.

Say that we have N-1 rows with N-1 primes, each N digits long. What we now need is an N-digit prime to put below them.

Its most significant digit may be 1, 3, 7 or 9; otherwise, the leftmost vertical number wouldn't be prime. If the sum of the N-1 digits already in the leftmost column is X, then:

If X mod 3 = 0, then just 1 and 7 are possible; otherwise the leftmost vertical would be divisible by 3. If X mod 3 = 1, then 1, 3, 7 and 9 are all possible. If X mod 3 = 2, then just 3 and 9 are possible; otherwise the leftmost vertical would be divisible by 3.

The probability is ... (read more)

It is indeed quite complicated. But if you handwavily estimate the results of all that complexity -- the probabilities of divisibility by various things -- then the estimate you get is the one cousin_it gave earlier [], because the Prime Number Theorem is what you get when you estimate the density of prime numbers by treating divisibility-by-a-prime as a random event. (Which for many purposes works very well.)

Interesting line of inference... I am quite aware of how dense the primes are, but that might not be enough.

I have counted all these 4x4 (decimal) crossprimes. There are 913,425,530 of them if leading zeros are allowed. But only 406,721,511 without leading zeros.

If leading zeros ARE allowed, then there are certainly arbitrarily large crossprimes out there. But if leading zeros aren't allowed, I am not so sure. I have no proof, of course.

Well, I said something along the lines of "people may need some stuff to live, and declaring that we should 'put people before that stuff' is a silly way to present the situation". Maybe not so much silly as demagoguery.

But then I changed my mind and decided to not participate in a discussion at all. But somehow couldn't erase this now empty box.

Read my reply to Dagon.
[This comment is no longer endorsed by its author]Reply
Was this meant to be a reply?


But I saw this:

Time to put humans before business.

Time to put humans before oxygen, too? Silly stuff.

Humans existed before business, though not at the tech level we have today. Humans might exist after businesses go extinct; that is the dream of the singularity and post-scarcity economies. But with the tech we have today, yep, this is not going to fly.
I don't understand why it's silly. I don't understand why you're comparing business to oxygen. Lest you fall prey to the fundamental attribution error, I don't agree with the article, and I think a lot of it is applause lights. The core sentiment of humanity first isn't one I subscribe to either (I'm an individualist), but the philosophy behind the article is one I can understand and appreciate. It's one I can pass an ideological Turing test for. You seem to be caricaturing the position in the article, and that isn't very epistemically healthy.

The bottom and the rightmost primes can each have only odd digits, excluding 5. The probability for each prime to fit there is then only (2/5)^N times as large. We can't treat them as independent random numbers.

Here's another fun argument. The question boils down to "how common are primes?" And the answer is, very common. We can define a subset of positive integers as "small" if the sum of their reciprocals converges, and "large" otherwise. For example, the set of all positive integers is large (because the harmonic series diverges), and the complement of any small set is large. Well, it's possible to prove that the set of all primes is large, while the set of all numbers not containing some digit (say, 7) is small. So once you go far enough from zero, there are way more primes than there are numbers not containing 7. Now it doesn't sound as surprising that you can make squares out of primes, does it?
If you're pointing out that my argument isn't rigorous, I know. It can be overcome by some kind of non-random conspiracy among primes. But it needs to be a hell of a strong conspiracy, much stronger than what you mention. Even if the whole square had to consist of only 1 3 7 9, you'd still have 4^(N^2) possible squares, and 1/N^(2N) of them would still be a huge number. Example, just for fun: Heck, I can even make these: Bottom line, primes are much more common than you think :-)

Very well. What do you think: are arbitrarily large squares possible or not?

I think not. Even in binary notation, NxN squares and above probably don't exist for N large enough.

I'm pretty sure arbitrarily large squares are possible. Here's an argument that assumes primes behave like random numbers, which is often okay to assume. By the prime number theorem [], the chance that an N-digit number is prime is proportional to 1/N. So the chance that N^2 random digits arranged in a square will form 2N primes (N rows and N columns) is about 1/N^(2N). But the number of ways to select N^2 digits is 10^(N^2) which easily overwhelms that.

Congratulations! It's essential that you don't tell the algorithm, at least for now. You have an extra solution, where every horizontal has an equal vertical. Which is perfectly okay, but I wonder whether that is a property of your algorithm?

No, it gives plenty of non-symmetric solutions as well. Here's one:
Well, it seems that there is a "crossword" of size 270343 []. That's in decimal; in binary the same approach gives you 37156667 [].
Nice exercise, thank you! With the right algorithm, even a slow language like Python can find an 8x8 square of primes in less than a minute: Here's my code [] if anyone's curious. The idea is simple, precompute all suffixes of primes and then fill the square by backtracking from the bottom right corner.
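The linked code is gone, so here is my reconstruction of the same idea, not the original (it fills the square top-down and prunes any row whose column prefixes can't extend to an n-digit prime, which is the mirror image of precomputing suffixes and backtracking from the bottom):

```python
def n_digit_primes(n):
    lo, hi = 10 ** (n - 1), 10 ** n
    sieve = bytearray([1]) * hi
    sieve[0] = sieve[1] = 0
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, hi, i)))
    return [str(p) for p in range(lo, hi) if sieve[p]]

def prime_square(n):
    """First n x n digit square whose rows and columns are all n-digit primes."""
    primes = n_digit_primes(n)
    prefixes = {p[:k] for p in primes for k in range(1, n + 1)}
    rows = []

    def column_ok(candidate):
        # every partial column must still extend to some n-digit prime
        return all("".join(r[j] for r in rows) + candidate[j] in prefixes
                   for j in range(n))

    def dfs():
        if len(rows) == n:
            return True
        for p in primes:
            if column_ok(p):
                rows.append(p)
                if dfs():
                    return True
                rows.pop()
        return False

    return rows if dfs() else None

square = prime_square(3)
assert square is not None
print(square)
```

Since a length-n column prefix is in `prefixes` only when it is itself an n-digit prime, completed columns are automatically prime.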

My country isn't in the list of default choices. So I typed it and pressed Enter. That's all I remember.

This issue should be fixed now, thanks for your report.
That helps, thank you.

Previous session is set to be finished.

Your browser reports that it was used previously to answer this survey. We are resetting the session so that you can start from the beginning.

Click here to start the survey.

I have just pressed Enter after my country's name. Fix this!

I think I'm going to need some more information. Can't fix a bug I can't reproduce.

There is a mountain at the Moon's south pole where the Sun is always shining, except when it's covered by Earth, which is rare and doesn't last long. A great place for a palace for the Solar System's Emperor.

Can't use the Moon. It's already booked and reserved for computronium.

Kepler's law isn't O = constant * A. That is very wrong, silly even.

No problem this week, just an appreciation for people of LessWrong who can be right, when I am wrong.

Say, that SB has only 10 tries to escape.

The DM (Dungeon Master) tosses his 10 coins and SB tosses her 20 coins, even before the game begins.

There are 2^30 possible outputs, which is about a billion. More than half of them grant her freedom.

We compute her exit by - At the earliest x>y condition in each output bit string, the DM has also the freeing coin toss.

I think you must just have an error in your code somewhere. Consider going round 3. Let the probability you say "3" be p_3. Then, according to your numbers, we can solve for p_3, since the probability of escaping by round 3 is the probability of escape by round 2, plus the probability you don't escape by round 2, multiplied by the probability the coin lands tails, multiplied by the probability you say "3". But then p_3 = 11/49, and 49 is not a power of two!

Your basic idea is right here. But... this product isn't that straightforward.

Say it's the 100th session. There are a lot of ways for x to become greater than y exactly this time, especially if the formula is x > y + sqrt(y) or something similar from Chebyshev's arsenal.

If the session is then the 101st, this new small probability isn't much smaller than it was in the 100th session.

Still, you may be right that the product (1-p_n)(1-p_{n+1})... converges to 1/2 at most.

Well, I doubt it.

Don't doubt it, do the math ([] helps a LOT with this). Provide any formula for the probability of guessing "Nth wakeup" such that it sums to 1 (or less) from 1 to infinity. Calculate the sum from 1 to infinity of the product of this and the 0.5^n chance that you're currently on the Nth wakeup. You will never find one that comes out better than 0.5. Your weirdness using X and Y is not helping: any algorithm you can state eventually comes out to "some probability for each N of guessing N". And when you view it that way, you'll see that the sum can't be more than 50%.
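This cap can be checked directly. Whatever distribution p_n a strategy induces on "guess the Nth wakeup" (the x-vs-y scheme included, once reduced to its induced p_n), the success chance sum_n p_n * 2^-n never exceeds 1/2, since every term is weighted by at most 1/2:

```python
# Success chance: you guess round n with probability p_n, and you are
# actually on round n with probability 2^-n.
def success_chance(p):
    return sum(pn * 0.5 ** n for n, pn in enumerate(p, start=1))

strategies = [
    [1.0],                                # always guess round 1
    [0.5, 0.25, 0.125, 0.125],            # front-loaded
    [1 / 2 ** n for n in range(1, 40)],   # geometric, like a coin scheme
]
for p in strategies:
    assert sum(p) <= 1 + 1e-12               # a valid (sub)distribution
    assert success_chance(p) <= 0.5 + 1e-12  # never beats 1/2

assert success_chance([1.0]) == 0.5          # the cap is attained at n = 1
```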

Interesting. Do you agree that every number is reached by the z function defined above an infinite number of times?

And yet, every single time z != sleeping_round? In 60 percent of these Sleeping Beauty imprisonments?

Even if the condition x>y is replaced by something like x > y + sqrt(y) or whatever formula, you can't go above 50%?

Extraordinary. Might be possible, though.

You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n)!=n for all n.

That would be easier if f(n) >> n almost always. But sometimes it's bigger, sometimes smaller.

Yes, definitely. Yes. I proved it. Well, on average we have f(n)=n for one n, but there's a 50% chance the guy won't ask us on that round.