All of NoSuchPlace's Comments + Replies

I think it would be more accurate to say that the test was meant to check whether the TV shows were effective, rather than whether the children had a maximal inherent tendency towards virtuousness.

If I understand this correctly, DeepMind is using each token in at most one update (they say they are training for less than one epoch). This makes it hard to say anything about the data efficiency of DL from this paper, since the models are not trained to convergence on the data they have seen.

They are probably doing this since they already have the data, and a new data point is more informative than an old one, even if your model is very slow to extract the available information with each update.

Searching his twitter, he barely seems to have mentioned GPT at all in 2020. Maybe he deleted some of his tweets?

He definitely didn't mention it much (which is part of what gave me that impression - in general, Sam's public output is always very light about the details of OA research). I dunno about deleting. Twitter search is terrible; I long ago switched to searching my exported profile dump when I need to refind an old tweet of mine.

I remember vividly reading one of his tweets last year, enthusiastically talking about how he'd started chatting with GPT-3 and it was impressing him with its intelligence.

Are you thinking of this tweet? I believe that was meant to be a joke. His actual position at the time appeared to be that GPT-3 is impressive but overhyped.

I don't believe that was it. That was very obviously sarcastic, and it was in July which is a month after the period I am thinking of (plus no chatbot connection), which is an eternity - by late July, as people got into the API and saw it for themselves, more people than just me had been banging the drums about GPT-3 being important, and there was even some genuine GPT-3 overhyping going on, and that is what Sam-sama was pushing back with those late July 2020 tweets. If you want to try to dig it up, you'll need to go further back than that.

Thank you I fixed it. I think the same argument shows that that question is also undefined. I think the real takeaway is that physics doesn't deal well with some infinities.

The question may be flawed in a way that I don't see. Or the question may be flawed not by my mistake, but by a mistake already built into R^3 or R^4 math. I think it's the latter.

As you point out later in the thread, the light can never touch any given sphere, since no matter which one you pick there will always be another sphere in front of it to block the light. At the same time, the light beam must eventually hit something, because the centre sphere is in its way. So your light beam must both eventually hit a sphere and never hit a sphere, meaning your system is contradictory and thus ill-defined.

You could make the question answerable by instead asking for the limit of the light beam as the number of steps of packing done goes to infinity in... (read more)

It's not a cube. The corner points, for example, are NOT covered by any sphere. It's a cube MINUS infinitely many points. On the edges, for example, only aleph-zero points are covered and aleph-one many aren't. The limit technique you employ here does not apply at all.

Since I don't spend all my time inside avoiding every risk hoping for someone to find the cure to aging, I probably value an infinite life a large but finite number of times more than a year of life. This means that I must discount in such a way that after a finite number of button presses Omega would need to grant me an infinite life span.

So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.

They are different concepts: either you use statistical significance or you do Bayesian updating (i.e. using priors):

If you are using a 5% threshold, roughly speaking this means that you will accept a hypothesis if the chance of getting equally strong data when your hypothesis is false is 5% or less.

If you are doing Bayesian updating, you start with a probability for how likely a statement is (this is your prior) and update based on how likely your data would be if your statement were true or false.

Here is an xkcd which highlights the difference:
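A toy contrast between the two procedures, with made-up numbers: a test that gives a false positive with probability 1/36, applied to a hypothesis we antecedently consider very unlikely (the 1e-9 prior is an assumption for illustration):

```python
# Hypothetical numbers: a test falsely comes out positive with probability 1/36.
p_false_positive = 1 / 36
p_true_positive = 1.0   # assume the test always fires when the hypothesis is true
prior = 1e-9            # assumed prior: the hypothesis is antecedently very unlikely

# Significance testing: the chance of data this strong under the null hypothesis.
p_value = p_false_positive   # ~0.028 < 0.05, so the 5% threshold accepts the hypothesis

# Bayesian updating: weigh the same evidence against the prior (Bayes' theorem).
posterior = (p_true_positive * prior) / (
    p_true_positive * prior + p_false_positive * (1 - prior)
)

print(p_value < 0.05)   # True: significant at the 5% level
print(posterior)        # ~3.6e-08: the tiny prior still dominates
```

The same positive test result clears the significance threshold while barely moving the Bayesian off a small prior, which is exactly the gap the comment is pointing at.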


In particular, I intuitively believe that "my beliefs about the integers are consistent, because the integers exist". That's an uncomfortable situation to be in, because we know that a consistent theory can't assert its own consistency.

That is true, however you don't appear to be asserting the consistency of your beliefs, you are asserting the consistency of a particular subset of your beliefs which does not contain the assertion of its consistency. This is not in conflict with Gödel's incompleteness theorem which implies that no theory may co... (read more)

Yeah, that's a fair point. If I believed the sentence "my beliefs about the integers are consistent", it would be a pretty complicated sentence about the integers, containing an encoding of itself by the diagonal lemma. Maybe you're right that I don't actually believe that, not even intuitively. I just believe a bunch of other sentences, and believe that they are consistent. That would agree with the conclusion of the post, that my beliefs about the integers (both actual and extrapolated) can be covered by some specific formal theory.

Quirrell doesn't have a very large window in which to drink the blood.

According to this he should have plenty of time:

"Is it possible to Transfigure a living subject into a target that is static, such as a coin - no, excuse me, I'm terribly sorry, let's just say a steel ball."

Professor McGonagall shook her head. "Mr. Potter, even inanimate objects undergo small internal changes over time. There would be no visible changes to your body afterwards, and for the first minute, you would notice nothing wrong. But in an hour you would be sick,

... (read more)

I think McGonagall doubts Quirrell's goodness more than his knowledge.

Also, won't Quirrell die of transfiguration sickness if he drinks the blood of transfigured Rarity?

No, the unicorn will, but by the time Quirrell drinks its blood it won't be transfigured any more, so he will be fine.

From chapter 100: Quirrell doesn't have a very large window in which to drink the blood. More to the point, wouldn't particulate matter, other fluids, other bits of the unicorn pollute the blood as a result of the transfiguration? I could see the blood itself fixing that issue, but in the case of another fluid in a similar situation, I could see the drinker getting sick (if not to the degree that the animal did). I may not be understanding how transfiguration sickness works exactly. EDIT: formatting

These seem to be the relevant quotes:

"For some reason or other," said the amused voice of Professor Quirrell, "it seems that the scion of Malfoy is able to cast surprisingly strong magic for a first-year student. Due to the purity of his blood, of course. Certainly the good Lord Malfoy would not have openly flouted the underage magic laws by arranging for his son to receive a wand before his acceptance into Hogwarts."


Only there was a reason why they usually didn't bother giving wands to nine-year-olds. Age counted too, it wasn'

... (read more)

Have a program use its own output as input, effectively letting you run programs for infinite amounts of time, which depending on how time travel is resolved may or may not give you a halting oracle.

Also you can now brute force most of mathematics:

One way to do this is using first-order logic, which is expressive enough to state most problems. First-order logic is semi-decidable, which means that there are algorithms which will eventually return a proof for correct statements. Since your computer will take at most ten seconds to do this, you will have a proof after ten seconds, or know that the statement was incorrect if your computer remains silent.

It won't give you a halting oracle without an infinite computer. The best it can do is effectively give you 2^n computing time, where n is the number of bits in memory.
What practical benefits or effects on the world do I get out of my new infinite computing power and mathematical proofs? Presumably I can now decrypt all non-quantum encryption, and do various high-cost simulations very fast.
To expand on this: Moravec's classic "Time Travel and Computing".
X-D Someone should tell the mathematicians they are all obsolete now.

Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?

No, I don't think so; consider the following example:

I flip a coin. If it comes up heads I take two green marbles, else I take one red and one green marble. Then I offer to let you see a random marble and I destroy the other one without showing you.

Then, suppose you wish to test whether my coin came up tails. If the marble is red, you have proven the coin came up tails, and the chance of tails being disproven is zero, so your expression is 1, but it should be 0.5.

Yes, good answer, with a minor correction: in that case P(coin came up tails) is in fact 1, not 0.5. The problem is that before I look at a marble, it is possible for doing that to prove that the coin came up tails, but not that it came up heads. And yet, the probability I should assign to the proposition that it came up heads is 0.5.
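A quick simulation of the marble setup above (the seed and trial count are arbitrary choices):

```python
import random

random.seed(0)

def trial():
    """One round: flip a fair coin, then get shown one marble at random."""
    heads = random.random() < 0.5
    marbles = ["green", "green"] if heads else ["red", "green"]
    return heads, random.choice(marbles)

runs = [trial() for _ in range(100_000)]

# Before looking: seeing red would prove tails, and nothing can ever prove heads.
p_prove_tails = sum(shown == "red" for _, shown in runs) / len(runs)   # ~0.25
p_prove_heads = 0.0

# The proposed formula assigns P(tails) = 1, since heads can never be proven...
formula_p_tails = p_prove_tails / (p_prove_tails + p_prove_heads)

# ...but the actual frequency of tails is 0.5.
actual_p_tails = sum(not heads for heads, _ in runs) / len(runs)
```

The formula collapses to 1 whenever only one side of the question is provable, regardless of how likely that side actually is.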

A 2-minute YouTube video

I'm not going to explain what it is because that would ruin the video.

Also, since explaining the video ruins it, here is a link to rot13

That was completely worth 1:42 of my time.
The number of times I've read an article about something like this that gives it away in the title or opening before giving the reader a chance to experience it for themselves... thanks for not explaining.

I feel like I am forced to raise my credence level for remote viewing being real to somewhere between 50 and 60 percent.

A general note on this sort of situation without getting into the specifics of this case:

If something very unlikely, say P, happens and you have something which would explain it, say A, you should increase your confidence in A, and as you receive stronger evidence you continue increasing your confidence. However, you should not keep increasing your confidence in A until it is almost 1:

Since your test isn't between A and not A but betw... (read more)

Are you saying this is something which MIRI considers actively bad, or are you just pointing out that this is something which is not helpful for MIRI?

While I don't see the benefit of this exercise, I also don't see any harm, since for any idea which we come up with here someone else would very likely have come up with it before if it were actionable for humans.

Some ideas which come to mind:

  1. An AI could be very capable of predicting the stock market. It could then convince/trick/coerce a person into trading for it, making massive amounts of money; then the AI could have its proxy spend the new money to gain access to whatever the AI wants which is currently available on the market.

  2. The AI could make some program which does something incredibly cool which everyone will want to have. The program should also have the ability to communicate meaningfully with its user (this would probably count as the incredi

... (read more)

Maybe "God" is well defined in the context of analytic philosophy, but if not you could consider starting by asking what they mean by "God". You could then ask a variation of 1 or 2 (they seem identical?) and how their response would change with other common definitions of "God".

This would hopefully prevent wasting time due to different use of words or misunderstanding their position.

In a similar vein you could ask what would be sufficient evidence for them to believe something. (Maybe this is already specified by the analytic... (read more)

I'll talk to them one-on-one. Yeah, I think asking them what they mean by god is a good idea--thanks!

I hate to point this out, but it is already easy enough to ridicule the proper spelling; it's spelled Asperger.

Edit: Sorry, tried to delete this comment, but that doesn't seem to be possible for some reason.

[This comment is no longer endorsed by its author]
Fixed. FWIW thanks.

LW believes the average probability that cryonics will be successful for someone frozen today is 22.8%

This is a nitpick, but using the average (I'm assuming that means the arithmetic mean) is misleading: so long as at least a non-negligible proportion of people answers in the double digits, every answer below 1% is treated as essentially the same, thus skewing towards higher probabilities of cryonics working.
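A small illustration with made-up survey answers: the arithmetic mean is driven almost entirely by the double-digit responses, so it barely matters whether the skeptics answered 1% or 0.0001%:

```python
# Hypothetical survey answers (probabilities that cryonics works).
skeptics = [0.0001, 0.001, 0.001, 0.005, 0.01]   # all "essentially zero"
optimists = [0.10, 0.15, 0.25, 0.30, 0.50]       # double-digit answers

mean = (sum(skeptics) + sum(optimists)) / 10

# Replacing every skeptic's answer with an even tinier one leaves the mean
# almost unchanged: the optimists dominate it completely.
mean_tinier_skeptics = (5 * 1e-9 + sum(optimists)) / 10
```

Whatever the low end answers, the mean sits around 13% here, which is why it reads as "LW believes" a high probability even if most respondents put it near zero.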

Appropriate quote:

I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man.

-Hans Bethe (Scroll to quotes about von Neumann for the source)

A Google search gave me this page; the argument appears to be that fish antibiotics are the same as human ones, but cheaper and you don't need a medical license. Obviously, don't assume this is true unless you have better evidence.

Edit: Ninja'd

Had to resort to fish penicillin when I had no medical insurance and got scarlet fever a few years back. Worked great.

I completely missed that the first time, thank you. Is there a way to only retract part of a post?

No, but you can just add an edit saying something like "Wrong; see below".

Perhaps you should link to the article directly. At first I was trying to figure out the connection between densely packed Hitlers and one-line generators. (Edit: My mistake, the link was there, I just didn't see it.)

Also, unless you want to elaborate, maybe this should go into the open thread?

They did link to the source as "(source)" (see the bottom line of their post). But I agree that this should be in the open thread.

Thank you.

People with number form synesthesia sometimes have the first twelve digits in the form of a clock face; I was wondering if something similar was going on, with male bodies usually being relatively angular in comparison to female bodies.

Is it possible that this has something to do with how rounded the shapes are? I noticed that the ratio of cusps to rounded edges (a circle counting for two) is 1:0, 2:0, 3:1, 2:0 for the male digits and 0:2, 1:1, 0:3, 0:4, 0:3 for the female digits. Though obviously this can change with typeface, it often remains more or less true.

I doubt it. For me, 1, 3, 8, and 9, are all male, whereas 0, 2, 4, 5, 6, and 7 are all female.
Yes, I think that's where the association comes from.

Besides the issue of "subjective experience" that has already been brought up, there's also the question of what "thing" and "exists" mean.

I believe some form of MUH is correct so when I say exist I mean the same thing as in mathematics (in the sense of quantifying over various things). So by a thing I mean anything for which it is (at least in principle) possible to write down a mathematically precise definition.

Presumably abstract ideas and virtual particles fall under this category though in neither case am I sure becaus... (read more)

By "any subcomponent," do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity?

If you replace consciousness with subjective experience I believe your statement is correct. Also once you have one infinity you can take power sets again and again.

I'm really confused by what that does to anthropic reasoning

As far as I understand, it breaks anthropic reasoning because now your event space is too big to be able to define a probabili... (read more)

Defining subjective experience is hard for the same reason that defining red is hard, since they are direct experiences. However in this case I can't get around this by pointing at examples. So the only thing I can do is offer an alternative phrasing which suffers from the same problem:

If you accept that our experiences are what an algorithm feels like from the inside, then I am saying that everything feels like something from the inside.

I would still consider this to be a single thing, the same way that "P and Q" is still a statement.

Phrasing this a different way: when I say "exist" I mean "either exist in the sense of quantifying over relations or elements" (definition subject to revision as I learn more non-first-order logic).

Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe and any sub component of an aforementioned system.

Besides the issue of "subjective experience" that has already been brought up, there's also the question of what "thing" and "exists" mean. Are abstract concepts "things"? Do virtual particles "exist"? By including ideas, you seem to be saying "yes" to the first question. So do subjective experiences have subjective experiences themselves? Also, it's "an aforementioned". That's especially important when speaking.
By "any subcomponent," do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity? Because, if the universe is indeed spatially infinite, that means that the set of conscious entities is the infinity of the continuum; and I'm really confused by what that does to anthropic reasoning.
How would one define subjective experience for rocks and atoms?

Could you explain your reasoning? I'm very curious about this.

I was brought up Catholic, and quickly decided religion (later updated to human scribes millennia ago and blind faith therein) didn't really understand the difference between "bigger than I can understand" and "infinite." I also have a life so cartoonishly awesome (let me know if you have a solution to this, but I honestly believe if I laid down the facts people would think I'm lying), I figured what I called God not only exists but likes me more than everybody else. As I grew up, I "tested" the theory a few times, but never with any scientific rigor, and I think I'd have to call the results positive but not statistically significant. I have no problem assuming no god at the beginning of a discussion, and if I had strong enough evidence I'd like to think I'd admit I'm wrong. I also don't correlate anything about God with misunderstanding what "death" means - or as many Catholics call it life after death. I know it's a minority view here and would never trot it out in normal discourse, but it seemed appropriate for the venue.

Upvoted. I believe that the universe is ultimately a complicated piece of mathematics. So when I say "exist" in a non-mathematical context, I mean the same thing as when I say it in a mathematical context.

I don't think "exist" means a single thing even in mathematics. For example, in second order logic I'd consider quantifying over elements in the domain of discourse to be different than quantifying over relations.

German: Möbel (2) Stuhl (1) Liege (2)

I could give a serious response to this about "AI" being a stand-in for "the person playing the AI"; however, other responses I could give:

  • I am firmly of the opinion that the distinction between artificial and natural is artificial.
Yes, my comment was in the spirit of your second response :-)

let nothing get in or out except for some very low bandwidth channel (text, video)

You may want to read this. Basically it is the scenario you describe, except for a smart human taking the place of an AI, and it turns out to be insufficient to contain the AI.

Yeah, I've seen this. It's pretty frustrating because of the secrecy. All I know is two guys let Yudkowsky out of the box. But I think there are two reasons why that scenario is actually very favorable to the AI.

1) An AI that is a bit dumber than humans in all ways, dumber in some ways and smarter in others, or just a little bit smarter than humans can still teach you a lot about further AIs you'd want to build, and it seems at least plausible that an AI that's 2x human intelligence will come along before one that's 1000x human intelligence. We want to get as much out of the 2x-10x AIs as possible before we try building anything stronger, and IF it is possible to avoid accidentally making that 1000x AI before you can make sure it is safe, then there could be a point where you are dealing with 10x AIs and where you can do so safely. So don't pit your engineer against an unboundedly transhuman AI. Pit your engineer against a slightly transhuman AI.

2) Security-wise, you can do a ton better than somebody who is just "really really sure they wouldn't let the AI out". You can have the AI talk only to people who literally cannot let the AI out, and on top of that ensure that everyone is briefed on the risks. Make sure that, to the extent possible, letting the AI out is literally not an option. It is always an option to attempt to do so covertly, or to attempt to convince people to break self-imposed rules, sure, but how many prisoners in history have talked their way out of prison? You can even tell people "Okay, we're never ever letting this AI out. It is a prototype and considered dangerous. But hard, diligent work with attention to safety protocols will ensure we can someday build a safe AI that will have every capacity and more that this one does, to the enormous benefit of humanity".

3) If you look at this answer and say "well you sound like you think you're that much better than the people who took the challenge and lost not even to a transhuman AI but to Yudkowsky" and you'd be
Did you just call Eliezer an AI..? X-D

From my reading of Wikipedia:

Einstein was working at the patent office in 1905 while also working on his PhD. He published his first annus mirabilis paper in March, was awarded his PhD in April, and published the remaining papers in May, June and September. He didn't take a position as a lecturer until 1908. This means Einstein was outside of physics while publishing his papers on Brownian motion, Special Relativity and mass-energy equivalence. Or did I miss something?

My understanding is that this was a normal career path at the time and the fact that he was not paid by the university after getting his degree is no more evidence of him being outside the physics department than his not being paid by the university before completing it.

Added: But it is relevant that this isn't normal today.

The obvious example of a (/several) great discovery(s) in physics by someone outside of a physics department is Einstein.

Grad students count as people in physics departments.

I think the idea is that some things are very specific configurations, while others aren't. For example, a star isn't a particularly unlikely configuration: take a large cloud of hydrogen and you'll get a star. However, a human is a very narrow target in design space: taking a pile of carbon, nitrogen, oxygen and hydrogen is very unlikely to get you a human.

Hence to explain stars we don't need to posit the existence of a process with a lot of optimization power. However, since humans are a very unlikely configuration, this suggests that the reason they exist is because of something with a lot of optimization power (that thing being evolution).

I see what you are saying, certainly humans are very unlikely to spontaneously form in space. On the other hand, humans are not at all rare on Earth and stars are very unlikely to spontaneously form there.

Can one detect intelligence in retrospect?

Let me explain. Let's take the definition of an intelligent agent as an optimizer over possible futures, steering the world toward the preferred one.

Yes, at least some of the time. Evolution fits your definition and we know about that. So if you want examples of how to deduce the existence of an intelligence without knowing its goals ahead of time, you could look at the history of the discovery of evolution.

Also, Eliezer has written an essay which answers your question; you may want to look at that.

I don't see how Eliezer's criterion of stable negentropic artifacts can tell apart people (alive) from stars (not alive) (this is my go-to counterexample to the standard definitions of life).

Taking it in fruit juice also solves the "how to make a placebo" problem.

Creatine doesn't dissolve in water at all, so the placebo would have to be something else that has that property.

It may be worth looking at gwern's essays on nootropics first, since he has done similar self-experiments.

One thing in particular you could consider is looking at whether you can find something which looks/tastes similar enough to creatine that you can use it as a placebo to blind yourself. For example, you could get a friend to put the creatine and placebo in different containers, but not tell you which. Then take substance 1 for 2 weeks, then take substance 2 for 2 weeks. Then get your friend to tell you which box contained the creatine (or better yet, have them write it down somewhere and then don't look at it).

Creatine is sufficiently bulky that blinding would be a pain in the ass (you'd need something like 10 00 gel caps a day, although at least creatine is tasteless in my experience), and it's also very cheap; so I'm not sure blinding is entirely justified on a cost-benefit basis. It may be better to just randomize blocks of 2/3 weeks: it won't eliminate the expectancy or other placebo effects, but it does eliminate most of the potential problems. And just randomizing is a heck of a lot easier.
Creatine's beneficial effects seem to come from having higher systemic levels of creatine which takes a while to build. I have used it as a supplement many times for weight lifting and the effects (water retention, extra "pump" after a lift, ability to squeak out one more rep, etc.) are not noticeable until you've been taking it for about a week. Blinding would not work here unless you made a ton of real caps and a ton of placebo caps and separated them. Then, pick one whole batch and take it for 2-3 weeks. This increases the length of time of the experiment dramatically to get any useful data.
Yes, gwern's essays are what has motivated me to test these particular agents. I'm not clear on optimal management of placebo effects. The thing is that placebo effects are still effects. And if knowing that creatine's effect is a placebo will stop me from taking creatine and thereby rob me of its benefits, then I would rather be ignorant. So I kind of don't want to test it against a placebo, although I recognise that this feels suspicious. Happy to be clarified on this one. Of course, the creatine could boost my score by making me redistribute my exertion of mental energy toward testing times, which is useless...
This would probably be easier for substances other than creatine; it needs to be taken in pretty large doses (5-10 grams, although some sources recommend an initial phase of up to 25 grams) to be effective, and it has a distinctive taste and odor. Unless you want to be swallowing ten or so 00 gel capsules a day...

The latest SMBC is on the singularity, fun theory and simulations.

Liked it a lot. I've noticed that "already" seems to be a very important word in LW-related arguments and posts, i.e. if X were a good idea, people would already be doing it; if Y is a plausible end for the universe, it's probably happened already.

I'm not saying that our population intuitions are simple, I'm saying that we can't rule out the possibility. For example, a priori I wouldn't have expected physics to turn out to be simple; however (at least to the level that I took it) physics seems to be remarkably simple (particularly in comparison to the universe it describes). This leads me to conclude that there is some mechanism by which things turn out to be simpler than I would expect.

To give an example, my best guess (besides "something I haven't thought of") for this mechanism is that ma... (read more)

If you flip 1000 fair coins, the resulting output is more likely to be a mishmash of meaningless clumps than it is to be something like "HHTTHHTTHHTTHHTT..." or another very simple repeating pattern. Similarly, a chaotic[1] process like the evolution of our ethical intuitions is more likely to produce an arbitrary mishmash of conflicting emotional drives than it is to produce some coherent system which can easily be extrapolated into an elegant theory of population ethics. All of this is perfectly consistent with any reasonable formalization of Occam's Razor.

EDIT: The new definition of "complex" that you added above is a reasonable one in general, but in this case it might lead to some dangerous circularity - it seems okay right now, but defining complexity in terms of human intuition while we're discussing the complexity of human intuition seems like a risky maneuver.

The abstract aspects in question are abstractions and extrapolations of much older empathy patterns, or are trying to be. So, no.

1. In the colloquial sense of "lots and lots and lots of difficult-to-untangle significant contributing factors"

our population intuitions are complex...

Are they? They certainly look complex, but that could be because we haven't found the proper way to describe them. For example, the Mandelbrot set looks complex, but it can be defined in a single line.

Also "complex" leads to ambiguity, perhaps it needs to be defined. I used it in the sense that something is complex if it cannot be quickly defined for a smart and reasonably knowledgeable (in the relevant domain) human, since this seems to be the relevant sense here.
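For concreteness, the Mandelbrot membership rule really is about one line: c belongs iff iterating z → z² + c from z = 0 stays bounded (the 100-iteration cutoff below is the usual practical approximation of "stays bounded forever"):

```python
def in_mandelbrot(c, max_iter=100):
    """c is (approximately) in the Mandelbrot set iff z -> z*z + c never escapes."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # once |z| > 2 the orbit provably diverges
            return False
    return True
```

Plotting which complex points pass this test yields the famously intricate fractal, so apparent visual complexity is weak evidence against a short underlying definition.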

There's no particular reason why we should expect highly abstract aspects of our random-walk psychological presets to be elegant or simply defined. As such, it's practically guaranteed that they won't be.
Maybe a better phrasing would be that we don't a priori expect them to be simple...

It's a repost from last week.

Though rereading it, does anyone know whether Zach knows about MIRI and/or LessWrong? I expect "unfriendly human-created Intelligence" to parse as "AI with bad manners" to people unfamiliar with MIRI's work, which is probably not what the scientist is worried about.

I expect "unfriendly human-created Intelligence" to parse as HAL and Skynet to regular people.