What was your biggest recent surprise?

9th Jun 2012



I was surprised that it is possible to apply simple(?) signal processing techniques to extract subtle signals from a video, e.g. somebody's heartbeat.

Surprise levels:

1) I never thought of that (that there could be useful hidden signals in standard video). Their paper references a few other attempts at this.

2) If I had thought of it, or someone had mentioned the idea, I would have guessed that those signals are not strong enough to be extracted by any method.

3) And, even if there were a signal, I would have thought it would take very powerful techniques and many assumptions (like manually annotating where you expect to see the heartbeat, etc.) to make it work.
Less of this is required than I'd expected. From the paper:

we automatically select, and then amplify, a band of temporal frequencies that includes plausible human heart rates.
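The core trick can be sketched in a few lines. This is a toy version with a single synthetic pixel time series, not the paper's actual pipeline; the band edges, amplification factor, and signal amplitudes here are made up for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                      # frames per second
t = np.arange(0, 10, 1 / fs)   # 10 seconds of "video" at one pixel
# Synthetic pixel time series: baseline, slow lighting drift,
# and a faint 1.2 Hz "pulse" (72 bpm) too small to see directly
pulse_hz = 1.2
signal = 100 + 5 * np.sin(2 * np.pi * 0.05 * t) \
             + 0.05 * np.sin(2 * np.pi * pulse_hz * t)

# Band-pass in a plausible heart-rate band (0.7-3 Hz, i.e. 42-180 bpm)
b, a = butter(2, [0.7, 3.0], btype="band", fs=fs)
band = filtfilt(b, a, signal)

# Amplify the filtered band and add it back, making the pulse visible
amplified = signal + 50 * band

# The dominant frequency in the extracted band matches the hidden pulse
freqs = np.fft.rfftfreq(band.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(band)))]
print(round(peak, 1))
```

Run per pixel over a real video, this kind of temporal band-pass plus amplification is roughly what makes the heartbeat visible.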

In the same way, we could obtain details from an astronomy video. One hour of video of a distant planet might be worth a bigger telescope. Long exposure times were the first step in this direction, long ago.

Current exoplanet detection is another, bigger step.

We simply don't yet use all the information we have.

Astronomy is an interesting connection to think about with respect to this work. In astronomy, we're integrating the light received. In some sense this is dynamic, because there are small variations due to the atmosphere. But the underlying signal is assumed to be static? I guess there are pulsars where we don't expect that. Maybe then people have to apply similar techniques (filtering out dynamics, e.g. from the atmosphere, at frequencies far from those expected from pulsars?)

The standard approach is to simulate multiple possible sources and use statistical techniques, such as maximum likelihood, to evaluate which ones match the data best and whether the best is a good enough fit. The waveform matching in LIGO is one of the extreme cases, given how weak the potential signal is.
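The template-matching idea can be illustrated with a toy matched filter. The oscillatory template, noise level, and offset below are all made up; real pipelines like LIGO's are vastly more elaborate, but the principle is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up oscillatory template (stand-in for a simulated source waveform)
x = np.linspace(-3, 3, 200)
template = 2.0 * np.exp(-x ** 2) * np.sin(np.linspace(0, 40 * np.pi, 200))

# Bury it in white Gaussian noise at a known offset
data = rng.standard_normal(4096)
true_offset = 1500
data[true_offset:true_offset + 200] += template

# Matched filter: for white Gaussian noise, the maximum-likelihood
# arrival time is where the data-template correlation peaks
corr = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(corr))
print(recovered)
```

Even though the signal is invisible sample-by-sample, correlating against the template recovers the injected offset, which is why knowing the waveform shape in advance buys so much sensitivity.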

A very salient moment of surprise was when I realized that my mental model of a simple three-quark proton was deeply (or simply) wrong:

You may have heard that a proton is made from three quarks. Indeed here are several pages that say so. This is a lie — a white lie, but a big one. In fact there are zillions of gluons, antiquarks, and quarks in a proton. The standard shorthand, “the proton is made from two up quarks and one down quark”, is really a statement that the proton has two more up quarks than up antiquarks, and one more down quark than down antiquarks. To make the glib shorthand correct you need to add the phrase “plus zillions of gluons and zillions of quark-antiquark pairs.” Without this phrase, one’s view of the proton is so simplistic that it is not possible to understand the LHC at all.

http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/whats-a-proton-anyway/

What still surprises me, whenever I think of it, is how we live in *such* a *big* world, even on the smallest scales we are able to probe. And also that things like nuclei happen to be stable over long enough timescales for things like chemistry and life to occur.

All of those gluons and quark-antiquark pairs are every bit as stable as the Earth's gravitational field. They're elements of the *ground state* for a quark.

The process of finding the ground state for a particle from its interactions, including dragging in virtual pairs to screen high field intensities around the singularity, is called Renormalization.

I realized that my mental model of a simple three-quark proton was deeply (or simply) wrong.

For an explanation using more showing and less telling: Checking what's inside a proton

You’ve heard the famous statement that “a proton is made from two up quarks and a down quark”. But in this basic article, and this somewhat more advanced one, and in a recent post where I went into some details about what we know about proton structure, I’ve claimed to you that protons are chock full of particles, most of which carry a tiny fraction of the proton’s energy, and most of which are gluons, along with a substantial number of quarks and antiquarks.

What I want to do in this article is show you evidence that the statements made about proton structure in this post are true. After all, why should you have to take my word for such things? Let’s look at some LHC data, and see how it confirms these notions.

Not really related to any explicit field of study, but...

Most recently, I was surprised by the extent to which the Japanese still use faxes.

Before that, I was *really* surprised by the whole Planetary Resources thing. My model of the world claimed that aside from some relatively minor stuff like space tourism and such, plausible pushes to *actually* do something new and non-trivial in space *simply do not happen*, and that there would be essentially no real progress in any kind of space exploration before the Singularity. At best, there would be a new private space station in orbit, or NASA would announce a manned Mars mission that would get quietly killed by budget cuts a few years later. Having a bunch of billionaires announce a real effort to actually mine asteroids was something that made it slightly easier for me to alieve in the Singularity happening some day. Before, both asteroid mining and the Singularity used to belong to the mental category of "things that I intellectually acknowledge as possible, but which would be such huge changes to the current paradigm that on a gut level, I don't really grasp either of them happening".

I learned that I'm not crazy for having been confused by the double-speak I was taught in college about "observation" in Quantum Mechanics and that maybe there's a community where I can get straight answers to things.

According to RolfAndreassen:

'Observation' is a shorthand (for historical reasons) for 'interaction with a different system', for example a detector or a human; but a rock will do as well. I would actually suggest you read the Quantum Mechanics Sequence on this point, Eliezer's explanation is quite good.

According to Douglas_Knight,

I advocate in place of "many worlds interpretation" the phrase "no collapse interpretation."

Thanks Less Wrong.

That there is reason to believe that it is "relatively easy" (say, if we survive x-risk and get a good singleton within a million years) to colonize billions of galaxies. That makes the expected hedonic utility of x-risk reduction (ignoring the possibility of discovering new useful physics, creating universes, etc.) up to some nine orders of magnitude greater than I had previously thought.

Not very recent, but...

I was surprised way back when I learned that we had already located some neurons which seem to encode the expected utility of possible actions. ('Utility' here isn't meant in the philosophical sense but in the neuroeconomic sense.)

I also remember being amused 1+ years ago when I did some more studying in AI and decision theory and learned that all currently described AI agents are Cartesian dualists. (This is old news 'round these parts, I know.)

Some AIs have a limited understanding of their own bodies; they can learn kinematic models of the actuators in the robots they control, or form "affordances", ideas about what kinds of interactions with their environments they can effect. But very few (apparently no?) cognitive architectures or AI designs model their minds as being algorithms executing on their computing hardware, so whatever metacognitive representation and processing they have, it's "disembodied", like old ideas of the mind being made of spooky stuff. The combination of physical bodies and spooky minds is called Cartesian dualism, after the philosopher René Descartes.

First of all, terminology. SO(n) is orientation-preserving orthogonal transformations on n-space, or equivalently the orientation-preserving symmetries of an (n-1)-sphere in n-space. So Joshua's statement is about SO(n) for n>3.

OK. So the obvious way to interpret "rotation about an axis" in many dimensions is: you choose a 2-dimensional subspace V, then represent an arbitrary vector as v+w with v in V and w in its orthogonal complement, and then you rotate v. The dimension of the set of these things is (n-1)+(n-2) from choosing V -- you can pick one unit vector to be in V, and then another unit vector orthogonal to it -- plus 1 from choosing how far to rotate. So, 2n-2.

And yes, the dimension of SO(n) is n(n-1)/2. One way to see this: you've got matrices with n^2 elements, and n(n+1)/2 constraints on those elements because all the pairwise inner products of the columns (including each column with itself) are specified.

These dimensions are all topological dimensions rather than vector-space dimensions, since the sets we're looking at aren't vector subspaces of R^(n^2), but there's nothing abusive about that :-).
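The dimension count above can be sanity-checked numerically: the constraint arithmetic, plus the standard fact that exponentiating a skew-symmetric matrix (which has exactly n(n-1)/2 free entries) lands you in SO(n). A quick sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm

# Entry count minus constraint count: n^2 matrix entries, n(n+1)/2
# pairwise inner-product constraints on the columns (including each
# column with itself), leaving n(n-1)/2 dimensions
for n in range(2, 8):
    assert n * n - n * (n + 1) // 2 == n * (n - 1) // 2

# The n(n-1)/2 free parameters can be realized as the strictly upper
# triangle of a skew-symmetric matrix A; exp(A) then lies in SO(n)
n = 4
rng = np.random.default_rng(0)
A = np.triu(rng.standard_normal((n, n)), 1)
A = A - A.T                               # skew-symmetric
Q = expm(A)
assert np.allclose(Q @ Q.T, np.eye(n))    # orthogonal
assert np.isclose(np.linalg.det(Q), 1.0)  # orientation-preserving
print(n * (n - 1) // 2)  # 6 free parameters for SO(4)
```

The skew-symmetric matrices are exactly the tangent space of SO(n) at the identity, which is the cleanest way to see that the topological dimension is n(n-1)/2.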

It can't be 2n-2 because it's 3 when n=3. I get 2n-3 because the first vector is chosen with n-1 degrees of freedom, then the second with n-2, then subtract one because of the equivalence class of rotations, then add one for choosing how far to rotate.

EDIT: More generally, I think that the dimension of the space of k-dimensional subspaces of an n-dimensional space is k(n-k), so where k=2 you get 2n-4, then add one for choosing how far to rotate. I'd feel better if I knew what I meant by "dimension" here though; it's not a vector space.

These are the best references I know:

As for topological dimension, roughly, if you consider a neighborhood of a point in the space, what does space look like from there? Locally it's Euclidean if you're "on" a manifold. The rigorous definition involves charts. See also Lebesgue covering dimension.

Meh, you're right: the dimension of the space of 2-dimensional subspaces of n-space is 2n-4, not 2n-3. The reason why my handwavy dimension-counting above was wrong is ("of course") that I failed to "subtract one because of the equivalence class of rotations". And yes, you're right that in general it's k(n-k).
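The k(n-k) formula matches a direct parameter count: an orthonormal k-frame has kn entries, orthonormality removes k(k+1)/2 of them, and rotating the frame within its own span (an O(k)'s worth of redundancy, dimension k(k-1)/2) removes the rest. A quick check:

```python
def grassmannian_dim(n: int, k: int) -> int:
    """Parameter count for the k-dimensional subspaces of R^n:
    k*n frame entries, minus k(k+1)/2 orthonormality constraints,
    minus k(k-1)/2 for rotating the frame within its own span."""
    return k * n - k * (k + 1) // 2 - k * (k - 1) // 2

# Agrees with k(n-k) for every 0 < k < n
for n in range(2, 12):
    for k in range(1, n):
        assert grassmannian_dim(n, k) == k * (n - k)

print(grassmannian_dim(5, 2))  # 6, i.e. 2n-4 with n=5
```

This is the count that was missing a subtraction in the handwavy version above: forgetting the k(k-1)/2 within-subspace rotations is what gives 2n-3 instead of 2n-4 for k=2.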

"Dimension" here means: *locally* the set looks like a that-many-dimensional vector space. That is, e.g., any element of SO(n) has a neighbourhood that's topologically the same as a neighbourhood in R^(n(n-1)/2).

I'd feel better if I knew what I meant by "dimension" here though; it's not a vector space.

The number of parameters you need to label each element (provided the labelling is a continuous function; otherwise you can label points of **R**^2 with a single parameter by interleaving digits, e.g. (3.1415…, 2.7182…) → 32.174118…)
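The digit-interleaving labelling can be made concrete for finitely many digits. A toy sketch (it ignores real-number subtleties like 0.0999… = 0.1, which is exactly why the map isn't continuous):

```python
def interleave(x: str, y: str) -> str:
    """Label a pair of reals (given as decimal strings with one digit
    before the point) by interleaving their digits into one real."""
    xs, ys = x.replace(".", ""), y.replace(".", "")
    merged = "".join(a + b for a, b in zip(xs, ys))
    return merged[:2] + "." + merged[2:]

print(interleave("3.1415", "2.7182"))  # 32.17411852
```

The map is a bijection on digit strings, so one parameter suffices to label the plane, but it tears nearby points apart, which is why dimension only behaves well for continuous labellings.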

Just figured something new out, based on my original post here.

The energy/time version of the uncertainty principle says that virtual particles of any given energy can spontaneously appear - but the bigger the energy, the shorter they last. This explains why the strength of electromagnetism falls off at a distance - virtual photons with high energies last for short times and thus travel short distances, while virtual photons with low energies can last for longer times and travel longer distances. All straight from the book.

But I just recalled that other forces, the strong and weak, are described as having a range limitation. I've always read about that range-limit existing - but since no reason was given for it, and I couldn't figure it out, I just shrugged my shoulders with an assumption of 'quantum weirdness'. But now I have an idea /why/ that range limit exists: with a minimum amount of energy in any given virtual particle for those forces, in the form of those particles' rest mass, the uncertainty principle thus also imposes a maximum lifespan, and thus a maximum range.

It's been such a long time since I've had a chance to figure out something about physics that I wasn't simply directly told, it's a surprisingly pleasant experience. :)

(Now, I'm wondering if this particular idea implies that since gravity's range is infinite, that implies that if gravity is transmitted by force-particles rather than space-curvature (assuming that that's a distinction with meaning), then the virtual gravity force-carrying particles have to be able to have arbitrarily small energies, and thus no significant rest mass...)

Your insight about forces carried by massless vs. massive particles and their respective ranges is absolutely correct. Congratulations!

(Now, I'm wondering if this particular idea implies that since gravity's range is infinite, that implies that if gravity is transmitted by force-particles rather than space-curvature (assuming that that's a distinction with meaning), then the virtual gravity force-carrying particles have to be able to have arbitrarily small energies, and thus no significant rest mass...)

It is generally agreed that the still-to-be-constructed theory of quantum gravity will have gravitons, particles carrying the gravitational force analogous to photons for the EM field, and yes, gravitons should be massless, as you argue. This is not, however, in conflict with the description of gravity as space-time geometry. Though the full details will have to wait till we understand quantum gravity completely, provisionally we can make unambiguous sense of gravitons at the perturbative level: think of a gravitational wave as a small ripple in spacetime; then one can quantize this perturbation, and gravitons are to the wave as photons are to classical EM waves.
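The range-from-rest-mass estimate above can be put in numbers. A back-of-the-envelope sketch using the standard constant ħc ≈ 197 MeV·fm (this is the heuristic Compton-wavelength argument, not a full field-theory calculation):

```python
# Heuristic: a virtual carrier of rest energy E = m c^2 can exist for
# t ~ hbar / E, so its range is roughly R ~ c t = (hbar c) / E.
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm

def range_fm(rest_energy_mev: float) -> float:
    """Rough force range (in femtometres) for a carrier of given rest energy."""
    return HBAR_C_MEV_FM / rest_energy_mev

# Pion (~140 MeV): Yukawa's original estimate of the nuclear force range
print(round(range_fm(139.6), 2))    # 1.41 fm, about the size of a nucleus

# W boson (~80.4 GeV): hence the weak force's far tinier range
print(round(range_fm(80400.0), 5))  # 0.00245 fm
```

And taking the rest energy to zero sends the range to infinity, which is the massless-photon (and massless-graviton) case.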

I just had a big "update".

EDIT: I'm a little less sure now. See the end.

I found something that teaches programming at an immediate level to non-programmers, without their knowing they are programming, and without any cruft. I always wished this was possible, and now I think we're really close.

If you want to *get* programming, and are a visual thinker, but never could get over some sort of inhibition, I think you should try this. You won't even know you're programming. It may not be "quite" programming, but it's closer than anything else I've seen at this level of simplicity. And anyway it's fun and pretty.

The important thing about this "programming" environment is that it is completely concrete. There are no *formal* "abstractions," and yet it's all about *concrete* representation of the idea formerly known as abstractions.

Enough words. Take a look: http://recursivedrawing.com/

[I was excited because to me this seems awfully close to the untyped lambda-calculus, made magically concrete. The "normal forms" are the "fixed points" are the fractals. It's all too much and requires more thought. It only makes pictures, though, for now. However, I can't see anything in it like "application" so... the issue of how close it is seems actually quite subtle. Somehow application's being bypassed in a static way. Curious. I'm sure there's a better way to see it I just haven't gotten yet.]

PS: *Blue! Blue! Blue!* (**)

** This is a joke that will only make sense if you've read The Name of the Wind by Patrick Rothfuss. If you prefer to spoil yourself, here, but buy the book afterward if you like it.

cross-posted here [I'm not sure about the etiquette, but I think this idea deserves not to be lost in an old thread.]

I recently flipped through the "Cartoon Guide to Physics", expecting an easy-to-understand rehash of ideas I was long familiar with; and that's what I got - right up to the last few pages, where I was presented with a fairly fundamental concept that's been absent from the popular science media I've enjoyed over the years. (Specifically, that the uncertainty principle, when expressed as linking energy and time, explains what electromagnetic fields actually /are/, as the propensity for virtual photons of various strengths to happen.) I find myself happy to try to integrate this new understanding - and at least mildly disturbed that I'd been missing it for so long, and with an increased curiosity about how I might find any other such gaps in my understanding of how the universe works.

So: what's the biggest, or most surprising, or most interesting concept /you/ have learned of, after you'd already gotten a handle on the basics?