lsusr's Comments

The Arrows of Time

It maps to a finite-length binary number if you force the particle into one of two states. So you could think of this universe as equally spaced (in time) instants of a continuous universe where positions are measured, then the system is allowed to evolve, and then positions are measured again. The binary strings refer only to the snapshots where the continuous universe is measured.

This ignores the fact that there must be something to measure the particles with. The goal of this thought experiment is to play around with the Born rule while ignoring the time evolution of a wave function governed by the Schrödinger equation.
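
For concreteness, here is a minimal toy sketch of that mapping (ignoring, as above, both the measuring apparatus and any realistic Schrödinger evolution): a two-state system is measured at equally spaced instants, each outcome is sampled with Born-rule probabilities, and one run produces a finite-length binary string. The placeholder rotation standing in for the evolution between snapshots is an arbitrary assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def born_measure(state):
    """Collapse a 2-component state vector to one of two outcomes with Born-rule probabilities."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0
    return outcome, collapsed

# Placeholder unitary for the continuous evolution between snapshots.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

state = np.array([1.0, 0.0], dtype=complex)  # start in the first basis state
bits = []
for _ in range(16):                    # 16 equally spaced measurement instants
    state = U @ state                  # let the system evolve
    bit, state = born_measure(state)   # force it into one of two states
    bits.append(bit)

print("".join(map(str, bits)))  # the finite-length binary string for this run
```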

How do you do hyperparameter searches in ML?

I think it depends on your problem. If you have lots of compute, high dimensionality, and powerful higher-order emergent behavior from your hyperparameters, then Bayesian optimization makes sense. If not, it usually isn't worth the overhead.
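
As a toy illustration of that trade-off (the objective function, search space, and trial budget below are all made-up stand-ins for "train a model and report validation loss"), here is plain random search over a hyperparameter space, with a comment on where a Bayesian-optimization surrogate would slot in:

```python
import random

# Stand-in for an expensive training run; in practice this would train a model
# and return a validation metric.
def validation_loss(lr, hidden_units, dropout):
    return (lr - 0.01) ** 2 + (hidden_units - 128) ** 2 * 1e-5 + (dropout - 0.2) ** 2

space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),            # log-uniform learning rate
    "hidden_units": lambda: random.choice([32, 64, 128, 256]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

def random_search(n_trials=50):
    best = None
    for _ in range(n_trials):
        params = {name: sample() for name, sample in space.items()}
        loss = validation_loss(**params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

# Cheap objective + low effective dimensionality: random search like this is often enough.
# With an expensive objective and strong interactions between hyperparameters, replace the
# uniform sampler with a surrogate model (e.g. a Gaussian process or TPE) that proposes the
# next trial based on the results so far.
print(random_search())
```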

[Personal Experiment] Training YouTube's Algorithm

Most of the weirder suggestions happen later in the recommendations (lower down on the page). I think the algorithm thinks to itself "The user appears tired of music and probably wants to watch something else".

[Book Review] The Trouble with Physics

The Trouble with Physics does address zero-point energy as a possible explanation/alternative for dark energy. Your points 1 and 2 are correct. The problem is that the cosmological constant calculated from vacuum energy is roughly 120 orders of magnitude greater than the observed cosmological constant.

Connectome-Specific Harmonic Waves

I didn't know that. Thank you. I have corrected the original article.

[Book Review] The Trouble with Physics

If physicists of the sort I'd like to talk to are around at all that's good enough for me.

Machine Learning Can't Handle Long-Term Time-Series Data

Thank you for the correction. AlphaStar is not completely stateless (even ignoring fog-of-war-related issues).

I think the issue here is more about a lack of 'reasoning' skills than time-scales: the network can't think conceptually...

This is exactly what I mean. The problem I'm trying to elucidate is that today's ML techniques can't create good conceptual bridges from short time-scale data to long time-scale data (and vice-versa). In other words, that they cannot generalize concepts from one time scale to another. If we want to take ML to the next level then we'll have to build a system that can. We may disagree about how to best phrase this but I think we're on the same page concerning the capabilities of today's ML systems.

As for connectome-specific harmonic waves, yes, my suggestion is to store slow-changing data in the largest eigenvectors of the Laplacian. The problem with LSTM (and similar RNN systems) is that there's a combinatorial explosion[1] when you try to backpropagate their state cells. This is the computational cliff I mentioned in the article.
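
As a rough sketch of the eigenvector idea (not of CSHW itself; the random "connectome", the signal, and the choice of which k eigenvectors to keep are all stand-ins): build a graph Laplacian, eigendecompose it, and summarize a per-node signal by its coefficients on a handful of eigenvectors instead of by raw node-level values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a connectome: a symmetric weighted adjacency matrix.
n = 50
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

# Graph Laplacian L = D - A and its eigendecomposition.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)   # columns of eigvecs are the graph "harmonics"

# A signal over the nodes (e.g. activity at one instant).
signal = rng.random(n)

# Summarize the signal by its coefficients on k eigenvectors:
# a compact representation instead of n raw node values.
k = 5
coeffs = eigvecs[:, :k].T @ signal
reconstruction = eigvecs[:, :k] @ coeffs
print("compression:", n, "->", k, "coefficients;",
      "reconstruction error:", np.linalg.norm(signal - reconstruction))
```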

The human brain has no known mechanism for conventional backpropagation in the style of artificial neural networks. I believe no such mechanism exists. I hypothesize instead that the human brain doesn't run into the aforementioned computational cliff because there's no physical mechanism to hit that cliff.

So if the human brain doesn't use backpropagation, then what does it use? I think a combination of Laplacian eigenvectors and predictive modeling, probably via something involving resonance[2] between state networks. If everything so far is true then this sidesteps the RNN computational cliff, and we can reach that conclusion without knowing exactly how the human brain works.

This is promising for two related reasons: one involving power and the other involving trainability.

  • Concerning power: I think resonance could provide a conceptual bridge from shorter time-scales to longer time-scales. This solves the problem of fractal organization in the time domain and provides a computational mechanism for forming logic/concepts and then integrating them with larger/smaller parts of the internal conceptual architecture.
  • Concerning trainability: You don't have to backpropagate when training the human brain (because you can't). If CSHW and predictive modeling are how the human brain gradient ascends then this could completely sidestep the aforementioned computational cliff involved in training RNNs. Such a machine would require a hyperlinearly smaller quantity of training data to solve complex problems.

I think these two ideas work together; the human brain sidesteps the computational cliff because it uses concepts (eigenvectors) in place of raw low-level associations.


  1. I mean that the necessary quantity of training data explodes, not that it's hard to calculate the backpropagated connection weights for a single training datum. ↩︎

  2. Two state networks in resonance automatically exchange information and vice-versa. ↩︎

How to Talk About Antimemes

I didn't know what "Straussian" means so I looked it up.

The “Straussian” approach to the history of political philosophy is articulated primarily in the writings of Leo Strauss. Strauss wrote extremely careful, detailed studies of canonical philosophical works along with essays explaining his approach. The most controversial claim Strauss made was that philosophers in the past did not always present their thoughts openly and explicitly. They used an “art of writing” to entice potential philosophers to begin a life of inquiry by following the hints the authors gave about their true thoughts and questions. The overriding purpose of Strauss's own studies was to prove that philosophy in its original Socratic form is still possible by showing the persistence of certain fundamental problems throughout the history of philosophy. The most pertinent of those problems, not merely to political philosophy but to human life as a whole, was the problem of justice. Strauss also insisted that “historicism” is based on a philosophical account of the character and limitations of human knowledge and that it can be refuted, therefore, only on the basis of a philosophical argument. ― The Straussian Approach by Catherine Zuckert in The Oxford Handbook of the History of Political Philosophy

But I can't tell exactly what 'people who engage in philosophy' means and why it's in quotes. It sounds like the title of an essay but a web search doesn't find anything.

Do you feel comfortable giving an example of such a memeplex?

Suffering is indeed an antimeme—and a broad-ranging one too. This is a new addition to my collection. Thanks.

I didn't know what the substitution effect is either so here's a definition.

The substitution effect is the decrease in sales for a product that can be attributed to consumers switching to cheaper alternatives when its price rises. ― Source

How to Talk About Antimemes

I appreciate this article very much. I read the whole thing and was disappointed when I realized Curtis Yarvin hadn't finished the series yet. It already has many great insights and illuminating points. I'll be digesting the implications for a while.

Diversity of approaches is important in this game. My favorite thing about it is how Yarvin attacks a closely related problem from a different perspective. In particular:

  • He focuses on the political economy. (I deliberately de-emphasize politics when choosing where to focus.)
  • He debugs things from first principles. "It is always better to debug forward." (I prefer to debug backwards.)

I agree with almost everything he says. I disagree with his claim that it is "always" better to debug forward. Debugging forward is better when you have a small dataset, as is the case with the historic sweep of broad political ideologies (the subject of Yarvin's writing). I think when you're dealing with smaller problems, like niche technical decisions, there's a greater diversity of data and therefore a greater opportunity to figure things out inductively.