At any one time I usually have between 1 and 3 "big ideas" I'm working with. These are generally broad ideas about how something works, with many implications for how the rest of the world works. Some big ideas I've grappled with over the years, in roughly historical order:

  • evolution
  • everything is computation
  • superintelligent AI is default dangerous
  • existential risk
  • everything is information
  • Bayesian reasoning is optimal reasoning
  • evolutionary psychology
  • Getting Things Done
  • game theory
  • developmental psychology
  • positive psychology
  • phenomenology
  • AI alignment is not defined precisely enough
  • everything is control systems (cybernetics)
  • epistemic circularity
  • Buddhist enlightenment is real and possible
  • perfection
  • predictive coding grounds human values

I'm sure there are more. Sometimes these big ideas come and go in the course of a week or month: I work the idea out, maybe write about it, and feel it's wrapped up. Other times I grapple with the same idea for years, feeling it has loose ends in my mind that matter and that I need to work out if I'm to understand things adequately enough to help reduce existential risk.

So with that as an example, tell me about your big ideas, past and present.

I kindly ask that if someone answers and you are thinking about commenting, please be nice to them. I'd like this to be a question where people can share even their weirdest, most wrong-on-reflection big ideas if they want to, without fear of being downvoted to oblivion or subjected to criticism of their reasoning ability. If you have something to say that's negative about someone's big ideas, please be nice and direct it clearly at the idea and not the person (violators will have their comments deleted and may be banned from commenting on this post or all my posts, so I mean it!).


The big three:

  • Scientific progress across a wide variety of fields is primarily bottlenecked on the lack of a general theory of adaptive systems (i.e. embedded agency)
  • Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
  • Personally, my own relative advantage in solving technical problems increases with difficulty of the problem across a wide variety of domains

A few sub-big-ideas:

Regarding economic progress:

  • Solving coordination problems at scale seems related to my musings (not new; there is a large literature) about firms, particularly large corporations. Many big corporations seem better modeled as markets in themselves than as market participants. That would have significant implications for both standard economic modeling and policy analysis. It goes back to Coase's old article, The Nature of the Firm.
  • Given the availability of technology, and how that technology should (and has) red
... (read more)
2johnswentworth4y
I would say that perfect encryption is a great example of something we don't understand which looks like noise: at first it looks totally random, but if someone hands you a short key, suddenly it becomes obvious that the "noise" is highly systematic. That's understanding. The problem is that achieving understanding is not always computationally tractable.
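A toy illustration of that point (emphatically not real cryptography): a keyed XOR stream. Without the key the ciphertext looks like uniform noise; handed the short key, it is revealed as fully systematic. Everything here is made up for illustration.

```python
# Toy illustration only, NOT real cryptography: a keyed XOR stream.
import random

def keystream(key: int, n: int) -> bytes:
    rng = random.Random(key)                   # the short key seeds the generator
    return bytes(rng.randrange(256) for _ in range(n))

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

msg = b"highly systematic structure"
ct = xor_cipher(msg, key=42)
print(ct.hex())                 # looks like uniform random noise
print(xor_cipher(ct, key=42))   # with the key: b'highly systematic structure'
```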

>Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale

Upstream: setting the ontology that allows interoperability aka computer interface design = largest companies in the world. Hell, you can throw a GUI on IRC and get billions of dollars. That's how early in the game things are.

Have you read any of Cosma Shalizi's stuff on computational mechanics? Seems very related to your interests.

3johnswentworth4y
I had not seen that, thank you.

In October 1991, an event of such profound importance happened in my life that I wrote the date and time down on a yellow sticky. That yellow sticky has long been lost, but I remember it: it was Thursday, October 17th, at 10:22 am. The event was that I had plugged a Hayes modem into my 286 computer and, with a copy of Procomm, logged on to the Internet for the first time. I knew that my life had changed forever.

At about that same time I wanted to upgrade my command-line version of WordPerfect to their new GUI version. But the software was something crazy like $495, which I could not afford.

One day I had an idea: "Wouldn't it be cool if you could log on to the Internet and use a word processing program sitting on a mainframe or something located somewhere else? Maybe for a tiny fee or something."

I mentioned this to the few friends I knew who were computer geeks, and they all scoffed. They said that software prices would eventually be so inexpensive as to make that idea a complete non-starter.

Well, just look around. How many people are still buying software for their desktops and laptops?

I've had about a dozen somewhat similar ideas over the years (although none of that magnitude). What I came to realize was that if I ever wanted to make anything like that happen, I would need to develop my own technical and related skills.

So I got an MS in Information Systems Development, and a graduate certification in Applied Statistics, and I learned to be an OK R programmer. And I worked in jobs -- e.g., knowledge management -- where I thought I might have more "Ah ha!" ideas.

The idea that eventually emerged -- although not in such an "Ah ha!" fashion -- was that the single biggest challenge in my life, and perhaps most people's lives, is the absolute deluge of information out there. And not just out there, but in our heads and in our personal information systems. The word "deluge" doesn't really even begin to describe it.

So the big idea I am working on is what I call the "How To Get There From Here" project. And it's mainly about how to successfully manage the various information and knowledge requirements necessary to accomplish something. This ranges from how to even properly frame the objective to begin with...how to determine the information necessary to accomplish it...how to find that information...how to filter it...how to evaluate it...how to process it...how to properly archive it...etc., etc., etc.

Initially I thought this might end up a long essay. Now it's looking more like a small book. It's very interesting to me because it involves pulling in so many different ideas from so many disparate domains and disciplines -- e.g., library science, decision analysis, behavioral psychology -- and weaving everything together into a cohesive whole.

Anyway, that's the current big idea I'm working on.

Of ideation, prioritization, and implementation, I agree that prioritization is the most impactful, tractable, and neglected.

Please see my post below. My current big idea is very similar to yours. I believe we may be able to exchange notes!

1Rick Jones4y
I got your PM. I live in Paris, France. Nonetheless, I would be happy to exchange notes. Can you access my e-mail?
1alkay4y
I unfortunately did not. I am also unable to locate the message I sent you! Maybe it's because I am new to this site.

"Let's finish what Engelbart started"

1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.

2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.

3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.

In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.

See: Ought

We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside benefit would be so huge it would be worth spending a few billion to develop the technology. A big limitation on the historical John von Neumann's productivity was not being able to interact with people of his own capacity. There would be regression to the mean with the clones' IQ, but the clones would have better health care and education than the historical von Neumann did plus the Flynn effect might come into play.

There was some previous discussion of this idea in Modest Superintelligences and its comments. I'm guessing nobody is doing it due to a combination of weirdness, political correctness, and short-term thinking. This would require a government effort and no government can spend this much resources on a project that won't have any visible benefits for at least a decade or two, and is also weird and politically incorrect.
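To put the regression-to-the-mean point in rough numbers, here is a deliberately crude back-of-the-envelope sketch. The model (clone's expected deviation ≈ broad-sense heritability times the donor's deviation), the heritability figure, and the donor IQ are all assumptions for illustration, not established facts about von Neumann.

```python
# Back-of-the-envelope regression to the mean for a clone; illustrative only.
# Crude model: a clone shares the donor's whole genome but not his environment
# or developmental luck, so E[clone deviation] ~ H2 * (donor deviation),
# where H2 is broad-sense heritability. Both figures below are assumptions.
mean_iq = 100
H2 = 0.8           # assumed broad-sense heritability (placeholder figure)
donor_iq = 180     # made-up figure for a von Neumann-tier donor

expected_clone_iq = mean_iq + H2 * (donor_iq - mean_iq)
print(expected_clone_iq)   # 164.0: regressed toward the mean, still extreme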

What exactly is the secret ingredient of "being John von Neumann"? Is it mostly biological, something like unparalleled IQ; or rather a rare combination of very high (but not unparalleled) IQ with very good education?

Because if it's the latter, then you could create a proper learning environment, where only kids with sufficiently high IQ would be allowed. The raw material is out there; you would need volunteers, but a combination of financial incentives and career opportunities could get you some. (The kids would get paid for going there and... (read more)

6James_Miller4y
Most likely von Neumann had a combination of (1) lots of additive genes that increased intelligence, (2) few additive genes that reduced intelligence, (3) low mutational load, (4) a rare combination of non-additive genes that increased intelligence (meaning genes with non-linear effects) and (5) lucky brain development. A clone would have the advantages of (1)-(4). While it might in theory be possible to raise IQ by creating the proper learning environment, we have no evidence of having done this so it seems unlikely that this was the cause of von Neumann having high intelligence.
9habryka4y
I am confused. You might be talking about g, not IQ, since we have very significant evidence that we can raise IQ by creating proper learning environments, given that most psychometrics researchers credit widespread education for a large fraction of the Flynn effect, and generally don't think that genetic changes explain much.
7James_Miller4y
Yes, I am referring to "IQ" not g because most people do not know what g is. (For other readers: IQ is the measurement, g is the real thing.) I have looked into IQ research a lot and spoken to a few experts. While genetics likely doesn't play much of a role in the Flynn effect, it plays a huge role in g and IQ. This is established beyond any reasonable doubt.

IQ is a very politically sensitive topic and people are not always honest about it. Indeed, some experts admit to other experts that they lie about IQ when discussing it in public (source: my friend and podcasting partner Greg Cochran; the podcast is Future Strategist). We don't know if the Flynn effect is real: it might just come from measurement error arising from people becoming more familiar with IQ-like tests, although it could also reflect real gains in g that are being captured by higher IQ scores. There is no good evidence that education raises g.

The literature on IQ is so massive, and so poisoned by political correctness (and, some would claim, racism), that it is not possible to resolve the issues you raise by citing literature. If you ask IQ experts why they disagree with other IQ experts, they will say that the other experts are idiots/liars/racists/cowards. I interviewed a lot of IQ experts when writing my book Singularity Rising.
4habryka4y
To be clear, I think it's very obvious that genetics has a large effect on g. The key question that you seemed to dismiss above is whether education or really any form of training has an additional effect (or more likely, some complicated dynamic with genetics) on g. And after looking into this question a lot over the past few years, I think the answer is "maybe, probably a bit".

The big problem is that for population-wide studies, we can't really get nice data on the effects of education because the Flynn effect is adding a pretty clear positive trend, and geographic variance in education levels doesn't really capture what we would naively think of as the likely contributors to the observed increase in g.

And you can't do directed interventions because all IQ tests (even very heavily g-loaded ones) are extremely susceptible to training effects, with even just an hour of practicing on Raven's progressive matrices seeming to result in large gains. As such, you can't really use IQ tests as any kind of feedback loop, and almost any real gains will be drowned out by the local training effects.
2habryka4y
I think the Flynn effect has been pretty solidly established, as well as the fact that it has had a significant effect on g. I do think the most likely explanation of a large fraction of the effect on g is explained via the other factors I cited above, namely better nutrition and more broadly better health-care, resulting in significantly fewer deficiencies.
2habryka4y
This seems like a misleading summary of what g is. g is the shared principal component of various subsets of IQ tests. As such, it measures the shared variance between your performance on many different tasks, and so is the thing that we expect to generalize most between different tasks. But in most psychometric contexts I've seen, we split g into 3-5 different components, which tends to add significant additional predictive accuracy (at the cost of simplicity, obviously).

To describe it as "the real thing" requires defining what our goal with IQ testing is. Results on IQ tests have predictive power over income and life-outcomes even beyond the variance that is explained by g, and predictive power over outcomes on a large variety of different tasks beyond only g. The goal of IQ tests is not to measure g; it isn't even clear whether g is a single thing that can be "measured".

The goal of IQ tests historically has been to assess aptitude for various jobs and roles (such as whether you should be admitted to the military, which is where a large fraction of our IQ-score data comes from). For those purposes, we've often found that solely focusing on trying to measure aptitude that generalizes between tasks is a bad idea, since there is still significant task-specific variance that we care about, and would have to give up on measuring in the case of defining g as the ultimate goal of measurement.
2Viliam4y
By "g, not IQ" you mean the difference between genotype and phenotype, or something else?
6Kaj_Sotala4y
The g-factor, or g for short, is the thing that IQ tries to measure. The name "g factor" comes from the fact that it is a common, general factor which all kinds of intelligence draw upon. For instance, Deary (2001) analyzed an American standardization sample of the WAIS-III intelligence test, and built a model where performance on the 13 subtests was primarily influenced by four group factors, or components of intelligence: verbal comprehension, perceptual organization, working memory, and processing speed. In addition, there was a common g factor that strongly influenced all four. The model indicated that the variance in g was responsible for 74% of the variance in verbal comprehension, 88% of the variance in perceptual organization, 83% of the variance in working memory, and 61% of the variance in processing speed.

Technically, g is something that is computed from the correlations between various test scores in a given sample, and there's no such thing as the g of any specific individual. The technique doesn't even guarantee that g actually corresponds with any physical quantity, as opposed to something that the method just happened to produce by accident.

So when you want to measure someone's intelligence, you make a lot of people take tests that are known to be strongly g-loaded. That means that the performance on the tests is strongly correlated with g. Then you take their raw scores and standardize them to produce an IQ score, so that if e.g. only 10% of the test-takers got a raw score of X, then anyone getting the raw score of X is assigned an IQ indicating that they're in the top 10% of the population. And although IQ still doesn't tell us what an individual's g score is, it gives us a score that's closely correlated with g.
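A minimal sketch of that standardization step, assuming the conventional mean-100, SD-15 scale:

```python
# Map a raw-score percentile to an IQ, assuming the mean-100, SD-15 convention.
from scipy.stats import norm

def iq_from_percentile(fraction_below: float) -> float:
    """IQ assigned to someone who outscored `fraction_below` of the sample."""
    return 100 + 15 * norm.ppf(fraction_below)

print(round(iq_from_percentile(0.90), 1))   # top 10% of raw scores -> ~119.2
```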
2habryka4y
See my reply above. I think thinking about IQ tests trying to "measure g" is pretty confusing, and while I used to have this view, I updated pretty strongly against it after reading more of the psychometrics literature.
4Kaj_Sotala4y
Hmm. This interpretation was the impression that I recall getting from reading Jensen's The g Factor, though it's possible that I misremember. Though it's possible that he was arguing that IQ tests should be aiming to measure g, even if they don't necessarily always do, and held the most g-loaded ones as the gold standard.
4habryka4y
I think it's important to realize that what g is shifts when you change what subtests your IQ test consists of, and how much "weight" you give to each different result. As such, it isn't itself something that you can easily optimize for. You always have to define g-loadings with respect to a test battery over which you measure g. And while different test batteries' g's are themselves highly correlated, they are not perfectly correlated, and those correlations do come apart as you optimize for it. An IQ test with a single task will obviously find a single g-factor that explains all the variance in the test results.

As such, we need to define a grounding for IQ tests that is about external validity and predictiveness of life-outcomes or outcomes on pre-specified tasks. And then we can analyze the results of those tests and see whether we can uncover any structure, but the tests themselves have to aim to measure something externally valid.

To make this more concrete, the two biggest sources of IQ-test data we have come from American SAT scores, and the Norwegian military draft, which has had an IQ-test component for all males above 18 years old since the middle of the 20th century. The goal of the SAT was to be a measure of scholastic aptitude, as well as a measure of educational outcomes. The goal of the Norwegian military draft test was to be a measure of military aptitude, in particular to screen out people below a certain threshold of intelligence who were unfit for military service and would pose a risk to others, or be a net drag on the military.

Neither of these is optimized to measure g. But we found that the test results in both of these score batteries are well explained by a single g-factor. And the fact that whenever we try to measure aptitude on any real-life outcomes, we seem to find a common g-factor, is why we think there is something interesting with g going on in the first place.
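To make the battery-dependence concrete, here is a minimal sketch of how a g-factor is typically extracted: take the correlation matrix among subtests and pull out its first principal component. The correlations below are made up; swap subtests in or out of the battery and the loadings (and hence "g") change with it.

```python
# Extract a "g-factor" as the first principal component of a (made-up)
# subtest correlation matrix.
import numpy as np

corr = np.array([[1.00, 0.60, 0.50, 0.40],
                 [0.60, 1.00, 0.55, 0.45],
                 [0.50, 0.55, 1.00, 0.50],
                 [0.40, 0.45, 0.50, 1.00]])

eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues in ascending order
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
print(np.abs(g_loadings))                      # each subtest's g-loading
print(eigvals[-1] / corr.trace())              # share of variance "g" explains
```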
2Viliam4y
If "X" is something we don't have a "gears model" of yet, aren't "tests that highly correlate with X" the only way to measure X? Especially when it's not physics. In other words, why go the extra mile to emphasize that Y is merely the best available method to measure X, but not X itself? Is this a standard way of talking about scientific topics, or is it only used for politically sensitive topics?
4Kaj_Sotala4y
Here the situation is different in that it's not just that we don't know how to measure X, but rather the way in which we have derived X means that directly measuring it is impossible even in principle. That's distinct from something like (say) self-esteem, where it might be the case that we might figure out what self-esteem really means, or at least come up with a satisfactory instrumental definition for it. There's nothing in the normal definition of self-esteem that would make it impossible to measure on an individual level. Not so with g. Of course, one could come up with a definition for something like "intelligence", and then try to measure that directly - which is what people often do, when they say that "intelligence is what intelligence tests measure". But that's not the same as measuring g. This matters because it's part of what makes e.g. the Flynn effect so hard to interpret - yes raw test scores on IQ tests have gone up, but have people actually gotten smarter? We can't directly measure g, so a rise alone doesn't yet tell us anything. On the other hand, if people's scores on a test of self-esteem went up over time, then it would be much more straightforward to assume that people's self-esteem has probably actually gone up.
2habryka4y
In this case it's important to emphasize that difference, because a commonly raised hypothesis is that while we can see clear training effects on IQ, none of these effects are on the underlying g-factor, i.e. the gains do not generalize to new tasks. For naive interventions, this has been pretty clearly demonstrated:
[-][anonymous]4y20

Do you think it would make a big difference though? Isn't it likely that a bunch of John von Neumanns are already running around given the world's population? Aren't we just running out of low-hanging fruits for von Neumanns to pick?

2James_Miller4y
While you might be right, it's also possible that von Neumann doesn't have a contemporary peer. Apparently top scientists who knew von Neumann considered von Neumann to be smarter than the other scientists they knew.
1Matthew Barnett4y
The world population is larger than it used to be, and far more capable people are able to go to college and grad school than before. I would assume that there are many von Neumanns running around, and in fact there are probably people who are even better running around too.

The negative principle: it seems like in a huge number of domains people default to positivist accounts or representations of things, yet when we look at the history of big ideas in STEM, a lot of progress has come from people thinking about whatever the inverse of the positivist account is. The most famous example I know of is information theory, where Shannon solved a long-standing confusion by thinking in terms of uncertainty reduction. I think language tends to be positivist in its habitual forms, which is why this is a recurring blind spot.
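For concreteness, Shannon's reframing in miniature: a source is scored by the uncertainty it removes, not by what it "contains".

```python
# Shannon entropy in bits: H = -sum(p * log2(p)).
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit per outcome
print(entropy([0.9, 0.1]))    # biased coin: ~0.47 bits (less uncertainty)
print(entropy([0.25] * 4))    # four equally likely symbols: 2.0 bits
```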

Levels of abstraction: Korzybski, Marr, etc.

Everything is secretly homeostasis

Modal analysis: what has to be true about the world for a claim to have any meaning at all i.e. what are its commitments

Type systems for uncertainty


A lot of these are quite controversial:

  • AI alignment has failed once before; we are the product
  • Technical obstacles in the way of AGI are our most valuable resource right now, and we're rapidly depleting them
  • A future without superintelligent AI is also dystopian by default (being turned into paperclips doesn't sound so bad by comparison)
  • AI or Moloch, the world will eventually be taken over by something because there is a world to be taken over
  • We were just lucky nuclear weapons didn't turn out to be an existential threat; we might not be so lucky in the future

  • The (observable) universe is tiny on the logarithmic scale
  • Exploration of outer space turned out way less interesting than I imagined
  • Exploration of cyberspace turned out way more interesting than I imagined
  • For an idea to be worthwhile, there needs to be some proportionality between its usefulness and its difficulty of realization (e.g. some god-like powers are easier to achieve than flying cars)
  • The term "nanotechnology" indicates how primitive the field really is; we don't call any of our other technologies "centitechnology"

  • Human-level intelligence is the lower bound for a technological species
  • Modern humans are surprisingly altruistic given our population size; ours is the age of disequilibrium
  • Technological progress never repeats itself, so neither does history
  • Every social progress is just technological progress in disguise
  • The effect of the bloodiest conflicts in history on world population is... none whatsoever

  • Schools teach too much, not too little
  • The education system is actually a selection system
  • Innovation, like oil, is a very limited resource and can't be scaled arbitrarily
  • The deafening silence around death by aging

Very few of these are controversial here. The only ones that seem controversial to me are

  • Schools teach too much, not too little

...

That's all, actually. And I'm not even incredulous about that one, just a bit curious.

Although aging and death is terrible, I don't think there's much point in building a movement to stop it. AGI will almost certainly be solved before even half of the processes of aging are.

[-][anonymous]4y140

Everyone has his pet subject which he thinks everybody in society ought to know and thus ought to be added to the school curriculum. Here on LessWrong, it tends to be rationality, Bayesian statistics and economics, elsewhere it might be coding, maths, the scientific method, classic literature, history, foreign languages, philosophy, you name it.

And you can always imagine a scenario where one of these things could come in handy. But in terms of what's universally useful, I can hardly think of anything beyond reading/writing and elementary-school maths; that's it. It makes no economic sense to drill so much knowledge into people's heads; division of labor is practically the whole point of civilization.

It's also morally wrong to put people through needless suffering. School is a waste, or rather a theft, of youthful time. I wish I had played more video games and hung out with friends more. I wish I had scored lower on all the exams. If your country's children speak 4 languages and rank top 5 in PISA tests, that's nothing to boast about. I waited for the day when all the misery would make sense; that day never came. The same is happening to your kids.

Education is like code - the less the better; strip down to the bare essentials and discard the rest.

Edit: Sorry for the emotion-laden language, the comment turned into a rant half-way through. Just something that has affected me personally.

3mako yass4y
You make a very strong point that I think I can wholly agree with, but I think there is more here we have to examine.

It's sometimes said that the purpose of public education is to create the public good of an informed populace (sometimes, "fascism-resistant voters"; a more realpolitik way of putting it is "a population who will perpetuate the state", which is good exactly when the state is good). So they teach us literature and history and hope that this will create a cultural medium whose constituents can communicate well and never repeat their civilization's past mistakes. If it works, the benefits to the commons are immeasurable. There isn't an obvious upper bound of curriculum size where enriching this commons would necessarily stop being profitable. The returns on sophistication of a well-designed interchange system are greater than linear in the specification size of the system.

It might not be well designed, though. I don't remember seeing anything about economics or law (or even, hell, driving) in the public curriculum, and I think that might be the real problem here. It's not that they teach too much; it's that they don't understand what kind of things a creator of the public good of a good public is supposed to be teaching.
2[anonymous]4y
I disagree on multiple dimensions.

First, let's get disagreements about values out of the way: I hate the term "brainwashing" since it's virtually indistinguishable from "teaching", the only difference being the intent of the speaker (we're teaching our kids liberal democratic values while the other tribe is brainwashing their kids with Marxism). But to the extent "brainwashing" has a useful definition at all, creating "a population who will perpetuate the state" would be it. In my view, if our civilization can't survive without tormenting children with years upon years of conditioning, it probably shouldn't.

Second, I'm very skeptical about this model of a self-perpetuating society. So "they" teach us literature and history? Who's "they"? Group selectionism doesn't work; there is no reason to assume that memes good at perpetuating themselves would also be good at perpetuating the civilization they find themselves in. I think it's likely that people in charge of framing the school curriculum are biased towards holding in high regard those subjects they were taught in school themselves (sunk cost fallacy, prestige signaling), thus becoming vehicles for meme spread. I don't see any incentive for any education board member to stop, think, and analyze what will perpetuate the government they're a part of.

I also very much doubt the efficacy of such education/brainwashing at manipulating citizens into perpetuating the state. In my experience, reverse psychology and tribalism are much better methods for this purpose than straightforward indoctrination, particularly with people in their rebellious youth. The classroom, frequently associated with boredom and monotony, is among the worst environments to apply these methods. There is no faster way to create an atheist out of a child than sending him through mandatory Bible study classes; and no faster way to create a libertarian than to make him memorize Das Kapital.

Lastly, the bulk of today's actual school curr
3TAG4y
It's hard not to, when you don't know what people are going to end up doing. If you know that the son of the blacksmith is going to be a blacksmith, the problem gets much simpler.
1[anonymous]4y
It's easy to prepare kids to become anything. Just teach what's universally useful. It's impossible to prepare kids to become everything. Polymaths stopped being viable two centuries ago. There is a huge difference between union and intersection of sets.
1TAG4y
Why? It's not obvious that that is better than teaching a bit of everything. For instance, if 10% of jobs need a little bit of geography, then having only candidates who know nothing about geography is going to be a disadvantage to those employers.
1[anonymous]4y
And thus, knowing geography becomes a comparative advantage to those who choose to study it. Why should the rest of us care?
1TAG4y
Because people not knowing geography could be a disadvantage to employERs as well as employees. A minimal education system could be below the economic optimum.
1[anonymous]4y
This is like saying we need the government to mandate apple production, because without apples we might become malnourished which is bad. Why can't the market solve the problem more efficiently? Where's the coordination failure?
1TAG4y
The market can't solve (high school) education because education is mostly public.

My past big ideas mostly resemble yours, so I'll focus on those of my present:

Most economic hardship results from avoidable wars: situations where players must burn resources to signal their strength of desire or power (will). I define Negotiations as processes that reach similar or better outcomes than their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.

Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So, I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't bear this out alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)

Bidding wars are a fairly large subclass of avoidable wars. The corresponding negotiation, for an auction, would be for the players to try to measure their wills out of band, then for those found to have the least will to commit to abstaining from the auction. (People would stop running auctions if bidders could coordinate well enough to do this, of course, but I'm not sure how bad a world without auctions would be; I think auctions benefit sellers more than they benefit markets as a whole, most of the time. A market that serves both buyer and seller should generally consider switching to Vickrey auctions, at the least.)
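A minimal sketch of the second-price (Vickrey) rule just mentioned: the highest bidder wins but pays the second-highest bid, which makes bidding one's true value a dominant strategy. Names and bids are hypothetical.

```python
# Second-price (Vickrey) sealed-bid auction.
def vickrey(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]   # winner pays the second-highest bid

print(vickrey({"alice": 120, "bob": 95, "carol": 80}))   # ('alice', 95)
```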

[1] Regarding intensification: my impression so far is that there is nothing especially natural about land price increase as a promoter of density. It doesn't do the job as fast as we would like it to. The benefits of density go to the commons. Those common benefits of density correlate with the price of the individual dense building, but don't seem to be measured accurately by it.


Another Big Idea is "Average Utilitarianism is more true than Sum Utilitarianism", but I'm not sure whether the world is ready to talk about that. I don't think I've digested it fully yet. I'm not sure that rock needs to be turned over...

I also have a big idea about the evolutionary telos of paraphilias, but it's very hard to talk about.


Oh, this might be important: I studied logic for four years so that I could tell you that there are no fundamental truths, and all math and logic just consists of a machine that we evolved and maintained just because it happened to work. There's no transcendent beauty at the bottom of it all, it's all generally kind of ugly even after we've cut the ugliest parts away, and there may be better alternatives (consider CDT and FDT for an example of a deposition of seemingly fundamental elegance)

The usual Georgist story is that the problem of allocating land can be solved by taxing away all unimproved value of land (or equivalently by the government owning all land and renting it out to the highest bidder), and that won't distort the economy, but the people who profit from current land allocation are disproportionately powerful and will block this proposal. Is that related to the problem you're trying to solve?

1mako yass4y
Yeah. "Replace the default beneficiaries of avoidable wars with good people who use the money for good things" is a useful civic method to bear in mind, but probably far from ideal. Taxation is fine, you need it to fund the commons, but avoidable wars seem like a weird place to draw taxes from, one which nobody would consciously design. Taxes that would slow down urbanisation (by making the state complicit in increases in urban land price/costs of urban services) sound like a real bad idea.

My proposed method is, roughly, using a sort of reciprocal, egalitarian utilitarianism to figure out a good way to arrange everyone who owns a share in the city (shares will cost about what it costs to construct an apartment, maybe with different entry prices for different apartment classes, although the cost of larger apartment tickets will have to take into account the commons costs that lower housing density imposes on the labour market), and to grant leases to their desired businesses/services. There shall be many difficulties along the way, but I have not hit a wall yet.
4cousin_it4y
AFAIK the claim is that taxing land value would lead to lower rents overall, not higher. There's some econ reasoning behind that.

I don't think this is addressable, because of the taboo tradeoffs in current culture around money and class. Some people produce more negative externalities than others in ways our legal system cannot address, therefore people sequester themselves via money-gating, since that is still acceptable in practice even though it is decried explicitly.

1mako yass4y
What negative externalities are you thinking of? Maybe it's silly for me to ask you to say, if you're saying they're taboo, but I'm looking over all of the elitist taboos and I don't think any of them really raise much of an issue.

Did I mention that my prototype aggregate utility function only regards adjacency desires that are reciprocated? For instance, if a large but obnoxious fan-base all wanted to be next to a single celebrity author who mostly holds them all in contempt, the system basically ignores those connections. Mathematically, the payoff of positioning a and b close together is min(a.desireToBeNear(b), b.desireToBeNear(a)). The default value for desireToBeNear is zero.

P.S. Does the fact that each user desire expression (roughly, the individual utility function) gets evaluated in a complex way that depends on how it relates to the other desire expressions make this not utilitarianism? Does this position, that fitting our desires together will be more complex than mere addition, have a name?
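The min rule above is easy to pin down in code; a minimal sketch, with all names and scores hypothetical:

```python
# Reciprocity rule: the value of placing a and b adjacent is
# min(a's desire to be near b, b's desire to be near a), defaulting to zero.
desire = {                     # hypothetical desire-to-be-near scores
    ("fan", "author"): 9,      # fans want to be near the author...
    ("author", "fan"): 0,      # ...who holds them in contempt
    ("alice", "bob"): 5,
    ("bob", "alice"): 7,
}

def adjacency_payoff(a: str, b: str) -> int:
    return min(desire.get((a, b), 0), desire.get((b, a), 0))

print(adjacency_payoff("fan", "author"))   # 0: the connection is ignored
print(adjacency_payoff("alice", "bob"))    # 5: capped by the weaker desire
```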
2romeostevensit4y
https://www.fastcompany.com/90107856/urban-poverty-has-a-sound-and-its-loud feedback loop: both contribute to the other.

One thing I'm thinking about these days:

Oftentimes, when people make decisions, they don't explicitly model how they themselves will respond to the outcomes; they instead use simplified models of themselves to quickly make guesses about the things that they like. These guesses can often act as placebos, which turn the expected benefits of a given decision into actual benefits solely by virtue of the expectation. In short, if you have the psychological architecture that makes it physically feasible to experience a benefit, you can hack your simplified models of yourself to make yourself get that benefit.

This isn't quite a dark art of rationality since it does not need to actually hurt your epistemology but it does leverage the possibility of changing who you are (or more explicitly, changing who you are by changing who you think you are). I'm currently using this as a way to make myself into the kind of person who is a writer.


Humans prefer mutual information. Further, I suspect that this is the same mechanism that drives our desire to reproduce.

The core of my intuition is that we instinctively want to propagate our genetic information, and also seem to want to propagate our cultural information (e.g. the notion of not being able to raise my daughter fills me with horror). If this is true of both kinds of information, it probably shares a cause.

This seems to have explanatory power for a lot of things.

  • Why do people continue to talk when they have nothing to say, or spend time listening to things that make them angry or afraid? Because there are intrinsic rewards for speaking and for listening, regardless of content. These things lead to shared information the same way sex leads to children.
  • Why do people make poetry and music? Because this is a bundle of their cultural information propagating in the world. I think the metaphor about the artwork being the artist's child should be taken completely literally.
  • Why do people teach? A pretty good description of teaching is mutualizing information.

This quickly condensed into considering how important shared experiences are, and therefore also coordinated groups. This is because actions generate shared experiences, which contain a lot of mutual information. Areas of investigation for this include military training, asabiyah, and ritual.

What I haven't done yet is really link this to what is happening in the brain; naively, it seems consistent with the predictive processing model, and also seems like maybe-possibly Fristonian free energy applied to other humans.
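For concreteness, a minimal sketch of the quantity being invoked, mutual information I(X;Y), here between two people's one-bit experiences:

```python
# Mutual information: I(X;Y) = sum over (x,y) of p(x,y)*log2(p(x,y)/(p(x)p(y))).
from math import log2

def mutual_information(joint):
    """joint: dict mapping (x, y) -> p(x, y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A fully shared experience (perfectly correlated bits): 1 bit in common.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))
# Strangers (independent bits): nothing in common.
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))
```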

We experience and learn so many things over the years. However, our memories may fail us, failing to recall a relevant fact that could have been very useful for the immediate task at hand. E.g., my car gets a flat tire on a busy street, but I cannot recall how to change it -- though I remember reading about it in the manual.

It is likely that the memory is still alive somewhere in a deep corner of my brain. In that case, I may be able to think hard and push myself to remember it. But such a process is bound to be slow, and people on the street would yell at me for blocking it!

Sometimes our memories fail us "silently". We don't know that somewhere in our brain is information we can bring to bear on the task at hand. What if I don't even know that I have read a manual on changing car tires?!

Long term memory accessibility is thus an issue.

Now, our short-term memory is also very, very limited (4-7 chunks at a time). In fact, the short cache of working memory might be a barrier to intellectual progress. It is therefore crucial to inject relevant information into this limited working-memory space if we are to give a task our best, most intelligent shot.

Thus, I think about memory systems that can artificially augment the brain. I think of them from the point of view of storing more information and indexing it better, and of enabling faster, more relevant retrieval.
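A minimal sketch of the indexing-and-retrieval idea, using a toy inverted index; all memory entries below are hypothetical:

```python
# A toy inverted index over recorded "memories": each word points back to the
# memories that mention it, so recall is a lookup rather than a slow search.
from collections import defaultdict

memories = {   # hypothetical entries captured by the augment
    "m1": "read the car manual: changing a tire needs the jack and lug wrench",
    "m2": "grocery list: eggs, milk, coffee",
    "m3": "friend's advice: loosen the lug nuts before jacking the car up",
}

index = defaultdict(set)
for mem_id, text in memories.items():
    for word in text.lower().split():
        index[word.strip(",:")].add(mem_id)

def recall(query: str) -> set:
    """Memories mentioning every word in the query."""
    hits = [index[w] for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(recall("lug car"))   # {'m1', 'm3'}: both tire memories surface
```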

I think of them as importable and exportable: I can share them with my friends (and learn how to change tires instantaneously). A Pensieve-like memory bank.

I thus think of "digital memories" that augment our brain's comparatively superior, creative computing processes. That is my (current) big idea.

[-][anonymous]4y10

This is basically the long-term goal of Neuralink as stated by Elon Musk. I am however very skeptical because of two reasons:

  • Natural selection did not design brains to be end-user modifiable. Even if you could accurately monitor every single neuron in a brain in real-time, how would you interpret your observations and interface with it? You'd have to build a translator by correlating these neuron firing patterns with observed behaviors, which seems extremely intractable
  • In what way would such a brain-augmenting external memory be superior to pen and p
... (read more)
1alkay4y
I agree with you; I too am skeptical about Neuralink being useful anytime soon. The augmentation in my vision, at the beginning at least, is external. I don't attempt to modify the brain. I externally "record" a person's life. A simple manifestation of such an augment would be a wearable device like Google Glass: it follows you around and forms "memories". This external augmentation can then store, index, and retrieve relevant memories at scale, with speed, and aid the brain's normal abilities. Hopefully it's easy to see that such an external augmentation is better than a pen-and-paper memory system.

Nearly all education should be funded by income sharing agreements.

E1 = student's expected income without the credential / training (for the next n years).

E2 = student's expected income with the credential / training (over the next n years). Machine learning can estimate this separately for each student.

C = cost of the program

R = percent of income above E1 that the student must pay back = C / (E2 - E1), so that expected repayments over the term recoup the program's cost C.

Give students a list of majors / courses / coaches / apprenticeships, etc. with an estimate of expected income E2 and rate of repayment R.
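A minimal sketch of the repayment arithmetic under the rate defined above; all figures are hypothetical:

```python
# Income-sharing repayment under R = C / (E2 - E1); illustrative figures only.
E1 = 300_000     # expected income without the credential, over the n-year term
E2 = 500_000     # expected income with the credential, over the same term
C = 40_000       # cost of the program

R = C / (E2 - E1)                   # fraction of above-E1 income owed
print(f"repayment rate: {R:.0%}")   # 20%

actual = 450_000                    # what the student actually earns
repaid = R * max(actual - E1, 0)
print(f"repaid: ${repaid:,.0f}")    # $30,000: the school absorbs the shortfall
```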

Benefits:

  • This will seamlessly sort students into programs that actually benefit them.
  • Programs that lie about or misestimate their own value will be bankrupted (instead of saddling the student with debt). Schools must maximize effectiveness, not merely enrollment (the current model).
  • There would be zero financial barriers to entry for poorer students, which is equivalent to Bernie's "free college", except you get nudged toward training that is actually useful instead of easy or entertaining. Also, this could be achieved without raising taxes one iota.
  • If "n years" is long, then schools will optimize for lifetime earnings, not just "get a job now". This could incentivize schools to invest in lifelong learning, networking, etc.

Obviously, rich students could still pay out of pocket up front (since they are nearly guaranteed a high income, they might not want to give a percent away).


I like this idea, but I'm still pretty negative about the entire idea of college as a job-training experience, and I'm worried that this proposal doesn't really address what I see as a key concern with that framework.

I agree with Bryan Caplan that the reason why people go to college is mainly to signal their abilities. However, it's an expensive signal -- one that could be better served by just getting a job and using the job to signal to future employers instead. Plus, then there would be fewer costs on the individual if they did that,... (read more)

1Nicholas Garcia4y
Can you explain what you mean by the problem of job training? You mean job vs. career vs. calling? If by "job training" you mean maximizing short-run over long-run earnings, I agree with you. But for that reason, if you move the "slider" toward a longer payoff period, then the schools will be incentivized to teach more fundamental skills, not short-term "job training".

On the other hand, sometimes people just need to get their foot in the door to get up and running. As they accumulate savings, on-the-job experience, professional networks, etc., even a good "first job" can give a lifetime boost. A lot of people I grew up with have the "cold start" or "failure to launch" problem, where they never get into a good-enough-paying job and just spin their wheels as the years go by, never gaining traction. For them, even getting a foot in the door will get the ball rolling.

I tend to keep three in mind and in rotation, as they move from "under inspection" to "done for now" and all the gradations between. In the past, this has included the likes of:

  • the validity of reverse chronological time travel ("done for now" back in 2010)
  • predictability of interpersonal interactions ("done for now" as of Spring 2017)
  • how to reject advice, while not alienating the caring individuals that provide advice (on hold)

Currently I'm working on:

  • How and Why are people presenting themselves as so divided in current conversations?
    • Yes, Politics is the Mind Killer. Still there are people that I think I want in my life that are all falling prey to this beast, and I want to save them.
    • Maybe there's a Sequence to talk me out of it?
  • The Mathematical Legitimacy of Machine Learning (convex optimization of randomly initialized matrices whose products fit curves in n-dimensional space; a minimal sketch appears at the end of this answer)
    • Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.
    • While not a mathematician myself, I have spoken with a few mathematicians who've validated my opinions (after examining the literature), and I am currently seeking training to become one.
  • How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

The last of those has been in the works the longest, and current evidence (anecdotal and journal studies) suggests to me that those of us researching "apathy for self-betterment" are looking too high up the abstraction ladder. So it's time to dig a little deeper.
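As referenced in the machine-learning bullet above, here is a minimal numpy sketch of that description: randomly initialized matrices whose product, passed through a nonlinearity, is tuned by gradient descent to fit a curve. The architecture and hyperparameters are arbitrary choices for illustration.

```python
# Randomly initialized matrices, tuned by gradient descent to fit a curve.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-2, 2, 100).reshape(-1, 1)
y = np.sin(2 * X)                                   # the curve to fit

W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)    # random initialization
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    pred = h @ W2 + b2
    err = (pred - y) / len(X)                       # grad of MSE, up to 2x
    dW2 = h.T @ err;             db2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)                    # backprop through tanh
    dW1 = X.T @ dh;              db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(float(np.mean((pred - y) ** 2)))   # should end far below var(y) ~ 0.45
```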

[-][anonymous]4y10
Still there are people that I think I want in my life that are all falling prey to this beast, and I want to save them.

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into people you'd like them to be and not what they themselves like to be.

How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

Ethics aside, this seems to be a tall order. You're basically trying to hack into someone else's mind through very limite... (read more)

2Stephen James4y
Perhaps I didn't give enough detail. I definitely don't want to drive others exclusively into what I would like them to be. Nor do I want people to believe as I do in most regards. There's a greater principle that I think would make the world a better place: when I engage with someone who presents themselves as opposed to an entire Other group, they tend to (in one way or another) divulge their assumption for opposing/hating/rebuking/etc. that group. Very rarely do they have a complex enemy.

The ethical ground I stand on is one of seeking to build bridges of understanding, to those whom one claims to oppose, that will be readily crossed. My hope is that, with time, the "I'm anti-XYZ" or "I'm pro-ABC" won't be necessary, because we'll be willing to consider people as fellow humans. We won't seek to make them a low-resolution representation of one sliver of their identity. We will, hopefully, face our opposition with eyes wide open, Bayesian "self-updaters" at the ready. Again, I may have put incorrect emphasis, or perhaps you are perceptive of the ways ideas can turn dangerous. Either way, I thank you for helping me relate these ideas.

I want to teach what I uncover because I think there is a limited impact to whatever sweet truths I glean from the universe if they stay strictly inside my head. Part of this goal is acquiring new teaching abilities, such as the ability to custom-fit my conveyance of material to the audience and dynamically ("real-time") adjust delivery based on reception.

This is exactly the point of that idea: just having the information doesn't seem to be enough. But for me, the knowledge seems more than enough for many applications. I want to 1. extract whatever that is, 2. figure out how to apply it in the domains where, for myself, "cold-turkey" doesn't seem to do it, 3. distill it, and 4. share what's distilled. Enabling the sincere dropping of bad habits strikes me as "for the good". For example, it would be great if I could switch-off

I just stumbled across this post 3 years later and... wow. A lot of these comments seem like a treasure trove of stuff to read.