[ Question ]

What's your big idea?

by G Gordon Worley III · 18th Oct 2019 · 1 min read · 63 comments



At any one time I usually have between 1 and 3 "big ideas" I'm working with. These are generally broad ideas about how some thing works, with many implications for how the rest of the world works. Some big ideas I've grappled with over the years, in roughly historical order:

  • evolution
  • everything is computation
  • superintelligent AI is default dangerous
  • existential risk
  • everything is information
  • Bayesian reasoning is optimal reasoning
  • evolutionary psychology
  • Getting Things Done
  • game theory
  • developmental psychology
  • positive psychology
  • phenomenology
  • AI alignment is not defined precisely enough
  • everything is control systems (cybernetics)
  • epistemic circularity
  • Buddhist enlightenment is real and possible
  • perfection
  • predictive coding grounds human values

I'm sure there are more. Sometimes these big ideas come and go in the course of a week or month: I work the idea out, maybe write about it, and feel it's wrapped up. Other times I grapple with the same idea for years, feeling it has loose ends in my mind that matter and that I need to work out if I'm to understand things adequately enough to help reduce existential risk.

So with that as an example, tell me about your big ideas, past and present.

I kindly ask that if someone answers and you are thinking about commenting, please be nice to them. I'd like this to be a question where people can share even their weirdest, most wrong-on-reflection big ideas if they want to, without fear of being downvoted to oblivion or subjected to criticism of their reasoning ability. If you have something negative to say about someone's big ideas, please be nice and make it clearly about the idea and not the person (violators will have their comments deleted and may be banned from commenting on this post or all my posts, so I mean it!).



12 Answers

In October 1991, an event of such profound importance happened in my life that I wrote the date and time down on a yellow sticky. That yellow sticky has long been lost, but I remember it; it was Thursday, October 17th at 10:22 am. The event was that I had plugged a Hayes modem into my 286 computer and, with a copy of Procomm, logged on to the Internet for the first time. I knew that my life had changed forever.

At about that same time I wanted to upgrade my command-line version of WordPerfect to their new GUI version. But the software was something crazy like $495, which I could not afford.

One day I had an idea: "Wouldn't it be cool if you could log on to the Internet and use a word processing program sitting on a mainframe or something located somewhere else? Maybe for a tiny fee or something."

I mentioned this to the few friends I knew who were computer geeks, and they all scoffed. They said that software prices would eventually be so inexpensive as to make that idea a complete non-starter.

Well, just look around. How many people are still buying software for their desktops and laptops?

I've had about a dozen somewhat similar ideas over the years (although none of that magnitude). What I came to realize was that if I ever wanted to make anything like that happen, I would need to develop my own technical and related skills.

So I got an MS in Information Systems Development, and a graduate certification in Applied Statistics, and I learned to be an OK R programmer. And I worked in jobs -- e.g., knowledge management -- where I thought I might have more "Ah ha!" ideas.

The idea that eventually emerged -- although not in such an "Ah ha!" fashion -- was that the single biggest challenge in my life, and perhaps most people's lives, is the absolute deluge of information out there. And not just out there, but in our heads and in our personal information systems. The word "deluge" doesn't really even begin to describe it.

So the big idea I am working on is what I call the "How To Get There From Here" project. And it's mainly about how to successfully manage the various information and knowledge requirements necessary to accomplish something. This ranges from how to even properly frame the objective to begin with...how to determine the information necessary to accomplish it...how to find that information...how to filter it...how to evaluate it...how to process it...how to properly archive it...etc., etc., etc.

Initially I thought this might end up a long essay. Now it's looking more like a small book. It's very interesting to me because it involves pulling in so many different ideas from so many disparate domains and disciplines -- e.g., library science, decision analysis, behavioral psychology -- and weaving everything together into a cohesive whole.

Anyway, that's the current big idea I'm working on.

The big three:

  • Scientific progress across a wide variety of fields is primarily bottlenecked on the lack of a general theory of adaptive systems (i.e. embedded agency)
  • Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
  • Personally, my own relative advantage in solving technical problems increases with difficulty of the problem across a wide variety of domains

A few sub-big-ideas:

"Let's finish what Engelbart started"

1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.

2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.

3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.

In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.

See: Ought

We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside benefit would be so huge it would be worth spending a few billion to develop the technology. A big limitation on the historical John von Neumann's productivity was not being able to interact with people of his own capacity. There would be regression to the mean with the clones' IQ, but the clones would have better health care and education than the historical von Neumann did plus the Flynn effect might come into play.
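
To make the regression-to-the-mean point concrete, here's a rough back-of-the-envelope sketch; the heritability figure and donor IQ below are assumptions for illustration only, not data (and it ignores the better health care, education, and Flynn-effect considerations just mentioned):

```python
# Rough illustrative numbers only: assumed broad-sense heritability and donor IQ.
POP_MEAN = 100
HERITABILITY = 0.8          # assumption; clones share all of the donor's genes
DONOR_IQ = 180              # assumed figure standing in for von Neumann

expected_clone_iq = POP_MEAN + HERITABILITY * (DONOR_IQ - POP_MEAN)
print(expected_clone_iq)    # 164.0: far above average, but regressed toward the mean
```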

The negative principle: it seems like in a huge number of domains people default to positivist accounts or representations of things, yet when we look at the history of big ideas in STEM I think we see a lot of progress happening from people thinking about whatever the inverse of the positivist account is. The most famous example I know of is information theory, where Shannon solved a long-standing confusion by thinking in terms of uncertainty reduction. I think language tends to be positivist in its habitual forms, which is why this is a recurring blind spot.

Levels of abstraction: Korzybski, Marr, etc.

Everything is secretly homeostasis

Modal analysis: what has to be true about the world for a claim to have any meaning at all i.e. what are its commitments

Type systems for uncertainty


My past big ideas mostly resemble yours, so I'll focus on those of my present:

Most economic hardship results from avoidable wars, situations where players must burn resources to signal their strength of desire or power (will). I define Negotiations as processes that reach similar or better outcomes than their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.

Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So, I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't carry this out alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)

Bidding wars are a fairly large subclass of avoidable wars. The corresponding negotiation, for an auction, would be for the players to try to measure their wills out of band, then for those found to have the least will to commit to abstaining from the auction. (People would stop running auctions if bidders could coordinate well enough to do this, of course, but I'm not sure how bad a world without auctions would be; I think auctions benefit sellers more than they benefit markets as a whole, most of the time. A market that serves both buyer and seller should generally consider switching to Vickrey Auctions, at the least.)
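
For reference, a Vickrey auction is a sealed-bid, second-price auction: the highest bidder wins but pays the second-highest bid, which makes bidding your true value a dominant strategy. A minimal sketch, with made-up bidder names and bids:

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: the highest bidder wins,
    but pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the second-highest bid sets the price
    return winner, price

# Made-up bids; with second pricing, overbidding or underbidding can't help you.
winner, price = vickrey_auction({"alice": 120, "bob": 100, "carol": 90})
print(winner, price)  # alice wins and pays 100, not her own bid of 120
```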

[1] Regarding intensification: my impression so far is that there is nothing especially natural about land price increase as a promoter of density. It doesn't do the job as fast as we would like it to. The benefits of density go to the commons. Those common benefits of density correlate with the price of the individual dense building, but don't seem to be measured accurately by it.


Another Big Idea is "Average Utilitarianism is more true than Sum Utilitarianism", but I'm not sure whether the world is ready to talk about that. I don't think I've digested it fully yet. I'm not sure that rock needs to be turned over...

I also have a big idea about the evolutionary telos of paraphilias, but it's very hard to talk about.


Oh, this might be important: I studied logic for four years so that I could tell you that there are no fundamental truths, and that all math and logic just consists of a machine that we evolved and maintained because it happened to work. There's no transcendent beauty at the bottom of it all; it's all generally kind of ugly even after we've cut the ugliest parts away, and there may be better alternatives (consider CDT and FDT for an example of seemingly fundamental elegance being deposed).

A lot of these are quite controversial:

  • AI alignment has failed once before, we are the product
  • The technical obstacles in the way of AGI are our most valuable resource right now, and we're rapidly depleting them
  • A future without superintelligent AI is also dystopian by default (after reading that last one, being turned into paperclips doesn't sound so bad to me after all)
  • AI or Moloch, the world will eventually be taken over by something because there is a world to be taken over
  • We were just lucky nuclear weapons didn't turn out to be an existential threat; we might not be so lucky in the future

  • The (observable) universe is tiny on the logarithmic scale
  • Exploration of outer space turned out way less interesting than I imagined
  • Exploration of cyberspace turned out way more interesting than I imagined
  • Some god-like powers are easier to achieve than flying cars
  • The term "nanotechnology" indicates how primitive the field really is; we don't call any of our other technologies "centitechnology"

  • Human-level intelligence is the lower bound for a technological species
  • Modern humans are surprisingly altruistic given our population size; ours is the age of disequilibrium
  • Technological progress never repeats itself, and so neither does history
  • Every social progress is just technological progress in disguise
  • The effect of the bloodiest conflicts of the 20th century on world population is... none whatsoever

  • Schools teach too much, not too little
  • The education system is actually a selection system
  • Innovation, like oil, is a very limited resource; some processes just can't be parallelized
  • The deafening silence around death by aging

One thing I'm thinking about these days:

Oftentimes, when people make decisions, they don't explicitly model how they themselves will respond to the outcomes; they instead use simplified models of themselves to quickly make guesses about the things that they like. These guesses can often act as placebos which turn the expected benefits of a given decision into actual benefits solely by virtue of the expectation. In short, if you have the psychological architecture that makes it physically feasible to experience a benefit, you can hack your simplified models of yourself to make yourself get that benefit.

This isn't quite a dark art of rationality since it does not need to actually hurt your epistemology but it does leverage the possibility of changing who you are (or more explicitly, changing who you are by changing who you think you are). I'm currently using this as a way to make myself into the kind of person who is a writer.


Humans prefer mutual information. Further, I suspect that this is the same mechanism that drives our desire to reproduce.

The core of my intuition is that we instinctively want to propagate our genetic information, and also seem to want to propagate our cultural information (e.g. the notion of not being able to raise my daughter fills me with horror). If this is true of both kinds of information, it probably shares a cause.

This seems to have explanatory power for a lot of things.

  • Why do people continue to talk when they have nothing to say, or spend time listening to things that make them angry or afraid? Because there are intrinsic rewards for speaking and for listening, regardless of content. These things lead to shared information the same way sex leads to children.
  • Why do people make poetry and music? Because this is a bundle of their cultural information propagating in the world. I think the metaphor about the artwork being the artist's child should be taken completely literally.
  • Why do people teach? A pretty good description of teaching is mutualizing information.

This quickly condensed into considering how important shared experiences are, and therefore also coordinated groups. This is because actions generate shared experiences, which contain a lot of mutual information. Areas of investigation for this include military training, asabiyah, and ritual.

What I haven't done yet is really link this to what is happening in the brain; naively it seems consistent at first blush with the predictive processing model, and also seems like maybe-possibly Fristonian free energy applied to other humans.
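
For the formal quantity being borrowed here: the mutual information between two discrete variables can be computed directly from their joint distribution. A minimal sketch, with a made-up joint distribution standing in for two people's views of a shared experience:

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(X;Y) in bits, given a joint distribution {(x, y): probability}."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(
        p * math.log2(p / (px[x] * py[y]))
        for (x, y), p in joint.items()
        if p > 0
    )

# Made-up joint distribution: two binary variables that usually agree.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(mutual_information(joint))  # ~0.278 bits of shared information
```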

We experience and learn so many things over years. However, our memories may fail us. They fail to recall a relevant fact that could have been very useful for accomplishing an immediate task at hand: e.g., my car tire has punctured on a busy street, but I cannot recall how to change it -- though I remember reading about it in the manual.

It is likely that the memory is still alive somewhere in a deep corner of my brain. In this case, I may be able to think hard and push myself to remember it. Such a process is bound to be slow, and people on the street would yell at me for blocking it!

Sometimes our memories fail us "silently". We don't know that somewhere in our brain is information we can bring to bear on accomplishing a task on hand. What if I don't even know that I have read a manual on changing car tires?!

Long term memory accessibility is thus an issue.

Now our short-term memory is also very, very limited (4-7 chunks at a time). In fact, the short cache of working memory might be a barrier to intellectual progress. It is then very crucial to inject relevant information into this limited working-memory space if we are to give a task our best, most intelligent shot.

Thus, I think about memory systems that can artificially augment the brain. I think of them from the point of view of storing more information and indexing it better, and of faster and more relevant retrieval.
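
As one toy way to picture "indexing it better": an inverted index over stored notes, so that a cue like "tire" pulls up the relevant memory immediately. The notes below are made up for illustration:

```python
from collections import defaultdict

class MemoryIndex:
    """Toy inverted index: maps each word to the notes that contain it."""

    def __init__(self):
        self.notes = []
        self.index = defaultdict(set)

    def store(self, text):
        note_id = len(self.notes)
        self.notes.append(text)
        for word in text.lower().split():
            self.index[word].add(note_id)

    def recall(self, *cues):
        """Return every stored note matching all cue words."""
        matches = set.intersection(*(self.index[c.lower()] for c in cues))
        return [self.notes[i] for i in sorted(matches)]

# Made-up notes standing in for long-term memories.
mem = MemoryIndex()
mem.store("loosen the lug nuts before jacking up the car to change a tire")
mem.store("the manual says to check tire pressure monthly")
print(mem.recall("tire", "change"))  # finds the tire-changing note
```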

I think of them as importable and exportable -- I can share them with my friends (and learn how to change tires instantaneously). A Pensieve-like memory bank.

I thus think of "digital memories" that augment our brain's comparatively superior, creative computational processes. That is my (current) big idea.

I tend to keep three in mind and in rotation, as they move from "under inspection" to "done for now" and all the gradations between. In the past, this has included the likes of:

  • the validity of reverse chronological time travel ("done for now" back in 2010)
  • predictability of interpersonal interactions ("done for now" as of Spring 2017)
  • how to reject advice, while not alienating the caring individuals that provide advice (on hold)

Currently I'm working on:

  • How and Why are people presenting themselves as so divided in current conversations?
    • Yes, Politics is the Mind Killer. Still, there are people that I think I want in my life that are all falling prey to this beast, and I want to save them.
    • Maybe there's a Sequence to talk me out of it?
  • The Mathematical Legitimacy of Machine Learning (convex optimization of randomly initialized matrices whose products fit curves in n-dimensional space)
    • Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.
    • While not a mathematician myself, I have spoken with a few mathematicians who've validated my opinions (after examining the literature), and am currently seeking training to become such.
  • How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or, more generally, non-self-bettering) techniques

The last of those has been in the works for the longest, and current evidence (anecdotal and journal studies) suggests to me that those of us researching "apathy for self-betterment" are looking too high up the abstraction ladder. So it's time to dig a little deeper.

Nearly all education should be funded by income sharing agreements.

E1 = student's expected income without the credential / training (for the next n years).

E2 = student's expected income with the credential / training (over the next n years). Machine learning can estimate this separately for each student.

C = cost of the program

R = percent of income above E1 that the student must pay back = C/(E2-E1), so that total repayment over the n years covers the program's cost.

Give students a list of majors / courses / coaches / apprenticeships, etc. with an estimate of expected income E2 and rate of repayment R.
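
Under these definitions, a minimal sketch of the repayment-rate calculation; the incomes and cost below are made up for illustration:

```python
def repayment_rate(e1, e2, cost):
    """Fraction of income above E1 repaid, so that total repayment
    over the n-year window covers the program's cost."""
    gain = e2 - e1
    if gain <= 0:
        raise ValueError("program adds no expected income")
    return cost / gain

# Made-up figures over a 10-year window.
e1 = 350_000    # expected income without the training
e2 = 500_000    # expected income with the training
cost = 30_000   # cost of the program
print(f"R = {repayment_rate(e1, e2, cost):.0%} of income above E1")  # R = 20%
```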

Benefits:

  • This will seamlessly sort students into programs that actually benefit them.
  • Programs that lie or misestimate their own value will be bankrupted (instead of saddling the student with debt). Schools must maximize effectiveness, not merely enrollment (the current model).
  • There would be zero financial barriers to entry for poorer students, which is equivalent to Bernie's "free college", except you get nudged toward training that is actually useful instead of easy or entertaining. Also, this could be achieved without raising taxes one iota.
  • If "n years" is long, then schools will optimize for lifetime earnings, not just "get a job now". This could incentivize schools to invest in lifelong learning, networking, etc.

Obviously, rich students could still pay out of pocket up front (since they are nearly guaranteed a high income, they might not want to give a percent away).