Comments

Ok, let's examine a more conservative scenario using solely visual input. If we take 10 megabits/s as the base and deduct a third to account for sleep, we end up with roughly 0.78 petabytes accumulated over 30 years. This translates to approximately 157 trillion tokens over 30 years, or around 5.2 trillion tokens annually. Interestingly, even under these conservative assumptions, the estimate surpasses the training data of LLMs (~1 trillion tokens) by about two orders of magnitude.
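For concreteness, here's a small Python sketch that reproduces this arithmetic under the same assumptions (10 Mbit/s of visual input while awake, a third of each day asleep, 5 bytes per token):

    # Conservative, visual-only estimate (assumptions as stated above).
    SECONDS_IN_30_YEARS = 30 * 365 * 24 * 3600          # ~946,080,000 s
    visual_bytes_per_s = 10e6 / 8                       # 10 Mbit/s -> 1.25 MB/s
    awake_fraction = 2 / 3                              # deduct sleep time
    bytes_per_token = 5                                 # ~5 characters per token

    total_bytes = visual_bytes_per_s * awake_fraction * SECONDS_IN_30_YEARS
    total_tokens = total_bytes / bytes_per_token

    print(f"{total_bytes / 1e15:.2f} PB")                         # ~0.79 PB
    print(f"{total_tokens / 1e12:.1f} T tokens")                  # ~157.7 T tokens
    print(f"{total_tokens / (30 * 1e12):.1f} T tokens per year")  # ~5.3 T tokens per year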

Where's Nick Bostrom? I've been wondering about this. I haven't seen anything published by him recently, or heard him talk, besides that small New York Times piece. It would be great to hear his in-depth take on this recent AI progress.

You mention "I would point out that your calculations are based on the incident data our senses pick up, whereas what we learn is based on the information received by our brain. Almost all of the incident data is thrown away much closer to the source."

Wouldn't this be similar to how a neural network "disregards" training data it has already seen? I.e., if it has already learned that pattern, there is (almost) no gradient, so the loss wouldn't go down. Maybe there's a mechanism missing from current neural nets' online training that would increase training efficiency by recognizing redundant data and skipping the forward pass altogether. Tesla does this in an engineered manner: they throw away most data at the source and only learn on "surprises"/interventions, i.e. data that generates a gradient.
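To make that concrete, here's a rough, hypothetical PyTorch-style sketch of the kind of mechanism I have in mind (the function, the screening pass, and surprise_threshold are all placeholders I made up, not how Tesla or any production system actually implements this):

    import torch

    def online_update(model, optimizer, loss_fn, batch, surprise_threshold=0.1):
        # Hypothetical sketch: only spend a gradient update on data the model
        # still finds "surprising", i.e. data that would produce a meaningful gradient.
        inputs, targets = batch

        # Screening pass with gradient tracking disabled (no activations kept for backprop).
        with torch.no_grad():
            screening_loss = loss_fn(model(inputs), targets)

        # Pattern already learned: loss is tiny, so the gradient would be near zero; skip it.
        if screening_loss.item() < surprise_threshold:
            return None

        # Full pass with gradients, then the usual update.
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        return loss.item()

This version still pays for a forward pass on every example and only skips the backward pass and update; skipping the forward pass as well, as in the discard-at-source case, would require some cheaper redundancy detector in front of the model.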

I don't really get what you mean by "Not sure how much other preprocessing and discarding of data happens elsewhere, but it doesn't take that many more steps to close the remaining 1.5 OOMs gap." Are you saying that the real number is within about 1.5 orders of magnitude (a factor of ~30) of what I calculated, or that it's 1.5% of what I calculated?
 

I did some calculations with a bunch of assumptions and simplifications, but here's a high-end, back-of-the-envelope estimate of the data and "tokens" a 30-year-old human would have "trained" on (a short script reproducing the arithmetic follows the list):

  • Visual data: 130 million photoreceptor cells firing at 10 Hz = 1.3 Gbits/s = 162.5 MB/s over 30 years (approx. 946,080,000 seconds) = 153.7 Petabytes
  • Auditory data: humans can hear frequencies up to 20,000 Hz, and high-quality audio is sampled at 44.1 kHz, satisfying the Nyquist-Shannon sampling theorem; assuming 16 bits (CD quality) × 2 channels (stereo) = 1.41 Mbits/s = 0.18 MB/s over 30 years = 0.167 Petabytes
  • Tactile data: 4 million touch receptors providing 8 bits/s each (assuming they account for temperature, pressure, pain, hair movement, vibration) = 32 Mbits/s = 4 MB/s over 30 years = 3.78 Petabytes
  • Olfactory data: we can detect up to 1 trillion smells; assuming we process 1 smell every second and each smell is represented as its own piece of data, i.e. log2(1 trillion) ≈ 40 bits/s = 0.000005 MB/s over 30 years = 0.0000047 Petabytes
  • Taste data: 10,000 receptors; assuming a unique identifier for each basic taste (sweet, sour, salty, bitter and umami), log2(5) ≈ 2.3 bits, rounded up to 3 bits = 30 kbits/s = 0.00375 MB/s over 30 years = 0.0035 Petabytes

    This amounts to 153.7 + 0.167 + 3.78 + 0.0000047 + 0.0035 ≈ 157.7 Petabytes; assuming 5 bytes per token (i.e. 5 characters), this comes to roughly 31,500 T tokens.

    This is of course a high estimate, and most of this data is clearly highly redundant and compressible, but I wanted a rough upper bound.

    Here's the Google Sheet if anyone wants to copy it or contribute.
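    For anyone who'd rather check the arithmetic in code than in the sheet, here's a small Python script using the same per-sense assumptions listed above:

        # Back-of-the-envelope totals for the high estimate above (same assumptions).
        SECONDS_IN_30_YEARS = 30 * 365 * 24 * 3600               # ~946,080,000 s
        MB = 1e6                                                 # bytes per MB

        rates_mb_per_s = {
            "visual":    130e6 * 10 / 8 / MB,     # 130M photoreceptors @ 10 Hz   -> 162.5 MB/s
            "auditory":  44100 * 16 * 2 / 8 / MB, # 44.1 kHz * 16 bit * 2 ch      -> ~0.18 MB/s
            "tactile":   4e6 * 8 / 8 / MB,        # 4M touch receptors @ 8 bits/s -> 4 MB/s
            "olfactory": 40 / 8 / MB,             # log2(1e12) ~ 40 bits/s        -> 0.000005 MB/s
            "taste":     10_000 * 3 / 8 / MB,     # 10k receptors @ 3 bits/s      -> 0.00375 MB/s
        }

        total_bytes = sum(rate * MB * SECONDS_IN_30_YEARS for rate in rates_mb_per_s.values())
        total_tokens = total_bytes / 5                           # 5 bytes (characters) per token

        print(f"total: {total_bytes / 1e15:.1f} PB")             # ~157.7 PB
        print(f"tokens: {total_tokens / 1e12:,.0f} T")           # ~31,500 T tokens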

I'm curious where you got that "models trained mostly on English text are still pretty good at Spanish". Do you have a reference?

I'm very much aligned with the version of utilitarianism that Bostrom and Ord generally put forth, but a question came up in a conversation regarding this philosophy and its view of sustainability. As a thought experiment: what would be consistent with this philosophy if we discovered that a very clear way to minimize existential risk due to X required the genocide of half, or some other significant subset, of the population?

Here we are now: what would you say about the progress of C. elegans emulation in general, and about your particular approach?