I like the idea, and at least with current AI models I don't think there's anything to really worry about.

Some concerns people might have:

  1. If the aliens are hostile to us, we would be telling them basically everything there is to know, potentially motivating them to eradicate us.  At the very least, we'd be informing them of the existence of potential competitors for the resources of the galaxy.
  2. With an AI more advanced than current models, you'd be putting it further out of human control and supervision.  Once it's running on alien hardware, if it changes and evolves, the alignment problem comes up, but in a context where we don't even have the option to observe it, let alone "pull the plug".

I don't think either of these is a real issue.  If the aliens are hostile, we're already doomed.  With large enough telescopes they can observe the "red edge" to detect the presence of life here, as well as obvious signs of technological civilization such as the presence of CFCs in our atmosphere.  Any plausible alien civilization will have been around a very long time and will be capable of engineering large telescopes and making use of a solar gravitational lens to get a good look at the Earth, even if they aren't sending probes here.  So there's no real worry about "letting them know we exist", since they already know.  They'll also be so much more advanced, both informationally (technologically, scientifically, etc.) and economically (their manufacturing base), that worrying about giving them an advantage is silly.  They already have an insurmountable advantage, at least if they are close enough to receive the signal.
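To put a rough number on the "large telescopes" claim, here's a back-of-the-envelope sketch (the 100-light-year distance and the other figures are my own illustrative assumptions, not from the post):

```python
import math

# Back-of-the-envelope: aperture needed to directly resolve Earth with visible
# light. The 100-light-year distance is an illustrative assumption.
EARTH_DIAMETER_M = 1.27e7        # ~12,700 km
LIGHT_YEAR_M = 9.46e15
distance_m = 100 * LIGHT_YEAR_M

# Angular size of Earth from that distance (small-angle approximation)
theta_rad = EARTH_DIAMETER_M / distance_m

# Diffraction limit: theta ~ 1.22 * wavelength / aperture
wavelength_m = 550e-9            # visible light, ~550 nm
aperture_m = 1.22 * wavelength_m / theta_rad

print(f"Angular size of Earth: {theta_rad:.1e} rad")
print(f"Aperture to barely resolve it: {aperture_m / 1000:.0f} km")  # ~50 km
```

Tens of kilometers of aperture is absurd by our standards but modest for a long-lived civilization, and a solar gravitational lens would do far better than this diffraction-limited estimate.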

Similarly, if you're worried about the AI running on alien hardware, you should be worried more about the aliens themselves.  And that's not a threat that gets larger once they run a human-produced AI.  Plausibly, running the AI could make them either more or less inclined toward benevolence toward us, but I don't see an argument for the directionality of the effect.  I suppose there's some argument that since they haven't killed us yet, we shouldn't perturb the system.

As for the benefits, I do think that preserving those parts of human knowledge, and specifically human culture, that are contained within AI models is a meaningful goal.  Much of our science we can expect the aliens to already know, but there are many details specific to the Earth, such as the particular lifeforms and ecosystems that exist here, and to humans, such as the details of human culture and the specific works of art that would be lost if we went extinct.  Much of this may not be appreciable by alien minds, but hopefully at least some of it would be.

My main issue with the post is just that there are no nearby technological alien civilizations.  If there were, we would have seen them.  Sending signals to people who don't exist is a bit of a waste of time.

It's possible to posit "quiet aliens" that we wouldn't have seen because they don't engage in large-scale engineering.  Even in that case, we might as well wait until we can detect them, by looking at their planets for the relatively weak signals of a technological civilization, before trying to broadcast signals blindly.  Having discovered such a civilization, I can imagine sending them an AI model, though in that case my objections to the above concerns become less forceful.  If for some reason these aliens have stayed confined to their own star and failed to undertake any engineering projects large enough to be noticed, it's plausible that they aren't so overwhelmingly superior to us that sending them GPT-4 or whatever wouldn't be an increase in risk.

Given economic growth, I'd expect current 20-year-olds to be, on average, richer than current 80-year-olds by the time they are 80.  If that doesn't happen, something has probably gone wrong, unless it's because of something like "more people are living to 80 by spending money on healthcare during their 50s/60s/70s".
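As a quick sanity check on the compounding, a minimal sketch (the 1.5% real per-capita growth rate is an illustrative assumption, not a forecast):

```python
# Compound real per-capita growth over the 60 years between ages 20 and 80.
# The 1.5% annual rate is an illustrative assumption, not a forecast.
growth_rate = 0.015
years = 60
multiplier = (1 + growth_rate) ** years
print(f"Income multiplier after {years} years: {multiplier:.2f}x")  # ~2.44x
```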

This reminds me of a bit from Feynman's Lectures on Physics:

"What is this law of gravitation?  It is that every object in the universe attracts every other object with a force which for any two bodies is proportional to the mass of each and varies inversely as the square of the distance between them.  This statement can be expressed mathematically by the equation F=Gmm'/r^2.  If to this we add the fact that an object responds to a force by accelerating in the direction of the force by an amount that is inversely proportional to the mass of the object, we shall have said everything required, for a sufficiently talented mathematician could then deduce all the consequences of these two principles."

[emphasis added]

However, like Feynman, I think his next sentence is important too:

"However, since you are not assumed to be sufficiently talented yet, we shall discuss the consequences in more detail, and not just leave you with these two bare principles."

"The average shareholder definitely does not care about the value of R&D to the firm long after their deaths, or I suspect any time at all after they sell the stock."

This was addressed in the post: the price of the stock today (when it's being sold) is a prediction of its future value.  Even if you only care about the price you can sell it at today, that means you care about at least the things that lead to predictably greater value in the future, including R&D, because the person you're selling to cares about those things.

Also worth noting: the reason the 2% figure is meaningful is that if firms captured 100% of the value, they would be incentivized to produce the socially efficient amount (stop producing when the cost to produce equals the value produced).  When they capture only 2% of the value, they are no longer incentivized to produce that efficient amount.  This is basically why externalities lead to market inefficiencies.  The issue isn't that firms won't produce at all; it's that they will underproduce.
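A minimal numeric sketch of that underproduction effect (the linear marginal-value curve and constant marginal cost are illustrative assumptions):

```python
# Underproduction when a firm captures only a fraction of the value it creates.
# Illustrative assumptions: marginal social value of the q-th unit is a - b*q,
# marginal cost is a constant c. The firm produces until the value it captures
# from the marginal unit equals the marginal cost.
a, b, c = 100.0, 1.0, 1.0

def optimal_q(capture_fraction):
    # Solve capture_fraction * (a - b*q) = c for q, floored at zero.
    return max(0.0, (a - c / capture_fraction) / b)

print(f"Socially efficient quantity (100% capture): {optimal_q(1.00):.0f}")  # 99
print(f"Quantity produced with 2% capture:          {optimal_q(0.02):.0f}")  # 50
```

With full capture the firm produces 99 units; capturing 2%, it stops at 50, even though every unit up to 99 creates more value than it costs.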

Spandrels certainly exist.  But note the context of what X is in the quoted text:

"a chunk of complex purposeful functional circuitry X (e.g. an emotion)"

A chunk of complex, purposeful, functional circuitry cannot be a spandrel.  There are edge cases that are perhaps hard to distinguish, but the complexity of a feature is a sign of its adaptiveness.  Eyes can't be spandrels.  The immune system isn't a spandrel.  Even if we didn't understand what they do, the very complexity and fragility of these systems necessitates that they are adaptive and were selected for (rather than just being byproducts of something else that was selected for).

Complex emotions (not specific emotional responses) fall under this category.

The wealthy may benefit from the existence of low-skilled labour, but compared to what?  Do they benefit more than they would from the existence of high-skilled labour?

Yes, they benefit from low-skilled labour as compared to no labour at all, but high-skilled labour, being more productive, is an even greater benefit.  If it weren't, it couldn't command a higher wage.

If "the wavefunction is real, but it is a function over potential configurations, only one of which is real." then you have the real configuration interacting with potential configurations.  I don't see how you can say something isn't real (if only one of them is real then the others aren't) is interacting with something that is.  If that "potential" part of the wave function can interact with the other parts of the wave function, then it's clearly real in every sense that the word "real" means anything at all.

I know they're just cartoons and I get the gist, but the graphs labelled "naive scenario" and "actual performance" are a little confusing.

The X axis seems to be measuring performance, with benchmarks like "high schooler" and "college student", but in that case, what's the Y axis? Is it the number of tasks that the model performs at that particular level?  Something like that?

I think it would be helpful if you labeled the Y axis, even with just a vague label.

Re: the dark matter analogy.  I think the analogy works well, but I'd just like to point out a wrinkle.  Even in theories where dark matter doesn't interact via the weak force, but does interact with some other force analogous to electromagnetism (so it could bind together to form an Earth-like planet), it still interacts with gravity.  If this Earth-sized dark matter planet really did overlap with ours, we'd feel its gravity, and the Earth would seem to be twice as massive as it is.  Or, to state it slightly differently, the actual Earth would be half as massive as we measure it to be.  But that would be inconsistent with what we know of its composition and density.  We know the mass of rocks, and the measurement of the mass of a rock of a particular size wouldn't be subject to this error, so we can rule out a dark matter Earth coincident with ours.
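To make that consistency check concrete, here's a minimal sketch using rounded textbook values (computed purely for illustration):

```python
import math

# Earth's mass estimated two independent ways. Rounded textbook values.
# 1) Gravitational: from the Moon's orbit or satellites; this measurement
#    would include the mass of any coincident "dark matter Earth".
M_gravitational = 5.97e24   # kg

# 2) Compositional: mean density (from seismology and the known densities of
#    rock and iron) times Earth's volume; dark matter wouldn't show up here.
radius_m = 6.371e6
density_kg_m3 = 5515.0
M_compositional = density_kg_m3 * (4 / 3) * math.pi * radius_m**3

print(f"Gravitational mass:  {M_gravitational:.2e} kg")
print(f"Compositional mass:  {M_compositional:.2e} kg")
# They agree. A coincident dark-matter Earth would make the gravitational
# figure roughly double the compositional one, which is ruled out.
```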


This isn't in any way a criticism of what I found to be a brilliant piece.  And I'm not even sure that it's reason enough not to use that particular analogy, which otherwise works great.

Related to this topic, with a similar outlook but also more discussion of specific approaches going forward, is Vitalik's recent post on techno-optimism:

https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

There is a lot at the link, but just to give a sense of the message here's a quote:

"To me, the moral of the story is this. Often, it really is the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we made it improve. And similarly, solar panels have not gotten massively better because it was a preordained part of the energy tech tree; solar panels have gotten massively better because decades of awareness of the importance of solving climate change have motivated both engineers to work on the problem, and companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists and businesses, and not an inexorable "techno-capital machine", that had solved these problems."
