Review

I set out to answer a simple question: How much energy does it take to make solar panels? A quick DuckDuckGo search led me to this website [archive], which says:

it would cost about 200kWh of energy to produce a 100-watt panel

But something about this website seemed "off". I suspected that it was an AI-generated SEO page designed specifically to appear in search results for this question. If so, then that gives me reason to doubt the truth of the answer. I checked their About us page [archive] and my suspicions grew.

The page lists four names: "Elliot Bailey, Brad Wilson, Daniel Morgan, Joe Ross", which sound like they were selected from a list of the most common first and last names. The portraits also look a lot like the output of thispersondoesnotexist.com. (What kind of strange jacket is "Elliot Bailey" wearing?) Furthermore, if I search Google Images for them, I find no photos of these people except the exact same ones from that website. One would think, if these people are leading solar energy magnates, that at least one picture of at least one of them from another angle would exist.

The page also gives the address "1043 Garland Ave, San Jose, CA". When I look this up on Google Maps, I find a tiny strip mall containing a tattoo parlor, a ballet studio, a food bank, and a martial arts school - but no sign of a solar panel manufacturer. And if I go to their supposed LinkedIn profiles, I get three pages that you need to sign in to view, and one "Profile Not Found".

So, I'm now pretty convinced that "Sol Voltaics" is not actually a real company and that none of these people actually exist. But what incentive could someone have had to set up such a deception? Are they trying to sell me something?

I go to their Products page [archive]. Interestingly, it seems like they don't actually sell any of their own products. Instead, the page consists of affiliate links to products sold by other companies: Bluetti, Anker, Rich Solar, etc. Are these also "fake" companies? I don't have the time or wherewithal to look into it. But at least it seems I can actually buy products from them if I wanted to, which is more than I can say for "Sol Voltaics".

My best guess is that "solvoltaics.com" is really just an elaborate marketing campaign for those companies. I imagine some savvy hustler approached them with a proposition of driving clicks to their websites in exchange for a cut of ensuing sales, and then to this end used AI to crank out a search-engine-optimized website in an afternoon. (Or maybe this "savvy hustler" isn't a real person either, and the entire process was initiated by an AI? I don't think we're at that level yet, but perhaps we will be soon.)

Which all brings us back to the initial question: Can the statistic provided on that website be trusted?

it would cost about 200kWh of energy to produce a 100-watt panel

There are two conflicting considerations:

  • The website's content was generated by a process optimizing for clicks, which is indifferent to truth - if "200kWh" had been replaced with "50kWh", the search engine result would have ranked just as highly. And it's needlessly costly to design an AI that cares about truth when its goal is just as easily achieved by a system that makes up numbers at random.
  • All else equal, AIs are slightly more likely to generate true facts than false ones, because they're trained on text written by humans, of whom at least some inherently value truth-telling, while the rest generally don't have an incentive to lie about specific facts like the amount of energy it takes to produce a solar panel.

Overall I would give around 60% credence that the statement is correct. The mere fact that it's presented in the context of a sketchy-looking website isn't itself proof that it's false, but if I were the Secretary of Energy I would certainly not base important policy decisions on this information. (And yet, one wonders if this kind of thing has already happened...)
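
For what it's worth, here is one way to make that sort of gut estimate explicit. This is only a sketch: the prior odds and likelihood ratios below are made up for illustration (they don't come from the post or from any data), chosen so that the two considerations above combine, in odds form, to something near 60%.

```python
# Illustrative only: made-up numbers showing how the two considerations above
# can be combined with Bayes' rule in odds form.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Assumed prior: before weighing anything, treat "the figure is roughly right"
# and "the figure is wrong / made up" as equally likely (odds 1:1).
odds = 1.0

# Consideration 1: the page was optimized for clicks, not truth, so ranking
# well in search is treated as no evidence either way (likelihood ratio 1).
odds = update_odds(odds, 1.0)

# Consideration 2 (assumed strength): generators trained on mostly-honest human
# text are somewhat more likely to emit true figures than false ones - modeled
# here as a modest 1.5:1 likelihood ratio in favor of the figure being right.
odds = update_odds(odds, 1.5)

credence = odds / (1 + odds)
print(f"credence that the 200 kWh figure is roughly right: {credence:.2f}")  # 0.60
```

Swap in different ratios and the number moves accordingly; the point is only that the structure of the estimate becomes explicit.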

This ruse was easily unmasked, but it could've been made much more convincing with just a little more effort. Current image-generating AI can create pictures of people from multiple angles, wearing different clothes, or in groups. They could've given the address of a large generic office building that gives no hints as to what goes on inside. They could've generated detailed LinkedIn profiles with employment histories pointing to other, equally fictitious companies. Perhaps they could even have been featured in the news, on a website spun up for this purpose. How much of this is flying under the radar - or will, in the coming years?

At this point I'm fully expecting a commenter to come forward saying "Actually I've met Elliot Bailey - he's a really smart guy." But I would take this more as evidence that said comment is itself AI-generated than as evidence that Elliot Bailey, CEO & Chief Editor of Sol Voltaics, is a real flesh-and-blood member of the species Homo sapiens. I won't believe anything unless I see it with my own two eyes.

The Information Age kicked off with such promise! But now I have no better knowledge of what's going on in the world than did a medieval villager, hearing tales of faraway magic.

7 comments
gjm

The "people" listed on the Sol Voltaics "about" page all have links to "their" LinkedIn "pages". All of them 404.

I go to their Products page [archive]. Interestingly, it seems like they don't actually sell any of their own products. Instead, the page consists of affiliate links to products sold by other companies: Bluetti, Anker, Rich Solar, etc. Are these also "fake" companies? I don't have the time or wherewithal to look into it. But at least it seems I can actually buy products from them if I wanted to, which is more than I can say for "Sol Voltaics".

Most websites have affiliate programs. The web is full of affiliate marketers who look at which keywords rank highly on Google and build websites that funnel people who come in reading articles toward affiliate products.

While AI might contribute to this problem, archive.org suggests that a good portion of that page was written before the age of ChatGPT. There's a good chance it was written by an Indian writer hired on Fiverr and tasked with producing an article that contains certain keywords.

There's a lot of money in affiliate marketing. 

"Early in the Reticulum[Internet] -thousands of years ago— it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information," Sammann said. 

"Crap, you once called it," I reminded him.

"Yes-a technical term..."

...

"As a tactic for planting misinformation in the enemy’s reticules[webpages/webservers], you mean," Osa said. "This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R." 

Source: Anathem (2009), transcription via Redditors who are fans of it.

See also: bogons and rampant orphan botnet ecologies.

(I doubt Neal Stephenson will ever do this, but I would love to see a sequel, with an Avout consultant to a small team of Ita who are experimentally trying to build an AI capable of Procian Rhetoric and/or Incanting, that might be related to a future journey, where a Polycosmic Praxis is deployed as a fallback defense against whoever they visit next, since anyone even further downwick (more "Hylean"?) than the Arbrans themselves will presumably be somehow "better" at Polycosmic Praxis!

...It seems like it would be really really hard to finesse, but if Stephenson is willing and able to pull this off, I'm sure it would be a joy to read!)

All else equal, AIs are slightly more likely to generate true facts than false ones

AIs are much more likely to generate false facts than true ones.

For example, AI-generated references to sources are almost invariably non-existent. Case in point: those LinkedIn links.

Bret Devereaux had a post about using ChatGPT to write an essay on a historical subject. The result looked superficially like such an essay, but it was useless.

Given the likely genesis of the Sol Voltaics web page, reading it should produce no update about the energy cost of making a solar panel.

All else equal, AIs are slightly more likely to generate true facts than false ones

AIs are much more likely to generate false facts than true ones.

I think there might be some English ambiguity here. Suppose there are 81 names that the AI might come up with when writing the sentence "$NAME invented plastic", and maybe the correct name has a 20% chance of being picked, and each of the incorrect names has a 1% chance of being picked. Then, as the quick check after the list illustrates, it's simultaneously true that:

  1. The correct name is 20x as likely to be picked as any individual incorrect name.
  2. It is much more likely that the name picked will be incorrect than that it will be correct.
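
To make the arithmetic concrete, here is a quick check of both statements in Python. The 81 names, 20%, and 1% figures are just the hypothetical ones from this example, not measurements of any real model:

```python
# Hypothetical distribution from the example above:
# 1 correct name at 20%, and 80 incorrect names at 1% each.
p_correct = 0.20
p_each_incorrect = 0.01
n_incorrect = 80

# Sanity check: the probabilities sum to 1.
assert abs(p_correct + n_incorrect * p_each_incorrect - 1.0) < 1e-9

# Statement 1: the correct name is 20x as likely as any individual incorrect name.
print(p_correct / p_each_incorrect)    # -> 20.0

# Statement 2: an incorrect name (of some kind) is much more likely than the correct one.
print(n_incorrect * p_each_incorrect)  # -> 0.8, i.e. an 80% chance the generated fact is wrong
print(p_correct)                       # -> 0.2, i.e. a 20% chance it is right
```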

Are you taking into account the simulated exams? It doesn't look like it mostly generates false facts.