Book 3 of the Sequences Highlights

While beliefs are subjective, that doesn't mean one gets to choose one's beliefs willy-nilly. There are laws that in principle determine the correct belief given the evidence, and it is to such beliefs that we should aspire.

Recent Discussion

One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

“Did you not store up any food during the summer?” the ants ask.

“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”

The ants, disgusted, turn away and go back to work.


One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.

“Did you not store up any food during the summer?” the ants ask.

“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”

The ants are sympathetic. “We wish we could help you”, they say, “but it sets up

...

There needs to be a little more emphasis on the fact that, as a result of their deal, some measure of the grasshopper will get to sing again along with the other minds who blossom in the summer. The paragraph simply doesn't convey that. The impression I got was more that, as a result of their deal, the grasshopper was consumed more quickly and less painfully and then merely remembered. I don't think that was your intention?

Raemon (3 points, 2h):
This sure hits me in some kind of feels, though I'm kinda confused about it.
Richard_Ngo (2 points, 4h):
Artifact of cross-posting from my blog.
parafactual (1 point, 4h):
I assumed as much, but I'm curious as to what the artifact was specifically.

Yann LeCun is Chief AI Scientist at Meta.

This week, Yann engaged with Eliezer Yudkowsky on Twitter, doubling down on Yann’s position that it is dangerously irresponsible to talk about smarter-than-human AI as an existential threat to humanity.

I haven’t seen anyone else preserve and format the transcript of that discussion, so I am doing that here, then I offer brief commentary.

IPFConline: Top Meta Scientist Yann LeCun Quietly Plotting “Autonomous” #AI Models This is as cool as it is frightening. (Provides link)

Yann LeCun: Describing my vision for AI as a “quiet plot” is funny, given that I have published a 60-page paper on it with numerous talks, posts, tweets… The “frightening” part is simply wrong, since the architecture I propose is a way to guarantee that AI

...

4 seems like an unreasonable assumption. From Zuck’s perspective his chief AI scientist, whom he trusts, shares milder views of AI risks than some unknown people with no credentials (again from his perspective).

A few weeks ago, after the community meeting with the Department of Children and Families, I decided to run a survey to learn more about the range of ages at which people generally thought kids might be ready to do various activities without supervision. Stay home for a few hours? Play at the park alone? Take public transit? I put something together on Google Forms and shared it on Facebook and over email. Several friends shared it further, and over the next two weeks it gathered 219 responses. In reading the following, please remember that this is not a representative sample of anything: it heavily oversamples people geographically and socially similar to me!

The main part of the survey was a series of questions about scenarios ("play in an unfenced backyard", "cross a medium-traffic street", "spend...

When I write “China”, I refer to the political and economic entity, the People’s Republic of China, founded in 1949.

Leading US AI companies are currently rushing towards developing artificial general intelligence (AGI) by building and training increasingly powerful large language models (LLMs), as well as building architecture on top of them. This is frequently framed in terms of a great power competition for technological supremacy between the US and China.

However, China has neither the resources nor any interest in competing with the US in developing AGI primarily via scaling LLMs.

In brief, China does not compete with the US in developing AGI via LLMs because of:

  1. Resources and technology: China does not have access to the computational resources[1] (compute, here specifically data centre-grade GPUs) needed for
...

The temporary disappearance of Jack Ma in 2020 when the CCP decided that his company Alibaba had become too powerful is another cautionary tale for Chinese tech CEOs to not challenge the CCP.

I think Jack Ma's disappearance had as much to do with Alibaba being powerful as it did with a speech he gave critiquing the CCP's regulatory policy.

There are other equally sized and influential companies in China (or even bigger ones, such as Tencent) whose founders didn't disappear; the main difference is their deference to Beijing.

sanxiyn (1 point, 1h):
I note that this is how Falcon [https://falconllm.tii.ae/] from Abu Dhabi was trained. To quote:

Summary: Yudkowsky argues that an unaligned AI will figure out a way to create self-replicating nanobots, and that merely having internet access is enough to bring them into existence. Because of this, it can very quickly replace all human dependencies for its existence and expansion, and thus pursue an unaligned goal, e.g. making paperclips, which would most likely end in the extinction of humanity.

Below, however, I will explain why I think this description massively underestimates the difficulty of creating self-replicating nanobots (even assuming they are physically possible), which requires focused research in the physical domain and is not possible today without the involvement of top-tier human-run labs.

Why does it matter? Some of the assumptions of pessimistic AI alignment researchers, especially Yudkowsky's, rest fundamentally on the fact that...

When you describe the "emailing protein sequences -> nanotech" route, are you imagining an AGI with computers on which it can run code (like simulations)?  Or do you claim that the AGI could design the protein sequences without writing simulations, by simply thinking about it "in its head"?

Some observations:

  1. ML-relevant hardware supply is bottlenecked at several points.
  2. One company, NVIDIA, is currently responsible for most purchasable hardware.[1]
  3. NVIDIA already implements driver licensing to force data center customers to buy into the more expensive product line.[2]
  4. NVIDIA would likely not oppose even onerous regulation on the use of its ML hardware if it gives them more of a competitive moat.

Where can you stick a regulatory lever to prevent improper use of GPUs at scale?

Here's one option![3]

Align greed with oversight

  1. Buff up driver licensing with some form of hardware authentication so that only appropriately signed drivers can run. (NVIDIA may do this already; it would make sense!)
  2. Modify ML-focused products to brick themselves if they do not receive a periodic green light signal from a regulatory source with proper authentication.
  3. Require chain of
...
GoteNoSente (5 points, 2h):
A hardware protection mechanism that needs to confirm permission to run by periodically dialing home would, even if restricted to large GPU installations, brick any large scientific computing system or NN deployment that needs to be air-gapped (e.g. because it deals with sensitive personal data, particularly sensitive commercial secrets, or classified data). Such regulation also gives whoever controls the green light a kill switch against any large GPU application that runs critical infrastructure. Both points would severely damage national security interests.

On the other hand, the doom scenarios this is supposed to protect against would, at least as of this writing, probably be viewed by most cybersecurity professionals as an example of poor threat modelling (in this case, assuming the adversary is essentially almighty and that everything they do will succeed on their first try, whereas anything we try will fail because it is our first try).

In summary, I don't think this would (or should) fly, but obviously I might be wrong. For a point of reference, techniques similar in spirit have been seriously proposed to regulate the use of cryptography (for instance, via adoption of the Clipper chip), but I think it's fair to say they have not been very successful.

Air gaps are not a problem. Public key cryptography is a wonderful thing. Let there be a license file: a signed statement of a hardware ID and the duration for which the license is valid. You need the private key to produce a license file, but the public key can be used to verify it. Publish a license server which verifies license files and can run inside air-gapped networks. Done.

porby (2 points, 1h):
Yup! Probably don't rely on a completely automated system that only works over the internet for those use cases. There are fairly simple (for bureaucratic definitions of simple) workarounds. The driver doesn't actually need to send a message anywhere; it just needs a token. Air-gapped systems can still be given those small cryptographic tokens in a reasonably secure way (if it is possible to use the system in a secure way at all), and for systems where this kind of feature is simply not an option, it's probably worth having a separate regulatory path. I bet NVIDIA would be happy to set up some additional market segmentation at the right price.

The unstated assumption was that the green light would be controlled by US regulatory entities for hardware sold to US entities. Other countries could have their own agencies, and there would need to be international agreements to stop "jailbroken" hardware from becoming the default, but I'm primarily concerned about companies under the influence of the US government and its allies anyway (for now, at least).

I think there's a meaningful difference between attempts to regulate cryptography and regulating large machine learning deployments: consumers will never interact with the regulatory infrastructure, and the negative externalities are extremely small compared to compromised or banned cryptography.

This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset. 

Note: this is a sequel to the original 'League of Defenders of the Storm' scenario, but it uses a different ruleset. You can play that one first if you want, but it's not necessary in order to play this one.

STORY (skippable)

You've been feeling pretty good about your past successes as an esports advisor on Cloud Liquid Gaming, using your Data Science skills to help them optimize their strategies against rival teams.  But recently, you've gotten a very attractive offer from a North American team.

The one constant in the esports scene is...

Yonge (3 points, 5h):
I found this problem late. Could I have an extra day or two please?

Sure, no objections.   In the absence of further requests I'll aim to post the wrapup doc Friday the 9th: I'm fairly busy midweek and might not get around to posting things then.

I've previously argued that nice clothes are good, actually. But this was an informal claim reflecting the fact that, all else equal, nice clothes are better. As elsewhere in life, though, all else is rarely equal. Choosing clothes and designing a wardrobe is a multivariable optimization problem, and it's a problem everyone but nudists is forced to solve, because we must wear something. Most people tackle the problem via intuitions, heuristics & biases, and vibes. But we're aspiring rationalists. We can do better. We can wear optimal clothing, if only we bother to try.

But before we can do better, we must first not do worse. Therefore we must identify what optimal clothing is not. Optimal clothing is not a particular style. You cannot go to the...

I've tried optimizing and making "my personal" wardrobe and so on, but found that I ended up with one or two favorite pieces and disliked the rest. It's like an MLB slugger who hits a homer once a week and strikes out the rest of the time. I've had more success with just finding a "style tribe" I like and dressing like that. It's more of an MLB "singles and doubles" style: a consistently good-enough approach. I think people who wear streetwear look cool, and I like hanging out with them, so I don't mind strangers associating me with them. Now I wear mild variations...

mingyuan (4 points, 3h):
That's fair... but also I want to spread the word about my optimal dress [https://www.stitchfix.com/product/Kaileigh-Amandine-Knit-Dress/R2PG8MEDP]. While it's only sold new on StitchFix, there are tons available secondhand (and usually cheaper) on Poshmark, and it comes in dozens of fabrics/patterns! (Search 'Kaileigh faux wrap dress' or just 'Kaileigh dress', in your size.) It's obviously not actually 'optimal' but it's really comfortable and looks amazing on a lot of people. I own like fifteen in different patterns and have gifted them to four friends of different complexions and body types, and all of them love them and wear them constantly (even one who never ever wears dresses), and two of them have even gone and bought more. And as a bonus, it's also good as a maternity dress and for breastfeeding! I just love this dress. I get so many compliments on it and I wear it almost every day. And now so do some of my friends!