While beliefs are subjective, that doesn't mean one gets to choose them willy-nilly. There are laws that, in principle, determine the correct belief given the evidence, and it is toward such beliefs that we should aspire.
One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.
“Did you not store up any food during the summer?” the ants ask.
“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”
The ants, disgusted, turn away and go back to work.
One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food.
“Did you not store up any food during the summer?” the ants ask.
“No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.”
The ants are sympathetic. “We wish we could help you”, they say, “but it sets up
...Yann LeCun is Chief AI Scientist at Meta.
This week, Yann engaged with Eliezer Yudkowsky on Twitter, doubling down on Yann’s position that it is dangerously irresponsible to talk about smarter-than-human AI as an existential threat to humanity.
I haven’t seen anyone else preserve and format the transcript of that discussion, so I am doing that here, then I offer brief commentary.
...IPFConline: Top Meta Scientist Yann LeCun Quietly Plotting “Autonomous” #AI Models This is as cool as it is frightening. (Provides link)
Yann LeCun: Describing my vision for AI as a “quiet plot” is funny, given that I have published a 60-page paper on it with numerous talks, posts, tweets… The “frightening” part is simply wrong, since the architecture I propose is a way to guarantee that AI
Assumption 4 seems unreasonable. From Zuck’s perspective, his chief AI scientist, whom he trusts, holds milder views of AI risk than some unknown people with no credentials (again, from his perspective).
A few weeks ago after the community meeting with the Department of Children and Families I decided to run a survey to learn more about the range of ages at which people generally thought kids might be ready to do various activities without supervision. Stay home for a few hours? Play at the park alone? Take public transit? I put something together on Google Forms and shared it on Facebook and over email. Several friends shared it further, and over the next two weeks it gathered 219 responses. In reading the following please remember that this is not a representative sample of anything: it's heavily oversampling people geographically and socially similar to me!
The main part of the survey was a series of questions about scenarios ("play in an unfenced backyard", "cross a medium-traffic street", "spend...
When I write “China”, I refer to the political and economic entity, the People’s Republic of China, founded in 1949.
Leading US AI companies are currently rushing towards developing artificial general intelligence (AGI) by building and training increasingly powerful large language models (LLMs), as well as building architecture on top of them. This is frequently framed in terms of a great power competition for technological supremacy between the US and China.
However, China has neither the resources nor the interest to compete with the US in developing AGI primarily via scaling LLMs.
In brief, China does not compete with the US in developing AGI via LLMs because of:
The temporary disappearance of Jack Ma in 2020, when the CCP decided that his company Alibaba had become too powerful, is a cautionary tale warning Chinese tech CEOs not to challenge the CCP.
I think Jack Ma's disappearance had as much to do with Alibaba being powerful as it did with a speech he gave critiquing the CCP's regulatory policy.
There are other equally large and influential companies in China (or even bigger ones, such as Tencent) whose founders didn't disappear; the main difference is their deference to Beijing.
Summary: Yudkowsky argues that an unaligned AI will figure out a way to create self-replicating nanobots, and that merely having internet access is enough to bring them into existence. Because of this, it can very quickly replace all human dependencies for its existence and expansion, and thus pursue an unaligned goal, e.g. making paperclips, which will most likely end in the extinction of humanity.
Below, however, I will explain why I think this account massively underestimates the difficulty of creating self-replicating nanobots (even assuming they are physically possible), which requires focused research in the physical domain and is not possible today without the involvement of top-tier human-run labs.
Why does it matter? Some of the assumptions of pessimistic AI alignment researchers, Yudkowsky especially, rest fundamentally on the claim that...
When you describe the "emailing protein sequences -> nanotech" route, are you imagining an AGI with computers on which it can run code (like simulations)? Or do you claim that the AGI could design the protein sequences without writing simulations, by simply thinking about it "in its head"?
Some observations:
Where can you stick a regulatory lever to prevent improper use of GPUs at scale?
Here's one option![3]
Air gaps are no problem: public-key cryptography is a wonderful thing. Let there be a license file, which is a signed statement of a hardware ID and the duration for which the license is valid. You need the private key to produce a license file, but the public key is enough to verify it. Publish a license server that can verify license files and run inside air-gapped networks. Done.
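A minimal sketch of that scheme, using textbook RSA with tiny demo keys purely for illustration (a real implementation would use Ed25519 or RSA-PSS from a vetted library; the function names here are my own, not from any actual licensing product):

```python
# Sketch of a signed license file: the licensor signs (hardware ID, expiry)
# with a private key; an air-gapped license server verifies with only the
# public key. Textbook RSA with toy numbers -- illustration only, insecure.
import hashlib
import json

# Demo RSA key pair: n = p*q, e is public, d is private.
P, Q = 61, 53
N = P * Q          # 3233
E = 17             # public exponent
D = 2753           # private exponent (E*D = 1 mod phi(N))

def _digest(payload: dict) -> int:
    """Hash the license payload down to an integer mod N."""
    data = json.dumps(payload, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def issue_license(hardware_id: str, valid_until: str) -> dict:
    """Licensor side: sign the payload with the private key D."""
    payload = {"hardware_id": hardware_id, "valid_until": valid_until}
    signature = pow(_digest(payload), D, N)
    return {"payload": payload, "signature": signature}

def verify_license(license_file: dict) -> bool:
    """License server side: check the signature using only (E, N).
    This works inside an air-gapped network -- no callback needed."""
    expected = _digest(license_file["payload"])
    return pow(license_file["signature"], E, N) == expected
```

The point of the design is that the verifier ships with only the public key, so distributing the license server binary leaks nothing: forging a license still requires the private key, which never leaves the licensor.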
This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset.
Note: this is a sequel to the original 'League of Defenders of the Storm' scenario, but it uses a different ruleset. You can play that one first if you like, but it's not necessary in order to play this one.
You've been feeling pretty good about your past successes as an esports advisor for Cloud Liquid Gaming, using your Data Science skills to help them optimize their strategies against rival teams. But recently, you've gotten a very attractive offer from a North American team.
The one constant in the esports scene is...
Sure, no objections. In the absence of further requests I'll aim to post the wrapup doc Friday the 9th: I'm fairly busy midweek and might not get around to posting things then.
I've previously argued that nice clothes are good, actually. But this was an informal claim reflecting the fact that, all else equal, nice clothes are better. As elsewhere in life, though, all else is rarely equal. Choosing clothes and designing a wardrobe is a multivariable optimization problem, and it's a problem everyone but nudists is forced to solve, because we must wear something. Most people tackle the problem via intuitions, heuristics & biases, and vibes. But we're aspiring rationalists. We can do better. We can wear optimal clothing, if only we bother to try.
But before we can do better, we must first not do worse. Therefore we must identify what optimal clothing is not. Optimal clothing is not a particular style. You cannot go to the...
I've tried optimizing and making "my personal" wardrobe and so on, but found that I ended up with one or two favorite pieces and disliked the rest. It's like an MLB slugger who hits a homer once a week and strikes out the rest of the time. I've had more success with just finding a "style tribe" I like and dressing like that. It's more of an MLB "singles and doubles" approach, consistently good enough. I think people who wear streetwear look cool, and I like hanging out with them, so I don't mind strangers associating me with them. Now I wear mild variations...
There needs to be a little more emphasis on the fact that, as a result of their deal, some measure of the grasshopper will get to sing again along with the other minds who blossom in the summer. The paragraph simply doesn't convey that. The impression I got was more like: as a result of their deal, the grasshopper was consumed more quickly and less painfully, and then merely remembered. I don't think that was your intention?