Gauraventh

Comments
Not stepping on bugs
Gauraventh · 2d · 10

I am glad you liked it! I find it hard to know when to post to LW; judging from the downvotes, I take it this post is probably not super relevant to LW users, but I'm nonetheless happy you liked it.

Gauraventh's Shortform
Gauraventh · 1mo · 50

From Meta: https://www.meta.com/superintelligence/

Personal Superintelligence

Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.

It seems clear that in the coming years, AI will improve all our existing systems and enable the creation and discovery of new things that aren't imaginable today. But it is an open question what we will direct superintelligence towards.

In some ways this will be a new era for humanity, but in others it's just a continuation of historical trends. As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.

I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress. But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose.

As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.

Meta's vision is to bring personal superintelligence to everyone. We believe in putting this power in people's hands to direct it towards what they value in their own lives.

This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well.

The intersection of technology and how people live is Meta's focus, and this will only become more important in the future.

If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting. Personal superintelligence that knows us deeply, understands our goals, and can help us achieve them will be by far the most useful. Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices.

We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.

The rest of this decade seems likely to be the decisive period for determining the path this technology will take, and whether superintelligence will be a tool for personal empowerment or a force focused on replacing large swaths of society.

Meta believes strongly in building personal superintelligence that empowers everyone. We have the resources and the expertise to build the massive infrastructure required, and the capability and will to deliver new technology to billions of people across our products. I'm excited to focus Meta's efforts towards building this future.

– Mark

Legal Supervision of Frontier AI Labs is the answer.
Gauraventh · 4mo · 30

Hey, thanks for your comment. I do think this is right. The style of writing on my blog is a lot more bullish and doesn't convey much uncertainty. I appreciate that this isn't in the ethos of LW, so I am going to change how and what I crosspost here. I prefer being bullish because it seems to get a lot more feedback and good questions such as yours.

As for your questions: I intend to write a second part to address them. It is much easier for me to say something is needed than to actually elucidate what it looks like, so I expect to do more work on this and return with a more fleshed-out theory. In the meantime, you may enjoy reading Peter's paper on supervision: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5122871

MATS mentor selection
Gauraventh · 8mo · 10

Wow, my intuition was that it would be really hard to get mentors on board with supervising scholars given the time constraints most senior researchers have, so seeing 87 mentors apply feels wild!

Californians, tell your reps to vote yes on SB 1047!
Gauraventh · 1y · 30

The bill has passed the Appropriations Committee and will now move on to the Assembly floor. Some changes were made to the bill. From the press release:

Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.

Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.

Adjusting legal standards - The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from a "reasonable assurance" standard to a "reasonable care" standard, which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether developers consulted NIST standards in establishing their safety plans, and how their safety plan compares to those of other companies in the industry.

New threshold to protect startups’ ability to fine-tune open sourced models – Established a threshold to determine which fine-tuned models are covered under SB 1047. Only models that were fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million, the model is not covered and the developer doing the fine-tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open sourced models will not be covered and therefore will have no obligations under the bill.

Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.

(Report) Evaluating Taiwan's Tactics to Safeguard its Semiconductor Assets Against a Chinese Invasion
Gauraventh · 2y · 10

Fixed!

Posts

Qualified rights for AI agents · 4 karma · 3d · 1 comment
Not stepping on bugs · 0 karma · 3d · 3 comments
My kids won't be workers · 3 karma · 17d · 0 comments
Legal Supervision of Frontier AI Labs is the answer. · 14 karma · 4mo · 2 comments
Gauraventh's Shortform · 2 karma · 4mo · 3 comments
CultFrisbee · 16 karma · 1y · 3 comments
(Report) Evaluating Taiwan's Tactics to Safeguard its Semiconductor Assets Against a Chinese Invasion · 14 karma · 2y · 5 comments