Comments

nastav · 1y · 10

TL;DR: We must enlist and educate professional politicians, reporters, and policymakers to talk about alignment.

This interaction between Peter Doocy and Karine Jean-Pierre (YouTube link below) is representative of how EY’s TIME article has been received in many circles.

I see a few broad fronts of concern with LLMs:

  • Privacy and training feedback loop
  • Attribution, copyright and monetization related payment considerations
  • Economic displacement considerations (job losses and job gains etc) and policy responses needed
  • Alignment 

Of these, alignment is likely the most important (at least in my opinion; on this point opinions seem to genuinely vary) and has a very long intellectual effort behind it, yet recent discourse has often framed it as unserious and hyperbolic (e.g., “doomerism”).

One objection that often comes up is that ChatGPT and Bing are aligned with our values and anyone can try them for themselves. Another objection I keep coming across is that alignment folks aren’t experts in AI implementation, and thus lack the insights an implementer gains from working on alignment directly as part of the implementation process.

The way I see it, these and other objections don’t have to be true or well argued to land effectively in the public sphere of discourse; they just have to seem coherent, logical, and truthy to a lay audience.

ChatGPT’s appeal is that anyone can immediately experience it and partake in judging its value to our collective lives. AI is no longer an esoteric concept (most people still don’t think of social media feeds explicitly as AI).
 

The discussions for and against AI and various aspects of policy now take place in a manner accessible to all, and whether we like it or not, such discussions must now comport with public norms of discourse and persuasion. I really think we must enlist and educate professional politicians, reporters, and policymakers to do this work.

nastav · 1y · 31

I own a few scholarly texts from India about temple architecture. Despite their best efforts, the authors can’t help but describe mythology (surrounding temples, religion, etc.) as if it were history.

Hinduism is also very amorphous, and cultural and regional norms greatly overlap with what constitutes the actual religious system. This makes secular taxonomy very hard, or at least inaccessible.
 

What I find “rationalist” about this essay is that it attempts to offer a secular taxonomy and to overcome very common mistakes (like conflating mythology with history).

nastav · 2y · 10

I can think of one situation where pulling the lever would be more “good” than “bad”.

Estimate each person’s future influence on others, in both span and depth. If you consider it “high”, then pull the lever. If you consider it “low” (which might further correlate with lower IQ), then (tentatively) hold off.