NormanPerlmutter

PhD in math. MIRI Summer Fellow in 2016. Worked as a professor for a while, now I run my family's business.

Comments

Hi everybody, it looks like LessWrong made a new account for me in connection with this meetup, rather than using my existing account. Moderators, please put this event under my main account, NormanPerlmutter.

One point that is being glossed over in this essay is that teaching is a difficult skill that is not as strongly correlated with comprehensive expert knowledge of the content as one might think. I say this as someone who worked as a teacher for 6 years.

Part of the process of developing expertise in a field of study is "chunking." The expert mind sees lots of complex things together as a single chunk (which can be unpacked if necessary), whereas the beginner sees the individual pieces. This chunking helps experts interact with other experts and apply the material to solve complex problems. But it can actually hinder teaching beginners, especially if the expert is not a skilled teacher or has not taught that subject material before. The expert might easily give an overview of the topic, but has to unchunk the knowledge before explaining it to a beginner in detail.

Good teaching requires many interpersonal and pedagogical skills that are not at all needed for the original learning of the material.

I just downloaded MS Edge so that I could use Bing AI and ask it to find me a Brazilian hammock more than 6 feet wide. After repeated questioning, it kept giving me hammocks less than 6 feet wide (but more than 6 feet long). Even after I pointed out its error explicitly, it kept making the same error, and finally Bing gave up and told me it couldn't help. For example, it would list two possibilities for me, state the length and width of each, and the width was less than 6 feet in each case.

Given all the amazing stuff we've seen out of AI lately, I'm kind of amazed it wasn't more successful. I'm guessing they don't make Brazilian hammocks in that size. (Not sure why, as they make Mayan hammocks much wider than that, but anyway . . . )

Is this a blind spot for Bing? Or does Microsoft prefer for it to turn up bad results rather than say that no such thing exists?

Since nobody has called it . . . I spotted the (intentional?) linguistic joke in one of the section headers. The Hebrew word that sounds like Llama means "why."

I've met humans who are unable to recognize blatant inconsistencies. Not quite to the same extent as Bing Chat, but still. Also, I'm pretty sure monkeys are unable to recognize blatant inconsistencies, and monkeys are intelligent.

I agree that this is a risk, but I'm not sure whether it's the main risk. Another risk is that if somebody gets access to the encrypted store, they can use it to steal all your passwords.

I read a lot of Derek Lowe early in the pandemic and regard him highly, but in this case I think he's wrong. Going through the comments of Lowe's post, I came across a link to this essay by Stephen Salzberg, a distinguished biologist at Johns Hopkins, agreeing with Zvi's perspective.

http://genome.fieldofscience.com/2022/10/gain-of-function-experiments-at-boston.html

Salzberg is a computational biologist, not a virologist, but he's a distinguished professor at a prestigious school and does not seem to be on the fringe politically, as far as I can tell. If anybody knows more about him, please let me know.

Overall, experts seem to be split on this matter, which is strong enough evidence for me that the research should have been disallowed, or at least regulated at the highest security level. The risks are just too great relative to what was learned from the research.

I have written a letter to my representative in the House encouraging her to legislate more restrictions on gain of function research and referencing the article linked above.

This is a fascinating essay that made me see some of my personal experiences of having my boundaries violated in a new light. Thank you.

You pointed out that just asking for consent can be costly. I think an important social/communication/culture technology to consider is how to make consent requests less costly and/or less frequently necessary, while still maintaining a strong social norm around consent. For instance: having meta-discussions about consent with your friends, or meta-rules about consent in your social group or community, organized in such a way that asking for consent is seen as easy; giving close friends broad consent to a wide range of acts and occasionally checking in on that over time; etc.

I agree that living conditions are better today than several decades ago and worse today than 3 years ago.

That being said, I have seen a lot of mixed evidence and arguments about long covid and haven't figured out how to best think about it.
