Evan_Gaensbauer

Comments (sorted by newest)
"AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
Evan_Gaensbauer · 4mo · 2 · 0

I peruse her content occasionally, but I wasn't aware she's widely recognized for how wildly the quality of her analysis/commentary varies, and for it often being particularly lacklustre outside her own field. Gwern mentioned that Gary Marcus has apparently said as much in the past about her coverage of AI topics. I'll refrain from citing her as a source in the future.

"AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
Evan_Gaensbauer · 4mo · 2 · 0

You wrote a low quality summary of a low quality secondary-source video of no particular importance by a talking head whose expertise has little to do with AI (nor is regarded as such like a Gary Marcus)

You're right that I was probably exaggerating when I said it was the best effort I could provide. It was more that I expected it would be considered a basic, accurate summary I could generate in a brief period of time.

low quality secondary-source video of no particular importance by a talking head whose expertise has little to do with AI (nor is regarded as such like a Gary Marcus)

That the source itself isn't considered of particularly significant quality or importance makes sense, given that my post is only lightly downvoted relative to the number of votes it has received. While her expertise of course isn't that relevant to AI, the fact that she has expertise in a sufficiently technical field seemed worth clarifying, to indicate she shouldn't be strongly suspected of presenting the information based on a wild misunderstanding. I wasn't aware that Gary Marcus had previously criticized the quality of her coverage of these sorts of issues, so I'll keep in mind for the future that she shouldn't be regarded as a reliable source.

That she, or anyone else, might be a "talking head whose expertise has little to do with AI" doesn't by itself seem like a strong argument against taking the person seriously as a source, relative to the standards on LW, given that the same could be said of many whose contributions are frequently well-received here. Similar criticisms have frequently been leveled at Eliezer Yudkowsky, and could be leveled at Scott Alexander. I'd be unmoved by many such criticisms for the same reasons as anyone else, though such criticism of others who've long been well-received on LW could be warranted. There are also many with expertise in AI whom rationalists often dismiss as talking heads.

where you added no original information or thought, and failed to follow up on basic details, like failing to name or link the study in the final item

I wasn't aware that additional info or analysis/commentary beyond the contents of the source was expected. Anyone could follow up on basic details as easily as I could if they were curious to learn more, and I'm technically not obliged to do so myself. Then again, I'm also not entitled to be well-received if I don't bother citing other sources, so it seems fair enough that others would be nonplussed by that.

I do not think you really have to ask why your post is not being upvoted to the skies.

I agree, which is why I didn't. I asked why it was being downvoted.

(despite praising your own "effort" repeatedly)

I didn't praise my effort; I mentioned that I put in some. I didn't mean to use the word "effort" in any exaggerated sense, and there's no need to diminish it as though it's not technically true. If someone takes two minutes to brush their teeth, saying they put in two minutes of effort seems as appropriate a way to describe that as any other.

If you are spending a lot of "effort" on posts like this and you are upset by the reception

I wasn't so much upset by the reception as frustrated that nobody had bothered to explain why the post was mostly receiving downvotes. I was aware the post might be banal, though it also seemed innocuous enough that I didn't expect it to be mostly downvoted, as though it somehow particularly subtracts from the quality of content on LW. I now understand the reasons better, so thanks for explaining.

I mentioned above:

I'm just hoping you can offer insight into whether I should keep bothering with the effort of posts like this: am I the one who's off here, or do others just have superficial reactions?

You've offered enough that I understand the answer to my own question: it was mostly the former, that I was the one who was off, so I'm satisfied with this response.

"AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
Evan_Gaensbauer · 4mo · 2 · 0

I appreciate that, though it seems my post may have been downvoted even more since yesterday. I wouldn't mind as much except that nobody has explained why, despite the effort I put in. Maybe it's because of the clickbait-y title, or because it's a YouTube video meant to convey important info about AI to normies in a mainstream way, and is therefore assumed to be of very low quality.

Yet that would be in spite of the facts that:
1. I clarified this is from a theoretical physicist and science communicator who's trying to inform the public in an approachable way, which is something I figure others on LessWrong could appreciate.
2. I have now summarized the details as best I can, so that others on LessWrong don't need to digest the info through a medium they don't prefer, while those who do prefer a multimedia format still have it available; it could be recognized as constructive that there are multiple options.

I suspect part of it might just be a latent preference on LessWrong for lengthy blog posts in a style people are accustomed to, which is valid, combined with a tendency to presume that the same sort of info they like being exposed to, delivered in a different way, must be lower quality. That could be a bias worth questioning, though it's fair enough if others just disagree. I'm just hoping you can offer insight into whether I should keep bothering with the effort of posts like this: am I the one who's off here, or do others just have superficial reactions?

"AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
Evan_Gaensbauer · 4mo · 2 · 0

I've now summarized those details as they were presented in the video. "Staying more grounded in how bad it is" with more precision would require you, or whoever else, to learn more about these developments from the respective companies on your own, though the summaries I've now provided can hopefully serve as a starting point for doing so.

Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now)
[+] Evan_Gaensbauer · 7mo* · -9 · -4
Defining alignment research
Evan_Gaensbauer · 10mo · 2 · 0

Do you mean Evan Hubinger, Evan R. Murphy, or a different Evan? (I would be surprised and humbled if it were me, though my priors on that are low.)

Habryka's Shortform Feed
Evan_Gaensbauer · 10mo · 0 · 0

How do you square encouraging others to weigh in on EA fundraising, and presumably the assumption that anyone in the EA community can trust you as a collaborator of any sort, with your stated intention, as you put it in July, to probably seek to shut down at some point in the future?

Google "We Have No Moat, And Neither Does OpenAI"
Evan_Gaensbauer · 2y · 2 · 0

The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant. 

Nothing in the document should necessarily be taken as representative of Google, or of any particular department, though the value of any insights drawn from it could vary based on which AI research project(s) or department(s) its authors work on or in. On its own, the document is scant evidence in any direction of how representative its statements are of Google and its leadership, or of the teams or leaders of any particular projects or departments at Google focused on the relevant approaches to AI research.

Google "We Have No Moat, And Neither Does OpenAI"
Evan_Gaensbauer · 2y · 2 · 0

Thanks for making this comment. I had a similar comment in mind. You're right that nobody should assume any statements in this document represent the viewpoint of Google, any of its subsidiaries, like DeepMind, or any department therein. Nor should it be assumed that the researcher(s) who authored or leaked this document are department or project leads. The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant.

On the other hand, there isn't enough information to assume it was only one or more "random" staffers at Google. Again, nothing in the document should necessarily be taken as representative of Google or any particular department, though the value of any insights drawn from it could vary based on which AI research project(s) or department(s) its authors work on or in.

That might not be a useful question to puzzle over much, since we may never find out who the anonymous author(s) of the document is/are. Yet the chance that the authors aren't purely "random" researchers should still be kept in mind.

Changes in Community Dynamics: A Follow-Up to 'The Berkeley Community & the Rest of Us'
Evan_Gaensbauer · 3y · 2 · 0

Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. 

The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response. 

Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime. 

4"AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder
4mo
9
-8Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
6mo
1
-13Human Biodiversity (Part 4: Astral Codex Ten)
8mo
6
4Catalogue of POLITICO Reports and Other Cited Articles on Effective Altruism and AI Safety Connections in Washington, DC
1y
0
23Are there high-quality surveys available detailing the rates of polyamory among Americans age 18-45 in metropolitan areas in the United States?
Q
1y
Q
0
10What evidence is there for (or against) theories about the extent to which effective altruist interests motivated the ouster of Sam Altman last year?
Q
1y
Q
0
21Changes in Community Dynamics: A Follow-Up to 'The Berkeley Community & the Rest of Us'
3y
6
18Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted?
Q
3y
Q
20
5What are your recommendations for technical AI alignment podcasts?
Q
3y
Q
4
22What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about?
Q
3y
Q
14