digital_carver


[Book Review] "The Alignment Problem" by Brian Christian

I believe there's a conflict-vs-mistake thing going on here. I saw this as a specific case where the probabilities don't line up the way your comment assumes they do. You seem to have been assuming from the beginning that I'm opposing an entire worldview, and treating my comment as some kind of attack.

where a Twitter screenshot calling out Google is incontrovertible 'disinterested' evidence never to be questioned

Behold, the Straw Man! Today for his trick, he turns "However small X's level of evidence is, Y has even weaker evidence" into "X is incontrovertible evidence never to be questioned".

I considered the original tweet, the fact that Google did not refute it, and the claim that Google blocked ape-related tags from Photos for years afterwards (with its own attached probability), and updated my inner probability that this is true. And I find that the statement from the ex-Googler provides even weaker evidence to update on. Nothing here is "never to be questioned".

any objection is instead required to be multiple independent third-party studies?

Again a thing I didn't say, and you keep repeating.

They were not disinterested in the least! They were specifically 'calling out' and shaming Google for it

A "claim from a disinterested party" means someone who was disinterested beforehand, someone our prior assumptions say has no stake in the matter. A police officer is considered disinterested in a case if their family and friends are not involved in it, and so can be assigned the case. It makes no sense to say "they've been assigned to the case now, so they're not a disinterested party"!

They were specifically 'calling out' and shaming Google for it and it worked brilliantly in earning them huge fake internet points.

This is the original tweet pointing out the issue. To me it just looks like a user casually pointing out a problem with a tool they use. Not everyone is obsessed all the time with culture wars and internet points.

(Someone who left Google and is mentioning it in an aside years later which no one noticed, that's much closer to disinterested.)

Oh by the way, I found evidence that this author tweeted this seals claim back at the time as part of the Twitter thread about this. Still without any links or images, but that helped update my probabilities a little bit (as I had believed that part of the problem might be that it came two years later, as mentioned before). I wish that's what this conversation had consisted of, actual evidence to try to arrive at the truth, instead of straw men and outright false claims.

The last paragraph has little to do with the claims here, unless you're dumping an entire opposing worldview on me, and arguing against that imaginary person. For the record, I am much closer to your worldview regarding these issues and have noticed most of the things you mentioned. It's just that in this instance even with that background there isn't good enough evidence to believe the media suppressed some narrative.

My original comment has served its purpose to provide additional context for those who want it, and I don't think further discussion with you here will be productive. Thanks for all the fish.

[Book Review] "The Alignment Problem" by Brian Christian

I'm writing this rather nitpicky comment because this is the top comment replying with rather strong wording about sourcing and studies and double standards for reporting...

This is an unnecessarily snarky addition to the comment that's disappointing to see (and doesn't even make sense, since mine is neither a top comment, nor does it mention studies). In case you interpreted the "especially if I'd seen it was from gwern" in a negative way, I meant it as a factual statement that I held you in high respect and trust, and hence assigned high prior confidence to things you wrote.

So, how many third parties reported about the classification and how trustworthy were they?

The original post was by a disinterested third party sharing a screenshot. However small the level of evidence provided by that is, an offhand statement by someone literally involved in the project and with no attached evidence at all is obviously much weaker.

What should we make of an ecosystem which tells us on a literally daily to weekly basis (google the term) about the gorillas, but never, ever tells you about the seals (I only learned about that one because I was reading the Google expert's post for other reasons)? What should we infer about the epistemics and justifications of the various experts and reporting here?

Let's simulate two worlds:

In world (A), Google's PR team were so incompetent that they did not mention the "white people misidentified as seals" cases at the time, even to say "our system sometimes misidentifies people as animals, e.g. <photo of white people with seal tag>, and we are improving the system" - which would have softened the PR blow significantly. Users see white people tagged as animals, but they never share it; or they do, but no one bothers to report it - not even a tabloid with space to fill and low standards, not even contrarian media that carries "All Lives Matter" articles and would love to use any "attack on whiteness" - and the screenshot doesn't go viral either (despite "bots are funnily dumb" being a favourite meme category).

In world (B), the "white people tagged as seals" happens either only in obviously-distorted or blurry photos, or only in an internal test system that never even got out of Google, or in some other not-usable-for-PR way. Journalists do not report on it because they don't see it. A Googler writes a non-official Medium article that's not focused on this, two years after the fact, and mentions it in a couple of sentences offhandedly. Perhaps one or two journalists happen to read it for other reasons, just like you, but it's mentioned as a past bug that's likely fixed, and there's no supporting evidence, nothing to show their editor as a potential article, so they move on.

With the evidence available to us, something similar to world (B) seems much more likely than world (A).
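The two-worlds comparison above is, informally, a likelihood-ratio argument. A minimal sketch of that update, where every probability is a purely illustrative placeholder of mine (the comment itself gives no numbers):

```python
# Toy Bayes-factor sketch for the world (A) vs world (B) comparison.
# All numeric values are illustrative assumptions, not claims from the comment.

def posterior_odds(prior_odds, likelihood_a, likelihood_b):
    """Posterior odds of world A vs world B after observing the evidence."""
    return prior_odds * (likelihood_a / likelihood_b)

# Evidence: no contemporaneous reporting of the "seals" misclassification.
# P(silence | world A) is low: PR, users, tabloids, and contrarian media
# would all have had reasons to mention it.
p_silence_given_a = 0.02   # assumed
# P(silence | world B) is high: there was nothing visible to report.
p_silence_given_b = 0.9    # assumed

prior = 1.0                # start indifferent between the two worlds

odds = posterior_odds(prior, p_silence_given_a, p_silence_given_b)
prob_a = odds / (1 + odds)
print(f"Posterior P(world A) = {prob_a:.3f}")  # well below 0.5, favouring world B
```

Under any placeholder values with that qualitative shape (silence unlikely in A, likely in B), the posterior shifts toward world (B), which is all the argument needs.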

[Book Review] "The Alignment Problem" by Brian Christian

For anyone not clicking to read the article: its author "was technical leader for Google’s social efforts (including photos)" at the time, and doesn't cite any public sources for the information. So we should at least consider how that's going to colour their interpretation/representation of the information.

They don't mention how often black people were classified as gorillas, and how that compared to white people being classified as dogs or seals. It could be that for every thousand cases of the former there was one of the latter, or it may have been one to one. My sibling comment says "The dataset had a good mix of races in it" (which I take to mean there was some reasonable proportional representation of races) - the article doesn't claim that. It says "the training data included a wide range of people of all races and colors", contrasting that with HP webcams where "the training data for “faces” had been composed exclusively of white people" - so it clears the bar of not being exclusively white, but we don't know by how much. In fact, the article goes on to say that (due to photography practices) "our standards for what constitute “good images” still overwhelmingly favor white faces rather than black ones."

I'm writing this rather nitpicky comment because this is the top comment with rather strong wording ("no journalists bothered reporting this, but that system classified white people as 'dogs' and 'seals'"), that on another day I might have just taken on faith (especially if I'd seen it was from gwern, which I didn't at first) - I would have assumed the link contained a study, or at least images of results pages, and contained solid additional information about these results from a third party.

The Best Software For Every Need

I used the other Anki add-on for incremental reading enough to be convinced of IR's potential, but unfortunately that add-on has enough issues that it's not really usable for me. I've had the one you linked to installed for a while (based on many recommendations on the Anki website and reddit), but I never figured out how its UI actually works. For now, SuperMemo definitely seems to have the lead when it comes to IR.

Better air is the easiest way not to die

As far as I know, it's just a random person on the Internet saying things that seem well thought-out and with enough research to at least serve as a good starting point. There's also a Hacker News page with critical discussion and other semi-related information, for those interested.

Estimating COVID cases & deaths in India over the coming months

The policy briefing file is pretty useful: a good summary followed by a bunch of neat graphs. (I've been jealous of Zvi's posts on Covid-19 in the US for quite a while, and this is probably the closest we'll get for India.) They also include a comparison with model projections from other groups at the end, which gives some other possible projections to look at.

Leaky Delegation: You are not a Commodity

"knowledge compilation" or "production compilation," which serves only to specialize knowledge: turning a general operation like "recall the next digit of my phone number and then say it" into a specialized one like "say 'seven.'"

This is a great concept to add to my mental vocabulary, thanks for mentioning it.

(Going off-topic from the main point of the post here.) It helps me understand, for example, the pattern of success and failure in teaching my dad how to use a smartphone. Originally, I kept trying to directly transfer the general knowledge I had - about what to expect from mobile UIs, the dark patterns websites and apps use, etc. - and found very little of it stuck. It turns out that, if I instead give him the compiled knowledge, specific to each instance - which intuitively feels like a bad way to teach things - he's able to reverse-engineer a "good enough" part of the general knowledge from that, given enough examples. (Insert comparison to the success of just-give-it-lots-of-samples ML over symbolic top-down AI here.)

This concept of general uncompiled knowledge vs specific compiled action is a good tool for thinking about pedagogy and learning, applicable in many ways.
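The quoted "production compilation" idea maps neatly onto function specialization in programming. A toy sketch of my own (not from the post), using a made-up phone-number example:

```python
# General, "uncompiled" knowledge: a parameterized procedure that does a
# lookup every time it runs ("recall the next digit, then say it").
def say_digit(phone_number: str, position: int) -> str:
    return f"say '{phone_number[position]}'"

# "Compiled" knowledge: the general step specialized to one fixed case,
# so no recall/lookup happens at use time - just the fixed action.
def compile_say_digit(phone_number: str, position: int):
    digit = phone_number[position]      # lookup performed once, up front
    def say_it() -> str:
        return f"say '{digit}'"         # e.g. "say 'seven'" in the quote
    return say_it

say_fourth = compile_say_digit("5557123", 3)
print(say_fourth())  # say '7'
```

The specialized closure is faster and simpler to execute but useless outside its one case, which mirrors the teaching trade-off described above: compiled instances are easy to absorb, and the general rule has to be reconstructed from many of them.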

[U.S. specific] PPP: free money for self-employed & orgs (time-sensitive)

I didn't read the post, but thank you for saying '[U.S. specific]' right in the title. I wish this were more widely a norm across the Internet.

Great minds might not think alike

Related: Career advice from Scott Adams (Dilbert's creator) suggests becoming "very good (top 25%) at two or more things." (He even goes on to suggest: "At least one of the skills in your mixture should involve communication, either written or verbal. ")

Being a translator is often a natural outcome of this; when you have two or more mental spaces to pick ideas and metaphors from, it becomes easier to describe complex things in one field without resorting to jargon, using the language of a different field. Trying to be a translator can also be useful to clarify your own understanding, for this reason - you can't hide behind jargon or "common knowledge" beliefs any more, and so have to go through, clarify, and perhaps reconstruct your mental models for what you're trying to translate.

Give it a google

An addendum to this, perhaps as a next level, is to give it a non-google search.

  • DuckDuckGo helps me avoid the SEO-driven spammy results more easily than Google does, which is particularly apparent when searching about products, or terms that could be construed as product-related. My hypothesis is that the SEOptimizers have learnt the specifics of what Google's algorithm looks for and so have refined techniques to get their less-deserving clients to rise to the top, and (thankfully) those techniques don't work with whatever alternate algorithms DDG uses. (To be clear, it's not a strict "DDG is always better than Google" thing; rather, they're surprisingly good at complementing each other, one being good when the other gives weak results.)

  • Million Short is useful when I'm searching for something that's had a lot of media attention, but I want to avoid the mainstream media pieces on it. It excludes the top 100/1000/.../million websites on the Internet (as ranked by Alexa) from its results, which is sometimes exactly what you want.

For the sake of completeness, some other alternative search engines are MetaGer, Qwant, and Mojeek (all of which claim privacy as their primary benefit). And I love me a bit of wiby when I'm feeling nostalgic about the old, simpler, 90s Internet of plain personal pages, and want to experience some of that again.
