Dave Orr

Google AI PM; Foundation board member


Omicron Post #4

Thanks for all the work you put in on these incredibly informative posts. This is, hands down, the best source of analysis I've found for all things COVID. 

What have your romantic experiences with non-EAs/non-Rationalists been like?

Together for 21 years, married for 17, two kids, all good.

The major thing I had to learn was how to communicate certain things, and when to keep my mouth shut. For example, she doesn't subscribe to the principle of charity when it comes to strangers or the out-group. We used to have fights about it (always at the object level: "why are you taking their side?!?") before I realized they were fundamentally unproductive. And I can think of a couple of key times when she was right and I was overly charitable, including in a work context that really mattered.

On the EA front, we donate 10% of income, to a mix of things -- hers more socially determined, mine more EA-ish, but good organizations all.

I think the world is full of people who don't think the way you do, no matter who you are, so it's important to be able to form relationships with lots of kinds of people. Hopefully some experience/interest in rationality can help identify otherwise puzzling communication failures.

Also, practically speaking, less wrong / EA / rationalism are heavily male, so most people will need to find someone outside the community as a life partner.

Experience with Cue Covid Testing

I'll chime in with my experience, presumably from the same employer.

My team and I have found a pretty high rate of failures; in one case a majority of tests failed. The failure rate seems to be around 30%, across a sample of ~8 people, though I'm not sure how many tests each person ran. So that $60/test might look more expensive in practice.

Having said that, I really like the Cue tests. More accurate than rapid tests, and super convenient. I'm not very price sensitive, so for me it's a clear win.

The Meta-Puzzle

This is the Godel Escher Bach solution :)

The Meta-Puzzle

This was my solution! :)

The Meta-Puzzle

I found a different solution to the initial puzzle, which I won't spoil here, but post as a follow-on:

Same scenario, except after you hear the statement, you know the person is single -- but you don't know who they worship!

What was the statement?

Are there substantial research efforts towards aligning narrow AIs?

Of course tons of this research is going on. Do you think people who work at Facebook or YouTube are happy that their algorithms suggest outrageous or misleading content? I know a bit about the work at YouTube (I work on an unrelated applied AI team at Google) and they are altering metrics, penalizing certain kinds of content, looking for user journeys that appear to have undesirable outcomes and figuring out what to do about them, and so on.

I'm also friends with a political science professor who consults with Facebook on similar kinds of issues, basically applying mechanism design to think about how people will act given different kinds of things in their feeds.

You can also think about spam or abuse in AI systems, which show similar patterns. If someone figures out how to trick an ad quality rating system into thinking their ad is high quality, they'll get a discount (this is how e.g. Google search ads work). All kinds of tricky things happen: web pages showing one thing to the ad system and a different thing to the user, for instance, or selling one thing to harvest emails and then spamming for something else.

In general the observation from working in the field is that if you have a simple metric, people will figure out how to game it. So you need to build in a lot of safeguards, and you need to evolve all the time as the spammers/abusers evolve. There's no end point, no place where you think you're done, just an ever changing competition.
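The dynamic above can be sketched with a toy example. This is purely illustrative (not any real system's metric): a naive keyword-based spam score that works until an adversary rephrases to evade the exact rule, at which point the metric must evolve.

```python
# Toy illustration of metric gaming: a naive spam metric and an
# adversary adapting to it. Hypothetical; no real system works this way.

SPAM_WORDS = {"free", "winner", "click"}

def spam_score(text: str) -> int:
    """Naive metric: count occurrences of known spam words."""
    return sum(1 for word in text.lower().split() if word in SPAM_WORDS)

# The metric works at first against naive spam...
assert spam_score("FREE prize winner click here") == 3

# ...until spammers rephrase to evade the fixed keyword list.
evasive = "fr3e prize w1nner cl1ck here"
assert spam_score(evasive) == 0  # gamed: same intent, zero score
```

The fix is never one-shot: the defender adds obfuscation handling, the attacker adapts again, and the loop continues indefinitely.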

I'm not sure this provides much comfort for the AGI alignment folks....

Long Covid Is Not Necessarily Your Biggest Problem

I think seasonality is going to push in the other direction for a while. In 7-8 months things could plausibly be much better, in fact I think that's likely, but February is still deep winter.

[Crosspost] On Hreha On Behavioral Economics

Out of curiosity, when do you crosspost to LW versus just posting to ACX? I can see that this post is very LW since it's about biases and thus rationality, but I have to think almost all the readers of LW also read ACX.

Is the discussion in the comments different here? (Certainly the comment interface and threading are way better here.)
