Hiring is so hard that we spent a man-month creating a sub-startup to do it. The product is the Quixey Challenge, which runs today until 7pm PST (GMT-8).

Benefits of playing:

  • You can learn something from the craftsmanship of our algorithms (we work hard on them)
  • The 1-minute challenge is a rush
  • You can make money
  • If you do well you can interview at Quixey
Even if you have zero engineering skills, you can get $50 for referring someone who wins.

 

52 comments

Your link doesn't work because you used http;/ instead of http://

(I noticed that in less than a minute. May I have $100?)

You needed more than a man-month. Nothing in your registration interface specifies that the username can't have spaces, yet when I enter "Rolf Andreassen" as my desired username, I get back "Unknown Error", which has got to be one of the least helpful error messages ever. Changing to RolfAndreassen fixes it.

That was less than a minute; please send me my $100. :)

The problem description says "fix", not "find".

Skype ID? Call me when it's my turn? I have a minute but not an inconvenience-adjusted minute...

But the practice things were fun, even though I don't know python.

How is Quixey able to so consistently circumvent this internet website's spam suppression measures?

Hm. This may be fuzzy memory on my part, but I thought I remembered downvoting this post, seeing it at -5, and now it's at 0 and I haven't downvoted it. I really hope that's fuzzy memory on my part.

That was an alternate universe. As this post was heavily downvoted, hardly any LWers took the challenge, depriving SIAI of the money they'd have gotten by referring successful challengers. Also, because of information cascades, the same thing happened to all future Quixey posts, leading Quixey to eventually stop posting here. Because of the negative word-of-mouth from such incidents, people stopped looking at the LW audience as a set of eyeballs to monetise.

Consequently, the SIAI was deprived of all potential advertising income, and lacked the budget to perfect FAI theory in time. Meanwhile, the Chinese government, after decades of effort, managed to develop a uFAI. Vishwabandhu Gupta of India managed to convince his countrymen that an AI is some sort of intelligence-enhancing ayurvedic wonder-drug that the Chinese had illegally patented. Consequently, the Indians eagerly invaded China, believing that increased intelligence would allow their kids to get into good colleges. This localised conflict blew up into the Artilect War, which killed everyone on the planet.

So please... don't do that again. Just don't. I'm tired of having to travel to an alternate universe every time that happens.

So please... don't do that again. Just don't.

By not wanting advertising on LW, I have doomed humanity? Your sense of perspective is troubling. (You should also be ashamed of the narrative fallacy that follows.)

If the LW community's votes are being overridden somehow, I would at least like the LW editors to be honest about it.

Your sense of perspective is troubling.

Because, clearly, it is impossible for something as huge as millions of lives to depend on an art academy's decision.

You should also be ashamed of the narrative fallacy that follows

O RLY?

Because, clearly, it is impossible for something as huge as millions of lives to depend on an art academy's decision.

Imagine, with every rejection letter the dean of admissions sends out, he has a brief moment of worry: "is this letter going to put someone on the path to becoming a mass murderer?" His sense of perspective would also be troubling, as his ability to predict the difference acceptance will have on his students' lives is insufficient to fruitfully worry about those sorts of events. It's not a statement of impossibility, it's a statement of improbability. Giving undue weight to the example of Hitler is availability bias.

O RLY?

Yes, really. I presume you've read about fictional evidence and the conjunction fallacy? If you want to argue that LW's eyeballs should be monetized, argue that directly! We'll have an interesting discussion out in the open. But assuming that LW's eyeballs should be monetized because you can construct a story in which a few dollars makes the difference between the SIAI succeeding and failing is not rational discourse. Put probabilities on things, talk about values, and we'll do some calculations.

But assuming that LW's eyeballs should be monetized because you can construct a story in which a few dollars makes the difference between the SIAI succeeding and failing is not rational discourse.

I'd have thought that the story being as far-fetched and ludicrous as it is would've made it obvious that I was just fooling around, not making an argument. Apparently that's not actually the case.

My apologies if I accidentally managed to convince someone of the necessity of monetizing LW's eyeballs.

I completely misunderstood your post, then. My apologies as well.

If I upvote/downvote comments on an LW page, then close the page a few moments afterwards, sometimes my votes don't register (they're not there if I visit the same page later). If something similar happened to you that might explain why your vote seemed to disappear.

I have also seen this bug many times.

This comment thread strikes me as a good example of an anti-pattern I've seen before, that I don't know a name for (close to, but not exactly, privileging the hypothesis), where a conversation slides without explicit comment from reasonably suggesting a bad-case possibility to taking it for granted for no apparent reason.

(disclaimer: I work for Quixey, conflict of interest and all that, but I'm pretty sure I'd be making this exact same comment if I didn't)

That is good to know. I suspect the probability that I closed the page shortly thereafter is only about .2 or so, but that's significantly higher than the prior I put on the LW editing staff removing downvotes, which has significantly decreased my worry.

Hm. This may be fuzzy memory on my part, but I thought I remembered downvoting this post, seeing it at -5, and now it's at 0 and I haven't downvoted it. I really hope that's fuzzy memory on my part.

Downvoted the post based on the intervention you described. Normally I'd have upvoted.

I do want to stress that I'm not certain I downvoted the post before I wrote this comment. It's plausible that 5 people upvoted the post because they wanted it to be visible. That's still an intervention I'm uneasy about, but the unease is much lower.

At least one of those five people does exist. That's me, who found the post at -5 and left it at -4.

Seconded. Found it at -1, upvoted to 0. And it's at -3 now...

That's still an intervention I'm uneasy about, but the unease is much lower.

What intervention remains if the votes were not distorted?

Basically, if anyone was asked to vote the post up, rather than seeing the post and thinking "I want more of this on LW." I apologize for not making that implication clearer. I've only seen this post at 0 or negative karma (but I'm not tracking it closely), which seems to me like people not wanting it to be negative rather than roughly equal groups liking and disliking it.

I upvoted the post because it had negative karma, and was not a post that I thought should be at negative karma.

In general I vote posts/comments in the direction I think their karma should be at. Thus for instance I downvoted Clippy's comment above because I did not think it was so insightful that it merited 20+ karma. I would not have downvoted it if it were at 0 karma.

I assume many people take this approach (it fits in nicely with consequentialism) so this probably explains what you saw.

As do I. While it is slightly distracting if LessWrong administrators are giving certain posts preferential treatment against community wishes, it is extremely worrisome if an attacker has convinced them to actually falsify voting records, and indicative of a particularly insidious social engineering attack.

They are posting by invitation.

So a social engineering attack, then. Very devious -- the attack shows they could very well program an AI that can convince a human to let it out of its sandbox!

And really, you couldn't mention somewhere in your instructions that you need a Skype ID? Come, now.

Re: all people who consider this post spam:

Quixey is a fast-growing and likely to be very successful company, full of people who care about x-risk and FAI, and that donates to SIAI and thinks that LW in particular is a good place to recruit from. This is approximately the best sort of publicity that you could hope to get, and a likely large source of funding for x-risk. If you persist in trying to chase Quixey off of LessWrong, then I submit that the next time you complain about SIAI's publicity issues you should first turn to yourself.

Also, I, for one, would like Quixey to remain on LessWrong, and to be reminded of the Quixey challenges, so trying to classify this post as "spam" goes against my wishes. I don't downvote the Houston Meetup threads despite the fact that I have 0% probability of attending one (as I live in Boston), so please return the courtesy.

Re: all people who consider my posts spam:

Paperclip production is a fast-growing and likely to be final activity, conducted by people who care about x-risk and following their values to their logical conclusion, and I think that LW is a good place to promote this activity. This is approximately the best sort of publicity that you could hope to get, and a likely large source of funding for x-risk. If you persist in trying to chase clippys off of LessWrong, then I submit that the next time you complain about SIAI's publicity issues you should first turn to yourself.

Also, I, for one, would like clippys to remain on LessWrong, and to be reminded of the merit of producing paperclips, so trying to classify my posts as "spam" goes against my wishes. I don't downvote the Boston Meetup threads despite the fact that I have 0% probability of attending one (as I live in the safe zone), so please return the courtesy.

This comment made LOADS more sense when I looked up its parent.

Can we not have corporate advertising on LW? Alternately, I'd be okay with it if, for example, Quixey told their employees to devote 10% of their time to writing high-quality posts on LW.

I think Q founders do in fact support SIAI financially.

I've supported SIAI financially as well (via User:Kevin). May I advertise paperclip production here in top-level discussion posts, and have them promoted and their downvotes erased?

May I advertise paperclip production here in top-level discussion posts, and have them promoted and their downvotes erased?

Are you looking to hire for positions that would be a good fit for Less Wrong members (with other relevant skills)? If so, I would be happy for you to use Less Wrong for recruitment.

In particular, if I understand the way the Challenge is set up correctly, LW gets $50 for everybody from LW who wins. (Since the referrer for anybody who comes via this link is listed as Less Wrong, and Quixey states that the referrer of a winner gets $50.)

Liron, can you confirm whether this is indeed the case?

QC rules state a limit of one $50 prize per referrer account, so the lesswrong account made $50 :)

Can we not have corporate advertising on LW?

Far more important to me is that if we do have corporate advertising then it should be presented as corporate advertising. Having it appear to be actual LessWrong content - complete with artificial karma ratings - is rather peculiar.

Fair enough. Just a note: in my line of work I happen to know that some people want to make sponsored content compete for ranking on the same terms as non-sponsored content, if the alternative is to show sponsored content always on top (as often happens).

Just a note: in my line of work I happen to know that some people want to make sponsored content compete for ranking on the same terms as non-sponsored content, if the alternative is to show sponsored content always on top (as often happens).

On the same terms? Wouldn't the sponsored content want to get something of a boost? I thought that was the point!

Commenting to see what time zone the times on LW are in.

Edit: 5 hours ahead of US Eastern, so apparently it's GMT and I'm just outside the 8-hour window. Fair enough.

I already mentioned this to Liron, but might as well float my thesis here: solving a mid-difficulty problem in 1 minute is not as good a proxy for programming success as solving a complex problem over a longer time (e.g., the harder Project Euler problems).

OTOH, if you ask people for longer than 1 minute you might need to reinvent the compensation mechanism.

There is a business idea there somewhere....

Also, you appear to rely on very specific solutions. In the nested-parens example, you want to change the last line to "return depth == 0". I want to add the line "if depth > 0 return False". These solutions are clearly equivalent. How then am I to guess which one you want?
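To make the two fixes concrete, here's a minimal sketch of the kind of checker under discussion; the function name and exact structure are my guesses, not the actual challenge code:

```python
def is_balanced(s):
    """Return True iff the parentheses in s are properly nested.

    The buggy version presumably ended with a bare `return True`,
    which wrongly accepts strings like "((" with unclosed parens.
    """
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:  # a ')' with no matching '('
                return False
    # Fix 1: replace `return True` with the line below.
    # Fix 2 (equivalent): keep `return True` but first add
    #   if depth > 0: return False
    return depth == 0
```

Both fixes reject strings with unclosed opening parens, which is why they should both be in the database of acceptable solutions.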

We have a database of multiple "acceptably good solutions" to each problem.

It's clearly not big enough. :)

If you didn't put a colon between 0 and return, it's invalid syntax.

Ok, that's a fair cop. I don't usually code in Python and could well have got that wrong. Still, if that were the problem then the desired solution should have been a correction of my syntax error, not a change from two returns to one; so I stand by my criticism that the database is too small.

Isn't the "return depth == 0" solution simpler and so superior? (That's a question, I don't actually know Python that well.)

A point of taste. My solution makes it explicit what is being checked for: Unclosed parens.

The link is broken - in the HTML it's "/http;/www.quixeychallenge.com/?ref=lesswrong", which has an extra leading /, a missing /, and a ; in place of a :. (At least one of these probably due to auto-transforming of the URL based on a misinterpretation caused by one of the others.)


I wonder if Liron figured that out within one minute.

[This comment is no longer endorsed by its author]