
JoshuaFox's Comments

Why aren't assurance contracts widely used?

You could well ask why they weren't used more in the past, but today they are becoming more widely used. Kickstarter is pretty popular, and they do assurance contracts. In my neighborhood, donations for a synagogue are being gathered on an assurance contract.
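For anyone unfamiliar with the mechanism, here is a minimal sketch of how an assurance contract resolves: pledges are collected, and money changes hands only if the total meets the threshold; otherwise everyone is refunded. The class and method names are illustrative, not any platform's actual API.

```python
# Minimal sketch of an assurance contract (names are illustrative).
class AssuranceContract:
    def __init__(self, threshold):
        self.threshold = threshold
        self.pledges = {}  # donor -> total amount pledged

    def pledge(self, donor, amount):
        self.pledges[donor] = self.pledges.get(donor, 0) + amount

    def resolve(self):
        total = sum(self.pledges.values())
        if total >= self.threshold:
            return ("funded", total)  # all pledges are collected
        return ("refunded", 0)        # below threshold: no one pays

contract = AssuranceContract(threshold=10_000)
contract.pledge("alice", 4_000)
contract.pledge("bob", 7_000)
print(contract.resolve())  # ('funded', 11000)
```

The refund clause is what makes it an assurance contract: no donor risks paying for a project that never reaches viability.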

Aumann Agreement Game at LessWrong Israel

At Google offices, 12th floor, 98 Yigal Alon, Tel Aviv.

Test your Bayesian powers with the Aumann Agreement Game!!

Based on the theorem of Robert Aumann, our Israeli Economics-Nobel-winning superstar, this game involves guessing the answer to a trivia question, stating the probability that your answer is right, and then adjusting that probability based on what the other players guessed.
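As a rough illustration of the updating step (the exact rules used at the meetup may differ), one simple way to pool your stated probability with the other players' is to average in log-odds space:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def pooled_probability(probs):
    """Combine several players' stated probabilities by averaging
    in log-odds space. This is one simple pooling rule, chosen for
    illustration; it is not necessarily the game's official rule."""
    return sigmoid(sum(logit(p) for p in probs) / len(probs))

# Three players' credences that the trivia answer is correct:
print(round(pooled_probability([0.9, 0.6, 0.7]), 3))
```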

Thank you to David Manheim, who will lead us in this game.

What's up with Arbital?

Eliezer is still writing AI Alignment content on it, ... MIRI ... adopt Arbital ...

How does Eliezer's work on Arbital relate to MIRI? Little is publicly visible of what he is doing at MIRI. Is he focusing on Arbital? What is the strategic purpose?

Meetup Discussion

Pre-existing friends, postings on Facebook (even though FB does not distribute events to the timelines of group members if there are more than 250 people in a group), and occasionally lesswrong.com (not event postings, but rather people who are actively interested in LW seeking out a Tel Aviv group).

Meetup Discussion

In Tel Aviv, we have three types of meetings, all on Tuesdays. Monthly we have a full meeting, usually a lecture or sometimes Rump Sessions (informal lightning talks). Typical attendance is 12.

Also monthly, on the fortnights alternating with the full meetings, we hold game nights.

We are graciously hosted by Meni Rosenfeld's Cluster startup hub. (For a few years we were hosted at Google.)

On other Tuesdays a few LessWrongers get together at a pub.

Progress and Prizes in AI Alignment

There certainly should be more orgs with different approaches. But CHCAI may play a role as MIRI's representative in the mainstream academic world, so from the perspective of goals it is OK that the two are quite close.

Gatekeeper variation

You're quite right: these are among the standard objections to boxing mentioned in the post. However, AI boxing may have value as a stopgap in an early stage, so I'm wondering about the idea's value in that context.

Gatekeeper variation

Sure, but independently verifying the output of an entity smarter than you is generally impossible. This variation makes verification possible, while also limiting the boxed AI's freedom to choose its answers.
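To make the intuition concrete (this is my own sketch of the idea, not a claim about the original proposal): for some problems, checking an answer is far cheaper than finding it, so a gatekeeper could restrict the boxed AI to questions of that kind. For example, a claimed factorization can be verified by multiplication:

```python
# Sketch: a gatekeeper accepts only answers it can cheaply check.
# Restricting questions to verifiable ones limits how much freedom
# the boxed AI has in choosing what to say. (Illustrative only.)

def verify_factorization(n, factors):
    """Cheap check of an expensive-to-find answer."""
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n

# The hard question: factor n. The boxed AI claims an answer;
# the gatekeeper verifies it without trusting the AI.
n = 1_000_003 * 1_000_033
claimed = [1_000_003, 1_000_033]
print(verify_factorization(n, claimed))  # True
```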
