

I'll be selling Dominant Assurance Contract Insurance. If a post is not up to standards, you can file a claim and receive compensation. Power will shift toward my insurance company's internal adjudication board. Eventually we'll cover certain writers but not others, acting as a seal of quality.

If they pay you with PayPal and you pay them back with PayPal, there is roughly a 6% loss to fees. It would be easier, and involve less deadweight loss, to use a private challenge mechanism within Manifold. Or even to just create a market in Manifold, bet it down very low, and then resolve it against yourself once you get 10 bettors. That's a first thought... One objection is that since assassination markets don't seem to obtain, maybe they wouldn't work for blog posts either. However, in this case the market maker has both the market incentive and some other motivation, not entirely captured by the market, to complete the blog post.
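A rough sketch of where a figure like 6% comes from, assuming a ~3% per-transfer fee (my approximation for illustration, not an official PayPal rate):

```python
FEE = 0.03  # assumed per-transfer fee; real PayPal fees are roughly 2.9% plus a fixed amount

def round_trip_loss(amount, fee=FEE):
    """Money lost when a payment is sent and then paid back over a fee-charging rail."""
    received = amount * (1 - fee)    # first leg: payer -> you
    returned = received * (1 - fee)  # second leg: you -> payer
    return amount - returned

loss = round_trip_loss(100)  # about $5.91 lost on $100, i.e. roughly 6%
```

Compounding two ~3% legs is slightly less than a flat 6%, but close enough for the point about deadweight loss.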

I learned those from Wittgenstein, Aristotle, and Umberto Eco.

Numerical obsession can flatten important semantic subtleties. And semantic pyrotechnics can make my reasoning more muddled. Both are needed. As is, I suppose, a follow-up post on the non-LW ideas that pay a lot of rent.

One way to do this would be to create houses of study dedicated to these exams, where students and a tutor work together in the community to accomplish these goals without requiring a large, costly institution. A group house plus a tutor/academic coordinator.

Some fields only require completing a series of tests for entry; no degree required. I'll put in parentheses the ones I'm not sure don't require a bachelor's degree:

  - Certified Actuary
  - Chartered Financial Analyst
  - (Certified Public Accountant)
  - (Various other financial certifications)
  - (Foreign Service Officer's exam)
  - (The bar exam: I don't know how one can get them to let you sit the exam without a law degree, but it is allegedly possible in California, Vermont, Virginia, and Washington)

There are a lot of certificate programs out there for long-established work that involves brains but not in-person learning (money and law). In computer science, "building things" is the certificate, I suppose.

I agree that if you are looking at it in terms of art generators, it is not a promising view. I was thinking hypothetically about AI-enabled advancements in energy creation and storage, in materials science, and in, say, environmental control systems for hostile environments. If we had paradigm-shifting advancements in these areas, we might then spend time implementing and exploiting these world-changing discoveries.

Maybe another perspective on point three: the additional supply of 2D written and 2D visual material will increase the relative price and status of 3D material, which would equilibrate as more people moved into 3D endeavours.

So might this be a way not only to increase the status of atoms relative to bits, but also to use bits to reinvent the world of atoms through new physical developments? And if the physical developments are good enough and compounding, would that stall the progress of AI development?

While I would say your timeline is generally too long, I think the steps are pretty good. This was a visceral read for me.

Some sociological points:

  1. I think you don't give anti-AI-development voices much credence, and that's a mistake. Yes, there will be economic incentives, but social incentives can overcome those if the right people start to object to further specialized LLM development.

  2. Although you have a fairly worked-out thought on AI development, where the path is clear, for AIS the fact that you ended with a coin flip almost seems like sleight of hand! The scenario you describe was clearly heading toward self-destruction, not just for narrative tenor and story-based reasons, but for the lack of finding a foundation upon which to ground true AIS.

  3. With improved robotics, there might be solutions to escape certain AI social traps by changing the balance of power between information and physical world development. I have in mind the idea that AI seems incredibly valuable today because physical progress has been so weak. But if physical progress becomes strong and fast, then allowing/forcing AI to slow down will be socially easier. It's an elasticity of demand model for competing goods. Does that make sense? If not, I'd happily elaborate.
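The elasticity framing in point three can be made concrete with a toy cross-price elasticity calculation. All numbers below are made up for illustration, and treating AI capability and physical-world progress as substitute goods is my assumption:

```python
def cross_price_elasticity(pct_change_qty_ai, pct_change_price_physical):
    """Percent change in demand for AI per percent change in the 'price'
    (cost/difficulty) of physical-world progress."""
    return pct_change_qty_ai / pct_change_price_physical

# Made-up numbers: physical progress becomes 20% cheaper and demand for
# AI capability falls 10%. A positive cross-price elasticity is the
# signature of substitute goods: as one gets cheaper, demand for the
# other falls.
e = cross_price_elasticity(-0.10, -0.20)  # 0.5 > 0 -> substitutes
```

On this sketch, the stronger and faster physical progress becomes, the less costly it is (socially and economically) to slow AI down.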

Just read your latest post on your research program and attempt to circumvent social reward, then came here to get a sense of your hunt for a paradigm.

Here are some notes on Human in the Loop.

You say, "We feed our preferences in to an aggregator, the AI reads out the aggregator." One thing to notice is that this framing makes some assumptions that might be too specific. It's really hard, I know, to be general enough while still having content. But my ears pricked up at this one. Does it have to be an 'aggregator'? Maybe the best way of revealing preferences is not through an aggregator. Notice that I use the more generic 'reveal' as opposed to 'feed', because 'feed', at least to me, implies some methods of data discovery and not others. Also, I worry about what useful routes the aggregation framing might leave out.
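To illustrate one way an aggregator can lose information, here is the standard Condorcet-cycle example (my illustration, not anything from the post):

```python
# Three voters with rankings over options A, B, C that induce a cycle.
voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

# Pairwise majorities cycle: A beats B, B beats C, C beats A, so there
# is no coherent aggregate ranking for an "aggregator" to hand the AI.
cycle = (majority_prefers("A", "B"),
         majority_prefers("B", "C"),
         majority_prefers("C", "A"))  # (True, True, True)
```

Each individual ranking is perfectly consistent; it is the majority aggregation step that produces the contradiction.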

I hope this doesn't sound too stupid and semantic.

You also say, "This schema relies on a form of corrigibility." My first thought was actually that it implies human corrigibility, which I don't think is a settled question. Our difficulty having political preferences that are not self-contradictory, preferences that don't poll one way and then vote another, makes me wonder whether the problems of thinking about preferences over all worlds, and about preference aggregation, are part of the difficulty of our own corrigibility. Combine that with the incorrigibility of the AI and you have a difficult solution space.

On emergent properties, I see no way to escape the "First we shape our spaces, then our spaces shape us" conundrum. Any capacity that is significantly useful will change its users from their previous set of preferences. Just as certain AI research might be distorted by social reward, so too can AI capabilities be a distorting reward. That's not necessarily bad, but it is an unpredictable dynamic, since value drift when dealing with previously unknown capabilities seems hard to stop (especially since intuitions will be weak to nonexistent).

Hey y'all who RSVP'd: if you are interested, send me a DM with your email and I will add you to our email list.

We are doing two events before the ACX Meetup Everywhere in October. You can learn about them by getting on the email list. This Thursday, September 22, we are doing an online event in Gathertown at 8pm Eastern / 5pm Pacific. Feel free to send this out to people who would like this sort of thing.

Loose Discussion Topic:
What does it mean to improve your local community/environment? Is local improvement possible? What would that mean? Should you care? Does local matter anymore? And how would one go about making such improvements possible? Come puzzle over these things and more with us Thursday night.

We tend to discuss the topic for a while and then move to whatever suits people's fancy. Also, Gathertown allows splitting off to talk in different groups, which is helpful if you want to talk Stable Diffusion while another group wants to talk development economics, and another science politics.
