Epistemic Status: I haven't actually used this through to completion with anyone. But, it seems like a tool that I expect to be useful, and it only really works if multiple people know about it.


In this post, I want to make you aware of a few things:

Iterated kickstarters: Kickstarters where the payment doesn't all go in at once – instead, people pay in incrementally, after seeing partial progress on the goal. (Or, if you don't actually have a government-backed assurance contract, people pay in incrementally as they see other people pay in incrementally, so the system doesn't require as much trust to bootstrap.)

Trust kickstarters: Kickstarters that are not about money, and are instead about "do we have the mutual trust, goodwill, and respect necessary to pull a project or relationship off?" I might be wary of investing in my relationship with you if I don't think you're going to invest in me.

Iterated trust kickstarters: Combining those two concepts – incrementally ratcheting up trust over time. I think this is something people intuitively do sometimes, but it's nice to be able to do it intentionally, and to communicate crisply about it.

...

Alternate phrasing of a key insight: one solution to a prisoner's dilemma is to break it into multiple stages, so you have an iterated prisoner's dilemma, which has a different incentive structure.

Iterated Kickstarters

In The Strategy of Conflict, Thomas Schelling (of Schelling Point fame) poses a problem: Say you have a one-shot coordination game. If Alice puts in a million dollars, and her business partner Bob puts in a million dollars, they both get their money back, plus $500,000 extra. But if only one of them puts in a million, the other can abscond with it.
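
(To make the incentives concrete, here's the payoff structure as a quick sketch – my toy encoding, reading "abscond" as walking off with whatever the other party put in:)

```python
# One-shot trust game, payoffs in millions of dollars. Toy numbers, one plausible
# reading of the setup: "defect" = abscond with the other party's stake.
PAYOFFS = {
    # (alice_action, bob_action): (alice_payoff, bob_payoff)
    ("cooperate", "cooperate"): (0.5, 0.5),   # both invest $1M, both get it back + $500k
    ("cooperate", "defect"):    (-1.0, 1.0),  # Bob absconds with Alice's million
    ("defect",    "cooperate"): (1.0, -1.0),  # Alice absconds with Bob's million
    ("defect",    "defect"):    (0.0, 0.0),   # neither invests; nothing happens
}

# Whichever move Alice makes, Bob comes out ahead by defecting (and vice versa),
# so without enforcement the one-shot game collapses into mutual defection.
for alice in ("cooperate", "defect"):
    gain = PAYOFFS[(alice, "defect")][1] - PAYOFFS[(alice, "cooperate")][1]
    print(f"if Alice plays {alice}, Bob gains {gain:+.1f}M by defecting")
```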

A million dollars is a lot of money for most people. Jeez. 

What to do?

Well, hopefully you live in a society that has built well-enforced laws around assurance contracts (aka "kickstarters"). You put in a million. If your partner backs out, the government punishes them, and/or forces them to return the money.

But what if there isn't a government? What if we live in the Before Times, and we're two rival clans who for some reason have a temporary incentive to work together (but still an incentive to defect)? What if we live in the present day, but Alice and Bob are two entirely different countries with no shared tradition of cooperation?

There are a few ways to solve this. But one way is to split the one-shot dilemma into an iterated game. Instead of putting in a million dollars, you each put in $10. If you both do that, then you each put in another $10, and another. Now that the game is iterated, the payoff structure changes from prisoner's dilemma to stag hunt. Sure, at any given time you could defect, but you'd be getting a measly $10, and giving up on a massive half-million-dollar potential payoff.
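
(A minimal sketch of that payoff comparison, using the post's toy numbers; the decision rule and the p_complete parameter are my illustrative additions:)

```python
# Iterated version: each round, both parties put in another $10, so a defector
# can only ever grab the current $10 increment, while a cooperator stays in
# line for the $500k completion bonus.
INCREMENT = 10      # the most a defector can walk away with in any round
BONUS = 500_000     # extra payoff (beyond your returned stake) for completing

def worth_defecting(p_complete: float) -> bool:
    """Hypothetical decision rule: defect only if grabbing $10 now beats the
    expected value of the completion bonus, where p_complete is your credence
    that the partnership actually makes it to the end."""
    return INCREMENT > p_complete * BONUS

print(worth_defecting(1.0))    # False: $10 now vs. $500,000 at the finish line
print(worth_defecting(1e-5))   # True: only worth it if completion looks hopeless
```

(That last line is the stag-hunt flavor: defecting only looks attractive once you've mostly given up on the other party cooperating.)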

You see small versions of this fairly commonly on craigslist or in other low-trust contract work. "Pay me half the money up front, and then half upon completion." 

This still sometimes results in people running off with the first half of the money. I'm assuming people do "half and half" instead of splitting it into even smaller chunks because the transaction costs would get too high. But for many contractors, there are benefits to following through (instead of taking the money and running), because there's still a broader iterated game of reputation: getting repeat clients, who eventually introduce you to other clients, etc.

(You might say that the common employment model of "I do a week of work, and then you pay me for a week of work, over and over again" is a type of iterated kickstarter).

If you're two rival clans of outlaws trying to bootstrap trust, it's potentially fruitful to establish a tradition of cooperation, where the long-term payoff is better than any individual chance to defect.

Trust Kickstarters

Meanwhile: sometimes the thing that needs kickstarting is not money, but trust and goodwill.

Goodwill Kickstarters

I've seen a few situations where multiple parties feel aggrieved, exhausted, and don't want to continue a relationship anymore. This could happen to friends, lovers, coworkers, or project-cofounders.

They each feel like the other person was more at fault. They each feel taken advantage of, and like it'd make them a doormat if they went and extended an olive branch when the other guy hasn't even said "sorry" yet. 

This might come from a pure escalation spiral: Alice is accidentally a bit of a jerk to Bob on Monday. Then Bob feels annoyed and acts snippy at Alice on Tuesday. Then on Wednesday Alice is like "jeez Bob, what's your problem?" and is actively annoying as retribution. And by the end of the month they're each kinda actively hostile and don't want to be friends anymore.

Sometimes, the problem stems from cultural mismatches. Carl keeps being late to meetings with Dwight. For Dwight, "not respecting my time" is a serious offense that annoys him a lot. For Carl, trying to squeeze in a friend hangout when you barely have time is a sign of love (and meanwhile, he doesn't care when people are late). At first, they don't know about each other's different cultural assumptions, and they just accidentally 'betray' each other. Then they start getting persistently mad about the conflict and accrue resentment.

Their mutual friend Charlie comes by and sees that Alice and Bob are in conflict, but the conflict is all downstream of a misunderstanding, or a minor mishap that really didn't need to be a big deal.

"Can't you just both apologize and move on?" asks Charlie.

But by now, after months of escalation, Alice and Bob have both done some things that were legitimately hurtful to each other, or have mild PTSD-like symptoms around each other. 

They'd be willing to sit down, apologize, and work through their problems, if the other one apologized first. When they imagine apologizing first, they feel scared and vulnerable.

I'll be honest, I feel somewhat confused about how best to relate to this sort of situation. I'm currently relating to it through the lens of game theory. I can imagine the best advice for most people is to not overthink it, don't stress about game theory. Maybe you should just let your hearts and bodies talk to each other, elephant to elephant.

But... also, it seems like the game theory is just really straightforward here. A "goodwill kickstarter" really should Just Work in these circumstances. If it's true that "I would apologize to you if you apologized to me", and vice versa, holy shit, why are you two still fighting? 

Just, agree that you will both apologize conditional on the other person apologizing, and that you would both be willing to re-adopt a friendship relational stance conditional on the other person doing that.

And then, do that. 

Competence Kickstarters

Alternately, you might need to kickstart "trust in competence."

Say that Joe keeps screwing up at work – he's late, he's dropping the ball on projects, he's making various minor mistakes, he's communicating poorly. And his boss Henry has started getting angry about it, nagging Joe constantly, pressuring Joe to stay late to finish his work, constantly micromanaging him.

I can imagine some stories here where Joe was "originally" the one at fault (he was just bad at his job for some preventable reason one week, and then Henry started getting mad). I can also imagine stories here where the problems stemmed originally from Henry's bad management (maybe Henry was taking some unrelated anger out on Joe, and then Joe started caring less about his job).

Either way, by now they can't stand each other. Joe feels anxious heading into work each day. Henry feels like talking to Joe isn't worth it.

They could sit down, earnestly talk through the situation, take stock of how to improve it. But they don't feel like they can have that conversation, for two reasons.

One reason is that there isn't enough goodwill. The situation has escalated and both are pissed at each other.

Another reason, though, is that they don't trust each other's competence.

Manager Henry doesn't trust that Joe can actually reliably get his work done.

Employee Joe doesn't believe that Henry can give Joe more autonomy, talk to him with respect, etc.

In some companies and some situations, by this point it's already too late. It's pretty overdetermined that Henry fires Joe. But that's not always the right call. Maybe Henry and Joe have worked together long enough to remember that they used to be able to work well together. It seems like it should be possible to repair the working relationship. Meanwhile, Joe has a bunch of talents that are hard to replace – he built many pieces of the company infrastructure, and training a new person to replace him would be costly. And there are a bunch of nice things about the company they work at that make Joe prefer not to have to quit and find a better job elsewhere.

To repair the relationship, Henry needs to believe that Joe can start getting work done reliably. Joe needs to believe that Henry can start treating him with respect, without shouting angrily or micromanaging.

This only works if they both can, in fact, credibly signal that they will do these things. It works if the missing ingredient is "just try harder." Maybe the only reason Joe isn't working reliably is that he no longer believes it's worth it, and the only reason Henry is being an annoying manager is that he felt like he needed to get Joe to get his stuff done on time.

In that case, it's reasonably straightforward to say: "I would do my job if you did yours", coupled with the relational-stance-change of "I would become genuinely excited to be your employee if you became genuinely excited about being my boss".

Sometimes, this won't work. The kickstarter can't trigger because Henry doesn't, in fact, trust Joe to do the thing, even if Joe is trying hard.

But, you can still clearly lay out the terms of the kickstarter. "Joe, here's what I need from you. If you can't do that, maybe I need to fire you. Maybe you need to go on a sabbatical and see if you can get your shit together." Maybe you can explore other possible configurations. Maybe the reason Joe isn't getting his work done is because of a problem at home, and he needs to take a couple weeks off to fix his marriage or something, but would be able to come back and be a valuable team member afterwards.

I think having the terms of the kickstarter clearly laid out is helpful for thinking about the problem, without having to commit to anything.

Why think about this in terms of a "kickstarter", rather than just "a deal"? What feels special to me about relationship kickstarters is that relationships (and perhaps other projects) benefit from investment and momentum. If your stance is "I'm ready to jump in and execute this plan if only other people were on board and able to fulfill their end", then you're better positioned to get moving quickly as soon as the others are on board.

The nice thing about the kickstarter frame, IMO, is that I can take a relationship that is fairly toxic, and set my internal stance to be ready to fix the relationship, without opening myself up to exploitation if the other person isn't going to do the things I think are necessary on their end.

Iterated Trust Kickstarters

And then, sometimes, a one-shot kickstarter isn't enough.

Henry and Joe

In the case of Henry and Joe: maybe "just try harder" isn't good enough. Joe has some great skills, but is genuinely bad at managing his time. Henry is good at the big picture of planning a project, but finds himself bad at managing his emotions, in a way that makes him bad at actually managing people.

It might be that even if they both really wanted things to work out, and were going to invest fully in repairing their working relationship... the next week, Joe might miss a deadline, and Henry would snippily yell at him in a way that was unhelpful. They both have behavioral patterns that will not change overnight. 

In that case, you might want to combine "trust kickstarter" and "iterated kickstarter."

Here, Joe and Henry both acknowledge that they're expecting this to be a multi-week (or month) project. The plan needs to include some slack to handle the fact that they might fuck up a bit, and a sense of what's supposed to happen when one of them screws up. It also needs a mechanism for saying "you know what, this isn't working."

"Iterated Trust Kickstarter" means, "I'm not going to fully start trusting you because you say you're going to try harder and trust me in turn. But, I will trust you a little bit, and give it some chance to work out, and then trust you a bit more, etc." And vice versa.

Rebuilding a Marriage

A major reason to want this is that sometimes, you feel like someone has legitimately hurt you. Imagine a married couple who had a decade or so of great marriage, but then ended up in a several-year spiral where they stopped making time for each other and got into lots of fights. Each of them has built up a story in their head where the other person is hurting them. Each of them has done some genuinely bad things (maybe cheated, maybe yelled a lot in a scary way).

Relationships that have gone sour can be really tricky. I've seen a few people end up in states where I think it's legitimately reasonable to be worried their partner is abusive, but also, it's legitimately reasonable to think that the bad behavioral patterns are an artifact of a particularly bad set of circumstances. If Alice and Bob were to work their way out of those circumstances, they could still rebuild something healthy and great.

In those cases, I think it's important for people to be able to invest a little back into the relationship – give a bit of trust, love, apology, etc., as a signal that they think the relationship is worth repairing. But, well, "once bitten, twice shy." If someone has hurt you, especially multiple times, it's sometimes really bad to leap directly into "fully trusting the other person."

I think the Iterated Trust Kickstarter concept is something a lot of people do organically without thinking about it in exactly these terms (i.e. lots of people damage a relationship and then slowly/carefully repair it).

I like having the concept as a handle because it helps me think about how exactly I'm relating to a person. It provides a concrete frame for avoiding the failure modes of "holding a relationship at a distance, such that you're basically sabotaging attempts to repair it", and "diving in so recklessly that you end up just getting hurt over and over."

The ITK frame helps me lean hard into repairing a relationship, in a way that feels safe. 

(disclaimer: I haven't directly used this framework through to completion, so I can't vouch for it working in practice. But this seems to mostly be a formalization of a thing I see people doing informally that works alright)

Concrete Plans

For an ITK to work out, I think there often needs to be a concrete, workable plan. It may not be enough to just start trusting each other and hope it works out.

If you don't trust each other's competence (either at "doing my day job", or "learning to speak each other's love languages"), then you might need to check:

  • Do Alice and Bob each understand what the other wants from them? If this is about emotional or communication skills they don't have, do they have a shared understanding of what skills they are trying to gain and why those skills will help?
  • Do they have an actual workable plan for gaining those skills?

Say that Bob has tried to get better at communication a few times, but he keeps running into the same ugh fields which prevent him from focusing on the problem. He and Alice might need to work out a plan together for navigating those ugh fields before Alice will feel safe investing more in the relationship.

And if Alice is already feeling burned, she might already be so estranged that she's not willing to help Bob come up with a plan to navigate the ugh fields. "Bob, my terms for the initial step in the kickstarter are that I need you to have already figured out how to navigate ugh fields on your own, before I'm willing to invest anything."

Unilaterally Offering Kickstarters

Part of why I'd like to have this concept in my local rationalist-cultural-circles is that I think it's pretty reasonable to extend a kickstarter offer unilaterally, if everyone involved is already familiar with the concept and you don't have to explain it.

(New coordinated schemes are costly to evaluate, so if your companion isn't already feeling excited about working with you on something, it may be asking too much of them to listen to you explain Iterated Trust Kickstarters in the same motion as asking them to consider "do you want to invest more in your relationship with me?")

But it feels like a useful tool to have in the water, available when people need it.

In many of the examples so far, Alice and Bob both want the relationship to succeed. But sometimes, there's a situation where Alice has totally given up on the relationship. Bob may also feel burned by Alice, but he at least feels there's some potential value on the table. And it'd be nice to easily be able to say:

"Alice, for what it's worth, I'd be willing to talk through the relationship, figure out what to do, and do it. I'm still mad, but I'd join the Iterated Kickstarter here." If done right, this doesn't have to cost Bob anything other than the time spent saying the sentence, and Alice the time spent listening to it. If Alice isn't interested, that can be the end of that. 

But sometimes, knowing that someone else would put in effort if you also would, is helpful for rekindling things.

Comments

I think an important obstacle to "I'll apologize if they'll apologize" situations is that people often have very specific needs for the traits of an apology they're receiving, doing it correctly without instructions is a very important signal of being on the same page about what went wrong, and incorrect apologies can be downright insulting (such as "I'm sorry you feel that way", a classic, or, "I'm sorry about X" "this whole time you thought I was mad about X???  I don't give a crap about X!")  The existence of a hypothetical apology doesn't serve the same purposes as a fully featured one.

(this comment led me through a chain of thoughts culminating in "reciprocity.io, except instead of 'I'd like to date or something' you check boxes for 'I'm secretly mad but want you to apologize first.'")

Yeah.

I didn't get into some of the messy implementation details here because it was hard to come up with good-but-hypothetical examples. But I do think it's pretty key that often the steps along an iterated trust kickstarter are pretty oddly specific, and yeah, it's often important to do them without the other person spelling out the particular thing they want.

A sort of generic version of a message I might send, if I were in some situations like this is "hey, I'm pretty mad. I see you are pretty mad. I'd be willing to invest effort trying to empathize with you and figure out where you're coming from if you were willing to invest effort trying to empathize with me and figure out where I'm coming from."

(where, something is lost here from Bob not figuring out on his own what exactly Alice is mad about and revealing himself to already be on the same page. But I'm assuming that by the time you get to considering an ITK-type solution, it's already revealed that you're not in the nicest of worlds where that was an option)

((also: an okay outcome is that Alice wanted to check if Bob was on the same page, and then if Bob reveals himself to not be on the same page, the ITK truncates, hopefully as gracefully as possible))

A thing I am legit confused about:

What exactly is up with it feeling so awful to be the person to apologize first, when you think the other person has aggrieved you more (but you realize you also aggrieved them)?

I do get some naive implementation of "I don't want to be a doormat", or "I want to make sure that the Thing I'm Protecting gets protected." But... even so, in most situations, the amount of resistance people have about this often feels out of proportion.

My hypothesis: apologising is low status.

Possibly this is no longer the case (apologising first is often seen as a sign of maturity) but I can certainly see this being the case in the ancestral environment.

This would match my experience that apologising feels awful even when it is entirely my fault.

I guess if the other person has already apologised, then me also apologising just puts our status back roughly where we started, which is why going second feels much easier.

In the case where the aggrieving is very proportionate, it is somewhat easy.

In the case where it is not proportionate, you would probably be willing to do it for the symmetric part and not for the "remainder part". If the remainder part is greater than the symmetric part, you are more angry than peaceable.

One can take the most inconvenient case: you step on the toes of an unconvicted murderer. By one logic, your faults, even minor ones, don't have anything to do with the faults of others, so you should apologise for stepping on people's toes. On the other hand, "there is kicking to be done", and apologising for stepping on toes and then chopping their head off would seem like 1 step forward and 100 steps back. Why not just go 98 steps backwards?

I think it comes from a feeling that proportions of blame need to add up to one, and that by apologizing first you're putting more of the blame on your actions. You often can't say "I apologize for the 25% of this mess I'm responsible for."

I think the general mindset of apportioning blame (as well as looking for a single blame-target) is a dangerous one. There's a whole world of things that contribute to every conflict outside of the two people having it.

It occurs to me that there's a class of person for whom this entire line of thinking is a subtle trap, and the wrong approach.

I included one paragraph in the OP:

I'm currently relating to it through the lens of game theory. I can imagine the best advice for most people is to not overthink it, don't stress about game theory. Maybe you should just let your hearts and bodies talk to each other, elephant to elephant.

I think, perhaps for a large fraction of LessWrongers, this class of advice is more useful than all the rest of the coordination theory here. (unfortunately, as an overly cerebral rationalist I am not very good at giving the elephant advice)

I think the coordination theory here is, like, the true mechanics that will be going on under the hood in a healthy relationship (interested in arguments to the contrary), but much of the time it's the wrong thing to focus on. Like, knowing how all your muscles fit together might hypothetically be relevant to dancing, but it is not the thing to preoccupy yourself with while actually dancing.

I think where it is helpful is in overcoming intimidation. For example, one of the best behavior changes I've made this year is in reaching out to strangers, both through the LW/EA community, and in the grad schools I've been applying to. I've gotten more value out of those few hours of conversation than almost anything else I've learned in the last year.

But it took a certain level of cerebral understanding in advance that this isn't an imposition, but rather a way for both myself and the other person to build trust with each other. It's a way to start iterating on that much earlier than if we had to wait to meet in person. It leads to unpredictable but strong opportunities later. Zoom is not built into the elephant-mind, so having an intellectual understanding of how to apply these principles to our weird digital social world can actually be really helpful.

This is what I wanted Jester's Court to be. An iterated kickstarter for trust and competence, of self and of others.

Didn't have words for it.

I am wondering about a failure mode in applying this advice (and other advice of this type): while trying to apply it, going to the meta-level and explaining what the goal is. For example saying: "Let's try trust-kickstarters" and linking to this post. Using the meta-level may disrupt the object-level process and should probably be avoided.

Epistemic status: I have not read about this communication norm on LW or elsewhere, but it was brought to my attention a while ago and seems to be pretty standard in practice. It doesn't seem to be related to high-context vs. low-context communication (see e.g. Decoupling vs Contextualising Norms or Culture Map).

For example saying: "Let's try trust-kickstarters" and linking to this post. Using the meta-level may disrupt the object-level process and should probably be avoided.

That's particularly why I'm hoping to get this into the groundwater so it's seen as something that you might just do without it being a big deal that distracts everyone. (similarly, I think having the concept of "Guess/Ask/Tell culture" or "wait vs interrupt culture" as easy handles has been useful to me for navigating tricky conversations)

But, also: in the contexts where I have used this with some success, both parties were already invested enough, and the conversation was tricky enough, that it was worth stopping to say "okay, it looks like by default, this conversation won't work, so we need to do something weird to make it work, and I think 'Iterated Trust Kickstarter' is a frame that has a chance of actually working."

(part of it is that I think Iterated Trust Kickstarters are a thing people already do intuitively, just putting a name to it, and the point of the name is more as a pointer to a concept people already had vaguely defined, than illustrating a whole new thing)

That's particularly why I'm hoping to get this into the groundwater so it's seen as something that you might just do without it being a big deal that distracts everyone. 

Seems like you agree and have an intuition when and when not to use meta. Do you know of any systematic treatment of this boundary?

Well actually I just use meta All The Goddamn Time and sometimes it’s advisable and sometimes it’s not.

But, I guess my answer is: going meta tends to be seen as more annoying when your partner isn’t invested in the conversation. And ITK is disproportionately useful when your partner isn’t invested in the conversation. So it benefits disproportionately from already being in the water.

I also use meta frequently. I think it is part of some sub-cultures, LW in particular (meta is even a signal in Automoderation). I just want to understand when it is going to work and when not.

I like the suggestion that meta and ITK are some kind of opposites.

Really nice post! I honestly need to think about it for a while before jumping in to what I like about it, but it reflects thoughts I've been stewing on RE: signaling and personal development. I wanted to point out one small issue while it's on my mind (just a minor nitpick).

I found this example needed a bit of a touch-up:

If Alice put in a million dollars, and her business partner Bob puts in a million dollars, they both get 10 million dollars. But if only one of you puts in a million, the other can abscond with it.

As stated, if Alice puts in a million dollars, Bob can either abscond with it ($1 million in profit) or put in his million as well ($10 million in profit). This alternative also doesn't quite get the same point across:

If Alice puts in a million dollars, and her business partner Bob puts in a million dollars, they both get their money back, plus $500,000 extra. But if only one of them puts in a million, the other can abscond with it.

Because now we've shifted from a "hunting stag" problem that's not quite perfectly formulated to a prisoner's dilemma.

Nod. I was in the process of noticing that the math didn't check out here but got distracted and then YOLO posted it.

I think the implied situation I was half-remembering when I wrote it was something like a business deal, where you both have to invest, and also the payoff isn't guaranteed. I think either "stag hunt without guaranteed payoff" or "prisoner's dilemma" would be a reasonable example. Agreed that in the current one, defecting is just... stupid.

I updated the OP to use your suggestion, which seems like a nicer, simpler version.

Cheers! I think it's interesting to consider what makes this situation fit the stag hunt vs. PD framework.

Like, say the situation is:

Alice and Bob can each put in $1 million. If they both do, they get a 50% chance of getting their money back, plus an extra $1 million (for a total of $2 million each), and a 50% chance of getting their money back with no bonus.

However, whoever puts their money in first is also at risk of the other person absconding with their $1 million, so that the second person has a 100% chance of getting $2 million total (instead of a 50% chance).

In this case, it is advantageous to Alice and Bob to have enough trust to make them willing to take the deal.
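
(Running the numbers on that, a quick expected-value sketch; the arithmetic here is mine:)

```python
# EV check for the scenario above, in $ millions, reading the numbers literally.
STAKE = 1.0

# Both cooperate: 50% chance of $2M back, 50% chance of just the stake back.
ev_cooperate = 0.5 * 2.0 + 0.5 * STAKE      # = 1.5, i.e. expected profit +$0.5M

# Second mover defects: keeps their own $1M and grabs the first mover's stake.
payoff_defect = STAKE + 1.0                 # = 2.0 guaranteed, i.e. sure profit +$1M

print(f"cooperate:          expected profit {ev_cooperate - STAKE:+.1f}M")
print(f"defect (2nd mover): guaranteed profit {payoff_defect - STAKE:+.1f}M")
# The sure +$1M beats the expected +$0.5M, so the second mover is tempted to
# defect -- which is exactly why the deal needs trust (or iteration) to happen.
```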

A Request for Help

I find this idea of an Iterated Kickstarter particularly useful because of my fear of being taken advantage of.

This fear doesn't come out of nowhere; some work of mine from a student portfolio I created about 7 years ago as a graphic design student was basically stolen, and I wasn't credited or paid for the work I had done. I sent it out as requested, with my resume, while applying for design positions. Just last year I was watching TV and saw a slightly changed version of the spec work I had done in my portfolio. Same type of company, same domain, but a national company used it without giving me any compensation. I consulted an IP lawyer who told me this kind of thing happens all the time, especially to students and in the world of graphic design. So I'm sensitive to the possibilities of sharing my work with untrustworthy people, but I would like to maximize the potential of sharing it with people I trust.

I'm looking to connect with people who can offer suggestions, and/or who I can bounce ideas off of, and/or who I can potentially collaborate with, because I'm also anxious about sharing my work in public before it seems reasonably fit for 'publication.' (I do see posting on forums as a sort of publication; maybe not as formally recognized as being published in a journal, but it is publicity nonetheless.)

Seeing as how I'm not currently allied with an organization I can rely on for support, I'm wondering:

"How do I build trust within the LW community so that I could share and develop some of my ideas without the unreasonable fear of other people using my work in a meaningful way without sharing credit with me?" 

I'm trying to move from concept to prototype

A lot of my work is pretty conceptual at this point, and in order to move it forward I think it needs statistical and/or mathematical analysis. I've made some of my best attempts at this process, which has gotten me some promising results, but my 'para-mathematical' analysis (my off-the-top-of-my-head term for the creative use of formulaic principles to find meaning, by people with little to no formal scientific training, that's sort of 'adjacent to math') could benefit from the input of people who do have those scientific and mathematical skills.

I look at my work as art at this point, which I would love to share as art, but it also has the potential to be more than a pretty, thought-provoking image. It deals with very real issues in a way which creates, in me at least, the motivation to pursue the ideas involved with scientific rigor.

Some of my ideas involve the need for datasets that don't yet exist, and the consideration of the possibility - as well as the utility and meaning - of creating them; what would it mean for modern society if some of these datasets did exist, how would it change the world, and is it worth devoting the time, energy and resources to gather and analyze the data? Discussing it is one thing, but actually working at creating the datasets would be something else.

Too often I think that data scientists, ML and AI researchers, and statisticians settle for working with the datasets that exist, which confine their work to predetermined areas and domains of interest. In essence I think this forces them into filter bubbles, which dilutes the objectivity of the overall scientific understanding of society and culture, and sets us along a path which is potentially unhealthy for the planet and for the continued existence of all life on it. Obviously I think there are many ML and AI researchers who share these same concerns, as is evident in many of the posts on this forum.

With these ideas in mind, I'm trying to push my ideas into concepts which can be prototyped and tested.

I try to study what I refer to as the Unintended Consequences of human cultural and technological endeavors.

For instance, one of the major ideas I keep coming back to involves a common phrase applied to the information age: 

"Data is the new oil." 

Thinking this way provides a good analogy to help people understand how data can provide fuel for the engine of society, and for all the positive progress that can be made in this regard. But it also has serious negative connotations.

Since plastics are produced from oil, we could consider: what is the new "mass of plastic bags floating in the ocean"? Or the new "microplastics polluting our environment"? What is the new "choice at the grocery store between paper bags or plastic bags"? Or the new "city dump clogged with millions of tons of decades' worth of discarded plastic wrappers from food packaging, or the discarded electronics of a consumer culture"? Or, even more directly, what is the new "Exxon Valdez oil spill"?

Polluted Data and Data Pollution

Society is already dealing with many of these issues, as thinking of data as the new oil has many, many implications for many parts of society. One that should concern us all is the idea that even just one of the many recent data breaches is analogous to something like the Exxon Valdez oil spill. The uncontrolled, unintended leaking of important resources into an ecosystem is a huge cause for concern. If each data breach, individually, is analogous to a single massive oil spill, then when we consider how many data breaches there have been over the last 2 decades, that's a huge environmental disaster.

The environmental costs of massive oil spills last for decades if not centuries. In much the same way, I believe the societal costs of data breaches will potentially last at least as long, if not for the rest of the time humanity exists.

Pointing out these types of relationships of meaning, analyzing and assessing how they matter, and conceptualizing designs for their solutions, is important, and I believe that I have some unique insight into how ML and AI could be used.

Recognizing my Limits

I have concepts and ideas, but I don't have the skills to realize a prototype. This is why I want input and collaboration. Hopefully I can use my 'artistic' sense of intuition to help the scientific community get outside of a filter bubble I believe they are partly inside of, and I can finally put to use the research and critical thinking skills I've worked all my life to develop, in a way that creates a sense of community for me, and maybe even make a living doing it at some point.

I know that some of the ideas I've come up with more than likely already exist in the pantheon of the sciences, but I'm pretty sure some of them are new. It would be nice to work out which is which, and to be able to then focus on the ideas which are new, instead of continually reinventing the wheel.

In my posts and comments I've tried to leave indications of the types of work and thinking I've done, without providing full disclosure. I don't want to give it all away without getting something worthwhile in return.

So I'm looking to collaborate with people on something, but I don't know exactly what yet. In the long run I think I would like to be the intermediary between tool makers and tool users, helping both to create the tool and to use it for further research if things turn out well. But how I get there from here is the challenge.

If I were able to run a non-profit, it would consist of statisticians and programmers working to help analyze specific parts of the culture which I believe are under-served by the technology sector. These under-represented facets of society skew the data and statistics (leave impurities in the oil, if you will) that the government and corporate worlds rely on for making decisions which affect nearly every aspect of human life and the natural world. But without funding, I can't hire people to help, so I'm trying to work around the edges of the problem.

Pushing Limits

Some of my beliefs could be considered controversial, and might result in the formation of unpopular opinions. But the thing about culture is, many opinions that are unpopular at the time become common sense decades later.

The War on Drugs in the US is a prime example of the costs of making miscalculations in legislation that aren't backed up by science. The societal costs of this once-popular War – in terms of money, resources, lives, and overall human suffering – are staggering in light of the recent seeming 180 on the issue of marijuana in the states. Legalization of marijuana at the state level in many states seems 'common sense' now, but in previous decades it was 'common sense' to hide your use because of how severely drug-related crimes were being prosecuted. Mass incarceration and the Opioid Crisis are only 2 examples of the suffering caused by faulty legislation based on junk science. Could we have avoided this disastrous War with better, more rational thinking a couple decades back?

I'm continuing to absorb the posts I find interesting and accessible, and going back to the Sequences a little at a time as I try to examine what I read against my own thinking. But ultimately I hope to get help moving forward with my work in a more productive manner, as I do view my work as being pretty rational.