Eli_3mo142

The answer seems to be yes. 

On the Manifund page, it says the following:

Virtual AISC - Budget version

- Software etc: $2K
- Organiser salaries, 2 ppl, 4 months: $56K
- Stipends for participants: $0
- Total: $58K

In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and spend no time on evaluation and improvement for future versions of the program.

Salaries are calculated based on $7K per person per month.

Based on the minimum funding threshold of $28K, that would seem to fund about 2 people for 2 months.
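As a quick sanity check on that arithmetic (the $7K per person per month rate comes from the quoted budget; the helper function and its name are mine, for illustration only):

```python
# Rate from the quoted Manifund budget: $7K per person per month.
RATE_PER_PERSON_MONTH = 7_000

def person_months(budget: int, rate: int = RATE_PER_PERSON_MONTH) -> float:
    """Return how many person-months a budget buys at the given rate."""
    return budget / rate

# Full budget version: $56K buys 8 person-months, i.e. 2 people for 4 months.
assert person_months(56_000) == 8
# Minimum threshold: $28K buys 4 person-months, i.e. 2 people for 2 months.
assert person_months(28_000) == 4
```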

Eli_1y20

In my country, the label says to take 1-2 paracetamol tablets, so that might be the cause of the confusion.

Eli_1y10

Thank you for your feedback! This is a mistake on my part. I will take the article down until I've looked into this and have updated my sources.

Edit: I have updated the article. It should be better now :)

Eli_1y30

After learning about the Portfolio Diet I have been doing the same! Whenever I'm cooking I tend to ask three questions:

  1. Can I add nuts or a nut paste to this dish? I love adding tahini!
  2. Can I add more fiber? Usually by replacing the carbohydrate source with black rice, quinoa, bulgur, or whole-wheat bread. And I always have oats for breakfast.
  3. Can I add some plant protein? Either by replacing something or by adding something extra. 

For me these questions work because I'm already eating plenty of fruits and vegetables. I haven't really added plant sterols to my diet yet, though.

Good luck to you!

Eli_1y10

I didn't know these numbers and I didn't know about the Taeuber paradox, but they definitely put Part 5 into perspective. 

I wonder whether early treatment should be considered a refinement. That is debatable, and I honestly don't know the answer. But it does put an upper bound on the benefits of starting treatment early, for which I'm grateful.

Eli_1y30

You make some good points, but thinking about the fact that researchers should correct for multiple-hypothesis testing always makes me a little sad—this almost never happens. Do you have an example where a study does this really nicely?

Also, do you have any input on the hypothesis that treating early is a worthwhile risk?

Eli_2y30

I wanted to make this comment for a while now, but I was worried that it wasn't worth posting because it assumes some familiarity with multi-agent systems (and it might sound completely nuts to many people). But, since your writing on this topic has influenced me more than anyone else, I'll give it a go. Curious what you think. 

I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off.

Although intuitively it might feel off (it does for me as well), in practice GPT-3 would just be another agent interacting with all the other agents via the global neural workspace (mediated by vision).

Everyone is very worried about transferring their thinking and talking to a piece of software, but for me this worry seems to come from a false sense of agency. It is true that I have no agency over GPT-3, but the same goes for my thoughts. All my thoughts simply appear in my mind, and I have no choice in which ones arise. From my perspective there is no difference, other than that I automatically identify with my thoughts as "me".

There might be a big difference in performance, though. I can systematically test and tune GPT-3. If I don't like how it performs, I shouldn't let it influence my colony of agents. But if I do like it, it would be the first time I have added an agent to my mind whose qualities I have chosen.

It is very easy to read about non-violent communication (or other topics, like biases) and think, "that is how I want to write and act," but in practice such change is hard. By introducing GPT-3 as another agent, tuned for this exact purpose, I might be able to make this change orders of magnitude easier.

Eli_2y10

I think that is a fair point, I honestly don't know. 

Intuitively, the translation would seem to help me more to become less reactive. I can think of two reasons:

  1. It would unload some of the cognitive effort when it is most difficult, making it easier to switch the focus to feelings and needs.  
  2. It would make the message different every time, attuned to the situation. I would worry that the pop-up would be automatically clicked away after a while, because it is always the same.

But having said that, it is a fair point and I would definitely be open to any solution that would achieve the same result. 

Eli_2y10

Can you explain why you think this would blunt feelings? My intention is for it to do the exact opposite.

Take the example you give: shielding a partner from having to read a "fuck you". In that case you would read the message containing the "fuck you" first, and GPT-3 might suggest that this person is feeling sad, vulnerable, and angry. From my perspective that is exactly the opposite of "blunting feelings": I would be pointed towards the inner life of the other person.
