# All of Cube's Comments + Replies

I'm definitely looking for a system where each agent can see the other, although just simulating the other doesn't seem robust enough. I don't understand all the terms here, but the gist of it looks as if there isn't a solution that everyone finds satisfactory? As in, there's no agent program that properly matches human intuition?

I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn't see that exactly. I've tried solving it myself, but I'm unsure of how to get past the recursive part.

It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.

2JoshuaZ9y
Essentially this is an attempt to get past the recursion. The key issue is that one can't say "X would cooperate iff (Y cooperates if X cooperates)" because one needs to talk about provability of cooperation.
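One way to get past the recursion without proof machinery is to compare source code syntactically, in the style of "clique" bots from the program-equilibrium literature. Here is a minimal toy sketch (all names are hypothetical, and agents are represented as plain strings rather than real quines):

```python
# Toy model: an agent is a policy mapping the opponent's source string
# to a move ("C" or "D"). CliqueBot cooperates only with exact copies of
# itself. This sidesteps the "X cooperates iff Y cooperates iff ..."
# regress: no simulation or proof search is needed, at the cost of
# recognizing nothing but literal clones.
CLIQUEBOT_SRC = "cooperate iff opponent source == CLIQUEBOT_SRC"

def cliquebot(opponent_src: str) -> str:
    return "C" if opponent_src == CLIQUEBOT_SRC else "D"

def defectbot(opponent_src: str) -> str:
    return "D"

print(cliquebot(CLIQUEBOT_SRC))        # C: recognizes its clone
print(cliquebot("defect always"))      # D: anything else gets defection
```

The limitation JoshuaZ points at is exactly why this is unsatisfying: to cooperate with agents that are merely *behaviorally* equivalent, rather than textually identical, one needs to reason about provability of cooperation instead of string equality.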

I'm looking for a mathematical model for the prisoners dilemma that results in cooperation. Anyone know where I can find it?

0satt9y
One example of a prisoner's dilemma resulting in cooperation is the infinitely/indefinitely repeating prisoner's dilemma (assuming the players don't discount the future too much). (The non-repeated, one-shot prisoner's dilemma never results in cooperation. As the game theorist Ken Binmore explains in several of his books, among them Natural Justice, defection strongly dominates cooperation in the one-shot PD and it inexorably follows that a rational player never cooperates.)
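The "don't discount the future too much" condition can be made precise. Here is a hedged sketch checking when the "grim trigger" strategy (cooperate until the opponent defects once, then defect forever) sustains cooperation, using the standard PD payoffs (all values are the textbook convention, not from this thread):

```python
# Standard payoffs: T = temptation, R = reward for mutual cooperation,
# P = punishment for mutual defection, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta: float) -> bool:
    """Is grim trigger an equilibrium at discount factor delta (0 < delta < 1)?"""
    # Value of cooperating forever: R / (1 - delta).
    coop_value = R / (1 - delta)
    # Value of defecting once, then being punished forever:
    deviate_value = T + delta * P / (1 - delta)
    return coop_value >= deviate_value

# Algebraically the threshold is delta >= (T - R) / (T - P) = 0.5 here.
print(cooperation_sustainable(0.9))  # True: patient players can cooperate
print(cooperation_sustainable(0.2))  # False: impatient players defect
```

So cooperation is sustainable exactly when players value the future at least half as much as the present, under these payoffs.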
7JoshuaZ9y
Can you be more precise? Always cooperating in the prisoner's dilemma is not going to be optimal. Are you thinking of something like where each side is allowed to simulate the other? In that case, see here.

A friend of mine has started going into REM in frequent 5-minute cycles during the day, in order to boost his learning potential. He developed this ability via multiple acid trips. Is that safe? It seems like there should be some sort of disadvantage to this system, but so far he seems fine.

6skeptical_lurker9y
How does LSD help you develop an ability to get to sleep faster? LSD makes one less sleepy, so this seems like an improbable ability to ascribe to it. But if it actually works, it's a really useful ability. You might want to try asking this question in a polyphasic sleeping community, BTW.

How does he know that he actually is in REM? How does he know it boosts his learning potential?

5Douglas_Knight9y
What is "this"? This ability? Does he also get a full night's sleep? Eliminating other stages of sleep is almost certainly bad, but supplementing with REM seems to me unlikely to be bad. People with narcolepsy basically only have REM sleep. Narcolepsy is very bad, but many people who eventually develop it seem to have had only REM sleep while they were still functional, with no ill effects. In particular, they greatly benefit from naps (both before and after developing full-blown narcolepsy).

I'm a computer expert but a brain newbie.

The typical CPU is built from n-input NOR, n-input NAND, and NOT gates. The NOT gate works like a 1-input NAND or a 1-input NOR (they're the same thing, electronically). Everything else, including AND and OR, is made from those three. The actual logic only requires NOT and one of {AND, OR, NAND, NOR}. Notice there are several minimal gate sets, and a larger set of gates actually used.
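The "several minimal sets" point can be sketched directly. Here NOT plus NOR generate OR and AND via De Morgan's laws; NOT plus NAND works the same way (this is an illustrative sketch, with gates modeled as functions on 0/1):

```python
# Two primitive gates: NOT and NOR.
def NOT(a): return 1 - a
def NOR(a, b): return 1 - (a | b)

# Derived gates:
def OR(a, b): return NOT(NOR(a, b))          # OR = NOT(NOR)
def AND(a, b): return NOR(NOT(a), NOT(b))    # De Morgan's law

# Verify against Python's bitwise operators over all four inputs.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
        assert AND(a, b) == (a & b)
print("OR and AND recovered from NOT and NOR")
```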

The brain (I'm theorizing now; I have no background in neural chemistry) has a similar set of basic gates that can be organized into a Turing machine, and the gate I described previously is one of them.

0V_V9y
No. You can represent logic gates using neural circuits, and use them to describe arbitrary finite-state automata that generalize into Turing-complete automata in the limit of infinite size (or by adding an infinite external memory), but that's not how the brain is organized, and it would be difficult to have any learning in a system constructed in this way.
4[anonymous]9y
We don't run on logic gates. We run on noisy differential equations.
0Lumifer9y
You might want to look into what's called ANN -- artificial neural networks.

I think I've figured out a basic neural gate. I will do my best to describe it.

4 nerves: A, B, X, Y. A has its tail connected to X. B has its tail connected to X and Y. If A fires, X fires. If B fires, X and Y fire. If A then B fires, X will fire, then Y will fire (X needs a small amount of time to reset, so B will only be able to activate Y). If B then A fires, X and Y will fire at the same time.
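The described circuit can be checked with a toy discrete-time simulation. All of the names and the one-step refractory rule below are modeling assumptions for illustration, not neuroscience:

```python
REFRACTORY = 1  # steps X needs before it can fire again (assumed)

def run(schedule):
    """schedule: dict mapping time step -> set of input nerves fired.
    Returns a list of (step, set of output nerves fired).
    Wiring: A -> X; B -> X and Y; X has a refractory period."""
    x_last_fired = -10**9  # X starts ready to fire
    events = []
    for t in sorted(schedule):
        inputs = schedule[t]
        fired = set()
        # X fires on input from A or B, unless still refractory.
        if ("A" in inputs or "B" in inputs) and t - x_last_fired > REFRACTORY:
            fired.add("X")
            x_last_fired = t
        # Y fires whenever B fires.
        if "B" in inputs:
            fired.add("Y")
        events.append((t, fired))
    return events

# "A then B": X fires at t=0; at t=1 X is refractory, so B only drives Y.
print(run({0: {"A"}, 1: {"B"}}))  # [(0, {'X'}), (1, {'Y'})]
# "B alone": X and Y fire together.
print(run({0: {"B"}}))            # [(0, {'X', 'Y'})]
```

This reproduces the two orderings described above: A-then-B produces X followed by Y, while B by itself drives X and Y simultaneously.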

This feels like it could be similar to an AND circuit. Just as modern electronics need AND, OR, and NOT, if I could find all the nerve gates I'd have all the parts needed to build a brain (or at least a network-based computer).

0philh9y
How familiar are you with this area? I think that this sort of thing is already well-studied, but I have only vague memories to go by. As an aside, you only need (AND and NOT) or (OR and NOT), not all three; and if you have NAND or NOR, either of those is sufficient by itself.
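philh's stronger claim, that NAND by itself suffices, follows because NOT(a) = NAND(a, a). A quick sketch (gates modeled as functions on 0/1 for illustration):

```python
# Single primitive gate:
def NAND(a, b): return 1 - (a & b)

# Everything else derived from NAND alone:
def NOT(a): return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b): return NAND(NOT(a), NOT(b))   # De Morgan again

assert [NOT(a) for a in (0, 1)] == [1, 0]
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
print("NOT, AND, OR all built from NAND alone")
```

The same construction works symmetrically for NOR, which is why NAND and NOR are each called functionally complete.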

I have not been able to get rid of internet addiction by blocking or slowing it. Conversely, I've had (less than ideal) success with oversaturation. I don't think it's a thing I'll get rid of soon; aimless browsing is too much of a quick fix. Lately I've been working on making productivity a quicker fix: getting a little excited every time I complete something small, doing a dance when it's something bigger, etc.

I've found that having a two computers, one for work and one for play, has helped immensely.

1CAE_Jones9y
I like this idea. It's difficult to implement; I have enough computers, but my attempt at enforcing their roles hasn't worked so well. I've had better success with weaker, outdated hardware: anything without wireless internet access, for starters. Unfortunately, the fact that it's weaker and outdated means it tends to break, and repairs become more difficult due to lack of official support. Then they sort of disappear whenever things get moved, due to being least used, and I'm back to having to put willpower against the most modern bells and whistles in my possession.

Generally speaking, the less powerful the internet capabilities, the better. Perhaps a good idea of the optimal amount of data to use would help pick a service plan that disincentivizes wasteful internet use? Or maybe even dialup, if one can get by without streaming video and high-speed downloads.

Another possibility is office space without internet access. Bonus points if there's a way to make getting there easier than leaving (without going overboard, of course). Or, a strictly monitored or even public internet connection for work, where anything that is not clearly work-related is visible (hopefully, to someone whose opinion/reaction would incentivize staying on task). If possible, not even having a personal internet connection, and using public locations (Starbucks? Libraries?) when internet is necessary might be another strategy. If work requires internet access, but not necessarily active, one could make lists of the things that need downloading, and the things that do not, and plan around internet availability (this worked pretty well for me in parts of high school, but your mileage may vary).

These solutions all have something in common: I can't really implement any of them right now, without doing some scary things on the other end of a maze constructed from Ugh Fields, anxiety, and less psychological obstacles. So my suggesting them is based on a tenuous analysis of past experience.

Does "most unexpected" differ from "least predictable" in any way? It seems like a random number generator would match any algorithm around 50% of the time, so making an algorithm less predictable than that is impossible, no?
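The 50% intuition is easy to check empirically: a uniform random guesser matches any fixed binary sequence on about half the bits, no matter how predictable that sequence is. A quick illustrative simulation (names and the particular test sequence are mine):

```python
import random

random.seed(0)  # deterministic run for reproducibility

def match_rate(sequence):
    """Fraction of positions where a uniform random guesser
    matches the given 0/1 sequence."""
    guesses = [random.randint(0, 1) for _ in sequence]
    hits = sum(g == s for g, s in zip(guesses, sequence))
    return hits / len(sequence)

# A maximally predictable sequence still gets matched ~50% of the time.
patterned = [0, 1] * 50_000
print(match_rate(patterned))  # close to 0.5
```

By symmetry, the same holds for the guesser *disagreeing* with the sequence, which is why no deterministic algorithm can be "less predictable" than coin flips against an adversary who guesses randomly.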

Destroying something that would be useful or even necessary in the future so that you can better get through, or perhaps survive, the present.

Going to the same college as your high school sweetheart for example. Perhaps it will work out and you won't need the map.

What kind of things override loss of life and can be widely agreed upon?

-1Eugine_Nier9y
In the self-driving car example, say "getting to your destination". Keep in mind that the mere act of the car getting out on the road increases the expected number of resulting deaths.
1Lumifer9y
Going to war, for example. Or consider involuntary organ harvesting.

Conventional morality would dictate that the car minimize global loss of life first, then permanent brain damage, then permanent body damage. I think in the future other algorithms will be illegal but extant.

However, the lives each car would have the most effect on would be those inside it. So in most situations all actions would be directed towards said persons.

0DanielLC9y
I disagree. The driver of a car is much less in danger than a pedestrian.
2Houshalter9y
The issue is that it could create bad incentives. E.g., motorcyclists not wearing helmets, and even acting recklessly around self-driving cars, knowing the cars will avoid them even at the cost of crashing. Or people stop buying safer cars because they are always chosen as "targets" by self-driving cars to crash into, making them statistically less safe. I don't think the concerns are large enough to worry about, but hypothetically it's an interesting dilemma.
1Lumifer9y
I don't know about that. "Conventional morality" is not a well-formed or a coherent system and there are a lot of situations where other factors would override minimizing loss of life.