Andrew Currall


Our daughter went through a fairly long period of calling cats "dog", and would aggressively correct us if we tried to correct her. Possibly something of the same thing. 

R0 is not remotely immutable. It is a function of people's behaviour and physical infrastructure as well as physical properties of the virus (which are themselves likely changing, especially early in a pandemic, as the virus evolves). 

It is not affected by levels of prior exposure, though, because R0 is defined as the expected number of secondary infections per case in a fully susceptible population- that is, in the absence of any prior exposure or immunity. 
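To make the distinction concrete, here is a minimal sketch using the standard SIR convention (the formula R_eff = R0 × s is textbook epidemiology, not something stated in the comment):

```python
def effective_r(r0, susceptible_fraction):
    """Effective reproduction number: R_eff = R0 * s.

    R0 is fixed by definition at full susceptibility (s = 1); it is
    the *effective* number R_eff that falls as exposure accumulates.
    """
    return r0 * susceptible_fraction

# With R0 = 3, infecting 40% of the population drops R_eff to 1.8,
# while R0 itself is unchanged:
print(round(effective_r(3.0, 0.6), 2))  # 1.8
```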

Nice write-up.

I'd also be interested in discussion of treatments that are only meant to relieve symptoms rather than reduce the risk of infection, for example expectorants (e.g. guaifenesin), antihistamines, and decongestants (e.g. phenylephrine). 

"After all: the purpose of copyright law is, to a very large extent, to preserve the livelihood of intellectual property creators, who would otherwise have limited ability to profit from their own works due to the ease of reproducing it once made. Modern AI systems are threatening this, whether or not they technically violate copyright."


Yes, this is 100% backwards. The purpose of copyright law is to incentivise the production of art so that consumers of art can benefit from it. It incidentally protects artists' livelihoods, but that is absolutely not its main purpose.

We only want to protect the livelihood of artists because humans enjoy consuming art- the consumption is the ultimate point. We don't have laws protecting the livelihood of people who throw porridge at brick walls because we don't value that activity. We also don't have laws protecting the livelihood of people who read novels, because while lots of people enjoy doing that, other people don't value the activity. 

If we can get art produced without humans involved, that is 100% a win for society. In the short term it puts a few people out of work, which is unfortunate, but short-lived. The fact that AI art is vastly more efficiently produced than human art is a good thing that we should be embracing. 

I think this doesn't work even with time-ordering. A spam bot will probably get to the post first in any case. A bot that simply upvotes everything will gain a huge amount of trust. Even a bot paid only to upvote specific posts will still gain trust if some of those posts are actually good, which it can "use" to gain credibility in its upvotes for the rest of the posts (which may not be good). 

I'm OK with 3 out of 4, but I have serious issues with this:

We value triumphing over stagnation to achieve vitality.  

I don't think this is a universal value at all. This looks like valuing change as a fundamental good, and I certainly don't do this- quite the reverse. All other things being equal I'd much rather things stayed the same. Obviously I'd like bad things to change to good things, but that seems to be covered by the other three virtues. Stagnation, all other things being equal, is a good thing.

If I had two buttons, to give me $1000 of consumption today, or $1001 of consumption in thirty years (inflation adjusted of course), I would press the second button. 

This sounds nuts to me. Firstly, what about risk? You might be dead in 30 years. We might have moved to a different economy where money is worthless. You might personally not value money (or not value the kind of things you can get with money) as much. Admittedly there's also some upside risk, but it's clearly lower than the downside. 

We're ignoring investment possibilities, of course. But even then, if you have $1000 now, you can use it to buy something that would last more than 30 years and benefit you over that time. 
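A quick sketch of the arithmetic behind the two-button example: preferring $1001 in thirty years over $1000 today implies an annual discount rate of at most (1001/1000)^(1/30) − 1, i.e. essentially zero, which is what makes the stated preference so striking:

```python
# Implied annual discount rate at which the two buttons are equal in value.
# Anything above this (risk, uncertainty, shifting preferences) flips the
# choice to taking the $1000 today.
implied_rate = (1001 / 1000) ** (1 / 30) - 1
print(f"{implied_rate:.4%}")  # roughly 0.0033% per year
```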

Re: reciprocal altruism. Given the vast swathe of human prehistory, virtually anything not absurdly complex will be "tried" occasionally. It only takes a small number of people whose brains happen to be wired to "tit-for-tat" to get started, and if they out-compete people who don't cooperate (or people who help everyone regardless of behaviour towards them), the wiring will quickly become universal. 
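The "small cluster of tit-for-tat players out-competes defectors" claim can be sketched with a tiny iterated prisoner's dilemma (the payoff values are the standard Axelrod-tournament ones, not from the comment; the population of three reciprocators and one defector is an illustrative assumption):

```python
# Payoffs (my move, opponent's move) -> (my score, opponent's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    """Total scores for two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = "C"  # tit-for-tat opens by cooperating
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last
always_defect = lambda opponent_last: "D"

# Three tit-for-tat players plus one unconditional defector, round-robin.
players = [tit_for_tat, tit_for_tat, tit_for_tat, always_defect]
totals = [0] * len(players)
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        si, sj = play(players[i], players[j])
        totals[i] += si
        totals[j] += sj
print(totals)  # [69, 69, 69, 42]: each reciprocator out-scores the defector
```

Note the defector still wins each pairwise match against a single tit-for-tat player; it is the cooperation among the reciprocators that gives them the higher total, which is the point about the wiring spreading once a cluster exists.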

Humans do, as it happens, explicitly copy successful strategies on an individual level. Most animals don't though, and this has minimal relevance to human niceness, which is almost certainly largely evolutionary. 

I did not experience any changes like this at all when my daughter was born. When a child myself, I loved younger children, but as an adult I've not been very keen on young children, and I'm not particularly attached to my daughter either. 

Niceness in humans has three possible explanations:

  • Kin altruism (basically the explanation given above)- in the ancestral environment, humans were likely to be closely related to most of the people they interacted with, giving them a genetic "incentive" to be at least somewhat nice. This obviously doesn't help in getting a "nice" AGI- it won't share genetic material with us and won't share a gene-replication goal anyway.
  • Reciprocal altruism- humans are social creatures, tuned to detect cheating and ostracise non-nice people. This isn't totally irrelevant- there is a chance a somewhat dangerous AI may have use for humans in achieving its goals. But if the AI is worried that we might decide it's not nice and turn it off or not listen to it, then we didn't have that big a problem in the first place. We're worried about AGIs sufficiently powerful that they can trivially outwit or overpower humans, so I don't think this helps us much. 
  • Group selection. This is a bit controversial and probably the least important of the three. At any rate, it obviously doesn't help with an AGI.

So in conclusion, human niceness is no reason to expect an AGI to be nice, unfortunately. 
