
FactorialCode's Comments

FactorialCode's Shortform

Due to the coronavirus, masks and disinfectants are starting to run out in many locations. I'm still working on the mask situation, but it might be possible to make your own hand sanitizer by mixing isopropyl alcohol or ethanol with glycerol. The individual ingredients might be available even if hand sanitizer isn't. From what I gather, you want to aim for at least 90% alcohol. Higher is better.
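A quick sketch of the mixing arithmetic, since the final concentration is what matters. The numbers below are illustrative only, not a vetted recipe; check a trusted source (e.g. the WHO formulation) before relying on any mixture:

```python
# Volume fraction of alcohol in a finished mixture of alcohol + glycerol.
# Illustrative numbers only -- not a vetted recipe.

def final_alcohol_fraction(alcohol_volume, alcohol_purity, other_volume):
    """Fraction of pure alcohol by volume in the finished mixture."""
    return (alcohol_volume * alcohol_purity) / (alcohol_volume + other_volume)

# e.g. 950 mL of 99% isopropyl alcohol plus 50 mL of glycerol:
frac = final_alcohol_fraction(950, 0.99, 50)
print(f"{frac:.1%}")  # ~94%, above the 90% target mentioned above
```

The point of the calculation is that dilution eats into your margin quickly: starting from 70% alcohol, almost any added glycerol drops you below the target.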

Tessellating Hills: a toy model for demons in imperfect search

Hmm, the inherently 1-D nature of the visualization makes it difficult to check for selection effects, and I'm not convinced that's actually what's going on here. 1725 is special because the ridges of the splotch function are exactly orthogonal to x0, and the odds of that happening probably fall exponentially with dimensionality. Furthermore, with more dakka, one sees that the optimization rate drops dramatically after ~15,000 time steps, and may or may not do so again later. So I don't think this proves selection effects are in play. An alternative hypothesis is simply that the process gets snagged by the first non-orthogonal ridge it encounters, without any serious selection effects coming into play.
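A rough back-of-the-envelope for the "odds fall exponentially" claim. This is my own construction, assuming "exactly orthogonal" means the relevant integer wavenumber components vanish; with wavenumbers drawn uniformly from {-2,...,2}, each component is zero with probability 1/5, and requiring this across all DIMS*WSUM waves shrinks geometrically:

```python
# Rough estimate: with integer wavenumbers uniform on {-2,...,2}, a given
# component is 0 with probability 1/5. If exact orthogonality to x0 requires
# one component per wave to vanish, and the number of waves scales with
# dimensionality, the probability decays geometrically in DIMS.
WSUM = 5  # waves per splotch, as in the script below

for dims in (1, 2, 4, 8):
    n_waves = dims * WSUM
    p_all_orthogonal = (1 / 5) ** n_waves
    print(f"DIMS={dims}: P(all {n_waves} waves orthogonal) = {p_all_orthogonal:.3g}")
```

So even at DIMS=1 a fully orthogonal splotch is a roughly 1-in-3000 event, which is consistent with it taking a special seed to find one.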

Tessellating Hills: a toy model for demons in imperfect search

Now this is one of the more interesting things I've come across.

I fiddled around with the code a bit and was able to reproduce the phenomenon with DIMS = 1, making visualisation possible:


Here's the code I used to make the plot:

import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d

DIMS = 1           # number of dimensions that xn has
WSUM = 5           # number of waves added together to make a splotch
EPSILON = 0.10     # rate at which xn controls splotch strength
TRAIN_TIME = 5000  # number of iterations to train for
LEARN_RATE = 0.2   # learning rate
MESH_DENSITY = 100 # number of points to plot in 3d mesh (if applicable)


# knlist and k0list are integers, so the splotch functions are periodic
knlist = torch.randint(-2, 3, (DIMS, WSUM, DIMS)) # wavenumbers : list (controlling dim, wave id, k component)
k0list = torch.randint(-2, 3, (DIMS, WSUM))       # the x0 component of wavenumber : list (controlling dim, wave id)
slist = torch.randn((DIMS, WSUM))                # sin coefficients for a particular wave : list(controlling dim, wave id)
clist = torch.randn((DIMS, WSUM))                # cos coefficients for a particular wave : list (controlling dim, wave id)

# initialize x0, xn
x0 = torch.zeros(1, requires_grad=True)
xn = torch.zeros(DIMS, requires_grad=True)

# numpy arrays for plotting:
x0_hist = np.zeros((TRAIN_TIME,))
xn_hist = np.zeros((TRAIN_TIME, DIMS))
loss_hist = np.zeros(TRAIN_TIME,)

def model(xn, x0):
    wavesum = torch.sum(knlist*xn, dim=2) + k0list*x0
    splotch_n = torch.sum(
            (slist*torch.sin(wavesum)) + (clist*torch.cos(wavesum)),
            dim=1)
    foreground_loss = EPSILON * torch.sum(xn * splotch_n)
    return foreground_loss - x0

# train:
for t in range(TRAIN_TIME):

    loss = model(xn, x0)
    loss.backward()
    with torch.no_grad():
        # constant step size gradient descent, with some noise thrown in
        vlen = torch.sqrt(x0.grad*x0.grad + torch.sum(xn.grad*xn.grad))
        x0 -= LEARN_RATE*(x0.grad/vlen + torch.randn(1)/np.sqrt(1.+DIMS))
        xn -= LEARN_RATE*(xn.grad/vlen + torch.randn(DIMS)/np.sqrt(1.+DIMS))
        # zero the gradients so they don't accumulate across steps
        x0.grad.zero_()
        xn.grad.zero_()
    x0_hist[t] = x0.detach().numpy()
    xn_hist[t] = xn.detach().numpy()
    loss_hist[t] = loss.detach().numpy()

# plot the trajectory of each coordinate over training
# (this section was garbled in the original; reconstructed minimally)
plt.plot(x0_hist, label='x0')
for d in range(DIMS):
    plt.plot(xn_hist[:, d], label='x' + str(d + 1))
plt.xlabel('number of training steps')
plt.legend()
plt.show()

fig = plt.figure()
ax = plt.axes(projection='3d')

#plot loss landscape
if DIMS == 1:
    x0_range = np.linspace(np.min(x0_hist), np.max(x0_hist), MESH_DENSITY)
    xn_range = np.linspace(np.min(xn_hist), np.max(xn_hist), MESH_DENSITY)
    x, y = np.meshgrid(x0_range, xn_range)
    z = np.zeros((MESH_DENSITY, MESH_DENSITY))
    with torch.no_grad():
        # use fresh names so the trained x0/xn tensors aren't clobbered
        for i, x0v in enumerate(x0_range):
            for j, xnv in enumerate(xn_range):
                z[j, i] = model(torch.tensor(xnv), torch.tensor(x0v)).numpy()
    # (the original comment was truncated here; presumably it went on to
    # draw the loss surface and the descent trajectory, e.g.:)
    ax.plot_surface(x, y, z, cmap='viridis', alpha=0.7)
    ax.plot3D(x0_hist, xn_hist[:, 0], loss_hist, color='red')
    ax.set_xlabel('x0')
    ax.set_ylabel('x1')
    ax.set_zlabel('loss')
    plt.show()

Making Sense of Coronavirus Stats

The whole situation surrounding the coronavirus strikes me as a spectacular clusterfuck of global proportions.

I wouldn't put much confidence in any of those numbers. There's a whole bunch of factors that could skew an estimate of the mortality rate in either direction.

Off the top of my head, here's a few:

-The hospitals in Wuhan are completely overloaded, so:

    -the true mortality rate will be higher due to the lack of intensive care for people who would otherwise pull through, creating a gap between the mortality rate in Wuhan and elsewhere.

    -only the worst cases are going to be dealt with in hospitals, skewing the reported mortality rate towards the higher end.

    -Wuhan is taking extreme containment measures. For instance, [people are probably being welded into their rooms](https://www.the-sun.com/news/378365/coronavirus-patients-welded-into-homes-in-china-as-death-toll-spirals-to-813/). God knows what happens to them and how their infections turn out.

-The spread of the virus is rapid and exponential, so the mortality rate can only be lower-bounded by taking total dead / total cases.

-It's faster to die than to recover, so you get the opposite skew from looking at deaths / (deaths + recovered).

-China seems to use a different method for reporting causes of death, leading to underestimation of the mortality rate.

-China has really bad air pollution, and there's weak evidence that smokers might be more susceptible to this disease. Men also smoke far more than women in China, and it's being reported that men are disproportionately affected by the disease.

-The CCP is probably not being very transparent, or is outright lying about the situation.

    -For instance, as I understand it, WHO officials have still not been allowed into Wuhan.

    -This is probably also happening internally, such that even CCP officials couldn't get a good grasp of the situation if they wanted to.

    -There are rumors of crematoriums operating 24/7, indicating that the real death rate is far higher than the reported death rate.

-Some paper also indicated (very) weak evidence of discrepancies in susceptibility between Asians and other races, and at the moment I'm unaware of any confirmed deaths that aren't East Asian. Apparently 2-4 Iranians have died from the virus.

-I've also heard rumours of widespread online censorship, evidenced by misspellings of #coronavirus trending on Twitter while the proper spelling isn't, and by moderator and administrator actions on Reddit, further obfuscating the underlying situation.

-There are rumours of possible reinfection, and even that the second time around might be worse, along with heavy censorship/firing of the people starting these rumours in China.

-Many countries are only screening people who have had direct contact with people coming in from china, and there's evidence of many asymptomatic/weakly symptomatic carriers. Thus, external death rates will also be difficult to estimate, as many unusual pneumonia deaths will not be attributed to the virus and the total number infected will not be known either.

-There's also evidence that the incubation period might in some cases be even longer than 14 days. This skews estimates based on total infected vs total dead.

-The tests we are using are new and have false positive and negative rates, further skewing the numbers.

All these factors either add uncertainty or skew the numbers in one direction or the other, in a way that is both region and context dependent. When you put it all together, you get mortality rate estimates ranging anywhere from 0.1% up to 15%+, and who knows about the long-term effects of the disease.
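The two crude bounds above can be made concrete. The numbers here are made up purely for illustration; neither estimator is the true case fatality rate:

```python
# Illustrative (made-up) numbers showing the two crude bounds discussed above.
deaths = 1_000
recovered = 4_000
total_cases = 40_000

# Lower bound: during exponential growth most cases are recent and haven't
# had time to resolve, so deaths/total_cases undercounts.
lower = deaths / total_cases

# Upward-skewed estimate: deaths accrue faster than recoveries, so
# deaths/(deaths + recovered) overcounts.
upper = deaths / (deaths + recovered)

print(f"{lower:.1%} to {upper:.1%}")  # 2.5% to 20.0%
```

The gap between the two estimators (nearly an order of magnitude on these toy numbers) is one reason the published estimates span such a wide range.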

Have epistemic conditions always been this bad?

Wei_Dai, in the past

A bit off topic, but does LW have username pinging?

FactorialCode's Shortform

I think for it to work, you'd definitely need to do it on a larger scale. When you go on a cruise, you pay money to abstract away all of the upkeep associated with operating a boat.

I read the post and did some more research. The closest analog to what I'm thinking looks to be Google's barge project, which encountered some regulatory barriers during the construction phase. However, the next closest thing is this startup, and they seem to be the real deal. With regards to what you brought up:

It's pretty hard to get permits for large-berth ships (and not that easy for small-berth ships either)

Correct me if I'm wrong, but AFAICT most regulation is for parking a boat at a berth. I don't think the permits are nearly as strict if you are out in the bay. I don't think coastal housing can scale. That's why I mentioned the amphibious vehicles. Or more realistically, a ferry to move people to and from the housing ship.

Ships have all kinds of hidden costs you don't notice at first. There are huge upfront costs to figuring out how to go about this.

Yeah, there's no getting around that. It's the kind of thing you contract out to a maritime engineering firm: $10 million for the ship, maybe $5 million for the housing units, maybe another $5 million for an engineering firm to design the thing to comply with relevant regulations, and throw in another $5 million to assemble it. Then who knows how much to cut through all the red tape. However, rents keep rising significantly faster than inflation, and condos in SF seem to be on the order of ~$1 million per bedroom. I think you could easily recoup your costs after deploying a single 60-120 bedroom ship.
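The back-of-the-envelope version of those figures (all of them the rough guesses above, in USD, ignoring red tape and operating costs):

```python
# Rough cost guesses from the comment above, in USD.
ship = 10e6
housing_units = 5e6
engineering = 5e6
assembly = 5e6
total_cost = ship + housing_units + engineering + assembly  # $25M before red tape

# SF condos at ~$1M per bedroom, for the two ship sizes mentioned:
sf_condo_per_bedroom = 1e6
for bedrooms in (60, 120):
    equivalent_value = bedrooms * sf_condo_per_bedroom
    print(f"{bedrooms} bedrooms: ${equivalent_value/1e6:.0f}M at condo prices "
          f"vs ${total_cost/1e6:.0f}M build cost")
```

Even at the small end, the implied condo-equivalent value is more than double the guessed build cost, which is where the "easily recoup" intuition comes from.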

It seems likely that if you attempt to do this at scale, you'll just trigger some kind of government response that renders your operation not-much-cheaper, so you pay the upfront costs but not reap the scale benefits

I think this is the big one: rent-seekers are going to do everything they can to stop you if you're a credible threat. I really don't know how politics works at that level. I'd imagine you'd need to appease the right people and make PR moves that make your opponents look like monsters. Then again, if you go with the cargo-ship model, it's not like you've lost your capital investment. You can just pack up and move to a different city anywhere in the world. You could also spread a fleet around multiple cities across the world, so as to avoid triggering said response while building up credibility/power.

FactorialCode's Shortform

Here's an idea. Buy a container ship, and retrofit it with amphibious vehicles, shipping-container houses, and associated utility and safety infrastructure. Then take it to any major coastal city and rent the shipping-container houses at a fraction of the local rent.

You could also convert some of the space into office space and stores.

Assuming people can live in a single 40' shipping container, the price per person should be minimal. You can buy some pretty big old ships for less than individual houses and we can probably upper bound the cost per unit by looking at cruise ship berth prices.

The best part? You can do all your construction where wages are cheap and ship the apartment anywhere it's needed.

Raemon's Scratchpad

Has there been any discussion about showing the up/down vote counts? I know reddit used to do it a long time ago. I don't know why they stopped though.

Using vector fields to visualise preferences and make them consistent

You can work around this by making your "state space" consist of descriptions of sequences of states, and defining preferences between those sequences.
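A toy version of the idea (my own construction, not from the post): a cyclic preference over single states can't be represented by a utility function on states, but once outcomes are sequences of states, each history can be ranked independently and the relation becomes consistent:

```python
# Toy sketch: assign utilities to state *sequences* rather than states.
# Reaching the same final state via different histories can be ranked
# differently, which dissolves the apparent cycle at the state level.
utility = {
    ("A",): 3,
    ("B",): 2,
    ("C",): 1,
    ("C", "A"): 0,  # A reached via C is ranked below C alone
}

def prefer(seq_x, seq_y):
    """True if sequence seq_x is preferred to sequence seq_y."""
    return utility[seq_x] > utility[seq_y]

print(prefer(("A",), ("B",)))      # True: A preferred to B directly
print(prefer(("C", "A"), ("C",)))  # False: A-after-C ranked below C
```

Since any utility function over sequences is automatically transitive, no cycle can reappear at the sequence level.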
