sludgepuddle

However good an idea this is, it's not as good as Aaronson simply taking a year off and doing it on his own time, collaborating and sharing whatever he deems appropriate with the greater community. It might be financially inconvenient, but it's definitely something he could swing.

How do we deal with institutions that don't want to be governed, say, idk, the Chevron Corporation, North Korea, or the US military?

Well, I don't think it should be possible to convince a reasonable person at this point in time. But here's maybe some evidence that we might not be doomed. Yudkowsky and others' ideas rest on some fairly plausible but complex assumptions. You'll notice in the recent debate threads, where Eliezer is arguing for the inevitability of AI destroying us, he will often resort to something like, "well, that just doesn't fit with what I know about intelligences". At a certain point in these types of discussions you have to do some hand waving. Even if it's really good hand waving, if there's enough of it there's a chance at least one piece is wrong enough to corrupt your conclusions. On the other hand, as he points out, we're not even really trying, and it's hard to see us doing so in time. So the hope that's left is mostly that the problem just won't be an issue, or won't be that hard, for some unknown reason. I actually think this is sort of likely; given how difficult the problem is to analyze, it's hard to have full trust in any conclusion.

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network and still approximately infer meaningful details of that network from its behavior? For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally well on different types and sizes of networks, and if not, what kind of scaling laws does the process obey? Assuming some level of success, move on to a harder problem: a sparse network, where this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift.
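Here's a rough sketch of what that first experiment might look like in PyTorch; the network shape, noise scale, and training budget are all placeholder assumptions, and the point is only to illustrate the perturb-then-retrain loop, not to pin down a protocol.

```python
# Hypothetical sketch of the perturb-and-recover experiment described above.
# All sizes and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# A smallish network that "does something useful" (here: a toy regression map).
original = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))

# Record the original network's behavior on a batch of inputs.
inputs = torch.randn(4096, 8)
with torch.no_grad():
    recorded_outputs = original(inputs)

# Perturb a copy: add just enough parameter noise that outputs deviate noticeably.
perturbed = copy.deepcopy(original)
noise_scale = 0.1  # assumed; tune until the output deviation looks "noticeable"
with torch.no_grad():
    for p in perturbed.parameters():
        p.add_(noise_scale * torch.randn_like(p))

# Train the perturbed network to match the recorded behavior.
opt = torch.optim.Adam(perturbed.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(perturbed(inputs), recorded_outputs)
    loss.backward()
    opt.step()

# Did we recover the weights, or just the behavior? Compare parameters directly.
with torch.no_grad():
    param_gap = max(
        (p - q).abs().max().item()
        for p, q in zip(original.parameters(), perturbed.parameters())
    )
print(f"final output loss: {loss.item():.3e}, max parameter gap: {param_gap:.3e}")
```

If the parameter gap shrinks along with the output loss, that's weak evidence the behavioral record pins down the weights; if only the loss recovers, we've matched behavior without recovering the network, which is the interesting distinction for the scaling question above.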

This is outright saying ETH is likely to outperform BTC, so this is Scott’s biggest f*** you to the efficient market hypothesis yet. I’m going to say he’s wrong and sell to 55%, since it’s currently 0.046, and if it was real I’d consider hedging with ETH.

I'm curious what's behind this; is Zvi some sort of bitcoin maximalist? I tend to think that bitcoin having a high value is hard to explain: it made sense when it was the only secure cryptocurrency out there, but now it's to a large degree a consequence of social forces rather than economic ones. Ether I can see value in, since it does a bunch of things and there's at least an argument that it's best in class for all of them.

So many times I've been reading your blog and I'm thinking to myself, "finally something I can post to leftist spaces to get them to trust Scott more", and then I run into one or two sentences that nix that idea. It seems to me like you've mostly given up on reaching the conflict theory left, for reasons that are obvious. I really wish you would keep trying though, they (we?) aren't as awful and dogmatic as they appear to be on the internet, nor is their philosophy as incompatible. For me, it's less a matter of actually adopting the conflict perspective, and more just taking it more seriously and making fun of it less.

What about some form of indirect supervision, where we aim to find transcripts in which H faces a decision of a particular hardness? A would ideally be trained starting with things that are very, very easy for H, with the hardness ramped up until A maxes out its abilities. Rather than imitating H, we use a generative technique to create fake transcripts, imitating both H and its environment. We can incorporate into our loss function the amount of time H spends on a particular decision, the reliability of that decision, and maybe some kind of complexity measure on the transcript, to find easier/harder situations which are of genuine importance to H.
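To make the curriculum idea slightly more concrete, here is a minimal Python sketch of how those signals might be combined into a single hardness score and used to filter generated transcripts; the field names and weights are invented for illustration and aren't part of any existing framework.

```python
# Hypothetical sketch: score transcripts by how hard they were for H, then keep
# the ones near the current curriculum target. `decision_time`, `reliability`,
# and `complexity` are assumed to be extracted from each (real or generated)
# transcript; the weights are placeholders.

def hardness_score(decision_time, reliability, complexity,
                   w_time=1.0, w_rel=1.0, w_cplx=0.1):
    """Combine per-decision signals into a single scalar hardness estimate."""
    # Longer deliberation and lower reliability both suggest a harder decision;
    # the complexity term keeps sprawling transcripts from dominating.
    return w_time * decision_time + w_rel * (1.0 - reliability) + w_cplx * complexity

def curriculum_filter(transcripts, target, tolerance=0.25):
    """Keep transcripts whose hardness sits near the current curriculum target,
    which gets ramped up as A's performance improves."""
    kept = []
    for t in transcripts:
        score = hardness_score(t["decision_time"], t["reliability"], t["complexity"])
        if abs(score - target) <= tolerance * max(abs(target), 1e-6):
            kept.append(t)
    return kept

# Example usage with made-up transcripts:
transcripts = [
    {"decision_time": 2.0, "reliability": 0.95, "complexity": 3.0},
    {"decision_time": 30.0, "reliability": 0.60, "complexity": 8.0},
]
print(curriculum_filter(transcripts, target=2.0))
```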

Isn't The Least Convenient Possible World directly relevant here? I'm surprised it hasn't been mentioned yet.

Perhaps I'm just being dense, but I don't really get what Carl Sagan's look has to do with praise, or why you should find it disgusting.

One thing I've personally witnessed is people claiming to have had the exact same vivid dream the night before. I'm talking stuff like playing Scrabble with Brad Pitt and former President Carter on the summit of Mount McKinley, so it seems unlikely that they were both prompted by the same recent event. Assuming these people weren't primed until after the fact, I would expect even stronger effects to be possible for those who have been.