Robert Miles


Comments

Not a very helpful answer, but: If you don't also require computational efficiency, we can do some of those. Like, you can make AIXI variants. Is the question "Can we do this with deep learning?", or "Can we do this with deep learning or something competitive with it?"

I think they're more saying "these hypothetical scenarios are popular because they make good science fiction, not because they're likely." And I have yet to find a strong argument against the latter form of that point.

Yeah I imagine that's hard to argue against, because it's basically correct, but importantly it's also not a criticism of the ideas. If someone makes the argument "These ideas are popular, and therefore probably true", then it's a very sound criticism to point out that they may be popular for reasons other than being true. But if the argument is "These ideas are true because of <various technical and philosophical arguments about the ideas themselves>", then pointing out a reason that the ideas might be popular is just not relevant to the question of their truth.
Like, cancer is very scary and people are very eager to believe that there's something that can be done to help, and, perhaps partly as a consequence, many come to believe that chemotherapy can be effective. This fact does not constitute a substantive criticism of the research on the effectiveness of chemotherapy.

The approach I often take here is to ask the person how they would persuade an amateur chess player who believes they can beat Magnus Carlsen because they've discovered a particularly good opening with which they've won every amateur game they've tried it in so far.

Them: Magnus Carlsen will still beat you, with near certainty

Me: But what is he going to do? This opening is unbeatable!

Them: He's much better at chess than you, he'll figure something out

Me: But what though? I can't think of any strategy that beats this

Them: I don't know, maybe he'll find a way to do <some chess thing X>

Me: If he does X I can just counter it by doing Y!

Them: Ok if X is that easily countered with Y then he won't do X, he'll do some Z that's like X but that you don't know how to counter

Me: Oh, but you conveniently can't tell me what this Z is

Them: Right! I'm not as good at chess as he is, and neither are you. I can be confident he'll beat you even without knowing your opening. You cannot expect to win against someone who outclasses you.

I was thinking you had all of mine already, since they're mostly about explaining and coding. But there's a big one: When using tools, I'm tracking something like "what if the knife slips?". When I introspect, it's represented internally as a kind of cloud-like spatial 3D (4D?) probability distribution over knife locations, roughly coextensive with "if the material suddenly gave or the knife suddenly slipped at this exact moment, what's the space of locations the blade could get to before my body noticed and brought it to a stop?". As I apply more force, this cloud extends out, and I notice when it intersects with something I don't want to get cut. (Mutatis mutandis for other tools, of course. I bet people experienced with firearms are always tracking a kind of "if this gun goes off at this moment, where does the bullet go" spatial mental object.)

I notice I'm tracking this mostly because I also track it for other people and I sometimes notice them not tracking it. But that doesn't feel like "Hey you're using bad technique", it feels like "Whoah your knife probability cloud is clean through your hand and out the other side!"

This is actually a lot of what I get out of meditation. I'm not really able to stop myself from thinking, and I'm not very diligent at noticing that I'm thinking and returning to the breath or whatever, but since I'm in this frame of "I'm not supposed to be thinking right now, but it's ok if I do", the thoughts I do have tend to have this reflective, subtle nature to them. It's a lot like 'shower thoughts': having unstructured time where you're not doing anything, and you're not supposed to be doing anything, and you're also not supposed to be doing nothing, is valuable for the mind. So I guess meditation is like scheduled slack for me.

I also like the way it changes how you look at the world a little bit, in a 'life has a surprising amount of detail', 'abstractions are leaky' kind of way. To go from a model of locks that's just "you cannot open this without the right key", to seeing how and why and when that model doesn't work, can be interesting. Other problems in life sometimes have this property, where you've made a simplifying assumption about what can't be done, and actually if you look more closely that thing in fact can sometimes be done, and doing it would solve the problem.

it turns out that the Litake brand which I bought first doesn't quite reach long enough into the socket to get the threads to meet, and so I had to return them to get the LOHAS brand.

 

I came across a problem like this before, and it was kind of a manufacturing/assembly defect. The contact at the bottom of the socket is meant to be bent up to give a bit of spring tension where it meets the bulb, but mine were basically flat. You can take a tool (what worked best for me was a multitool's can opener) and bend the tab up further so it can contact bulbs that don't screw in far enough. UNPLUG IT FIRST, though.

Learning Extensible Human Concepts Requires Human Values

[Based on conversations with Alex Flint, and also John Wentworth and Adam Shimi]

One of the design goals of the ELK proposal is to sidestep the problem of learning human values, and settle instead for learning human concepts. A system that can answer questions about human concepts allows for schemes in which we humans learn all the relevant information about proposed plans and decide about them ourselves, using our values.

So, we have some process in which we consider lots of possible scenarios and collect a dataset of questions about those scenarios, along with the true answers to those questions. Importantly, these are all 'objective' or 'value-neutral' questions: things like "Is the diamond on the pedestal?" and not like "Should we go ahead with this plan?". This hopefully allows the system to pin down our concepts, and thereby truthfully answer our objective questions about prospective plans, without considering our values.
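
A minimal sketch of what one record in such a dataset might look like (the field names and the example scenario encoding are illustrative assumptions, not the actual ELK format):

```python
# Illustrative sketch only: one possible shape for the 'objective questions' dataset.
# Field names and the example scenario are assumptions, not the actual ELK proposal's format.
from dataclasses import dataclass

@dataclass
class QARecord:
    scenario: str   # some description/encoding of the scenario under consideration
    question: str   # an 'objective', value-neutral question about that scenario
    answer: bool    # the true answer in that scenario

dataset = [
    QARecord(
        scenario="Camera shows the vault; the actuator moved the diamond at t=3.",
        question="Is the diamond on the pedestal at the end of the episode?",
        answer=False,
    ),
    # Deliberately excluded: value-laden questions like "Should we go ahead with
    # this plan?" -- those depend on human values, which this dataset is designed
    # to leave out.
]
```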

One potential difficulty is that the plans may be arbitrarily complex, and may ask us to consider very strange situations in which our ontology breaks down. In the worst case, we have to deal with wacky science fiction scenarios in which our fundamental concepts are called into question.

We claim that, using a dataset of only objective questions, it is not possible to extrapolate our ontology out to situations far from the range of scenarios in the dataset. 

An argument for this is that humans, when presented with sufficiently novel scenarios, will update their ontology, and *the process by which these updates happen depends on human values*, which are (by design) not represented in the dataset. Accurately learning the current human concepts is not sufficient to predict how those concepts will be updated or extended to novel situations, because the update process is value-dependent.

Alex Flint is working on a post that will move towards proving some related claims.


 

Ah ok, thanks! My main concern with that is that it goes to "https://z0gr6exqhd-dsn.algolia.net", which feels like it could be a dynamically allocated address that might change under me?

Is there a public-facing API endpoint for the Algolia search system? I'd love to be able to say to my discord bot "Hey wasn't there a lesswrong post about xyz?" and have him post a few links
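
For what it's worth, here's a rough sketch of how a bot could hit that endpoint via Algolia's generic "query an index" REST API. The index name and the search-only API key below are placeholders/assumptions; the real values would have to be read out of the site's own search requests (e.g. via browser dev tools).

```python
# Rough sketch, not a confirmed public API contract: queries the host mentioned above
# using Algolia's standard "query an index" REST endpoint.
from urllib.parse import quote_plus

import requests

ALGOLIA_APP_ID = "Z0GR6EXQHD"                    # appears in the z0gr6exqhd-dsn.algolia.net URL
ALGOLIA_SEARCH_KEY = "<public search-only key>"  # placeholder
INDEX_NAME = "test_posts"                        # assumption; check the site's own requests

def search_posts(query: str, hits: int = 3) -> list[dict]:
    """Return the top raw hits for a query; available fields depend on the index."""
    url = f"https://{ALGOLIA_APP_ID.lower()}-dsn.algolia.net/1/indexes/{INDEX_NAME}/query"
    resp = requests.post(
        url,
        headers={
            "X-Algolia-Application-Id": ALGOLIA_APP_ID,
            "X-Algolia-API-Key": ALGOLIA_SEARCH_KEY,
        },
        json={"params": f"query={quote_plus(query)}&hitsPerPage={hits}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("hits", [])
```

A Discord bot could then format whatever title/link fields the hits actually expose into a short message.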
