

SpaceX will have massive impact in the next decade

I'm not quite sure that the part about a tungsten rod being equivalent to a nuclear weapon is correct.

Low Earth orbital velocity is about 7.8 km/s, so if all the mass in a Starship launch went into one tungsten rod, that rod would have a kinetic energy of 0.5 * 100000 * 7800^2 = 3 terajoules, or about 3/4 of a kiloton of TNT. Nuclear weapons are tens of kilotons at a minimum and often single-digit megatons, so I don't think this is a fair comparison.
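The back-of-envelope calculation above can be checked directly. The 100,000 kg figure is the comment's own assumption for the rod's mass (roughly a Starship payload):

```python
# Worked version of the comment's calculation. Assumptions from the
# comment itself: a 100,000 kg rod moving at low-Earth orbital speed.
mass_kg = 100_000         # rod mass, per the comment's assumption
velocity_ms = 7_800       # low-Earth orbital velocity, m/s
J_PER_KILOTON = 4.184e12  # standard TNT equivalent, J per kiloton

kinetic_energy = 0.5 * mass_kg * velocity_ms**2  # ~3.04e12 J
yield_kt = kinetic_energy / J_PER_KILOTON        # ~0.73 kt

print(f"{kinetic_energy:.2e} J, about {yield_kt:.2f} kt of TNT")
```

This confirms the ~3 TJ / ~0.73 kt figure, two to five orders of magnitude below typical nuclear yields.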

This actually makes a great deal of sense if you think about it for a bit. The kinetic energy the tungsten rod has is given to it by the fuel in the Starship, and chemical rocket fuel has an energy density in the same ballpark as TNT. So if a nuclear bomb can have a yield in tons of TNT that is much greater than the mass of the entire Starship, you should be suspicious of claims that the rod can be given comparable energy — it cannot carry more energy than the fuel contained in the first place.
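The fuel-budget argument can be made rough-and-ready with numbers. Both figures below are order-of-magnitude assumptions on my part, not from the original comment: roughly 4,600 tonnes of methalox propellant for the full Starship stack, and roughly 10 MJ/kg of chemical energy for the combined oxidizer-plus-fuel mixture:

```python
# Rough upper bound on the total chemical energy a Starship launch has
# to work with. Both numbers are order-of-magnitude assumptions:
propellant_kg = 4.6e6           # full-stack propellant, rough figure
specific_energy_j_per_kg = 1e7  # methalox mixture, ~10 MJ/kg
J_PER_KILOTON = 4.184e12

total_chemical_energy = propellant_kg * specific_energy_j_per_kg
fuel_budget_kt = total_chemical_energy / J_PER_KILOTON  # ~11 kt

print(f"Total fuel energy is roughly {fuel_budget_kt:.0f} kt of TNT")
```

Even if every joule of propellant energy ended up in the rod (it doesn't — most goes into lifting the fuel itself), the ceiling is on the order of ten kilotons, still at the very bottom of the nuclear range.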

I would argue further that a tungsten rod might just bury itself deep in the ground, unlike a nuclear airburst that spreads its energy over a wide area — but that's just conjecture on my part.

Book Review: Reinforcement Learning by Sutton and Barto

I think there are two things being talked about here: the ability to solve mazes, and knowledge of what a maze looks like.

If the internal representation of the maze is a generic one like a graph or tree, then you can easily generate a whole bunch of fake mazes, since a random maze is just a random tree, and a random tree is easy to make. The observations the robot makes about the mazes it does encounter can then be used to inform what type of tree to generate.

For example, you would be unlikely to observe a 2D maze with four left turns in a row, so when generating your random tree you wouldn't generate one with four left branches in a row.

Generating correct-looking mazes is the "do I have a good understanding of the problem" part, and simulating lots of maze-solving episodes is the "given this understanding, how well can I solve it" part.
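The generate-then-filter idea above can be sketched minimally. This is my own illustrative toy, not anything from the book: a maze path is modeled as a random sequence of turns, and the constraint learned from observation (here hard-coded) is the one from the comment — no four left turns in a row, since on a 2D grid that would loop the path back onto itself:

```python
import random

def random_path(length, rng):
    # A candidate "maze" is just a random sequence of turns:
    # left, right, or straight.
    return [rng.choice(["L", "R", "S"]) for _ in range(length)]

def plausible(path):
    # The observational constraint: reject any path containing
    # four consecutive left turns.
    for i in range(len(path) - 3):
        if path[i:i + 4] == ["L"] * 4:
            return False
    return True

def generate_fake_maze(length=20, seed=None):
    # Rejection-sample until a candidate satisfies the constraint.
    rng = random.Random(seed)
    while True:
        candidate = random_path(length, rng)
        if plausible(candidate):
            return candidate

maze = generate_fake_maze(seed=0)
print("".join(maze))
```

In a fuller version, `plausible` would be replaced by constraints actually learned from observed mazes, and the generated mazes would feed a Dyna-style loop of simulated solving attempts.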

Thoughts on Neuralink update?

I think that the AI here is going to have to not just fill in the blanks but convert to a whole new intermediary format. I say this because there are lots of people who, despite appearing normal from the outside, don't see mental images at all. A less extreme example is people who do or don't subvocalise while reading — I know that when I'm absorbed in the middle of a novel it's basically just a movie playing in my head, with no conscious spelling out of the words, but for other people there is a narrator. Because of these large differences between internal brain formats, some kind of common format will be necessary as an intermediary.

Personally, I'm more interested in seeing (ethics aside) what happens when you give this to a child. If you stick a direct feed from a computer and the internet into someone's brain while it's still forming, I would not be surprised if what comes out the other end is quite unlike a regular human. The current base model already has a six-axis IMU, compass, and barometer — it would not surprise me if that information just got fused into that person's regular experience, like the compass belts people have started wearing.