Comments

The wording of the question is ambiguous. It asks for your estimate of the likelihood that the coin was heads when you were "first awakened", but from your own perspective every wakening feels like being first awakened. If it is really asking for your estimate given the information that the question is being asked on your objectively first wakening, regardless of your perception, then the answer is 1/2. If you only know the question will be asked on your first or second wakening (where the second will, in the moment, feel like the first), then it's 1/3.
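The two readings can be made concrete with a small Monte Carlo sketch of the standard Sleeping Beauty setup (heads: one awakening; tails: two indistinguishable awakenings). The function name and structure here are illustrative, not from the original comment:

```python
import random

def sleeping_beauty(trials=100_000, seed=0):
    """Simulate Sleeping Beauty: heads -> one awakening,
    tails -> two subjectively indistinguishable awakenings."""
    rng = random.Random(seed)
    heads_trials = 0       # trials where the coin was heads
    heads_awakenings = 0   # awakenings that occurred under heads
    total_awakenings = 0   # all awakenings across all trials
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_trials += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += 2  # tails produces two awakenings
    # Reading 1: P(heads | this is the objectively first awakening).
    # Every trial has exactly one first awakening, so this is heads_trials / trials.
    halfer = heads_trials / trials
    # Reading 2: P(heads | some awakening, sampled over all awakenings).
    thirder = heads_awakenings / total_awakenings
    return halfer, thirder

halfer, thirder = sleeping_beauty()
print(round(halfer, 2), round(thirder, 2))  # close to 0.5 and 0.33
```

The simulation shows the ambiguity is about which reference class the question conditions on: "first awakenings" (one per trial) versus "all awakenings" (tails contributes twice).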

This suggests a general rule by which unreported but frequent phenomena can be extrapolated: if phenomenon X is discovered accidentally via method Y almost all the time, then method Y must be performed far more frequently than people suspect.

Generally it makes no sense for every country to collectively cede the general authority of law and order, and the unobstructed passage of cargo, with respect to global trade. He talks about this great US pullback because the US will be energy independent, but America pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big head-turning assumption gets him more attention, but the premise is nonsensical.

Why can't this be an app? If their LAM is better than competitors', then it would be profitable both on their hardware and as a standalone app.

The easiest way to check whether this would work is to determine whether there is a causal relationship between diminished levels of serotonin in the bloodstream and neural biomarkers similar to those of people with malnutrition.

I feel the original post, despite ostensibly being a plea for help, could be read as a coded satire on the worship of "pure cognitive heft" that seems to permeate rationalist/LessWrong culture. It points out the misery of g-factor absolutism.

It would help if you clarified why specifically you feel unintelligent. Given your writing style (your ability to distill concerns, compare abstract concepts, and communicate clearly), I'd wager you are intelligent. Could it be imposter syndrome?

I totally agree with that notion; however, I believe the current levers of progress massively incentivize AGI development over WBE. Current regulations are based on FLOPs, which will restrict progress toward WBE long before they restrict anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE was possible and maximized its apparent value to those with both the means to develop it and the power to push the levers, steering away from any risky AGI analogue before it becomes possible, then yes, but that seems very unlikely to me.

I also worry that humans are not aligned. Humans having WBE at our fingertips could mean infinite tortured simulations of the digital brains before they bear any bountiful fruit for humans on Earth. It seems ominous: a fully replicated human consciousness so exact that a bit flipped here or there could destroy it.

It really is. My conception of the future is so weighted by the very likely reality of an AI-transformed world that I have basically abandoned any plans on a timescale over 5 years. Even my short-term plans will likely be shifted significantly by AI advances over the next few months or years. It really is crazy to think about, but I've gone over every aspect of AI advances and scaling thousands of times in my head, and I can think of no reality in the near future that is not as alien to our current reality as ours is to pre-eukaryotic life.