ChrisHibbert

Comments

Yet another world spirit sock puppet

The RSS feed is visible at the bottom of the home page.

Be secretly wrong

I'm all about epistemology. (My blog is at pancrit.org.) But in order to engage in or start a conversation, it's important to take one of the things you place credence in and advocate for it. If you're wishy-washy, in many circumstances people won't actually engage with your hypothesis, so you won't learn anything about it. Take a stand, even if you're on slippery ground.

A Visualization of Nick Bostrom’s Superintelligence

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.

This is all going to change over time. (I don't know how quickly, but there is already work on trans-cranial methods that is showing promise.) Even if we can't get the bandwidth quickly enough that way, we can control infections, and electrodes will get smaller and more adaptive.

enhancement is likely to be far more difficult than therapy.

Admittedly, therapy will come first. That also means that therapy will drive the development of techniques that will also be helpful for enhancement. The boundary between the two is blurry, and therapies that shade into enhancement will be developed before pure enhancement and will be easier to sell to end users. For example, for some people treatment of ADHD-spectrum disorders will clearly be therapeutic, while for others it will be seen as an attractive enhancement.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing. Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.

The visual pathway is impressive, but it's very limited in the kinds of information it transmits. It's a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely over a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions arise to consciousness could provide a valuable backchannel, and it wouldn't need nearly as much bandwidth, so it ought to be doable with non-invasive trans-cranial techniques.
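For a rough sense of the gap, here's a back-of-envelope sketch. The reading speed, word length, and bits-per-character figures are assumptions I'm plugging in; the ~10 Mbit/s number comes from the quoted passage.

```python
# Back-of-envelope comparison (assumed figures, not measurements):
# the retina's ~10 Mbit/s versus delivering answers as plain text
# at a fast reading speed.

retina_bps = 10_000_000        # ~10 million bits/s, from the quoted passage

words_per_minute = 300         # assumed fast reading speed
chars_per_word = 6             # assumed average word length plus a space
bits_per_char = 8              # uncompressed ASCII; less with compression

text_bps = words_per_minute / 60 * chars_per_word * bits_per_char
print(f"text at reading speed: ~{text_bps:.0f} bits/s")
print(f"retina / text ratio:   ~{retina_bps / text_bps:,.0f}x")
# => text at reading speed is only a few hundred bits/s, so a pre-fetching
#    backchannel needs tens of thousands of times less bandwidth than vision.
```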

Solomonoff Cartesianism

I'm confused by the framing of the Anvil problem. For humans, a lot of learning comes from observing others and seeing their mistakes and the consequences. We can predict various events that would result in others' deaths based on prior observation of what happened to other people. If we're above a certain level of solipsism, we can extrapolate to ourselves.

Does AIXI not have the ability to observe other agents? Is it correct to be a solipsist? It seems like a tough learning environment if you have to discover all the consequences yourself.

It's still possible to extrapolate from stubbing your toe, burning your fingers on the stove, and mashing your thumb with a hammer. Is there some reason to expect that AIXI will start out its interactions with the world by picking up an anvil rather than playing with rocks and eggs?
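To make the observational-learning point concrete, here's a toy Beta-Bernoulli sketch. It is not the AIXI formalism, and the observations are made up; it just shows an agent becoming confident that an action is fatal purely by watching other agents, without ever trying it.

```python
# Toy sketch, not AIXI: a Beta-Bernoulli learner estimating the probability
# that "pick up the anvil" is fatal, using only observations of *other*
# agents, so it never has to try the action itself.

alpha, beta = 1.0, 1.0          # uniform prior over P(fatal)

# Hypothetical observations: five other agents tried it and died.
observed_outcomes = [True, True, True, True, True]

for fatal in observed_outcomes:
    if fatal:
        alpha += 1
    else:
        beta += 1

p_fatal = alpha / (alpha + beta)
print(f"estimated P(fatal) after watching others: {p_fatal:.2f}")   # ~0.86
# A learner restricted to its own experience gets no such update until it
# picks up the anvil itself, which is exactly the worry raised above.
```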

2013 Less Wrong Census/Survey

I don't answer survey questions that ask about race, but if you met me you'd think of me as a white male.

I'm more strongly libertarian (but less party affiliated) than the survey allowed me to express.

I have reasonably strong views about morality, but had to look up the terms "Deontology", "Consequentialism", and "Virtue Ethics" in order to decide that, of these, "consequentialism" probably matches my views better than the others.

Probabilities: 50,30,20,5,0,0,0,10,2,1,20,95.

On "What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?", I had to parse several words very carefully, and ended up deciding to read "significant" as "measureable" rather than "consequential". For consequential, I would have given a smaller value.

I answered all the way to the end of the super bonus questions, and cooperated on the prize question.

Wait vs Interrupt Culture

In my group at work, it's relatively common to chat "interruptible?" to someone who's sitting right next to you. You can keep working until they're free to take the interrupt, and they don't need to take the interrupt until they're ready.

In f2f conversations, it's mostly an interrupt culture, but with some conventions about not breaking in when the group is larger than 4 or so.

Three ways CFAR has changed my view of rationality

I believe that emotions play a big part in thinking clearly, and that understanding our emotions would be a helpful step. Would you mind saying more about the time you spend focused on emotions? Are you paying attention to concrete current or past emotions (e.g. "this is how I'm feeling now", or "this is how I felt when he said X"), or to more theoretical discussions ("when someone is in fight-or-flight mode, they're more likely to Y than when they're feeling curiosity")?

You also mentioned exercises about exploiting emotional states; would you say more about what CFAR has learned about mindfully getting oneself in particular emotional states?

How to Measure Anything

New information can be gained that increases the expected work remaining despite additional valuable work having been done.

That's progress.

[This comment is no longer endorsed by its author]

The Least Convenient Possible World

When I've argued with people who called themselves utilitarian, they seemed to want to make trade-offs among immediately visible options. I'm not going to try to argue that I have population statistics, or know what the "proper" definition of a utilitarian is. Do you believe that some other terminology or behavior better characterizes those called "utilitarians"?

Rationality Quotes October 2012

Did Munroe add that? It's incorrect. There are lots of situations in which it's reasonable to calculate while throwing away an occasional factor of 2.2.
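To illustrate the kind of situation I mean, here's a made-up Fermi estimate with assumed numbers: when you only care about the order of magnitude, dropping the 2.2 lb/kg conversion factor doesn't change the answer.

```python
# Made-up Fermi example: estimate the mass of books in a small library,
# once pretending pounds are kilograms (dropping the 2.2 factor) and
# once converting properly. The order of magnitude is the same either way.

import math

books = 20_000                  # assumed number of books
pounds_per_book = 2             # assumed average weight

sloppy_kg = books * pounds_per_book          # pretend 1 lb == 1 kg
careful_kg = books * pounds_per_book / 2.2   # actual conversion

for label, kg in [("sloppy", sloppy_kg), ("careful", careful_kg)]:
    print(f"{label:>7}: ~{kg:,.0f} kg (order 10^{math.floor(math.log10(kg))})")
# => both come out at ~10^4 kg; the factor of 2.2 doesn't change the
#    answer to "tons, not grams".
```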
