All of Isaac Poulton's Comments + Replies

Nudging My Way Out Of The Intellectual Mosh Pit

Interesting. Complete greyscale sounds like a lot of hassle, but I'm going to try turning the contrast on my phone down to nearly zero and see if I notice any difference.

Nudging My Way Out Of The Intellectual Mosh Pit

I'm curious about your reasons for making your monitors greyscale. What are the benefits of that for you?

[+2] Joe D Williams (1d): Color is used by designers to get your attention. Red, especially, being the color of blood, draws your eye. Color can be especially distracting for those with ADHD.
[+3] Conor Sullivan (1d): I just made my entire digital life grayscale and I like it a lot. I've even figured out how to make retro monitor colors like amber and green. It's only been a few hours, but I notice that I appreciate real-life colors more and YouTube is less addictive. I also feel for colorblind people now, because I've noticed people in articles or videos talk about "the blue line in this chart" or whatever and I have to guess which one they mean.
The Unreasonable Feasibility Of Playing Chess Under The Influence

If I'm not mistaken (and I'm not a biologist, so I might be), alcohol mainly impairs the brain's System 2, leaving System 1 relatively intact. That lines up well with this post.

Brain Efficiency: Much More than You Wanted to Know

If EfficientZero-9000 uses 10,000 times the energy of John von Neumann and thinks 1,000 times faster, it's actually 10 times less energy-efficient.
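Spelling out that arithmetic (taking "energy efficiency" to mean thoughts per unit of energy):

```latex
% Back-of-the-envelope check of the claim above, using the figures
% from the comment: 10,000x the power draw, 1,000x the thinking speed.
\[
\text{relative efficiency}
  = \frac{\text{relative speed}}{\text{relative power}}
  = \frac{1{,}000}{10{,}000}
  = 0.1
\]
% i.e. one tenth the thoughts per joule: ten times less energy-efficient.
```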

The point of this post is that there is a small amount of evidence that you can't make a computer think significantly faster, or better, than a brain without potentially critical trade-offs.

[+2] Daniel Kokotajlo (10d): I'm saying we will build AGI and it will be significantly faster and more capable than the brain. According to this post, that means it will be significantly less energy-efficient. I agree. I don't see why that matters. Energy is cheap, and people building AGI are wealthy.
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I don't agree with Eliezer here. I don't think we have a deep enough understanding of consciousness to make confident predictions about what is and isn't conscious beyond "most humans are probably conscious sometimes".

The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.

If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of what the algorithm requires.

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

If I'm not mistaken, that book is behaviourally equivalent to the original algorithm, but it is not the same algorithm. Viewed from the outside, they have different computational complexity. There are a number of ways of defining program equivalence, but equivalence is different from identity: "A is equivalent to B" doesn't mean "A is B".
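A minimal sketch of the distinction (my own illustration, not from the thread; the parity function is chosen arbitrarily): two functions that are behaviourally equivalent, returning identical outputs on every shared input, yet are plainly not the same algorithm and differ in complexity.

```python
# Hypothetical illustration: behavioural equivalence without identity.
# parity_compute works out the answer; parity_lookup just reads it from a
# precomputed table (the "sufficiently large book" of the thread).

def parity_compute(n: int) -> int:
    """XOR the bits of n together: O(number of bits) per query."""
    result = 0
    while n:
        result ^= n & 1
        n >>= 1
    return result

# The "book": every answer for inputs 0..255, written down in advance.
PARITY_TABLE = [parity_compute(i) for i in range(256)]

def parity_lookup(n: int) -> int:
    """O(1) per query, but O(2^k) space for k-bit inputs: a different algorithm."""
    return PARITY_TABLE[n]

# Behaviourally indistinguishable on their shared domain...
assert all(parity_compute(i) == parity_lookup(i) for i in range(256))
# ...yet no one would say they are the *same* program.
```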

See also: the Chinese Room argument

[+1] cajals_dream (3mo): I see, but in that case what is the claim about GPT-3: that if it had behavioral equivalence to a complicated social being, it would have consciousness?
Dating profiles from first principles: heterosexual male profile design

While it's important to bear in mind the possibility that you're not as far below average as you think, I don't know your case, so I'll assume your assessment is correct.

Perhaps give up on online dating. "Offline" dating is significantly more forgiving than online.

Truthful AI: Developing and governing AI that does not lie

I think this touches on the issue of how "truth" is defined. A society designates something as "true" when the majority of its members believe it to be true.

Using the techniques outlined in this paper, we could regulate AIs so that they only tell us things we define as "true". At the same time, a 16th-century society using these same techniques would end up with an AI that tells them to use leeches to cure their fevers.

What is actually being regulated isn't "truthfulness", but "accepted by the majority-ness".

This works well for thin... (read more)

[+4] Daniel Kokotajlo (3mo): Would this multiple evaluation/regulatory bodies solution not just lead to the sort of balkanized internet described in this story [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like-daniel-s-median-future#What_about_all_that_AI_powered_propaganda_mentioned_earlier__]? I guess multiple internet censorship-and-propaganda regimes are better than one. But ideally we'd have none. One alternative might be to ban or regulate persuasion tools, i.e. any AI system optimized for an objective/reward function that involves persuading people of things. Especially politicized or controversial things.
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)

I wonder if this makes any testable predictions. It seems to be a plausible explanation for how some people are extremely good at some reflexive mental actions, but not the only one. It's also plausible that some people are "wired" that way from birth, or that one or a few developmental events led to them being that way (rather than years of involuntary practice).

I suppose if the hypothesis laid out in this post is true, we'd expect people to get significantly better at some of these "cup-stacking" skills within a few years of being... (read more)

How much slower is remote work?

Specialising days like that seems like a good idea at first glance, but I get the feeling I'd burn out on meetings pretty quickly if all my week's meetings were scheduled on one day. Being able to use a meeting as a break, switching from concrete thinking to more abstract thinking for a while, is very refreshing.

The Towel Census: A Methodology for Identifying Orphaned Objects in Your Home

IMO, this is pretty necessary in any shared space. My company does this twice a year for the office umbrella rack, fridge, and cupboard.

What's going on with this failure of Bayes to converge?

This highlights an interesting case where pure Bayesian reasoning fails. While the chance of it occurring randomly is very low (though it may rise when you consider how many chances it has to occur), it is trivial to construct. Furthermore, it potentially applies in any case where we have two possibilities, one of which continually becomes more probable while the other shrinks but persistently fails to disappear.
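To make that failure mode concrete, here's a small sketch (my own construction with made-up likelihood ratios, not taken from the original post): every observation favours hypothesis A over B, but by a shrinking margin, so A's posterior rises forever while B's never reaches zero.

```python
# Hypothetical sketch: Bayes updating where every observation favours A,
# yet B never disappears. The n-th observation is (1 + 1/n^2) times likelier
# under A than under B, and the infinite product of those ratios converges
# (to sinh(pi)/pi ~= 3.68), so A's posterior stalls below certainty.

def posterior_A(n_steps: int, prior_A: float = 0.5) -> float:
    odds = prior_A / (1 - prior_A)   # prior odds of A over B
    for n in range(1, n_steps + 1):
        odds *= 1 + 1 / n**2         # each update nudges us toward A
    return odds / (1 + odds)

for n in (10, 1_000, 1_000_000):
    print(n, round(posterior_A(n), 4))
# The posterior climbs toward ~0.786 and stops: A keeps gaining, B keeps
# shrinking, but the updating never converges on either hypothesis.
```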

Suppose you are a police detective investigating a murder. There are two suspects: A and B. A doesn't have an alibi, while B has a stro... (read more)

What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address.

IMO, this is a better way of splitting up the argument that we should be funding AI safety research than the one presented in the OP. My only gripe is with point 2. Many would argue that it wouldn't be really bad, for a variety of reasons: for example, there are likely to be other 'superintelligent AIs' working in our favour. Alternatively, if the decision-making were only marginally better than a human's, it wouldn't be any worse than a small group of people working against humanity.

[+3] capybaralet (2y): TBC, I'm definitely NOT thinking of this as an argument for funding AI safety.