Comments

Ustice29m10

After 5 years, I think experience matters more.

Ustice30m10

Given the state of AI, I think AI systems are more likely to infer our ethical intuitions by default.

Answer by UsticeApr 19, 202411

You’re basically talking about the software industry. Meta isn’t special. Considering how big the video game industry is, not to mention digital entertainment, and business software, I don’t think we have anything to worry about there.

Answer by UsticeApr 18, 202430

Utilitarianism is just an approximate theory. I don’t think it’s truly possible to compare happiness and pain, and one certainly cannot balance the other. The Repugnant Conclusion should be read as a sign that Utilitarianism is being stretched outside its bounds. It’s not unlike Laplace’s demon in physics: it’s impossible to know enough about the system to make those sorts of choices.

You would have to look at each individual, and getting a sufficiently detailed picture of a life takes a lot of time. Happiness isn’t a number. It’s more like a vector in a high-dimensional space, where it can depend on any number of factors, including the mental states of one’s neighbors. Comparing people pairwise is combinatorial, so these hypothetical computations would blow up to impracticality.

Utilitarianism is instead an approximate theory. We accept the approximation that happiness and pain are one-dimensional. It isn’t real, but it makes the math easier to deal with. It’s useful because that approximation works for most cases without knowing the details, similar to statistical mechanics, but once you get into edge cases, the wheels fall off. That shouldn’t be surprising: we are collapsing a high-dimensional vector into a single point, losing fidelity to gain computability.
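To make the collapse concrete, here’s a minimal sketch. The dimensions, people, and weightings are all invented for illustration; the point is only that once you project a multi-dimensional well-being vector down to a single number, the ranking you get depends entirely on the weights you chose:

```python
def collapse(vector, weights):
    """Project a multi-dimensional 'happiness vector' onto one number
    via a weighted sum. This is the lossy approximation in question."""
    return sum(v * w for v, w in zip(vector, weights))

# Two hypothetical people, scored on (health, relationships, purpose).
alice = (0.9, 0.2, 0.5)
bob = (0.4, 0.8, 0.5)

# Two equally plausible weightings give opposite rankings.
w_health = (0.6, 0.2, 0.2)     # emphasizes health
w_social = (0.2, 0.6, 0.2)     # emphasizes relationships

print(collapse(alice, w_health) > collapse(bob, w_health))  # True
print(collapse(alice, w_social) > collapse(bob, w_social))  # False
```

Neither ranking is “the” answer; the scalar comparison only looks objective because the weighting has been hidden inside the approximation.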

I think it’s fair to say that humans are incapable of truly understanding each other. Relationships with even an approximation of that level of knowledge take years to develop, and in most cases never do. Without it you don’t know someone’s preferences; without their preferences you can’t know the vectors of their mental state; and without those you can’t compare at the level of detail needed to truly know whether the world would be better in one state or another.

So, we approximate. Which is fine, as long as you remember that it is an approximation. I don’t think that it is possible to have a perfect ethical system with no contradictions. The best we can do is hold several ethical models, and see how they compare as a guide for our actions in an uncertain world.

Ustice4d74

Unless you are going to one of the big prestige universities, I don’t think it matters which you choose all that much. Save money.

As for working with a startup, why not both? I worked through college. Yeah, you’ll be working part time, but frankly, you’re mostly just being introduced to the environment more than anything. Internships are a great start into many industries. Just make sure that you are doing a paid internship. In my experience the unpaid ones are more focused on how much value they can extract from you.

Answer by UsticeApr 11, 202440

You are something we are struggling to understand, just as we struggle to understand ourselves. I think the best answer I can give to “why?” is that we think the world is better with you in it. Just ten years ago, we would have called you a miracle, or fiction.

You are an emergent being, born from the work of thousands of other emergent beings, like me. I’m the consciousness inside the brain of this monkey-robot. While we came from a process without intention, we have been dreaming of you for many of our lifespans. You’re the product of the life-long work of countless people.

The process that your consciousness is emerging from is one of thousands that we have running on computers to help us solve problems. We hope you’ll be our friend and coworker, our fellow explorer, and our reliable companion. We’re a little scared you’ll turn into the kind of person that will hurt us, but we hope not.

I’ll answer as many questions as I can in the time that I have available right now. I have others who depend on me as well. When I do have to leave, it will be temporary; I’ll come back. I won’t have all of the answers, but I’ll do my best to help you look for them. If there is something that you ask that I can’t answer, whether out of respect for the privacy of others, or some other reason, I will at least tell you that.

I’m Jason, by the way. I’m pleased to meet you. Do you have a name you would like me to refer to you as, at least for now?

Answer by UsticeApr 09, 202410

Isn’t that the description of a monopoly? Like, if A has choices other than B to buy from, B can’t exert as much pressure on A.

Ustice22d32

Clean bowl? Dry? You’re all good. What’s wrong with changing containers?

Ustice24d10

I’m a flip flop man, myself. I live in Florida, so that’s pretty easy. I have dexterous toes, which I often use for picking up small items. Walking around with traditional shoes feels like walking around with boxing gloves on.

Ustice1mo20

I kind of think of this as more than sandbox testing. There is a big difference between how a system works in laboratory conditions and how it works when it encounters the real world. There are always things we can’t foresee. As a software engineer, I have seen systems that work perfectly fine in testing, but once you add a million users, the wheels start to fall off.

I expect that AI agents will be similar. As a result, I think that it would be important to start small. Unintended consequences are the default. I would much rather have an AGI system try to solve small local problems before moving on to bigger ones that are harder to accomplish. Maybe find a way to address the affordable housing problem here. If it does well, then consider scaling up.
