All of Gesild Muka's Comments + Replies

The older a transhumanist gets, the less you should trust them to accurately judge AGI risk.

This has some interesting implications. It reminds me of psychics who make confident predictions 100 years into the future but refuse, or are offended, if you challenge them to make confirmable predictions within their lifetimes or the next week/month/year.

Maybe I'm not understanding it correctly: if I'm selfish about my own experiences, I wouldn't get into the machine in the first place. If I have no choice about whether to get in the machine, I'd refuse the lottery ticket even if it were free, just to spite my future copied self who gets to exist because I was destroyed.

Could Alice befriend someone who is closer to building AGI than Bob? If so, perhaps they can protect Alice or at least offer some peace of mind.

My intuition is that housing prices will go down, assuming that population doesn't change too dramatically and that while AI is slowly taking off there are also sizeable takeoffs in energy production, transportation, and automation of physical labor. I assume the price of land will be mostly evenly distributed as the logistical costs of building and labor drop. The exception might be areas of the world that are especially sought after for AI infrastructure.

It does appear that the process to become a certified teacher is more rigorous than the one to become a university professor. The way K-12 schools track progress is also very different from colleges.

The main reason for Altman's firing was a scandal of a more personal nature, mostly unrelated to the everyday or strategic operations of OpenAI.

For how fun and whimsical the story is, the ending is somewhat dark.

Of course, and if it were up to me, students would do so by studying the War of the Ring through reading and analyzing Tolkien, but I don't think that would be as useful for their academic and professional careers as studying current events. The original question is: who should make such decisions?

Bruce Lewis (17d):
My humble opinion is that teachers should make such decisions. From my own education I've come to think that the best education comes from enthusiastic teachers.

Thank you for your answer. I'm not so sure it is a divisive issue for the students; they seem to have little context or interest. If the purpose of AP courses is just to pass the AP test, then there's already a lot of pointless material and discussion, and current events are no less relevant by comparison. Or maybe they are? I don't know who should make this decision (I've added this to the question).

Before I started teaching I would have thought that at the AP level students should, even briefly, learn to analyze conflicts and write about them in a nuanced ... (read more)

Bruce Lewis (21d):
Do you still think students should learn to analyze conflicts and write about them in a nuanced and researched way? I think answering that question will lead you to the answer to your original question.

The rationality community will noticeably spill over into other parts of society in the next ten years. Examples: entertainment, politics, media, art, sports, education etc.

I think I understand: we're discussing with different scales in mind. I'm saying that individually (or if your community is a small local group) nothing has to end, but if your interests and identity are tied to sizeable institutions, technical communities, etc., many will be disrupted by AI to the point where they could fade away completely. Maybe I'm just an unrealistic optimist, but I don't believe collective or individual meaning has to fade away just because the most interesting and cutting-edge work is done exclusively by machines.

Even if we don’t die, it still feels like everything is coming to an end.

Everything? I imagine there will be communities/nations/social groups that completely ban AI and those that are highly dependent on AI. There must be something between those two extremes.

cousin_it (1mo):
This is like saying "I imagine there will be countries that renounce firearms". There aren't such countries. They got eaten by countries that use firearms. The social order of the whole world is now kept by firearms. The same will happen with AI, if it's as much a game changer as firearms.

This list is great. I especially resonate with 7. For a long time I didn't take responsibility because I felt I lacked the intellect/skill/certain something that others who seemed so much more capable had, but it helps to keep in mind that, as the post states, there are no adults.

I left notes throughout. The main issue is the structure, which I usually map out descriptively by trying to answer: where do you want the characters to end up psychologically, and how do you want the audience to feel? Figuring out these descriptive beats for each scene, episode, and season will help refine the story and drive staging, dialogue, etc., and of course nothing is set in stone, so you can always change the structure to accommodate a scene or vice versa. I also recommended a companion series like a podcast or talk show to discuss the ideas in each epis... (read more)

I don't fully understand the obsession with exploring the moral dimensions of the conflict; many, from multiple sides, fronts, factions, etc., have committed atrocities, and I condemn them all regardless of affiliation. I've yet to find anyone on LW willing to seriously engage on solutions. It's as if we've collectively given up on the possibility of peace and all that's left is to hash out the specific wording for the history books.

hiding your beliefs, in ways that predictably leads people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.

I think people generally lie WAY more than we realize, and most lies are lies of omission. I don't think deception is usually the immediate motivation; it's more a kind of social convenience. Maintaining social equilibrium is valued over openness or honesty regarding relevant beliefs that may come up in everyday life.

William the Kiwi (1mo):
I would agree that people lie way more than they realise. Many of these lies are self-deception.

Old-fashioned lobbying might work. Could there be a political candidate in a relevant country who could build a strong platform on getting rid of malaria?

Could you perhaps share your to-do lists with other people who have a stake in your productivity? Would that give you more motivation to follow through with the items on the list?

TeaTieAndHat (2mo):
Could be. But there are a lot of things I mostly want to do for myself, so I don’t know.

Thank you for responding. I'm sorry for my ignorance; this is something I've followed from afar since ~2004, so it's not just a grim fascination (although I guess it kind of is), and I couldn't pass up the chance to ask questions of someone on the ground. I have a few more questions if that's ok...

How often are comprehensive plans to achieve peace reported in the media or made available to the public? Is there anything like ongoing discourse between Jewish Israelis, Palestinians who have Israeli citizenship and Palestinians in Gaza who are all of a similar ... (read more)

Yovel Rom (2mo):
Thanks for your question! It's complicated, and I'll try to address it tomorrow.

Is there an overall solution or movement towards a solution that you think is underreported?

Yovel Rom (2mo):
Nope. I don't think you will be able to get an actual solution in the next 10-20 years (barring SAGI-scale changes), since there's a sizable fraction of the Palestinian population that wants literal Jewish genocide and the destruction of Israel.[1] I do think the Israeli government is planning to take over the Gaza Strip, so I imagine we'll get some kind of different equilibrium after. But I can promise you nobody knows what will happen the day after. Some people are trying to promote solutions, such as the Palestinian Authority taking over the Strip, but nobody knows what's possible yet and much will change in the next weeks.

[1] Couldn't find a survey, but Hamas won elections handily in the Gaza Strip in 2006, and there have been no other elections in the West Bank since because Hamas would win them too. Hamas's constitution literally called for genocide of the Jews until 2017 (in Hebrew, sorry), and is still an extremely antisemitic document that aims for the destruction of Israel.

In this incident something was true because the “experts” decided it must be true. That’s the humanities in (almost?) every incident.

Keeping a work diary helps. A work diary can be just quick notes, comments, or ideas that you don't mind looking back on later. After long enough, you'll find that you want to be more ambitious about the time you spend, the quality of the work you're doing, and your plans for the future.
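As a concrete illustration, here is a minimal sketch of the lowest-friction version of this, assuming plain-text notes and Python; the file name and line format are arbitrary choices, not a prescribed tool:

```python
# A minimal work-diary helper: append timestamped quick notes to a plain-text
# file you can skim later. File name and format are arbitrary assumptions.

from datetime import datetime
from pathlib import Path

DIARY = Path("work_diary.txt")

def jot(note: str) -> None:
    """Append one timestamped line; keeping friction low is the whole point."""
    with DIARY.open("a", encoding="utf-8") as f:
        f.write(f"{datetime.now():%Y-%m-%d %H:%M}  {note}\n")

jot("Idea: split intake from routing in the referral pipeline.")
jot("Stuck on structure; try mapping beats per scene tomorrow.")
```

Anything that lets you re-read yourself months later works just as well.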

I would guess that the percentage of gay men who watch live music is roughly the same as the percentage who watch live sports (or pretty much any leisure activity in society), but openly gay men are historically more common at concerts. Gays were considered dangerous deviants for a long time; maybe classical music/opera became a go-to for 'openness' because it's mostly adults that attend, so you could be openly gay without being harassed or accused of ulterior motives. My main belief: the stereotype is just association, not anything intrinsic.

Historically there are few public places where you could be openly gay and not be harassed; concerts are one of those places.

lc (2mo):
I think this is putting the cart before the horse. Why concerts as the original venue for that? Probably because concert people tend to be more gay.

I can’t help but read this simply as a politician who worries about their future hold on power. (I’d be curious to know how leaders discuss AI behind closed doors)

I mostly agree with the last part of your post, about some experts never agreeing on whether others (animal, artificial, etc.) are conscious or not. A possible solution would be to come up with new language or nomenclature to describe the different possible spectrums and the different dimensions they fall under. So many disagreements on this topic seem to derive from different parties having different definitions of AGI or ASI.

Here's how I tried (I haven't 100% succeeded): I decided that what goes on in my head wasn't enough. For a long time it was enough; I'd think about the things that interested me, maybe discuss them with some people, and then move on to the next thing. This went on for years, and some time was spent thinking about how I might put all that mental discourse to use, but I never did. I worked at day jobs producing things I didn't care about and spent my free time exploring. Eventually I quit my day job, since I spent more energy on personal pursuits anyway, and found... (read more)

With their sharper senses, I'd imagine that dogs experience the world in a much richer way than humans. Then, depending on your definition, you could say that makes dogs more 'conscious'. This opens the door for many other animals with bigger brains and more complex sensory organs than humans: are they also more conscious?

Super AGI (3mo):
Yes, agreed. Given the vast variety of intelligence, social interaction, and sensory perception among many animals (e.g. dogs, octopi, birds, mantis shrimp, elephants, whales, etc.), consciousness could be seen as a spectrum with entities possessing varying degrees of it. But it could also be viewed as a much more multi-dimensional concept, including dimensions for self-awareness and multi-sensory perception, as well as dimensions for:

* social awareness
* problem-solving and adaptability
* metacognition
* emotional depth and variety
* temporal awareness
* imagination and creativity
* moral and ethical reasoning

Some animals excel in certain dimensions, while others shine in entirely different areas, depending on the evolutionary advantages within their particular niches and environments.

One could also consider other dimensions of "consciousness" that AI/AGI could possess, potentially surpassing humans and other animals. For instance:

* computational speed
* memory capacity and recall
* multitasking
* rapid upgradability of perception and thought algorithms
* rapid data ingestion and integration (learning)
* advanced pattern recognition
* universal language processing
* scalability
* endurance
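To make the multi-dimensional framing concrete, here is a minimal sketch in Python of consciousness as a profile over dimensions rather than a single scalar; the dimension names and scores are illustrative assumptions, not measurements:

```python
# A minimal sketch of "consciousness as a multi-dimensional profile" rather
# than a single scalar. Dimension names and scores are illustrative
# assumptions, not empirical measurements.

from dataclasses import dataclass, field

@dataclass
class ConsciousnessProfile:
    name: str
    scores: dict = field(default_factory=dict)  # dimension -> score in [0, 1]

    def dominant_dimensions(self, n: int = 3) -> list:
        """Return the n dimensions on which this entity scores highest."""
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

dog = ConsciousnessProfile("dog", {
    "multi_sensory_perception": 0.9,  # smell/hearing far beyond human range
    "social_awareness": 0.7,
    "self_awareness": 0.3,
    "metacognition": 0.1,
})

human = ConsciousnessProfile("human", {
    "multi_sensory_perception": 0.5,
    "social_awareness": 0.8,
    "self_awareness": 0.9,
    "metacognition": 0.9,
})

# Neither profile strictly dominates the other, so "more conscious" is
# ill-defined until you pick a weighting over dimensions.
print(dog.dominant_dimensions())
print(human.dominant_dimensions())
```

The point of the sketch is that a dog can dominate a human on some axes and not others, so any single "more conscious" ranking smuggles in a choice of weights.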

I really love movies. This year I’ve gone to more re-releases and anniversary showings than new releases, which I chalk up to the formulaic thinking behind newer movies. So we’re not running out of art; rather, existing niches are being filled in more clever ways, arguably faster than new niches emerge.

It could also have to do with the nature of film production. If a movie takes five years to make, the design and production team is predicting what viewers will want five years in the future. The result can be stale, overly commercialized movies.

That’s a rather extreme idea; even if humanity were on the brink of extinction, deceit is hard to justify.

We haven’t even scratched the surface of possible practical solutions; once those are exhausted, there are many more possible paths.

Perhaps standards and practices for who can and should teach AI safety (or new related fields) should be better defined.

There are many atoms out there and many planets to strip-mine, and a superintelligence has infinite time. Interspecies competition makes sense depending on where you place the intelligence dial. I assume that any intelligence that’s 1,000,000 times more capable than the next one down the ladder will ignore its ‘competitors’ (again, there could be collateral damage, but likely not large-scale extinction). If you place the dial at lower orders of magnitude, then humans are a greater threat to AI, AI reasoning will be closer to human reasoning, and we should p... (read more)

My assumption is that it’s difficult to design superintelligence, and humans will either hit a limit in the resources and energy that go into keeping it running or lose control of those resources as it reaches AGI.

My other assumption, then, is that an intelligence that can last forever and think and act at 1,000,000 times human speed will find non-disruptive ways to continue its existence. There may be some collateral damage to humans, but the universe is full of resources, so an existential threat doesn’t seem apparent (and there are other stars and planets, wouldn’t i... (read more)

mruwnik (4mo):
It's not that it can't come up with ways to not stamp on us. But why should it? Yes, it might only be a tiny, tiny inconvenience to leave us alone. But why even bother doing that much? It's very possible that we would be of total insignificance to an AI, just like the ants that get destroyed at a construction site: no one even notices them, and it still doesn't turn out too well for them. Though that's when there are massive differences of scale. When the differences are smaller, you get into inter-species competition dynamics, which is also what the OP was pointing at, if I understand correctly. A superintelligence might just ignore us. It could also, e.g., strip-mine the whole earth for resources, because why not? "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else".

I’m not sure the analogy works or fully answers my question. The equilibrium that comes with ‘humans going about their business’ might favor human proliferation at the cost of plant and animal species (and even lead to apocalyptic ends), but the way I understand it, the difference in intelligence between human and superintelligence is comparable to that between humans and bacteria, rather than between human and insect or animal.

I can imagine practical ways there might be friction between humans and SI, over resource appropriation for example, but the difference in resource use would... (read more)

hairyfigment (4mo):
Just as humans find it useful to kill a great many bacteria, an AGI would want to stop humans from e.g. creating a new, hostile AGI. In fact, it's hard to imagine an alternative which doesn't require a lot of work, because we know that in any large enough group of humans, one of us will take the worst possible action. As we are now, even if we tried to make a deal to protect the AI's interests, we'd likely be unable to stop someone from breaking it.

I like to use the silly example of an AI transcending this plane of existence, as long as everyone understands this idea appears physically impossible. If somehow it happened anyway, that would mean there existed a way for humans to affect the AI's new plane of existence, since we built the AI, and it was able to get there. This seems to logically require a possibility of humans ruining the AI's paradise. Why would it take that chance? If killing us all is easier than either making us wiser or watching us like a hawk, why not remove the threat?

I'm not sure I understand your point about massive resource use. If you mean that SI would quickly gain control of so many stellar resources that a new AGI would be unable to catch up, it seems to me that:

1. people would notice the Sun dimming (or much earlier signs), panic, and take drastic action like creating a poorly-designed AGI before the first one could be assured of its safety, if it didn't stop us;
2. keeping humans alive while harnessing the full power of the Sun seems like a level of inconvenience no SI would choose to take on, if its goals weren't closely aligned with our own.

Why wouldn’t a superintelligence simply go about its business without any regard for humans? What intuition am I missing?

DaemonicSigil (4mo):
The idea is that there are lots of resources that superhuman AGI would be in competition with humans for, if it didn't share our ideas about how those resources should be used. The biggest one is probably energy (more precisely, thermodynamic free energy). That's very useful; it's a requirement for doing just about anything in this universe. So AGI going about its business without any regard for humans would be doing things like setting up a Dyson sphere around the sun, or maybe building large fusion reactors all over the Earth's surface. The Dyson sphere would deprive us of the sunlight we need to live, while the fusion reactors might produce enough waste heat to make the planet's surface uninhabitable to humans. AGI going about its business with no regard for humans has no reason to make sure we survive.

That's the base case, where humanity doesn't fight back. If (some) humans figure out that that's how it's going to play out, then they're likely to try and stop the AGI. If the AGI predicts that we're going to fight back, then maybe it can just ignore us like tiny ants, or maybe it's simpler and safer for it to deliberately kill everyone at once, so we don't do complicated and hard-to-predict things to mess up its plans.

TLDR: Even if we don't fight back, AGI does things with the side effect of killing us. Therefore we probably fight back if we notice an AGI doing that. A potential strategy for the AGI to deal with this is to kill everyone before we have a chance to notice.
RHollerith (4mo):
If I'm a superintelligent AI, killing all the people is probably the easiest way to prevent people from interfering with my objectives, which people might do, for example, by creating a second superintelligence. It's just easier for me to kill them all (supposing I care nothing about human values, which will probably be the case for the first superintelligent AI, given the way things are going in our civilization) than to keep an eye on them or to determine which ones might have the skills to contribute to the creation of a second superintelligence (and kill only those). (I'm slightly worried what people will think of me when I write this way, but the topic is important enough that I wrote it anyway.)
Mark Xu (4mo):
Humans going about their business without regard for plants and animals has historically not been that great for a lot of them.

‘Dimension hopping’ or ‘dimension manipulation’ could be a solution to the Fermi paradox. The universe could be full of intelligent life that remains silent and (mostly) invisible behind advanced spatial technology.

(the second type refers to more limited hypothetical dimension technology such as creating pocket dimensions, for example, rather than accessing other universes)

The last point is a really good one that will probably be mostly ignored, and my intuition is that the capacity-for-suffering argument will also be ignored.

My reasoning is that in the legal arena arguments have a different flavor, and I can see a judge, ruling on whether or not a trial can go forward, deciding that an AI can sue or demand legal rights simply because it has the physical capacity and force of will to hire a lawyer (lawyers themselves being ‘reasonable persons’), regardless of whether it’s covered by any existing law. Just as if a dog, for example, started talking and had... (read more)

I think they'll just need the ability to hire a lawyer. 2017 set the precedent for animal representation, so my assumption is that AGI isn't far behind. In the beginning I'd imagine some reasonable-person standard, as in "would a reasonable person find the AGI human-like?" Later there'll be strict definitions, probably along technical lines.

Super AGI (6mo):
True. There are some legal precedents where non-human entities, like animals and even natural features like rivers, have been represented in court. And yes, the "reasonable person" standard has been used frequently in legal systems as a measure of societal norms. As society's understanding and acceptance of AI continues to evolve, it's plausible to think that these standards could be applied to AGI. If a "reasonable person" would regard an advanced AGI as an entity with its own interests, much like they would regard an animal or a human, then it follows that the AGI could be deserving of certain legal protections. Especially when we consider that all mental states in humans boil down to the electrochemical workings of neurons, the concept of suffering in AI becomes less far-fetched. If human synapses and neurons can give rise to rich subjective experiences, why should we definitively exclude the possibility that floating-point values stored in vast training sets and advanced computational processes might do the same?

My assumption is that most branches don’t get extradimensional dictators, so it’s unlikely we will. (I suppose it’s not impossible that some individual or civilization could have created a superintelligence to police the universe; I’d have to think about it.)

My first question to Grusch would be: is he basing his claims of exotic-material discovery on a physical analysis, or on his interpretation of a physical analysis? And is that analysis available for public scrutiny?

If we're not colonized because of the number of branches, wouldn't there be only a small overall chance of ending up in our branch?

avturchin (6mo):
But many UFOs are ending up here, so each branch is getting many visitors; so why is one of them not eating all our atoms? This again could be explained by the existence of some overwhelming control force, like a superintelligence which only prevents the creation of other superintelligences.

Maybe the questions should have specified gender, as most parents intuitively know that girls mature faster, and without gender specified in the questions the respondents might project their own children's gender onto the question. For example, a parent with two daughters might have a different bias when answering than a parent with sons.

I’ll try my best; I’m by no means an expert. I don’t think there’s a one-size-fits-all answer, but let’s take your example of the relationship between IQ and national prosperity. You can spend time researching what makes up prosperity and where that intersects with IQ, and find different correlates between IQ and other attributes in individuals (the assumption being that individuals are a kind of unit by which to measure prosperity).

You can use spaced repetition to avoid burnout and gain fresh perspectives. The point is to build mental muscle memory and intuition on what... (read more)

The way you word the second type might be working against you. ‘Updating’ brings to mind the computer function that neatly and quickly fills a bar, after which the needed changes are complete. The human brain doesn’t work like that. To build new intuitions you need spaced repetition and thoughtful engagement with the belief you want to internalize. Thinking does work; you just can’t force it.
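For what it's worth, here is a minimal sketch of the scheduling idea behind spaced repetition, loosely in the spirit of the SM-2 family of algorithms; the interval constants are illustrative assumptions, not a tuned implementation:

```python
# A toy spaced-repetition scheduler: successful recall stretches the review
# interval, a lapse resets it. Constants are illustrative guesses, not tuned.

def next_review(interval_days: float, recalled: bool) -> float:
    """Return the next review interval (in days) for one belief/idea."""
    if not recalled:
        return 1.0                         # lapse: start over tomorrow
    return max(1.0, interval_days * 2.5)   # growth factor is an assumption

interval = 1.0
for recalled in [True, True, False, True]:
    interval = next_review(interval, recalled)
    print(f"revisit this belief in {interval:g} day(s)")
```

The fill-a-bar picture fails exactly here: the schedule never 'completes'; engagement just gets sparser as the intuition solidifies.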

[anonymous] (6mo):

It’s not a big thing to get upset about if you’re not in a culture that highly values community and social cohesion—where it can be quite emotionally exhausting to always conform/accommodate to the thinking and values (mental models?) of others.

And of course I don’t want to upset anyone; the post is worthwhile (and powerful) because it describes behaviors that might lead people to give up on finding community, fulfilling relationships, or common ground. For me it was an invitation to better describe or explain these behaviors, and a twofold message: 1) don’t give up, you’re not alone; 2) keep an open mind toward others’ perspectives.

Corporate real estate is what I call it when I want to sound fancy. Really, it was a call center for a relocation company which was a subsidiary of a large real estate company.

Our department was like a dispatch service: we took calls from customers (of companies we had contracts with), and after a short exposition-heavy conversation we’d refer them to the real estate firms under the parent company’s umbrella.

A real estate agent would be automatically assigned and receive our referral. It was free, and if they closed with our agent they’d get a kick... (read more)

The grumpy vs talkative example and the Alice vs Bob example remind me of the knowledgeability vs competence debates that I used to have before and after conducting interviews. When I first entered the corporate world I thought knowledge was paramount; I learned very quickly that likeability and team synergy are valued much more. The best line workers who couldn't break into the management team even after years of hard work often wondered what they were doing wrong: their metrics and productivity were consistently the highest and they got along with everyone on their teams. Why weren't they advancing? My advice to them was to focus less on productivity and more on impressing the boss.

lukehmiles (6mo):
Curious what industry this is, if you don't mind saying.

This reminds me of the incident in Belgium a few months ago:

https://www.lesswrong.com/posts/9FhweqMooDzfRqgEW/chatbot-convinces-belgian-to-commit-suicide

The question of liability in these kinds of circumstances is fascinating and important. The legal system will decide these questions by setting precedents as they occur if we don't try to address them (or at least think about them) in advance.

The consensus is don't ask, don't tell. It's the only media consumption (among movies, TV, books, sports, videos, music, games, etc.) that's not openly discussed around the water cooler or in polite society. A disservice, in my opinion.

It appears to be harmful to children because it takes time away from other activities and creates false impressions of sex/relationships, especially if porn is their only reference for adult relationships. Children younger than 16 should be educated about media (including porn) but probably shouldn't have free rein to explore p... (read more)

I feel there may be more to it; focus testing might provide some answers and get to the heart of why they don't take tech disruption more seriously.

Here's a crazier but more specific possible course of action: when I was visiting Albania in 2011, I heard from locals about foreign religious groups that offered free English-language courses. Naturally, I assumed it was mainly for recruitment, and I now know it's a fairly common tactic.

So the idea is: what if someone offered coding classes (or English courses) and used them as a platform to discuss AI? And you could be upfront about it in the marketing: "Learn coding and AI fundamentals from an experienced expert."

Again, there's very few (<10) people working on technical alignment in China right now, and I feel a bit lost. Any advice is welcome. 

Maybe there's a backdoor approach. Instead of trying to convince the general public of the importance of alignment, perhaps there's a model for creating 'alignment departments' in existing tech and AI companies. Another idea could be outreach to high school or university students who want to pursue careers in tech.

From reading Part 1 it seems the ways that we've tried to spread the message in Western countries wo... (read more)

Yes! That sounds like it could work! But as long as it isn't something people can get a career in, it'll just stay in the realm of a cool but easily forgotten idea. This is why I think it's so important to hire people to work in technical alignment. If it were even a slight theoretical possibility, it would get people thinking about it more seriously.
