Maybe I'm not understanding it correctly: if I'm selfish about my own experiences, I wouldn't get into the machine in the first place. If I had no choice about whether to get in, I'd refuse the lottery ticket even if it were free, just to spite my future copied self who gets to exist because I was destroyed.
Could Alice befriend someone who is closer to building AGI than Bob? If so, perhaps they can protect Alice or at least offer some peace of mind.
My intuition is that housing prices will go down, assuming that population doesn't change too dramatically and that, while AI slowly takes off, there are also sizeable takeoffs in energy production, transportation, and the automation of physical labor. I expect the price of land to even out geographically as the logistical costs of building and labor drop. The exception might be areas of the world that are especially sought after for AI infrastructure.
It does appear that the process to become a certified teacher is more rigorous than the one to become a university professor. The way K-12 schools track progress is also very different from how colleges do it.
The main reason for Altman's firing was a scandal of a more personal nature, mostly unrelated to the everyday or strategic operations of OpenAI.
Of course, and if it were up to me, students would do so by studying the War of the Ring, reading and analyzing Tolkien. But I don't think that would be as useful for their academic and professional careers as studying current events. The original question is: who should make such decisions?
Thank you for your answer. I'm not so sure it's a divisive issue for the students; they seem to have little context or interest. If the purpose of AP courses is just to pass the AP test, then there's already a lot of pointless material and discussion, and current events are no less relevant by comparison. Or maybe they are? I don't know who should make this decision (I've added this to the question).
Before I started teaching I would have thought that at the AP level students should, even briefly, learn to analyze conflicts and write about them in a nuanced ...
The rationality community will noticeably spill over into other parts of society in the next ten years. Examples: entertainment, politics, media, art, sports, education etc.
I think I understand: we're discussing with different scales in mind. I'm saying that individually (or if your community is a small local group) nothing has to end, but if your interests and identity are tied to sizeable institutions, technical communities, etc., many of those will be disrupted by AI to the point where they could fade away completely. Maybe I'm just an unrealistic optimist, but I don't believe collective or individual meaning has to fade away just because the most interesting and cutting-edge work is done exclusively by machines.
Even if we don’t die, it still feels like everything is coming to an end.
Everything? I imagine there will be communities/nations/social groups that completely ban AI and those that are highly dependent on AI. There must be something between those two extremes.
This list is great. I especially resonate with 7: for a long time I didn't take responsibility because I felt I lacked the intellect/skill/certain something that others, who seemed so much more capable than me, had. But it helps to keep in mind that, as the post states, there are no adults.
I left notes throughout. The main issue is the structure, which I usually map out descriptively by trying to answer: where do you want the characters to end up psychologically, and how do you want the audience to feel? Figuring out these descriptive beats for each scene, episode, and season will help refine the story and drive staging, dialogue, etc. And of course nothing is set in stone, so you can always change the structure to accommodate a scene, or vice versa. I also recommended a companion series, like a podcast or talk show, to discuss the ideas in each epis...
I don't fully understand the obsession with exploring the moral dimensions of the conflict; many from multiple sides, fronts, factions, etc. have committed atrocities, and I condemn them all regardless of affiliation. I've yet to find anyone on LW willing to seriously engage on solutions. It's like we've collectively given up on the possibility of peace, and all that's left is to hash out the specific wording for the history books.
Hiding your beliefs, in ways that predictably lead people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.
I think people generally lie WAY more than we realize, and most lies are lies of omission. I don't think deception is usually the immediate motivation; it's more a kind of social convenience. Maintaining social equilibrium is valued over openness or honesty about relevant beliefs that may come up in everyday life.
Old-fashioned lobbying might work. Could there be a political candidate in a relevant country who could build a strong platform on getting rid of malaria?
Could you perhaps share your to-do lists with other people who have a stake in your productivity? Would that give you more motivation to follow through with the items on the list?
Thank you for responding. I'm sorry for my ignorance; this is something I've followed from afar since ~2004, so it's not just a grim fascination (although I guess it kind of is). I couldn't pass up the chance to ask questions of someone on the ground. I have a few more questions if that's ok.
How often are comprehensive plans to achieve peace reported in the media or made available to the public? Is there anything like ongoing discourse between Jewish Israelis, Palestinians who have Israeli citizenship and Palestinians in Gaza who are all of a similar ...
In this incident something was true because the “experts” decided it must be true. That’s the humanities in (almost?) every incident.
Keeping a work diary helps. A work diary can be just quick notes, comments, or ideas that you don't mind looking back on later. After long enough, you'll find that you want to be more ambitious about the time you spend, the quality of the work you're doing, and your plans for the future.
I would guess that the percentage of gay men who watch live music is roughly the same as the percentage who watch live sports (or pretty much any leisure activity in society), but openly gay men have historically been more common at concerts. Gays were considered dangerous deviants for a long time; maybe classical music/opera became a go-to for 'openness' because it's mostly adults who attend, so you could be openly gay without being harassed or accused of ulterior motives. My main belief: the stereotype exists because of association, not because of anything intrinsic.
Historically there have been few public places where you could be openly gay and not be harassed; concerts are one of those places.
I can’t help but read this simply as a politician who worries about their future hold on power. (I’d be curious to know how leaders discuss AI behind closed doors)
I mostly agree with the last part of your post about some experts never agreeing on whether others (animal, artificial, etc.) are conscious or not. A possible solution would be to come up with new language or nomenclature to describe the different possible spectrums and the dimensions they fall under. So many disagreements on this topic seem to derive from different parties having different definitions of AGI or ASI.
Here's how I tried (I haven't 100% succeeded): I decided that what goes on in my head wasn't enough. For a long time it was enough: I'd think about the things that interested me, maybe discuss them with some people, and then move on to the next thing. This went on for years, and some time was spent thinking about how I might put all that mental discourse to use, but I never did. I worked at day jobs producing things I didn't care about and spent my free time exploring. Eventually I quit my day job, since I spent more energy on personal pursuits anyway, and found...
With their sharper senses, I'd imagine that dogs experience the world in a much richer way than humans do. Then, depending on your definition, you could say that makes dogs more 'conscious'. This opens the door for many other animals with bigger brains and more complex sensory organs than humans. Are they also more conscious?
I really love movies. This year I’ve gone to more re-releases and anniversary showings than new releases, which I chalk up to the formulaic thinking behind newer movies. So we’re not running out of art; rather, existing niches are being filled in more and more clever ways, and arguably faster than new niches emerge.
It could also have to do with the nature of film production. If a movie takes five years to make, the design and production team is predicting what viewers will want five years in the future. The result can be stale, overly commercialized movies.
That’s a rather extreme idea; even if humanity were on the brink of extinction, deceit would be hard to justify.
We haven’t even scratched the surface of possible practical solutions, and once those are exhausted there are many more possible paths.
Perhaps standards and practices for who can and should teach AI safety (or new related fields) should be better defined.
There are many atoms out there and many planets to strip-mine, and a superintelligence has infinite time. Interspecies competition makes sense depending on where you place the intelligence dial. I assume that any intelligence that’s 1,000,000 times more capable than the next one down the ladder will ignore its ‘competitors’ (again, there could be collateral damage, but likely not large-scale extinction). If you place the dial at lower orders of magnitude, then humans are a greater threat to AI, AI reasoning will be closer to human reasoning, and we should p...
My assumption is that it’s difficult to design superintelligence, and that humans will either hit a limit on the resources and energy that go into keeping it running, or lose control of those resources as it reaches AGI.
My other assumption, then, is that an intelligence that can last forever and think and act at 1,000,000 times human speed will find non-disruptive ways to continue its existence. There may be some collateral damage to humans, but the universe is full of resources, so an existential threat doesn’t seem apparent (and there are other stars and planets, wouldn’t i...
I’m not sure the analogy works or fully answers my question. The equilibrium that comes with ‘humans going about their business’ might favor human proliferation at the cost of plant and animal species (and even lead to apocalyptic ends), but the way I understand it, the difference in intelligence between human and superintelligence is comparable to that between humans and bacteria, rather than between human and insect or animal.
I can imagine practical ways there might be friction between humans and SI (over resource appropriation, for example), but the difference in resource use would...
Why wouldn’t a superintelligence simply go about its business without any regard for humans? What intuition am I missing?
‘Dimension hopping’ or ‘dimension manipulation’ could be a solution to the Fermi paradox. The universe could be full of intelligent life that remains silent and (mostly) invisible behind advanced spatial technology.
(the second type refers to more limited hypothetical dimension technology such as creating pocket dimensions, for example, rather than accessing other universes)
The last point is a really good one that will probably be mostly ignored and my intuition is that the capacity for suffering argument will also be ignored.
My reasoning is that in the legal arena arguments have a different flavor, and I can see a judge, ruling on whether or not a trial can go forward, deciding that an AI can sue or demand legal rights simply because it has the physical capacity and force of will to hire a lawyer (lawyers being themselves ‘reasonable persons’), regardless of whether it’s covered by any existing law. Just as if a dog, for example, started talking and had...
I think they'll just need the ability to hire a lawyer. 2017 set the precedent for animal representation, so my assumption is that AGI isn't far behind. In the beginning I'd imagine some reasonable-person standard, as in "would a reasonable person find the AGI human-like?" Later there'll be strict definitions, probably along technical lines.
My assumption is that most branches don’t get extradimensional dictators, so it’s unlikely we will. (I suppose it’s not impossible that some individual or civilization could have created a superintelligence to police the universe; I’d have to think about it.)
My first question to Grusch would be: is he basing his claims of exotic material discovery on a physical analysis, or on his interpretation of a physical analysis? Then: is that analysis available for public scrutiny?
If we're not colonized because of the number of branches, wouldn't there be only a small overall chance of ending up in our branch?
Maybe the questions should have specified gender, as most parents intuitively know that girls mature faster, and without gender specified, respondents might project their own children's gender onto the questions. For example, a parent with two daughters might have a different bias when answering than a parent with sons.
I’ll try my best; I’m by no means an expert. I don’t think there’s a one-size-fits-all answer, but let’s take your example of the relationship between IQ and national prosperity. You can spend time researching what makes up prosperity and where that intersects with IQ, and find different correlates between IQ and other attributes in individuals (the assumption being that individuals are a kind of unit for measuring prosperity).
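As a toy illustration of what 'finding correlates' could look like in practice, here's a minimal sketch in Python, assuming entirely synthetic data (the variables, sample size, and numbers are hypothetical, not real measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic individual-level data: IQ scores and a crude prosperity
# proxy (income). The relationship is baked in by construction here;
# real data would be far noisier and more confounded.
n = 1_000
iq = rng.normal(100, 15, n)
income = 20_000 + 400 * iq + rng.normal(0, 10_000, n)

# Pearson correlation between IQ and the prosperity proxy.
r = np.corrcoef(iq, income)[0, 1]
print(f"Pearson r between IQ and income: {r:.2f}")
```

A correlation like this says nothing about causation by itself; the real work is in deciding which components of prosperity to measure and at what unit of analysis.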
You can use spaced repetition to avoid burnout and gain fresh perspectives. The point is to build mental muscle memory and intuition on what...
The way you word the second type might be working against you. ‘Updating’ brings to mind the computer function that neatly and quickly fills a bar, after which the needed changes are complete. The human brain doesn’t work like that. To build new intuitions you need spaced repetition and thoughtful engagement with the belief you want to internalize. Thinking does work; you just can’t force it.
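To make the contrast with the fill-a-bar picture concrete, here's a toy sketch of the exponentially spaced review schedule that spaced-repetition systems use (the starting interval and multiplier are illustrative, not any particular system's real parameters):

```python
from datetime import date, timedelta

def review_schedule(start: date, first_interval_days: float = 1.0,
                    multiplier: float = 2.0, reviews: int = 6) -> list[date]:
    """Each successful review roughly doubles the gap before the next one."""
    schedule, interval, current = [], first_interval_days, start
    for _ in range(reviews):
        current += timedelta(days=round(interval))
        schedule.append(current)
        interval *= multiplier
    return schedule

# Revisiting a belief at growing gaps: 1, 2, 4, 8, 16, 32 days out.
for review_date in review_schedule(date.today()):
    print(review_date)
```

The point of the widening gaps is that each engagement happens just as the idea is starting to fade, which is closer to how internalizing a belief actually works than a progress bar filling up.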
It’s not a big thing to get upset about if you’re not in a culture that highly values community and social cohesion, where it can be quite emotionally exhausting to always conform or accommodate to the thinking and values (mental models?) of others.
And of course I don’t want to upset anyone; the post is worthwhile (and powerful) because it describes behaviors that might lead people to give up on finding community, fulfilling relationships, or common ground. For me it was an invitation to better describe or explain these behaviors, and a twofold message: 1) don’t give up, you’re not alone; 2) keep an open mind about others’ perspectives.
Corporate real estate is what I call it when I want to sound fancy. Really, it was a call center for a relocation company which was a subsidiary of a large real estate company.
Our department was like a dispatch service, we took calls from customers (of companies we had contracts with) and after a short exposition-heavy conversation we’d refer them to the real estate firms that were under the parent company’s umbrella.
A real estate agent would be automatically assigned and receive our referral. It was free and if they closed with our agent they’d get a kick...
The grumpy vs. talkative example and the Alice vs. Bob example remind me of the knowledgeability vs. competence debates I used to have before and after conducting interviews. When I first entered the corporate world I thought knowledge was paramount; I learned very quickly that likeability and team synergy are valued much more. The best line workers, who couldn't break into the management team even after years of hard work, often wondered what they were doing wrong: their metrics and productivity were consistently the highest, and they got along with everyone on their teams. Why weren't they advancing? My advice to them was to focus less on productivity and more on impressing the boss.
This reminds me of the incident in Belgium a few months ago:
https://www.lesswrong.com/posts/9FhweqMooDzfRqgEW/chatbot-convinces-belgian-to-commit-suicide
The question of liability in these kinds of circumstances is fascinating and important. The legal system will decide these questions by setting precedents as they occur if we don't try to address them (or at least think about them) in advance.
The consensus is don't ask, don't tell. It's the only media consumption (among movies, TV, books, sports, videos, music, games, etc.) that's not openly discussed around the water cooler or in polite society. A disservice, in my opinion.
It appears to be harmful to children because it takes time away from other activities and creates false impressions of sex/relationships, especially if porn is their only reference for adult relationships. Children younger than 16 should be educated on media (including porn) but probably shouldn't have free rein to explore p...
I feel there may be more to it; focus testing might provide some answers and get to the heart of why they don't take tech disruption more seriously.
Here's a crazier but more specific possible course of action: when I was visiting Albania in 2011, I heard from locals about foreign religious groups that offered free English language courses. Naturally, I assumed it was mainly for recruitment, and I now know it's a fairly common tactic.
So the idea is: what if someone offered coding classes (or English courses) and used them as a platform to discuss AI? And you could be upfront about it in the marketing: "Learn coding and AI fundamentals from an experienced expert."
Again, there are very few (<10) people working on technical alignment in China right now, and I feel a bit lost. Any advice is welcome.
Maybe there's a backdoor approach. Instead of trying to convince the general public about the importance of alignment perhaps there's a model for creating 'alignment departments' in existing tech and AI companies. Another idea could be outreach for high school or university students that want to pursue careers in tech.
From reading Part 1 it seems the ways that we've tried to spread the message in Western countries wo...
Yes! That sounds like it could work! But as long as it isn't something people can get a career in, it'll just stay in the realm of a cool but easily forgotten idea. This is why I think it's so important to hire people to work in technical alignment. If a career there were even a slight theoretical possibility, it would get people thinking about it more seriously.
This has some interesting implications. It reminds me of psychics who make confident predictions about 100 years in the future but refuse, or are offended, if you challenge them to make confirmable predictions within their lifetimes, or over the next week/month/year.