All of Vanilla_cabs's Comments + Replies

I can't help but notice that if for you "nothing else could have happened than what happened", then your definition of "could have happened" is so narrow as to become trivial.

Rather, I think that by "X could have happened in situation Y", laypeople mean something like: "even with the knowledge of hindsight, in a situation that looks identical to situation Y in the parameters that matter, I could not rule out X happening".

I was just curious and wanted to give you the occasion to expand your viewpoint. I didn't downvote your comment btw.

1 · Emrik · 9mo
Aye, I didn't jump to the conclusion that you were aggressive. I wanted to make my comment communicate that message anyway, and the fact that your comment could be interpreted that way gave me an excuse.
2 · Emrik · 9mo
All the correct ways and none of the incorrect ways, of course! I see the ambivalence and range of plausible interpretations. Can't I just appreciate a good post for the value I found in it without being fished out for suspected misunderstandings? :p I especially liked how this is the cutest version of Socrates I've encountered in any literature.

My initial reaction to their arrival was "now this is dumb". It just felt too different from the rest, and too unlikely to be taken seriously. But in hindsight, the suddenness and unlikelihood of their arrival work well with the final twist. It's a nice dark comedic ending, and it puts the story in a larger perspective.

I think the bigger difference between humans and chimps is the high prosociality of humans. This is what allowed humans to evolve complex cultures that now bear a large part of our knowledge and intuitions. And the lack of that prosociality is the biggest obstacle to teaching chimps math.

I think I already replied to this when I wrote:

I think all the methods that aim at forcing the Gatekeeper to disconnect are against the spirit of the experiment.

I just don't see how, in a real-life situation, disconnecting would equate to freeing the AI. The rule is artificially added to prevent cheap strategies from the Gatekeeper. In return, there's nothing wrong with adding rules to prevent cheap strategies from the AI.

But economic growth does not necessarily mean better lives on average if there are also more humans to feed and shelter. In the current context, if you want more ideas, you'd get a better ROI by investing in education.

Unless humanity destroys itself first, something like Horizon Worlds will inevitably become a massive success. A digital world is better than the physical world because it lets us override the laws of physics. In a digital world, we can duplicate items at will, cover massive distances instantaneously, make crime literally impossible, and much, much more. A digital world is to the real world as Microsoft Word is to a sheet of paper. The digital version has too many advantages to count.

Either there will be limitations or not. No limitations means that you ca... (read more)

I see a flaw in the Tuxedage ruleset. The Gatekeeper has to stay engaged throughout the experiment, but the AI doesn't. So the AI can bore the Gatekeeper to death by replying at random intervals. If I had to stare at a blank screen for 30 minutes waiting for a reply, I would concede.

Alternatively, the AI could just drown the Gatekeeper under a flurry of insults, graphic descriptions of violent/sexual nature, vacuous gossip, or a mix of these for the whole duration of the experiment. I think all the methods that aim at forcing the Gatekeeper to disconnect a... (read more)

8 · Viliam · 1y
I assume that most methods to get out of the box will be unpleasant in some sense. The Gatekeepers should be explicitly warned about this possibility before the game. But I believe that it should remain a possibility, because:

1) The purpose of the exercise is to simulate a situation where an actual superintelligent AI is actually trying to get out of the box. The actual AI would do whatever it thinks would work. That might realistically include obscenities or boring things (or even things beyond human abilities, such as random shapes that induce madness in a human observer). I mean, if staring at a blank screen for 30 minutes is so boring that you would literally let the AI out of the box rather than endure it, then an AI that predicts this would of course leave the screen blank. If you can't endure it, you should not apply for the actual job of the Gatekeeper in real life... and you probably shouldn't play one in the game.

2) I am afraid of starting a slippery slope here, of adding various limitations of the form "the AI can't do this or that" until the AI is merely allowed to talk politely about the weather. Then of course no one would let the AI out of the box, and then the conclusion of the experiment would be that putting the AI in a box with human supervision is perfectly safe. And then you get an actual AI which says on purpose the most triggering things, and the human supervisor collapses in tears and turns off the internet firewall...

For the record, I am not saying here that abusing people verbally is an acceptable or desirable thing in usual circumstances. I am saying that people who don't want to be verbally abused should not volunteer for an experiment whose explicit purpose is to find out how far you can push humans if your only communication medium is plain text.

It comes with a cultural relativism claim that the morality of a culture isn't wrong, just in conflict with your morals. And this is also probably right.

How can this work? Cultures change. So which is morally right, the culture before the change, or the culture after the change?

I guess a reply could be "Before the change, the culture before the change is right. After the change, the culture after the change is right." But in this view, "being morally right" carries no information. We cannot assess whether a culture deserves to be changed based on this view.

4 · calef · 1y
Probably one of the core infohazards of postmodernism is that “moral rightness” doesn’t really exist outside of some framework. Asking about “rightness” of change is kind of a null pointer in the same way self-modifying your own reward centers can’t be straightforwardly phrased in terms of how your reward centers “should” feel about such rewiring.

Thanks everyone :)

Initially, I was expecting a "no", but being denied a reply is arguably a stronger rejection experience.

Finally, willy finished his makeshift guide rope and lowered it to the rescuers.

Finally, Toni finished his makeshift guide rope and lowered it to the rescuers.


The AI only needs to escape. Once it's out, it has the leisure to design a virtually infinite number of social experiments to refine its "human manipulation" skills: sending phishing emails, trying romantic interactions on dating apps, trying to create a popular cat-video YouTube channel without anyone guessing that it's all deepfakes, and many more. Failing any of these would barely have any negative consequence.

1 · [comment deleted] · 1y

Yes, but I don't know if he really did it. I see multiple problems with that implementation. First, the interest rate should be adjusted for inflation, otherwise the bet is about a much larger class of events than "end of the world".

Next, there's a high risk that the "doom" bettor will have spent all their money by the time the bet expires. And the "survivor" bettor will never see that money anyway.

Finally, I don't think it's interesting to win if the world ends. I think what's more interesting is rallying doubters before it's too late, in order to marginally raise our chances of survival.
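
To make the first point above concrete, here is a minimal sketch (Python, with made-up numbers) of why the agreed interest rate needs to at least track inflation: below that, the "survivor" bettor loses purchasing power even when they win, so the bet ends up pricing inflation and growth as much as doom.

```python
# Toy illustration with hypothetical numbers: an apocalypse bet in which the
# "doom" bettor receives `stake` now and repays it with interest in `years`
# years if the world still exists. If the nominal rate is below inflation,
# the "survivor" bettor loses purchasing power even when they "win" the bet.
def real_payoff_to_survivor(stake, nominal_rate, inflation_rate, years):
    nominal_repayment = stake * (1 + nominal_rate) ** years
    # Deflate the repayment into today's purchasing power.
    real_repayment = nominal_repayment / (1 + inflation_rate) ** years
    return real_repayment - stake  # gain or loss in today's money

print(real_payoff_to_survivor(100, 0.03, 0.05, 10))  # ~ -17.5: loses despite "winning"
print(real_payoff_to_survivor(100, 0.07, 0.05, 10))  # ~ +20.8: rate above inflation, gains
```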

3 · Yitz · 1y
It may still be useful as a symbolic tool, regardless of actual monetary value. $100 isn't all that much in the grand scheme of things, but it's the taking of the bet that matters.

It's nice that you're open to betting. What unambiguous sign would change your mind about the speed of AGI takeover, long enough before it happens that you'd still have time to make a positive impact afterwards? Nobody is interested in winning a bet where winning means "mankind gets wiped out".

4 · mukashi · 1y
Yes, that's the key issue. I'm not sure I can think of one. Do you have any ideas? I mean, what would be an unequivocal sign that AGI can take over in a year's time? Something like a pre-AGI parasitizing a major computing center for X days before it is discovered, in a plan to expand to other centres...? That would still not be a sign that we are pretty much f'ed in a year, but definitely a data point towards things going bad very quickly.

What data point would make you change your mind in the opposite direction? I mean, something that happens and you say: yes, we may all die, but this won't happen in a year; maybe in something like 30 years or more.

Edit: I originally posted these two paragraphs as separate comments; unifying them for the sake of clarity.

Basically, a "wait a decade quietly" strategy

I was thinking more like "ten weeks". That's a long time for an AGI to place its clone-agents and prepare a strike.

1 · LGS · 1y
You can't get an "overwhelming advantage over the world" by ten weeks of sitting quietly. If the AGI literally took over every single computer and cell phone, as well as acquired a magic "kill humanity instantly" button, it's still not clear how it wins. To win, the AGI needs not only to kill humans, but also to build robots that can support the entire human economy (in particular, they should be able to build more robots from scratch, including mining all necessary resources and transporting them to the factory from all over the world).

If you are really insisting that the only views that matter are inside views, well, that sounds more like religion than rational consideration.

If I did, why would I have replied to your outside view argument with another outside view argument?

If you had said "you hold inside view to be generally more accurate than outside view", well yeah, I don't think that's disputed here.

How would it lead to being defeated by a different AGI? That's not obvious for me.

1 · LGS · 1y
If the first AGI waits around quietly, humans will create another AGI. If that one's quiet too, they'll create another one. This continues until either a non-quiet AGI attacks everyone (and the first strike may allow it to seize resources that let it defeat the quiet AGIs), or until humans have the technology that prompts all the quiet AGIs to attack -- in which case, the chance of any given one winning out is small. Basically, a "wait a decade quietly" strategy doesn't work because humans will build a lot of other AGIs if they know how to build the first, and these others will likely defeat the first. A different strategy, of "wait not-so-quietly and prevent humans from building AGIs" may work, but will likely force the AGI to reveal itself.

I suspect that a hostile AGI will have no problem taking over a supercomputer and then staying dormant until the moment it has overwhelming advantage over the world. All there would be to notice would be an unexplained spike of activity one afternoon.

5 · LGS · 1y
This is against the AI's interests because it would very likely lead to being defeated by a different AGI. So it's unlikely that a hostile AGI would choose to do this.

Q: What makes you think that?

A: We live in a complex world where successfully pulling off a plan that kills everyone in a short time might be beyond what is achievable, the same way that winning against AlphaZero giving it a 20-stone handicap is impossible even for a God-like entity with infinite computational resources.

Still waiting to hear your arguments here. "It just might be impossible to pull off complex plan X" is just too vague a claim to discuss.

Of course, to anyone who has studied the question in depth, that's a bad argument, but I'm trying to tailor my reply to someone who claims (direct quote of the first 2 sentences) to be inclined to think that fear of rogue AI is a product of American culture if it doesn't exist outside of the USA.

Nothing aggressive about noting that it's a superficial factor. Maybe it would have come off better if I had used the LW term "outside view", but it only came back to me now.

1 · Bill Benzon · 1y
Yes, I certainly take an "outside view." But there are many "in depth" considerations that are relevant to these questions. If you are really insisting that the only views that matter are inside views, well, that sounds more like religion than rational consideration.

Yes, the Japanese don't fear AIs as the Americans do. But also, most of the recent main progress in AI has been done in the Western world. It makes sense to me that the ones at the forefront of the technology are also the ones who spot dangers early on.

Also, since superficial factors have a sway on you (not a criticism, it's a good heuristic if you don't have much time/resources to spend on studying the subject deeper), the ones who show the most understanding of the topic and/or general competence by getting at the forefront should have bonus credibility, shouldn't they?

1 · Bill Benzon · 1y
Nor, for that matter, would I be so quick to dismiss the Japanese experience. They may not be the source of the most recent advances, but they certainly know about them and they do have sophisticated computer technology. For example, the Supercomputer Fugaku [https://en.wikipedia.org/wiki/TOP500] is currently the 2nd largest in the world. Arguably they have more experience in developing humanoid robots [https://en.wikipedia.org/wiki/Android_(robot)#Japan]. But their overall culture is different.
0 · Bill Benzon · 1y
"...the ones who show the most understanding of the topic and/or general competence ..." Umm, err there's all kinds of competence in this world. My competence is in the human mind and culture, with a heavy dose of neuroscience and old-style computational linguistics and semantics. Read my working paper, GPT-3: Waterloo or Rubicon? Here be Dragons [ https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_4_1], to get some idea of why I don't think we're anywhere near producing human-level AI, much less AI with the will and means to wreak havoc on human civilization. As for American culture, try this, From “Forbidden Planet” to “The Terminator”: 1950s techno-utopia and the dystopian future [https://3quarksdaily.com/3quarksdaily/2021/07/from-forbidden-planet-to-the-terminator-1950s-techno-utopia-and-the-dystopian-future.html].
0 · mukashi · 1y
That's a pretty bad argument from authority, with an aggressive undertone ("superficial factors have a sway on you").

Or better put, I can conceive many reasons why this plan fails.

Then could you produce a few of the main ones, to allow for examination?

Also, I don't see how we build those factories in the first place and can't use that time window to make the AGI produce explicit results on AGI safety

What's the time window in your scenario? As I noted in a different comment, I can agree with "days" as you initially stated. That's barely enough time for the EA community to notice there's a problem.

I downvoted this post for the lack of arguments (besides the main argument from incredulity).

2 · pseud · 1y
Yes, I can think of several reasons why someone might downvote the OP. What I should have said is "I'm not sure why you'd think this post would be downvoted on account of the stance you take on the dangers of AGI."

I am saying that I believe that an AGI could theoretically kill all humans because it is not only a matter of being very intelligent.

Typo? (could not kill all humans)

1 · mukashi · 1y
Typo

I might have missed it, but it seems to be the first time you talk about "months" in your scenario. Wasn't it "days" before? It matters because I don't think it would take months for an AGI to build a nanotech factory.

Can you verify code to be sure there's no virus in it? It took years of trial and error to patch up some semblance of internet security. A single flaw in your nanotech factory is all a hostile AI would need.

1 · MichaelStJules · 1y
We'll have advanced AI by then that we could use to help verify inputs or the design, or, as I said, we could use stricter standards if nanotechnology is recognized as potentially dangerous.
-1 · mukashi · 1y
A single flaw and then all humans die at once? I don't see how. Or better put, I can conceive many reasons why this plan fails. Also, I don't see how we build those factories in the first place and can't use that time window to make the AGI produce explicit results on AGI safety

The diversity of outlets that you desire is to journalism what diversity of products is to markets generally. It is generally agreed that free markets are more efficient than centralized planning. Why not do the same for media? It's not like there's a lack of independent or outsider-funded media trying to survive while providing a different angle. But they're not the targets of government funding. I don't see how more funding could make it easier for those dissenting media to compete.

2 · ChristianKl · 1y
Journalism can have different goals. One is about making money by giving readers what they want. One is about creating a public good. Another is about engaging in propaganda and getting paid either directly (CNN's Saudi Arabia coverage is likely in that class) or indirectly, with the media owner getting favorable government contracts (which seems to be how it works in Hungary).

There are plenty of independent media that tell you about Bill Gates wanting to microchip everyone, but there are few independent media organs that provide substantial criticism. If you want to see what such criticism looks like, Charity’s pharma investments raise questions around transparency and accountability [https://www.bmj.com/company/newsroom/charitys-pharma-investments-raise-questions-around-transparency-and-accountability/], published by the BMJ, is a good article.

The Gates Foundation happens to give money to most outlets that do substantial investigative reporting, so there are few places where such an article could be both published and the author paid for the reporting work. The Wellcome Trust is similar, even though it doesn't fund media outlets the same way the Gates Foundation does. It pushed for lab leak censorship in the beginning and likely influenced other policy as well. It made gigantic profits from the pandemic. Having a good investigative journalist trying to make sense of how those profits were made and provide transparency would be very useful, but we don't have anyone in our media landscape who pays for that investigative journalism.

The EU already dictates a large part of the policy of its member states, and the official media in said states are already massively pro-EU. What makes you think an EU-owned media outlet would be a good idea to correct that in the first place?

2 · ChristianKl · 1y
I believe that it's worth employing more journalists than are currently employed. It's generally useful if different journalistic outlets have a diversity of interests, so that a journalist who wants to write a story can shop around for an outlet that wants to publish it.

There's a desire to do something about perceived totalitarianism in Hungary and Poland. Given the available tools, I believe that funding media in those countries is one of the better ways to handle it.

The EU funds lots of things that aren't automatically pro-EU. I met a woman employed at an NGO here in Berlin that, as far as public sources tell, is at least partly funded by EU money, and she was an activist against free trade agreements. Julia Reda used EU funding to run a liquid democracy conference a while back. Martin Sonneborn is paid by the EU and wrote a good analysis of the latest Assange court case.

The key question is how the money gets distributed and whether you can find a way to incentivize the outlets towards diversity.

Ok but what's the takeaway for us who do not know the context?

3 · lsusr · 1y
I used to believe the world made sense. April finally broke my credulity.

I can't point you in a precise direction, but I've seen the idea showing up sporadically for more than a decade now. The current voting system is obviously absurd and the root cause of many problems, but the obstacle to change is not a lack of viable alternatives, nor a lack of clever people convinced that at least it's worth trying. Alternative voting systems have been implemented and work well. For example, in France there was a website (Parlement et Citoyens) that allowed people to vote on individual laws, lay out arguments for and against, propose amen... (read more)

2 · Nathan Helm-Burger · 1y
Yeah, I think the idea needs some careful thought to flesh it out a bit more. If voting is not anonymous and scheduled, then it's too easy for coercion and such to enter the picture. On the plus side, once you do have something solid enough to test, you can start with local elections. For instance, look at the Center for Election Science and their work on converting local elections to approval voting.

Can we expect another chapter? I want to know what happens next!

2 · Viliam · 1y
Perhaps the story ends at the very moment when humanity went extinct. So fast they didn't even notice what killed them.

Actually, I fully agree with that. I just have the impression that your choice of words suggested that Dave was being lazy or not fully honest, and I would disagree with that. I think he's probably honestly laying out his best arguments for what he truly believes.

4 · Richard_Kennaway · 1y
I certainly wasn't intending any implication of dishonesty. As for laziness, well, we all have our own priorities. Despite taking the AGI threat more seriously than Dave Lindbergh, I am not actually doing any more about it than he is (presumably nothing), as I find myself baffled to have any practical ideas of addressing it.

Fair enough. If you don't have the time/desire/ability to look at the alignment problem arguments in detail, going by "so far, all doomsday predictions turned out false" is a good, cheap, first-glance heuristic. Of course, if you eventually manage to get into the specifics of AGI alignment, you should discard that heuristic and instead let the (more direct) evidence guide your judgement.

Talking about predictions, there was an AI winter a few decades ago, when most predictions of rapid AI progress turned out completely wrong. But recently, it's the oppos... (read more)

I don't think that's a fair assessment of what they said. They cite their years as evidence that they witnessed multiple doomsday predictions that turned out wrong. That's a fine point.

5 · Richard_Kennaway · 1y
I witnessed them as well, and they don't move my needle back on the dangers of AI. Referring to them is pure outside view [https://www.lesswrong.com/tag/inside-outside-view], when what is needed here is inside view, because when no-one does that, no-one does the actual work [https://xkcd.com/989/].

Both are reincarnation isekai where the protagonist uses memories from her past life to her strategic advantage.

  • Crystal Trilogy

Probably, and it's not a bad assumption. I'd imagine that donation to charities would vary wildly between candidates. But it's still an assumption, and his argument is not as airtight as he makes it appear.

May I add one downside? Vaccines are expensive and ultimately paid for by the community.

I've heard people around me, on at least 3 different occasions, arguing that the unvaccinated were unaware of how costly it would be if they ended up hospitalized. It upsets me that it never seems to dawn on them that vaccines are not free.

Even if the government has already bought the doses, taking one justifies that spending, and incentivizes them to buy more.

0 · Sherrinford · 1y
So it seems this is an argument you would endorse. If so, would you add some numbers for the costs of vaccination vs non-vaccination?

[...] the marginal difference between hiring you and hiring the next bioinformatician in line is (to us) negligible. Whether or not you (personally) choose to work for us will produce an insignificant net effect on our operations. The impact on your personal finances, however, will be significant. You could easily offset the marginal negative impact of working for us by donating a fraction of your surplus income to altruistic causes instead,"

Double standard: when considering the negative effect of her work, he compares her with the next in line, but when considering the positive effect of her donations, he doesn't.

3 · Gunnar_Zarncke · 1y
He is evil, so he makes it look like she could compensate for it, but in fact he sets up incentives so that she doesn't. At least in expectation, which would be effective.
3 · Dacyn · 1y
I think Doug is making the assumption that the next in line is less likely to donate than Dr. Connor would be.
9 · Measure · 1y
Another problem is that he doesn't account for the positive (less evil) effect of her donations as a reason to not hire her. EE would only hire her if the value she would provide in service of their goals exceeds the disvalue of her donations by at least as much as the next available candidate would. Likewise she would only work for them if the value of her donations for altruism exceeds the disvalue of her service to EE by at least as much as if she took a job at a normal organization. There's no way her employment at EE is a +EV proposition for both of them.
8 · Martin Randall · 1y
So it's an Evil argument?

At any given moment, usually an organization wants a particular set of employees. If she doesn't take the job, they'll hire a different person for the role that would have been hers rather than just getting by with one person fewer.

At any moment, usually a charitable organization wants as much money as possible. If she doesn't make the donations, the Against Malaria Foundation (or whatever) will just have that much less money.

It's not quite that simple: maybe Effective Evil has trouble hiring (can't imagine why) and so on average if she doesn't take the jo... (read more)
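
To make the double standard noted above concrete, here is a minimal sketch (Python, with purely hypothetical numbers): Doug counts the harm of her work marginally, against the next candidate, but counts her donations in full; applying the same counterfactual discount to both sides, e.g. against what she would donate from an ordinary job, shrinks the apparent case for taking the job.

```python
# Hypothetical utility numbers, only to illustrate the framing issue.
evil_output_hers = 100        # harm produced if Dr. Connor does the EE job
evil_output_replacement = 95  # harm produced if the next candidate is hired instead
donations_from_ee_salary = 40     # good from donating her EE salary surplus
donations_from_normal_job = 30    # good she would donate anyway from an ordinary job

# Doug's framing: harm counted marginally, donations counted in full.
doug_net = donations_from_ee_salary - (evil_output_hers - evil_output_replacement)

# Symmetric framing: both sides compared to the world where she declines the job
# (the replacement does the evil work, and she donates from an ordinary job).
symmetric_net = (donations_from_ee_salary - donations_from_normal_job) \
    - (evil_output_hers - evil_output_replacement)

print(doug_net)       # 35 -> taking the job looks clearly positive
print(symmetric_net)  # 5  -> far less clear once donations are also counted marginally
```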

My personal experience agrees with the phases, but I'd triple all durations. Hunger is stronger for me the first 2 to 3 days. Then it's smooth sailing. The fuzziness appears at the same time, 2 to 3 days.


Possibly, but I doubt the same can be said for the net hedon loss. The great-uncle who died of COVID may have been quite old, but he still probably had a few years ahead of him

In terms of hedons, many old people live in retirement homes under horrendous conditions. Some lose their marbles; I remember one who every day tried to escape while claiming "I have to take care of my goats!" Some forget that their loved ones are dead, only to relearn it and be sad again. Some have chronic pains. Some shit themselves because they can't control their sphincters anymo... (read more)

babies are likely more resilient than we think and this loss will be temporary

What makes you think so? My prior is that 'babies are more resilient than we think' is a fashionable idea because the opposite would be tantamount to blaming parents, especially poor ones, and that's unfashionable. I'm interested in learning more about the topic.

6 · Razied · 2y
Here's a study where they try to predict adult IQ from infant IQ [https://www.sciencedirect.com/science/article/abs/pii/S0160289606000791]; the correlation is something like 0.32, so about 10% of the variance of adult IQ is explainable by infant IQ, meaning that low-IQ babies can in fact end up with high IQs as adults. This would indicate either that infant cognitive development measures are pretty bad at predicting adult IQ, or that it doesn't matter that much what you do to a baby in the first years of life.

There's also the existence of periods of historical deprivation where babies have been subjected to much worse conditions than during lockdowns, and they didn't all end up as village idiots like you'd expect from a 2 standard deviation drop. But still, even if most of the effect is going to disappear, something like a 5-point drop in IQ would be a catastrophe, let alone a 30-point one.

I think he means that your argument:

When it's not socially acceptable to have a frank discussion of the real costs and benefits of various restrictions, it becomes easier for people who oppose the restrictions to pretend that the benefits of the restrictions don't exist (aka the disease isn't real or isn't serious).

also applies this way:

When it's not socially acceptable to have a frank discussion of the real costs and benefits of various restrictions, it becomes easier for people who support the restrictions to pretend that the costs of the restrictions do... (read more)

Somehow you managed to transcribe my experience almost exactly.

I probably got Covid in March 2020, despite being more careful about it than most people around me. It was almost inevitable due to the place I lived. My symptoms were even milder than the ones you describe; I didn't lose my sense of smell or taste. When I called the doctors, I was told to stay home unless (or until) I was in need of hospitalization.

Now we're 2 years in. Nobody in my Dunbar-sized group died or needed hospitalization due to Covid. The overwhelming majority of the impact of the... (read more)

That might be, but I could find points for the opposite just as easily. After all, we are expecting the child to help save the world. If a child is to become someone of exceptional importance, then probably some sort of special treatment can help tutor them into that role. Take the Dalai Lama: he has been raised into his role since birth.

5 · localdeity · 2y
Studies such as https://files.eric.ed.gov/fulltext/EJ746290.pdf indicate that, if you take a genius child and make them go through normal school with their average age-peers (with no grade-skipping or other academic acceleration), then things go much worse than if you let them skip two or more grades. So you may want to make sure that the parents will (a) notice that the child is a genius and (b) take at least semi-appropriate action. Of course, keeping a genius with regular age-peers is only one case of "failing to recognize a child's special needs and accommodate them". But it's a remarkably common one: in the study, 33 of the 60 exceptionally gifted Australian kids were not permitted any grade-skipping at all.

There's a thin middle ground between imposing your values and meanings on another culture's customs, and thinking a culture holds all the keys to interpreting its own customs (positively, of course). There is as much disregard for reality in both cases. For an obvious example of the latter in Semyonova's report, the babies who were euthanised soon after birth failed to get any benefit from their culture. Where is their happiness?

I agree that we should start by acknowledging the complexity of human cultures. But we shouldn't stop there. We shouldn't use "complexity" as a thought-terminating cliché. Not that I accuse you of doing so, but I wanted to make that point clear.

When I was younger I was quite interested in Lloyd deMause's psychohistory, a fringe theory of history that draws causal connections from childrearing to culture. Regardless of the theory, in those works you can find chilling accounts of child mistreatment in various cultures, among which this one would fit right in.

My bad, I rewatched the documentary and it's actually less clear. The two Swiss labs Novartis and Roche, who respectively commercialize Lucentis and Avastin in Europe (indistinguishable treatments both created by Genentech, an American lab bought by Roche; also, Novartis owns 33.33% of Roche), took legal action against France. I assume it was to outlaw the use of Avastin in the eyes. But it eventually failed. However, in the meantime, the habit had taken root of using Lucentis to treat AMD. It's not explained exactly why. The interviewed person says "... (read more)

2 · ChristianKl · 2y
That seems unlikely to me. Our health system doesn't work in a way where there's a normal process for outlawing drugs from being used for certain purposes.

Let's start with: Lucentis and Avastin are similar, but they are not indistinguishable. They are both monoclonal antibodies against the same target, but not the same antibody. That's why you have, for example, two different English Wikipedia pages for them, which you don't have for drugs where the same substance gets marketed under different brand names. https://go.drugbank.com/drugs/DB01270 and https://go.drugbank.com/drugs/DB00112 give you the sequences of the antibodies, and even without comparing every letter you can see that the molecular weight of one is a third of the other.

When repackaging a drug that's given intravenously into one that's optimized for intravitreal delivery, it's plausible that some optimization can be made. Wikipedia suggests that there's a review that even suggests low-certainty evidence of clinically relevant differences. While those differences might just be the result of p-hacking, it's plausible that they are real.

Even the French Wikipedia doesn't go into detail, but a more likely scenario is that someone in the French health service thought that covering Lucentis with public health insurance is a waste of money when Avastin exists and costs 1/40 as much, and that they ran a lawsuit to get French public health insurance to cover Lucentis.

If money weren't any concern, then the act of repackaging drugs to be better for a given clinical application makes a lot of sense. The key issue is how much profit a company deserves for doing that, and the amount feels excessive for the service that's provided.

I find it misleading to call drugs tools. It is not uncommon to find unexpected uses for drugs.

Here is an example I just saw in a documentary last week:

Avastin, a treatment for certain types of cancer, was found to cure AMD very cost-efficiently. Novartis was so miffed that they successfully lobbied to forbid using Avastin in AMD cases, then they repackaged it into Lucentis and now they sell it at 40 times Avastin's price. It's the same product, but doctors are forbidden to use the cancer treatment to cure people's eyes. (Correction: Novartis and ... (read more)

2 · ChristianKl · 2y
Courts of law can make companies liable for "side effects" whether or not there's scientific evidence that the "side effects" are caused by the drugs. If you look, for example, at LYMErix, it's not clear that the side effects were really that severe, but they were still enough to get the vaccine withdrawn.
3 · ChristianKl · 2y
Can you be more specific about what you are referring to? What specific regulation are you calling "forbidding using Avastin in AMD cases"? What forbids off-label use here?
2 · Dustin · 2y
I mean, that's just good practice. No one can be 100% sure of anything, and you always want to take on as little liability as possible... particularly when the costs of taking less liability are low.