"and we adhere to these principles"
Do we though? As a species? I suppose we can claim to, as part of a transmission meant to persuade aliens of our niceness. But if they're able to receive and decode a transmission, it seems like there are reasonable odds they'll also have other observations of us that demonstrate our less worthy aspects.
I would be concerned about the risk that details fabricated by the AI come to be confused with the actual organic memory. Memory is malleable, and an invented image could well overwrite parts of what you actually remember.
Wealthier people have different concerns and interests than poor people. So any system making voting power proportional to wealth is liable to result in the upper class voting through changes that defund various forms of assistance and subsidy for people on low incomes, including things that are broadly socially desirable but that the wealthy themselves aren't using.
Much like what already happens, with representatives paying more attention to wealthy constituents, business owners, and donors, but even more so once their ability to simply vote with their money directly is formalised.
Cool service/feature, but would it be worth defusing the "jumpscare" with an interstitial that explains the function of the button? At least the first time any given user clicks it.
Found it: https://preservinghope.substack.com/p/a-victory-for-the-natural-order
How should we speak about "stressful events"? Maybe instead of "buying a plane ticket is stressful", something like "buying a plane ticket made me stressed". But the word "made" implies inevitability and still cedes too much power to the event.
"I am feeling stressed about buying a plane ticket" would acknowledge that the stress is coming from within you as an individual, and doesn't foreclose the possibility of instead not feeling stressed.
Pretty sure I've seen this particular case discussed here before, and the conclusion was that they had actually already published something related and fed it to the "co-scientist" AI. So it was synthesising and interpolating from information it had been given, rather than generating fully novel ideas.
Per NewScientist: https://www.newscientist.com/article/2469072-can-googles-new-research-assistant-ai-give-scientists-superpowers/
However, the team did publish a paper in 2023 – which was fed to the system – about how this family of mobile genetic elements “steals bacteriophage tails to spread in nature”. At the time, the researchers thought the elements were limited to acquiring tails from phages infecting the same cell. Only later did they discover the elements can pick up tails floating around outside cells, too.
So one explanation for how the AI co-scientist came up with the right answer is that it missed the apparent limitation that stopped the humans getting it.
What is clear is that it was fed everything it needed to find the answer, rather than coming up with an entirely new idea. “Everything was already published, but in different bits,” says Penadés. “The system was able to put everything together.”
That covers the main hypothesis, the one that agreed with their work; it's unknown whether the same is also true of its additional hypotheses. But I'm sceptical by default of the claim that it couldn't possibly have come from the training data, or that they definitely didn't inadvertently hint at things with the data they provided.
Technically it's still never falsifiable. It can be verified, if true, by finding yourself in an afterlife after death. But if it's false, you don't observe it being false, because you cease existing.
https://en.wikipedia.org/wiki/Eschatological_verification
If we define a category of beliefs that are currently neither verifiable nor falsifiable, but might eventually become verifiable if they happen to be true, yet won't be falsifiable even if they're false, then that category potentially includes an awful lot of invisible pink dragons and orbiting teapots (who knows, perhaps one day we'll invent better teapot detectors and find one). So I don't see it as a strong argument for putting credence in such ideas.
My first thought was that the "ground beneath your feet", which might move around more than you initially expected, would be the libraries and dependencies you call on: other people's code that your code relies on. You might see old methods become deprecated in ways that break your use of them, or new methods introduced that you want to switch to, for efficiency gains or other reasons.
This can be mitigated with some forethought: put in a layer of abstraction that wraps the library, so that when the library changes you only have to update the wrapper, without touching the rest of the code. But it can also be taken too far (wrapping all kinds of really basic functions just creates extra cruft for no good reason).
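As a minimal sketch of what that wrapper layer could look like (the choice of the requests library, the fetch_json helper, and the wrapper.py module name are all just illustrative assumptions, not anything from the original comment):

```python
# wrapper.py: a thin abstraction over an HTTP library, so the rest of the
# codebase never imports the library directly. If the library deprecates an
# API, or we switch to a different library, only this module has to change.

import requests


def fetch_json(url: str, timeout: float = 10.0) -> dict:
    """Fetch a URL and return its decoded JSON body, raising on HTTP errors."""
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()
    return response.json()
```

Callers import fetch_json rather than requests, so a breaking change in the library (or a move to a different one) is contained to this one file.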
It can also suffer from "leaky" abstractions, if your wrapper makes assumptions about the library that don't hold up, or if the code calling the wrapper still needs to know about the underlying library to work right. Not quite sure what the analogy to a building foundation would be there; I guess if you thought your big concrete slab was trustworthy as an immovable foundation, but then it turned out that big concrete slabs on top of dirt behave importantly differently to big concrete slabs on sand.
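To illustrate that kind of leak, continuing the hypothetical fetch_json wrapper from above: if the wrapper lets the library's own exception type escape, callers end up importing the library anyway just to handle failures, so the wrapper no longer insulates them from it.

```python
# caller.py: the abstraction leaks, because handling failures from fetch_json
# requires knowing that requests sits underneath and importing its exceptions.

import requests

from wrapper import fetch_json


def get_user(user_id: int) -> dict | None:
    try:
        return fetch_json(f"https://api.example.com/users/{user_id}")
    except requests.RequestException:  # library detail leaking through
        return None
```

One way to plug that particular leak is for the wrapper to catch the library's exceptions and re-raise its own, so callers only ever deal with the wrapper's types.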