This link seems to be assuming that one's prior internal state does not influence the initial mental representation of data in any way. I don't have any concrete studies to share refuting that, but let's consider a thought experiment.
Say someone really hates trees. Like 'trees are the scum of the earth, I would never be in any way associated with such disgusting things' hates trees. It's such a strong hate, and they've dwelled on it for so long (trees are quite common, after all, it's not like they can completely forget about them), that it's bled over into nearly all of their subconscious thought patterns relevant to the subject.
I would think it plausible that the example claim in the article you link wouldn't reach whatever part of this person's brain/mind encodes beliefs in the form "You're a tree". Instead, their subconscious would transform the input into "<dissonance>You're a <disgust>tree</disgust>.</dissonance>". Or perhaps the disgust at the term 'tree' would inherently add the dissonance while the sentence was still being constructed from its constituent words. Just as their visual recognition and language systems are translating the patterns of black and white into words and then a sentence before they reach their belief system, their preexisting emotional attachments would automatically be applied to the mental object before it was considered, causing their initial reaction to be disbelief rather than belief.
It may be more accurate to say we believe everything we think, even if only for a moment; and in most cases we do think what we read/hear in the instant we're perceiving it. But when the two are different I'd expect even our instantaneous reactions to reflect the actual thought, rather than the words that prompted it.
keeping in mind I haven’t gotten a chance to read the paper itself…
the learning process is the main breakthrough, because it creates agents that can generalise to multiple problems.
There are admittedly commonalities between the different problems (e.g. the physics), but the same learning process applied to something like board game data might make a “general board game player”, or perhaps even something like a “general logistical-pipeline-optimiser” on the right economic/business datasets.
The ability to train an agent to succeed on such diverse problems is a noticeable step towards AGI, though there’s little way of knowing how much it helps until we figure out the Solution itself.
keeping in mind I’ve only recently started studying ML and modern AI techniques…
Judging from what I’ve read here on LW, it’s maybe around 3/4ths as significant as GPT-3? I might be wrong here, though.
This seems to be distinct from List of Links, but they're similar enough that it might still be a merge candidate.
My initial ideas (e.g. cases where time is important) are pretty well captured by other comments, but in reviewing my thoughts I noticed some assumptions I was making, which might themselves qualify as additional requirements to eradicate trade:
A) I assumed that the skill-download feature includes knowledge downloading and that no task requires more 'knowledge+skills in active use at a time' than the human brain can feasibly handle. If this is violated, specialization is still somewhat valuable despite free and presumably-unrestricted knowledge-sharing. If you add immortality, reliable perpetual staving-off of the end of the universe, & a lack of boredom, I doubt this assumption would still be required, but I haven't thought through it enough to be certain of that.
B) I assumed that fundamental computational/cognitive ability is not an issue (e.g. working memory capacity limits, which can be helped by group problem-solving even with equalized IQ), either because 'build AI to solve problem' is among the downloadable skills or because the problems themselves do not require it. If this is violated, then cognitive enhancements will also be required to truly eradicate the inequalities fueling trade.
C) I assumed that terminal goals don't inherently involve interpersonal conflict (the subset of conflict-space that requires multiple involved agents in order to exist). If this is violated (e.g. everyone has a passionate love for gladiator fighting), then such conflicts would likely qualify as trades (since you can't experience them on your own, and thus are both gaining the experience from another and granting it to them). Plus, depending on the broadness of the conflict-types desired, trade itself may be invented as an independent cultural concept, purely for entertainment purposes.
My reading on that last point was that the government has an incentive to declare the vaccines valid solutions to COVID-19 even if they haven't been properly tested for efficacy and side effects, in the spirit of downplaying the risks of the epidemic. And similarly (in the spirit of steelmanning), the companies developing the vaccines need to do visibly better than their competitors and preferably come out before or simultaneously with them, for the sake of profits; incentives which also push towards incomplete/inadequate testing procedures.
However, my prior for that is only low-moderate in range, since the increased scrutiny involved means governmental organizations need to pay much more attention to avoid even the slightest possible issue they could be blamed for. After all, they've already 'delayed the vaccine' to ensure it's safe — in accordance with somewhat-expedited standard procedure, sure, but that's not how the public will see it — and if after that it still ends up unsafe, it would be a major blow to their reputation and would likely result in significant firings throughout the hierarchy, especially considering the growing pool of unemployed alternatives.
And I agree with your points on the personal risks of not taking the vaccine. Actually, I'd expect vaccination's benefits to scale with the fraction of the population included in much the same way herd immunity does, so the other footnote also doesn't deserve so little attention.