
There's a recent paper doing something similar: https://arxiv.org/pdf/2305.13304.pdf. They have GPT maintain both long- and short-term memory, then use semantic search to retrieve relevant entries from the long-term memory.
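For intuition, here's a minimal sketch of that retrieval step: store text notes as "long-term memory" and rank them against a query by cosine similarity. The bag-of-words embedding and the `LongTermMemory` class are toy stand-ins I made up for illustration; the paper's actual pipeline uses a real embedding model and differs in detail.

```python
# Toy semantic search over "long-term memory" notes.
# The bag-of-words embedding is a crude stand-in for a real embedding model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (a real system would use a neural encoder)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class LongTermMemory:
    def __init__(self) -> None:
        self.notes: list[str] = []

    def add(self, note: str) -> None:
        self.notes.append(note)

    def search(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored notes most similar to the query."""
        q = embed(query)
        return sorted(self.notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]

memory = LongTermMemory()
memory.add("The protagonist lost her keys in chapter two.")
memory.add("The villain carries a red umbrella.")
print(memory.search("where are the keys?", k=1))
```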

Have you considered that signaling could play a large part in this? A European friend of mine once said, "people in the US try to do everything in high school." That's because, to get into a top undergraduate program, Americans have to signal very hard. Worse, a master's degree is quickly becoming the new high school diploma, thanks to signaling to employers.

When kids spend their lives trying to signal harder, it's a lot harder to balance that with friends. It used to be that dating as an undergraduate made sense--people would actually get married during or right out of college! Now it makes less sense to date for a year or two and then try to maintain a long-distance relationship as you split off into different PhD programs.

I think the correct reasoning is: if you didn't get the job, you didn't pray hard enough. You weren't faithful enough to be rewarded. Or maybe you were, and this is just a trial of your faith. It's easy to have faith when faith seems to work; it's only when every experiment you perform seems to contradict your faith that it is really tested.

I think this needs to be done for people over 18 as well. Most research positions require a PhD, when there are many talented undergraduates who could drop out of college and do the research after a few weeks' training.

AIs need immense datasets to produce decent results. For example, to recognize whether something is a potato, an AI will take 1,000 pictures of potatoes and 1,000 pictures of not-potatoes, so that it can tell you whether something is a potato with 95% accuracy.
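To make the potato example concrete, here's a toy sketch of that recipe--1,000 examples per class, a train/test split, and an accuracy score. The random feature vectors are a hypothetical stand-in for real pictures, and the logistic-regression model is far simpler than what production systems use.

```python
# Toy sketch of the standard classification recipe: 1,000 positive and
# 1,000 negative examples, a train/test split, and an accuracy score.
# The "images" are random 64-dim feature vectors, a made-up stand-in
# for real pictures; real systems use deep networks on far more data.
import numpy as np

rng = np.random.default_rng(0)
potatoes = rng.normal(loc=0.5, scale=1.0, size=(1000, 64))       # class 1
not_potatoes = rng.normal(loc=-0.5, scale=1.0, size=(1000, 64))  # class 0
X = np.vstack([potatoes, not_potatoes])
y = np.array([1] * 1000 + [0] * 1000)

# Shuffle and split 80/20 into train and test sets.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Logistic regression trained by plain gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(200):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))  # predicted probabilities
    w -= 0.1 * X_train.T @ (p - y_train) / len(X_train)
    b -= 0.1 * np.mean(p - y_train)

accuracy = np.mean(((1 / (1 + np.exp(-(X_test @ w + b)))) > 0.5) == y_test)
print(f"test accuracy: {accuracy:.1%}")  # high here only because the synthetic data is easy
```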

Well, 95% accuracy isn't good enough--that's how you get Google labeling images of African Americans as gorillas. So what's the solution? More data! And how do you get more data? By tracking consumers.

Websites track everything you do on the internet, then sell your data to Amazon, Netflix, Facebook, etc. to bolster their AI predictions. Phone companies track your location; credit card companies track your purchases.

Eventually, true AI will replace these pattern-matching pretenders, but in the meantime data has become a new currency, and it's being stolen from the general public. Many people know about and accept every website eating their cookies, but more have no idea.

Societally, this threatens disaster for AI research. Already people say to leave your phone at home when you go to a protest--no matter which side of the political spectrum it's on. Soon enough, people will turn on AI altogether if this negative perception isn't fixed.

So, to tech executives: Put more funds into true AI, and less into growing databases. Not only is collecting all that data fiscally costly, the social cost is too high.

To policymakers: Get your data from consenting parties. A checkbox at the end of a three-page legal statement is hardly consent. Instead, follow the example of statisticians: run studies, but instead of a month-long trial, all you ask for is a picture and a favorite movie.

To both: Invest more money in the future of AI. In the past ten years we've gone from ghoulish 64x64-pixel faces to high-definition GANs, and to chess grandmasters trained in hours on a home computer. Imagine how much better AI will be in another ten years. Fifteen thousand now could save you fifteen million or more over your company's lifetime.