Lech Mazur

Advameg, Inc. CEO 

Founder, city-data.com 

https://twitter.com/LechMazur

Author: County-level COVID-19 machine learning case prediction model. 

Author: AI assistant for melody composition.


Comments

While there are still significant improvements in data, models, and generation methods, you might be able to imperfectly detect whether some text was generated by the previous generation of models. But if you're training a new model, you probably don't have such a next-gen classifier ready yet. So if you want to do just one training run, it could be easier to simply limit your training data to text that was available years ago, or to trust only some sources.
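
A minimal sketch of that kind of date/source filtering (the field names, cutoff date, and source list here are hypothetical placeholders, not a description of any real pipeline):

```python
from datetime import date

# Hypothetical cutoff and source whitelist -- adjust to whatever provenance
# metadata your corpus actually carries.
CUTOFF = date(2020, 1, 1)
TRUSTED_SOURCES = {"books_pre2020", "wikipedia_dump_2019", "curated_news_archive"}

def keep(doc: dict) -> bool:
    """Keep a document only if it predates the cutoff or comes from a trusted source."""
    return doc["first_seen"] < CUTOFF or doc["source"] in TRUSTED_SOURCES

def filter_corpus(docs: list[dict]) -> list[dict]:
    return [d for d in docs if keep(d)]

corpus = [
    {"text": "...", "source": "common_crawl_2023", "first_seen": date(2023, 5, 1)},
    {"text": "...", "source": "books_pre2020", "first_seen": date(2021, 3, 1)},
]
print(len(filter_corpus(corpus)))  # 1
```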

A related issue is the use of AI writing assistants that fix grammar and otherwise modify human-written text in ways the language model considers better. While it seems like a less important problem, they could make human-written text somewhat harder to distinguish from AI-written text from the other direction.

Generated data can be low quality but indistinguishable. Unless your classifier has access to more data or is better in some other way (e.g. larger, with a better architecture), you won't know. In fact, if you could tell without labels that some generated text is bad, why would you generate it in the first place? I've seen this in practice in my own project.

From what I gather, Alameda wasn't worth quite as much, but he had a larger stake in it. Both appear to be worthless now. There is also a U.S. subsidiary, ftx.us, which according to them "is a separate entity with separate management personnel, tech infrastructure, and licensing." Some calculations I've seen put SBF's net worth below $1 billion now, and I think it's probable that he'll have to deal with some big legal issues.

"Anonymous" and 540B parameters, hmm... I'm sure it's not from the company named after an even larger number.

GSM8K = grade school math word problems

DROP = reading comprehension benchmark requiring discrete reasoning over paragraphs

OpenBookQA = question-answering dataset modeled after open-book exams for assessing human understanding of a subject (5,957 multiple-choice elementary-level science questions)

ANLI-A3 = adversarial benchmark designed to be challenging to current state-of-the-art models
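
(If anyone wants to look at these directly, they're all on the Hugging Face Hub; the dataset IDs and config names below are my best recollection and may have shifted to org-namespaced IDs since.)

```python
from datasets import load_dataset  # pip install datasets

gsm8k = load_dataset("gsm8k", "main")            # grade school math word problems
drop = load_dataset("drop")                      # discrete reasoning over paragraphs
openbookqa = load_dataset("openbookqa", "main")  # elementary-level science MCQs
anli = load_dataset("anli")                      # rounds 1-3; "A3" = the *_r3 splits

print(gsm8k["train"][0]["question"])
print(anli["test_r3"][0]["hypothesis"])
```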

I've messaged you the links. Basically MLPs.

There have been a few papers with architectures showing performance that matches transformers on smaller datasets, with scaling that looks promising. I can tell you that I've switched from attention to an architecture loosely based on one of these papers because it performed better on a smallish dataset in my project, but I haven't tested it on any standard vision or language datasets, so I don't have any concrete evidence yet. Nevertheless, my guess is that there is indeed nothing special about transformers.
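
To give a concrete picture of what "basically MLPs" means here, the sketch below is an MLP-Mixer-style block that replaces attention with a token-mixing MLP. It's an illustration of the general family, not the architecture I'm actually using.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """MLP-Mixer-style block: one MLP mixes across tokens, another across channels.
    No attention anywhere; the sequence length must be fixed for the token-mixing MLP."""
    def __init__(self, num_tokens: int, dim: int,
                 token_hidden: int = 256, channel_hidden: int = 1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, num_tokens)
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, dim)
        )

    def forward(self, x):                          # x: (batch, num_tokens, dim)
        y = self.norm1(x).transpose(1, 2)          # (batch, dim, num_tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix information across tokens
        x = x + self.channel_mlp(self.norm2(x))    # mix information across channels
        return x

# Example: a batch of 8 sequences, 64 tokens each, 512-dim embeddings.
block = MixerBlock(num_tokens=64, dim=512)
out = block(torch.randn(8, 64, 512))
print(out.shape)  # torch.Size([8, 64, 512])
```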

I think even that is overstating how useful it is. For example, I think we can all agree that regularization has been a huge and very important topic in ML for years. Here is the Wiki entry: https://en.wikipedia.org/wiki/Regularization_(mathematics)#Other_uses_of_regularization_in_statistics_and_machine_learning . Or interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence . Things like layer normalization are not mentioned anywhere. Pretty useless for learning about neural nets.

Just a quick comment: don't use Wikipedia for machine learning topics. Unlike for, say, some math topics, it's very outdated and full of poorly written articles. Instead, the intro sections of ML papers or review papers that you can find through Google Scholar are usually quite readable.

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey

This new survey includes concerns about AGI among other interesting questions. 

"About a third (36%) of respondents agree that it is plausible that AI could produce catastrophic outcomes in this century, on the level of all-out nuclear war"
