Lex Spoon

Comments
LLM-generated text is not testimony
Lex Spoon 8d 10

Oh, yes, for the case of learning your area. However, the name of an advisor is important for joining the academic good-folks network, and so is the superficial appearance of a paper looking correct for the conferences and journals you want to send it to.

I dislike it all tremendously, but if I could tell my younger self something to maybe do better in a research career, it would be to pay more attention to back-scratching and ingratiation.

As my older self, though, I would say that if you like studying something and like making certain kinds of things, then just do it, rather than trying to first get into the research community and then do what you like. Do what you like, right now. Each firefly will flash for its last time, and you never know when.

LLM-generated text is not testimony
Lex Spoon 9d 10

For a similar reason, I have largely come around to using "AI" instead of "LLM" in most cases.  What people mean by AI is the total package of a quasi-intelligent entity that you can communicate with using human natural language. The LLM is just the token predictor part, but you don't use an LLM by itself, any more than you use a red blood cell by itself. You use the whole... artificial organism. "Inorganism"?

It seems like a good starting point to simply start from how you would treat a human intelligence and then look for differences.

The term "LLM" remains important for the historical dividing line for when AIs took the leap into effective natural language interfaces. However, an AI is still an AI. They just got a lot better all of a sudden.

Likewise, it seems fine to consider the AI of a computer game to be a plain old AI and to not need any scare quotes. It's acting like an intelligent being but is something that humans created; hence, an artificial intelligence.

LLM-generated text is not testimony
Lex Spoon 9d 21

Kudos for a really interesting area of inquiry! You are investigating how language reveals what was happening in the mind that uttered it, and how that bears on our relationship to LLM-generated text. Such text comes either from no mind, or from a whole new kind of mind, depending on how you look at it, and it's interesting how that affects how language works and how we should engage with it.

Some parts of the article depend on which form of LLM utterance we are talking about. It's true, as the article states, that if you take the AI answer from a Google search, there is no way to ask more questions. Each assertion is a one-off utterance that is not necessarily connected to any larger conversation. (Though don't put anything past Google's engineering!)

There are other ways to use an LLM, though, in particular chat mode. With chat mode, a conversation thread accumulates both your statements and the LLM's. When you use an LLM in this mode, the later statements do reflect the earlier ones, much like a dialog between two humans. Also, if you used this mode in a courtroom, it would be possible to cross-examine the AI and ask it more questions.

Interestingly, an AI chat can be cloned, so someone who wanted to could develop the perfect line of questions to ask the AI. This is very different from interrogating a human, where you only get one shot at asking the real person your questions. This leads to something else that's very dangerous for a courtroom: you can practice asking questions to an AI until you get what you want, and then delete all your practice attempts for no one to ever see. You can even have a separate AI drive the process and look for ways to trick the first AI into saying something you would like it to say.
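
To make the chat-mode and cloning points concrete, here is a rough Python sketch of what I mean. The send_to_llm function is just a stand-in for whatever API you actually call, not any particular library; the point is that the conversation is just a list of messages that gets re-sent every turn, and "cloning" is nothing more than copying that list.

```python
import copy

def send_to_llm(messages):
    # Placeholder: a real call would pass `messages` to an LLM API
    # and return the assistant's reply as a string.
    return "stub reply"

def ask(thread, question):
    """Append the user's question, get a reply, and append it to the thread."""
    thread.append({"role": "user", "content": question})
    reply = send_to_llm(thread)
    thread.append({"role": "assistant", "content": reply})
    return reply

# One continuing conversation: later answers depend on everything said so far.
thread = [{"role": "system", "content": "You are a helpful assistant."}]
ask(thread, "What did you observe on the night in question?")

# "Cloning" the chat: copy the thread and rehearse a different line of
# questioning without the original conversation ever seeing it.
rehearsal = copy.deepcopy(thread)
ask(rehearsal, "Would you say the defendant seemed nervous?")
```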

A similar thing is happening on social media. We each get a minuscule fraction of all the things that anyone is uttering to each other. The messages that do get through the filter are often very interesting and convincing, but they've been cherry-picked to be just that way. You shouldn't use that process for anything you care about, and I suppose you should be careful about certain kinds of AI responses in the future.

EU explained in 10 minutes
Lex Spoon 17d 10

I am no expert, but I have noticed that a lot of it is mind games. Branding. If you look at it that way, you can also see aspects of the US in a different light.

It stood out to me when the EU constitution got voted down many years ago. I thought that was sort of the end of it. And yet the flags were still all over the place, and one person I talked to said, what do you mean, the EU is already here. So what we might call the brand is quite separate from the reality of what legal force it has.

Curiously, the belief in something can give it a source of power all its own. When the US cut off Russia's financial access over the war in Ukraine, it's not as if the country's leadership really understands how the modern financial system works. There was more of a "you know what I mean" aspect to the federal orders.

Back to the US, an example of this kind of thing is the mythology around Thanksgiving and around Christopher Columbus. This stuff gets taught to children in the public schools, and it serves to form a collective identity. I have always had mixed feelings about it. The government is not supposed to lie to us. It is supposed to serve us. Yet, I am very glad for anything that keeps people peaceful and getting along. We don't get to dork around in online chat groups if we fear for our physical safety. I guess I wish that kind of thing were either openly made up (like the anthem) or based more on true history (e.g. the privatization of agriculture in Plymouth).

Christian homeschoolers in the year 3000
Lex Spoon 2mo 20

I agree on the broad strokes but am not sure about Christians specifically coming out on top. I understand using that example in the article, for reasons of familiarity, but it is interesting to think about which specific belief communities win and lose in this era.

Above all, it just seems like a new world, and it seems unlikely that meme-species of the past are going to be the ones that thrive in the new world. Let's be a little more specific than that, though.

First, traditional religions are just on the decline in general. Pew Research reports that, globally, only Islam grew. If we dig into that, the US-specific data suggest that the growth of Islam is due to people leaving Christianity. Without Christian converts, Islam would also be on the decline, at least in the US.

My money is on two kinds of winners:

* Orthodox sects that have stood the test of time, largely due to successfully giving people a good, clean, fulfilling life, albeit a boring one. The AI can learn how and why a belief system achieves the results it does, teach a pastor how it works, and then double down on it with improved services, practices, and lessons. I don't feel bad about this part of the future, if it happens. If something is wrong, but it works, then it's not wrong.
* New quasi-religious strands of progress. I share Paul Graham's intuition, from back in 2004, that there are things you get in massive trouble for saying, even though other words that mean the same thing are allowed or even encouraged. AIs speed this process up and make it more virulent. The "pastors" in this case will not always be present, but the bubbles that have one will tend to have a thought leader who seeks out a following without caring what happens to it over a century or more. This part scares me.

Let me part with a bright side to it all. Humanity is special in that we make groups that explore different ideas from each other. We are now entering an era where there are many more such groups, exploring more such ideas, than ever before.

Your LLM-assisted scientific breakthrough probably isn't real
Lex Spoon 2mo 10

I agree about the "finds important" part. Just be aware that it is slippery. Communities can and do redefine what is important in such a way that they circle around the insiders and keep out the outsiders.

An example from my life was being in an educational technology lab where some of the professors were researching online schools. Once the Open University opened up in the UK, however, it and a few other ones were suddenly being roundly criticized by these same professors who were previously into the whole idea. The discussions struck me as a sort of search process: the professors were trying to figure out how they could sideline the Open U as working on uninteresting questions, and they seemed to be trying out ideas with each other and seeing what might stick.

I can give other examples, but ultimately, follow Larry McEnerney's advice about this kind of thing. :) If you are approaching someone cold, the first 1-2 sentences of your message have to be basically a threat. Tell the reader: you must read my paper, or you're going to really be made to look foolish! And you have to have a way to actually do that.

You can also just try a slower approach and chat with people and/or an LLM for advice. I feel like there is a whole new territory for an individually curious person nowadays. GitHub is already amazing for this, but combining it with an LLM is bringing us a new world of personal craftsmanship that never existed before. Why not explore the new world instead of knocking on the door of the old one?

Your LLM-assisted scientific breakthrough probably isn't real
Lex Spoon 2mo 10

I was about to post something similar but will follow up here since your post is close, @Charlie Steiner.

@eggsyntax, the post is conflating two things: scientific validity and community penetration. I think it will reach your target audience better to separate these two things from each other.

I am going to imagine that most people in the scenario you picture are fantasizing that they will post a result and then all the scientists in an area will fawn over them and make their lives easy from then on. This is what I mean by community penetration.

For that angle, Step 3 is the right way to go. Contact people in your target community. Write them a polite email, show them 1-2 brief things that you have done, and then ask them what to do next. This last part is really important. You don't want to be a threat to them. You want to be an asset to them. Your goals are going to be things like co-writing a paper with them, or reframing your paper so that they can do a companion one, or at the very, very least, adding some citations in your work to theirs or to other people who are influential in the target community.

I don't think you have to do THAT much homework before Step 3. Building relationships is more about a thousand little interactions than one or two ginormous ones.

I do not see a lot about related work in the post so far. I have found related work to be one of the most productive questions I can ask an LLM. It can show you products, papers, articles, and so on that you can go study to see what other people are already doing. This will also show you who you may want to contact for Step 3.

For Steps 1 and 2, I think another way to approach that area is to move away from the yes/no question and over to standards of evidence. Step 2 is great for developing evidence if it applies, but it really depends on the area and on the nature of the idea. It is possible to ask an LLM what the standards of evidence are for an area, and it may tell you something like one of these:

* There may be a way to build a larger version of the idea to make it less of a toy.
* There may be a variation of the problem that could be explored. A good idea will hold up under multiple contexts, not just the original one.
* There may be some kind of experiment you can try. Step 2 is terrific as written, but there are other experimental forms that also provide good evidence.

Based on what comes back here, it can be good to have a conversation with the LLM about how to go deeper on one of these angles.

OK, that's all. Thanks for the post, and good luck with it.
