A review of MSUM's AI Innovation Summit: Day One

by Philipreal
2nd Nov 2025
11 min read

This past week, I attended the AI Innovation Summit in Moorhead, Minnesota, an event put on by MSUM's newly founded Institute of Applied AI. The summit served to introduce and integrate the new institute with the surrounding community, to train people interested in using AI (for basic use cases), and to let the institute hear perspectives from the community on what's desired from both industry and educators. I hadn't attended anything similar before, and I'm glad I went, even just to get a better sense of the general vibe towards AI among different people in my community. I attended the first two of three days: the first focused on AI and Business, the second on AI and Education, and the third was for high schoolers (I'm not one of those!).

The present is a particularly interesting time for colleges, as what skills and knowledge are economically valuable is changing faster than ever before. To some extent this is true of education generally, but I think college is a particularly meaningful arena for these questions because 1) it's the first sort of schooling where most people have any choice about whether and where they attend, and 2) it's the first sort of schooling where the vast majority of attendees are expected to join the workforce after graduation, yet 3) there's still the idea that college is supposed to be a place of "higher education," of instilling intellectual values, critical thinking, etc. into its students, rather than just a job training site. I'll discuss this tension a bit more in tomorrow's post about the second day of the summit, intended for educators, "Learning in the Age of AI."

I think in most scenarios where colleges stay relevant for the near future, it's important that they do their best to adapt to increasing AI use/value, so I feel somewhat positive about the event as a whole. At some point the question becomes "How do we prepare for unpredictable, ever-increasing weirdness?" and one certainly reasonable answer is to continue monitoring sources of weirdness while taking the actions you know how to take. MSUM's institute is still early in its existence, with plans to be fully functional/impactful by 2027-2028, so I'll be interested to see how things will have changed by that point.


I arrived in the conference room a little before 9:00, and didn't have too much time to chat with others before the speaking began. The 40-45 attendees in the room tended male, with ages looking roughly normally distributed around a mean of about 40. Attendees were unsurprisingly businesspeople from around the area, with a couple of students and a couple of others. I sat at a table that included a couple of other young 20-somethings like me, as well as a couple of more typical attendees. While I didn't personally know the other attendees, it was fun to find out that one of my tablemates and I had done some connected work on a large city project, which was something of a reminder to be more social and put effort into finding the different connections you have with others.

The conference didn't begin right at 9, which felt like it was by design. We had a bit of time to mill about, chat, and check out the booths set up by the summit's corporate sponsors until we returned to our seats and were greeted with some opening statements from the Institute of Applied AI's Executive Director. These remarks didn't extend much beyond welcoming us and introducing the day's keynote speaker, Shawn Riley from Bisblocks, a venture studio firm.

Shawn's presentation hit just about all of my pre-imagined stereotypes of AI-loving corporate-growth-type preaching. His refrain, returned to multiple times throughout, was "If you are not an AI company, you will be outcompeted." The talk moved from general discussion of AI and how it has improved over time yet "is still a 1st grader" in terms of its prospects for continued growth, to how his company has used AI to save money/be more efficient, with a couple of asides about broader societal effects, finishing with a short Q&A session.

The specific examples of how his company had used AI were fairly mundane, mostly focusing on using AI-generated music and video to produce a marketing campaign for much cheaper than if they had needed to hire people. It looked clearly AI-generated to me, but also pretty acceptable, so I'll give them a win on that. I did wonder how valuable that sort of marketing material will remain once it's essentially on tap, but it's certainly a notable cost savings right now for those using it effectively. What interested me most about his speech was when he began to talk about the issue of AI automating jobs, but followed it up with just a comparison to the steam shovel: previous concerns about massive job loss didn't pan out, as the lost jobs were replaced with more jobs. No discussion of comparative advantage, no worry about the topic at all, which seemed curious from someone who also stated a belief that progress is continuing very fast and that everything could eventually be automated.

After the keynote were three shorter presentations before lunch. The first was an underwhelming talk on "How AI can help you move forward." It was definitely targeted at people who aren't me: a brief explainer on what an LLM is, AI progress on benchmarks, the differences between the unhelpfully named ChatGPT models (culminating in "just click the 'use GPT-5' button in Copilot"), and some basic tips for prompting. But it was bogged down by some bad AI-generated visual aids and a general feeling that I could make and give a better presentation pretty quickly. Next!

Moving on, we had "Strategic Insights on AI’s Role in Enterprise or SMB transformation." A good chunk of this was again "AI is valuable and progressing very fast, everyone, watch out!", but it was followed by some good advice on how to start implementing AI processes in one's business, mainly along the lines of "rather than buying licenses and saying 'use AI,' actually plan ways to use AI to automate specific repeatable processes in clear programs with good data governance" (people loved talking about data governance, probably because it is important). There was a little discussion at the end of the value of building 'AI-use muscles': it may pay off down the line to have employees who are more comfortable with AI, even if it isn't immediately helpful.

Finally, there was a pretty funny talk from a person who built a unicorn chatbot to talk to his autistic daughter. I ended up a bit disappointed by this talk: the title said it was going to be about Agentic AI, and I was interested in things like AI tool use and more autonomous capabilities, but it was basically just a chatbot with some curated data sources about unicorns. The end of the presentation was more interesting, where he described making a second chatbot to help her with math: he had downloaded the school's curriculum as a data source and connected it to his daughter's grades, so it could give specific responses potentially tailored to how she was doing in school. Unfortunately, he had only done this a day or two before the conference, so there wasn't information about how well it seemed to be working. From his description it was all really easy to do within Copilot Studio, so I'd be really interested in learning more about how and in what ways chatbot interaction is good/bad for users, tailoring those experiences, etc. Cool presentation.

During lunch I had a chance to talk with people a bit more in-depth. Some people started expressing their opinions a bit more after I mentioned that I'm a Poli Sci grad and interested in getting into AI Policy. I'll discuss general audience perspectives/opinions/reactions either in tomorrow's post or a post after that. I need to drag it out a little since I'm getting back into the Halfhaven swing of things.

The panel immediately following lunch was the most valuable to me: an AI ethics discussion with a five-person panel that included three of the earlier presenters and the AI institute's Executive Director. The ball got rolling early with the opening question: "Is Skynet coming?"

What surprised me about that question was that nobody just said "no." Of course, nobody said "yes" either, but answers ranged from "possibilities are too far out to predict, many unknowns" to "Skynet is already here, have you seen the AI-powered cyberattacks already happening? Also the CCP has crazy surveillance AI" to "prepare to coexist with AI" and finally a classic "superintelligence is too far off, worry about human-directed AI attacks, AI is a tool." I was a little impressed with the nuance with which some of them replied to a not-very-seriously presented question. It's clear that none of them really have x-risk on their minds, but they gave mostly reasonable answers.

The next question was about the role of government regulation when it comes to AI. Shawn took the lead on this one, taking the maximal view against regulation: it stops growth and raises barriers to entry, and the government will act too slowly and poorly to implement valuable, timely rules, so it just shouldn't try; maybe legislators should ask ChatGPT to generate legislation instead, it might do better. Nobody else on the panel took a direct stance against that, instead turning to issues around possible age-gating of synthetic relationships and the use of bots on social media, noting that we already have some regulations regarding those. After that they moved away from specifics: humans already don't agree on ethics, so it'll be difficult to get machines to agree, and can they really see in shades of gray? How can they decide? All questions that would not be answered during this summit.

The final question they got to before giving some closing thoughts was how to ensure fairness in AI. As stated, the question seemed to me to leave some ambiguity about whether it referred to bias in AI systems or to fairness in society as a result of AI, but all the panelists targeted just the bias issue. The most common answer was to treat it mainly as an issue of poor (or just rushed) data collection, with better data being less biased, but also that users should be careful not to immediately interpret AI giving unwanted answers as bias (Grok was not mentioned here). One panelist put forward the idea of using synthetic data to correct bias, although that runs the risk of just kind of putting bad data into your system until it gives you the result you want. There were some worries about AI being used to generate hyper-specific polarizing information, which reminded me of the scissor statement illustration. There was also the concern that "the winners write history," and that in some cases the unbiased data corresponding to one side of an issue simply doesn't exist, in which case an AI can't be trained on it. Consensus was not reached, but everyone got to say something, which brings us to their final thoughts.

To close out, most said something about keeping AI progress human-centric, that it's a tool, a reflection of us, and/or a force multiplier rather than a replacement. One said specifically that it would be bad to have AI develop itself. The data management guy once again stressed that it's important to have good data management, and one said not to be surprised if AI "goes way past us."

Unfortunately, those were all the questions we got through in the hour. I think it was bad practice to have every person on the panel answer each question individually while sometimes adding on to other people's answers. If I ever run a panel discussion it'll be really good and more questions will get answered.

Finally, we move on to the other presentations of the afternoon, and I'll move through these quickly because they leaned much more towards the "corporate training" side than the "thoughts/opinions/ideas about AI" side of the summit.

The first one was by the data security guy telling us how to keep our data secure (thank you!). It laid out some pretty clear guidelines that seemed helpful for people wanting to set up AI pipelines within their company, and since helping companies do this is largely the presenter's job, it went smoothly. A key thing he emphasized was that adhering to clear data security standards, and being transparent about how those standards are formed and met, leads to increased trust, which leads to better and smoother adoption.

The second was a demo of using Copilot within the Microsoft Office suite of apps. Parts of this presentation were awkwardly funny, as Copilot made errors a couple of times after being prompted to "give me X information while removing any customer information," which felt a little on-the-nose right after the data security presentation. Otherwise it was a fine demo: some of the spreadsheet manipulation seemed neat, none of the capabilities were very surprising, but some were definitely convenient and could be an upgrade if implemented well at the law firm I used to work for.

The third was a presentation on how to accurately measure the ROI of a given AI investment/program, and strategies to make that program better. The main advice on measuring was basically "yeah, try to use the scientific method": clearly define a task, measure baseline performance beforehand, then measure the AI-assisted improvement (if any) while tracking adoption and other outcomes. One tip I thought was interesting was having "AI champions," certain employees who know the technology very well and can both demonstrate the various use cases and help coworkers with technical difficulties. Having someone like that seems like it could be very valuable for overcoming the resistance to adoption that is common in many workplaces.
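To make the "scientific method" framing concrete, here's a minimal sketch of the before/after arithmetic the presenter was describing, assuming you can measure time per task; every name and number in it is hypothetical, not something shown at the summit.

```python
# Hypothetical before/after ROI arithmetic for a single AI-assisted task.
# All names and figures here are made up for illustration.

def monthly_roi(baseline_minutes: float, assisted_minutes: float,
                tasks_per_month: int, hourly_cost: float,
                tool_cost_per_month: float) -> tuple[float, float]:
    """Return (net monthly savings, ROI multiple) for one automated task."""
    hours_saved = (baseline_minutes - assisted_minutes) * tasks_per_month / 60
    gross_savings = hours_saved * hourly_cost
    net_savings = gross_savings - tool_cost_per_month
    return net_savings, net_savings / tool_cost_per_month

# Example: drafting a routine report drops from 45 to 20 minutes.
net, multiple = monthly_roi(baseline_minutes=45, assisted_minutes=20,
                            tasks_per_month=120, hourly_cost=40,
                            tool_cost_per_month=600)
print(f"Net monthly savings: ${net:,.0f} (ROI {multiple:.1f}x)")
```

The arithmetic is the easy part, of course; the point of the talk was that the baseline measurement and adoption tracking, which a snippet like this takes as given, are where the real work is.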

Our final presentation was an ad-lib: the person who had prepared a presentation on "Ethical AI with Microsoft Copilot" dropped the whole thing, since the earlier panel was in-depth enough that he didn't feel he could add much to it, so it ended up being a kind of strange crowd-involved continuation on prompting strategies. The approach was to give the LLM a persona, an expert in some field or an entire marketing team, and work from there; he demonstrated prompting GPT-5 to create a prompt for further work, and copy-and-pasting previous information into more prompts, but it didn't really seem to work that well. It's possible that the crowd-provided scenario (advising the city on preventing a massive gnat infestation and creating a related marketing campaign) wasn't great to work with, and I think the presenter at points forgot exactly what he wanted to put in which prompt, but the results we got were basically "ChatGPT can brainstorm a bunch of options really fast" and not much beyond that.

The closing remarks were brief, and the day was done. It would have been interesting to hear from a wider variety of companies, as the presenters were mainly from firms that deal, to some extent, in teaching other companies how to use AI tools. However, I enjoyed hearing what people had to say both in and out of the presentations, and I was glad I went. The lunch and corporate-provided goodies probably got me close to breaking even on my registration fee, so all in all I'd consider it a successful day. I'll discuss the second day, on education, tomorrow.