Artificial intelligence technologies like ChatGPT have burst onto the scene in the past year, offering automated help for writing everything from emails to essays. But as AI becomes more prevalent, it also raises plenty of questions about its impact on the ways we work, learn, and live. The Henry B. Tippie College of Business recently hosted a panel of UI faculty and alumni experts to provide some insight into four major areas of concern regarding this emerging field.
Patrick Fan, Henry B. Tippie Excellence Chair and professor in business analytics, explains that ChatGPT and similar tools are forms of generative AI, in which a model is trained on massive amounts of text data and then used to answer user queries. The use and abilities of AI will continue to grow, he says, to the point that future AI could potentially perform work in place of a person. In the short term, though, we can expect the technology to keep advancing.
Fan also says that because the generative AI model is based on probability and prediction models, it can sometimes be wrong—so even as technologies improve, it remains the user’s responsibility to fact-check the results.
Pamela Gibbs Bourjaily, associate professor of instruction in business communication, says educators should assume students are already using AI. She adds that research suggests professors can’t reliably distinguish AI writing, and that AI programs regularly produce high C- or B-level work.
Her recommendation is that faculty handle AI differently based on what they want out of an assignment. For coursework meant to evaluate comprehension, unwanted AI use can be mitigated by designing more specific assignments, which are harder for an AI to produce accurately, and by creating less-specific rubrics, giving the AI less to work with. For assignments designed to communicate results or solutions, instructors should help students harness AI positively to make their writing process more efficient and effective. However, questions linger regarding the use of ChatGPT in the classroom, such as when and how it should be cited.
Nick Teff (13PhD), a senior data scientist at Shopify, says that while the exact impacts of AI programs on business are difficult to predict and may take time to emerge, they will likely most affect white-collar jobs that involve producing repetitive text, such as computer code, reports, or grant applications. However, he emphasizes that human creativity will still be needed in these positions and notes the potential for AI to enhance the human work experience. For example, it may allow professionals to apply their skills to a wider range of activities rather than spend time on rote, manual work.
Alicia Solow-Niederman, associate professor of law, says that regulation and legal issues concerning AI will vary by industry, business size, and the ways the technology is used. However, she anticipates the most prominent issues will be discrimination and bias, copyright, privacy, and data security, and says that existing law could provide sources to combat these problems. One big need across the board, she says, is to avoid bias in tools that use AI, including through ensuring that AI programs are trained on quality data that produce reliable and fair results.