Balancing the Possibilities and Pitfalls of AI
- As educators, we now recognize the great potential of generative artificial intelligence to support and streamline our teaching, research, and administrative tasks.
- While AI fosters greater creativity and efficiency, we must recognize its limitations, from its tendency to generate inaccurate or derivative output to the risk that we might become overreliant on the technology.
- As educational institutions refine their guidelines for AI use, they must take these limitations into account, emphasize accountability, and help students develop human skills such as strategic thinking and ethical judgment.
When OpenAI released its new chatbot on November 30, 2022, I did not pay much attention to it. I had already seen GPT-2, one of OpenAI’s first offerings, back in 2019, and that was amazing. I was also familiar with GitHub Copilot, a coding assistant powered by artificial intelligence (AI) that was released in 2021. Even so, I was not starstruck by ChatGPT like so many others were. I did not expect much from the technology.
That is, not until an evening in January 2023, when I presented ChatGPT to students in AI & Business, a course I teach at the Sasin School of Management in Bangkok. At the time, I also made this cautionary statement: “AI does not have emotions. Be careful when you use it with sentimental tasks.” But a student interjected, saying, “But professor, after I broke up with my boyfriend, I consulted ChatGPT. It gave me very good advice.”
Curious, I immediately typed a prompt into ChatGPT: “I broke up with my boyfriend, and I’m so sad.” Generating the letters of its answer one by one, the chatbot responded: “Breakups can be difficult, and it is normal to feel sad after ending a relationship. It is important to remember that healing takes time, and it is okay to feel a range of emotions. Here are a few things that may help you cope with your feelings: Give yourself time to grieve…”
The letters kept popping up, until the AI had given me a half page of practical advice. “I don’t have time to read this all out,” I told the class. Then I typed, “I broke up with my boyfriend, and I am so sad. Give me five words.” The web page responded: “Time will heal, stay strong.” The whole classroom gasped, and then fell silent. It was then that I realized what this technology could be capable of.
But being able to encapsulate such intense advice into just five impactful words is not the only ability of ChatGPT and other platforms powered by generative artificial intelligence (GenAI). The technology can also generate images. In fact, GenAI platforms that create images existed even before those that generated text. We have been dealing with the implications of generative adversarial networks (the models that support deepfakes) since 2014. Midjourney, a text-to-image platform, was launched in July 2022.
Nevertheless, the release of ChatGPT represents the first time in human history that creative power has been put fully into a machine’s hands. That power will change the world—and with it, how we teach business. But as we adapt to that change, we also must manage and mitigate AI’s inherent drawbacks.
How Many Ways Can We Use GenAI?
Over the last two years, faculty and academic leaders have come to appreciate the many ways that GenAI can support teaching, research, and administration. So far, the applications are many:
For teaching. Faculty are using GenAI to more quickly plan lessons; draft assignments; create teaching tools; and provide more timely feedback outside the classroom. They’re turning to AI tutors such as Khanmigo (by Khan Academy) to provide personalized tutoring and to AI-assisted grading tools to speed up the grading process. Faculty are even using GenAI to develop role-play exercises and simulations of real-world challenges, helping students gain practical experience.
Arizona State University in Tempe, for instance, has partnered with OpenAI to encourage faculty and staff to explore how GenAI can enhance teaching and learning. At Sasin, many courses ask students to use AI tools to perform analytics in organizational contexts, write code, and support marketing activities. Sasin also runs three sessions in its executive education program that introduce executives to different GenAI applications, teaching them to use the technology to support critical thinking and make the most of AI and large language models (LLMs).
For research. Scholars are using AI platforms that act as assistants and accelerate the research process. Such platforms can conduct literature reviews, search for academic papers, identify related studies, and draft outlines of key findings. They can complete preliminary data analyses and coding tasks, draft research articles, and produce summarized meeting minutes. GenAI also can analyze existing literature and highlight underexplored areas, identifying research gaps and guiding future research directions.
For administrative tasks. Most of Sasin’s staff are using GenAI tools to write correspondence, and the school’s Learning Experience Design team extensively uses the technology to conduct research, generate ideas, and create learning assistance tools. Members of its marketing team leverage the technology not only to improve their writing and generate images, but also to support video production—creating storyboards, subtitles, AI voices, and avatars.
To support these activities, we recently launched a dedicated space where the Sasin community can access various GenAI subscriptions for free. We hope that this resource promotes the adoption of GenAI technologies and fosters a collaborative environment for innovation.
But What Are the Tradeoffs?
Even as we explore GenAI’s benefits, we must acknowledge its inherent drawbacks. Driven by LLMs that predict the most likely next word in a sentence, the technology is merely a very sophisticated form of autocomplete. It does not (yet) have a rational understanding of the questions we ask it. Rather, it offers its most likely educated guess.
This guesswork presents four potential tradeoffs that can make GenAI’s widespread use problematic:
- GenAI fosters creativity but can reinforce mundanity. To create its responses, GenAI draws from the entire internet—a pool of knowledge much broader than any one individual’s experience. GenAI tools can offer people new ideas and perspectives, help them overcome creative blocks, and lead them to innovative solutions. This is particularly helpful in complex situations where humans may be limited by their attention spans, emotional bandwidth, or cognitive biases.
However, GenAI tools produce similar answers to similar questions—even if those questions are posed by different people and worded in different ways. If people use AI-generated content without refining the output, they are unlikely to produce original ideas.
- GenAI creates personalized content easily, but its output can be inaccurate. We can ask GenAI to tailor its output to our specific needs or even to different audiences—from finance researchers to liberal arts undergraduates to five-year-old children. The chatbot can serve as a personal tutor that makes complex concepts accessible to learners of all levels. However, not all answers that GenAI produces are trustworthy—it can hallucinate, presenting fabricated information as fact. Even if GenAI provides people with useful information, they must always cross-check its output with trusted sources.
GenAI is a wonderful tool for tasks that are hard to generate but easy to validate, such as writing a cover letter for a visa application. It is less effective for tasks that are hard to generate and hard to validate—such as drafting legal contracts. If people cannot validate the output themselves or find someone else to validate the content, it is not advisable to trust GenAI’s output.
- GenAI augments human capabilities but also requires competency. GenAI makes it easier for people to grasp unfamiliar concepts, learn comprehension-based skills more quickly, and accomplish tasks that otherwise would require specialized expertise. People can use GenAI to create songs even if they have never studied music or build software products even if they lack technical backgrounds. Business students can perform complex data analyses without being experts in that area.
But while people can use AI to extend their abilities, the output will likely be only as good as those abilities. For instance, a professor who teaches Sasin’s Consumption and Marketing Management class once asked students to improve a GenAI-generated marketing strategy. She found that only three out of 20 students enhanced their answers beyond GenAI’s. The rest gave only mediocre responses.
The fact remains that people need debugging skills to fix GenAI-generated computer code that does not work; they must externally verify GenAI’s language translation or risk miscommunicating their messages. When GenAI generates images from scratch, people must use image editing skills if they want to truly control the final product. Using AI well requires close attention and a certain level of proficiency.
- GenAI increases efficiency, but it can encourage overreliance. The technology can reduce the time it takes us to complete tasks from hours to mere seconds. Such efficiency enables students and faculty to focus more on critical thinking and strategic decision-making and less on administrative tasks. The danger is that many individuals might rely too heavily on these tools and be reluctant to dig deeper into a topic when AI does not offer sufficient answers.
Many of us likely have seen this overreliance at our own institutions. We know that students are using GenAI to complete their homework for them. Many of my own junior research assistants, who began using GenAI for literature reviews early on, now struggle to extract the essence of papers by themselves.
The saddest outcome I have seen is that, since the arrival of GenAI, the style of my students’ essays and reports has become much drier, despite showing big improvements in English grammar and professionalism. As a faculty member in a country where English is not the native language, I used to enjoy reading students’ assignments and connecting with them as their voices and local writing styles shone through their work.
After GenAI, their individual voices have gotten lost in formal, nicely crafted paragraphs. On the flip side, I can almost instantly discern whether students have used GenAI if the English in their assignments is too perfect.
This technology will exacerbate the weaknesses of those who rely on it excessively. We must emphasize to students that GenAI is a tool to enhance their work, not a shortcut to replace that work.
GenAI Requires Guidelines—and Accountability
Given these drawbacks, unrestricted access to GenAI tools could compromise our educational objectives; it could enable students to circumvent the development of their critical thinking and analytical skills. Therefore, institutions need to reevaluate their policies regarding its use, establish clear guidelines and consequences for violations, and implement effective methods to monitor usage.
For example, Sasin’s policy encourages students and faculty to use GenAI but requires them to cite where and how they used it; our policy also emphasizes that students will be held accountable for the results. This not only preserves the integrity and quality of students’ learning experiences, but also enables the school to assess GenAI’s broader impact on the school’s programs, productivity, and morale.
Additionally, the advent of GenAI necessitates reimagining how we teach our classes and evaluate our students’ performance. Educators need to design curricula that emphasize uniquely human skills such as strategic thinking, ethical judgment, and interpersonal communication. Faculty also need to develop assessment methods that measure competencies beyond what AI can replicate.
Here, I can use myself as an example. In the past, when I wrote an article, it was a given that I would do all the work myself. The availability of GenAI tools has changed the game. I might have been tempted, for instance, to use ChatGPT to draft this article from scratch, but I did not. That said, I must offer the disclaimer that I used AI to help generate some ideas, suggest appropriate words, and polish my sentences.
Like many other educators, I must get used to citing how I have used GenAI to support my writing. And like my students, I must be aware of how tempting it is to allow GenAI to do the writing for me. But as I now teach my students, I must resist that temptation and recognize it as one of the pitfalls of the technology. Doing so comes with more benefits and leads to better outcomes. I guess I just want my own voice to be heard, after all.