6. AI IS CHANGING EDUCATION … AND THAT COULD BE A GOOD THING.

Just as affordable calculators changed the focus of math instruction in the 1970s and 1980s and the World Wide Web revolutionized student research in the 1990s, the advent of widely available artificial intelligence tools like ChatGPT is bringing a new wave of change to education. But generative AI doesn't have to mean the degeneration of teaching and learning. While Oregon State doesn't yet have any university-wide policies on AI in teaching, leaders with Ecampus and the Center for Teaching and Learning are among those helping instructors explore the possibilities and limitations of these tools.

The issues are complex, and the answers are rarely straightforward or easy. As Sanjai Tripathi, senior instructor in the College of Business, puts it: "To AI or not to AI — that is not the question. We have to teach it, or we'll become irrelevant. But we can't have students outsource their thinking to AI. The thinking is a necessary part of the learning process, like you have to exercise the brain muscle to improve it. We have that fundamental tension, and we just have to deal with it."

That means students are increasingly finding a vital new topic added to their Oregon State class syllabi: how to use generative AI effectively and appropriately. Conversations during the first week of class now include AI literacy topics such as the environmental impact of AI usage and the protection of data security and privacy. (OSU students and employees are encouraged to use a data-protected version of Microsoft Copilot rather than ChatGPT, to ensure data is kept confidential and not used to train AI models outside the university.)

Instructors point out the limitations of these tools, including the fact that AI sometimes makes up information and is subject to embedded social biases. For example, an AI-generated image created to illustrate a PowerPoint presentation on international economics might portray ethnic or national stereotypes. Instructors are also training students to be transparent about their AI usage, citing the source just as they would cite a book or journal. (Try the "AI in the Classroom" quiz on the next page to test your knowledge of appropriate AI use.)

In addition to prompting conversations about AI use, misuse and academic integrity, generative AI is also inspiring faculty to take a new look at the way they teach. In its guidelines, the College of Business recommends that teachers "focus on the value that humans provide that AI cannot: ethics, creative thinking, problem solving and human relationships rather than memorization. … Assuming that AI will be a part of their work life, consider what specialized knowledge or content students need to ask the right questions and supplement, challenge, correct or assess AI-generated answers." The task is to see how AI "can be used as an assistant to, rather than a replacement for, the human mind."

BUILDING IN ETHICS
Two researchers tackling tricky issues of safety and bias

Thorny problems of AI ethics have captured the attention of university community members in diverse fields, but in the College of Engineering, researchers are working to build ethical behavior into AI systems themselves. A professor and a student offer two good examples.

Houssam Abbas, assistant professor of electrical and computer engineering

We trust AI to navigate us to new locations, and one day it could drive our cars. But should we let AI make ethical decisions for us?
Abbas often shares this thought problem: A self-driving car is faced with an unavoidable accident. In the seconds it has before impact, it can choose either to plow into the car in front of it, possibly harming the occupants, or to drive off the road into a ditch. What guidelines does it use to make that choice?

To make ethics accessible to machines, Abbas is working to boil down the delicate balance of human decision-making into mathematical equations. He uses deontic logics, a family of mathematical languages that model how we think about our obligations and permissions.

Abbas and his students also work on several collaborative projects with academic and industry partners to develop formal methods for verifying engineered systems.

Eric Slyman, Ph.D. candidate in artificial intelligence; research engineer at Adobe

Many artificial intelligence models are trained with information from the internet, which is steeped in stereotypes. For example, an AI image generator, when asked to produce a picture of a doctor, might return an image of a white man by default. And this can get even worse when companies remove seemingly redundant photos to speed up AI training, a process called deduplication.

Slyman helped create a cost-effective tool with researchers at Adobe (creator of Adobe Photoshop, Acrobat and other industry-standard apps) that builds in awareness of social biases that may be present in training data. Called FairDeDupe, it makes it possible to instruct an AI to preserve image variety by not pitching out photos of nondominant groups. "We let people define what is fair in their setting instead of the internet or other large-scale datasets deciding that," Slyman said.
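For readers curious what a deontic-logic rule might look like, here is a toy rendering of Abbas's car dilemma. It is purely illustrative and not Abbas's actual formulation: the operator O(φ) reads "it is obligatory that φ," P(φ) reads "it is permitted that φ," and the propositions are invented for this sketch.

\[
\begin{aligned}
& O(\lnot\,\text{harm the occupants of the car ahead}) \\
& P(\text{leave the roadway}) \\
& \text{collision unavoidable} \land \text{ditch is clear} \;\rightarrow\; O(\text{swerve into the ditch})
\end{aligned}
\]

In spirit, formal verification then amounts to checking whether a controller's planned actions could ever violate obligations written in this way.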
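To make the deduplication idea concrete, here is a minimal Python sketch of bias-aware pruning. It is a hypothetical illustration, not the FairDeDupe algorithm: the data fields, similarity function and threshold are all invented. It demonstrates only the general principle that a near-duplicate image is never pruned if it is the last remaining example of an annotated group.

from collections import Counter

def fairness_aware_dedup(images, similarity, threshold=0.9):
    """Prune near-duplicate images, but never remove the last example of a group.

    images: list of dicts like {"id": ..., "group": ..., "tags": set(...)}
    similarity(a, b): returns a score in [0, 1]; pairs at or above
    `threshold` count as near-duplicates. All of this is invented for illustration.
    """
    kept = []
    remaining = Counter(img["group"] for img in images)  # images not yet pruned, per group

    for img in images:
        is_duplicate = any(similarity(img, k) >= threshold for k in kept)
        if is_duplicate and remaining[img["group"]] > 1:
            remaining[img["group"]] -= 1  # safe to prune: the group keeps other examples
            continue
        kept.append(img)

    return kept

# Tiny demonstration with an invented Jaccard similarity over image tags.
imgs = [
    {"id": 1, "group": "A", "tags": {"doctor", "clinic"}},
    {"id": 2, "group": "A", "tags": {"doctor", "clinic"}},  # duplicate of 1; group A keeps image 1, so pruned
    {"id": 3, "group": "B", "tags": {"doctor", "clinic"}},  # duplicate of 1 but the only group-B image, so kept
]
jaccard = lambda a, b: len(a["tags"] & b["tags"]) / len(a["tags"] | b["tags"])
print([img["id"] for img in fairness_aware_dedup(imgs, jaccard)])  # -> [1, 3]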