How an elite university is approaching ChatGPT this academic year

For many people in some countries, the beginning of September marks the true start of the year. No fireworks, no resolutions, but new notebooks, stiff sneakers, and packed cars. Maybe you'll agree that back-to-school season still feels like the start of something new, even if you, like me, are well past your time at university.

The big news this year seems to be the same story that defined the end of last year: ChatGPT and other large language models. Last winter and spring brought a wave of headlines about AI in the classroom, and some panicked schools went so far as to ban ChatGPT altogether. My colleague Will Douglas Heaven wrote that now is not the time to panic: generative AI, he says, will change education, but not destroy it. Now, with the summer months having offered a little time for reflection, some schools appear to be reconsidering their approach.

To get perspective on how higher education institutions are approaching technology in the classroom, I spoke with Jenny Frederick. She is the associate dean of Yale University and the founding director of the Poorvu Center for Teaching and Learning, which provides resources for teachers and students. She also helped lead Yale’s approach to ChatGPT.

In our chat, Frederick explained that Yale never considered banning ChatGPT and instead wants to work with it. I’m sharing here some of the key takeaways and most interesting parts of our conversation, which has been edited for brevity and clarity.

Generative AI is new, but asking students to learn what machines can do is not.

When it comes to teaching, it is very important to revisit the question: What do I want my students to learn in this course?

If a robot can do this adequately, do I need to rethink what I'm asking my students to learn, or raise the bar on why it's important to know this?

How are we talking to our students about what it means to structure a paragraph, for example, or to do their own research? What do [students] gain from this work? We all learn long division, although calculators can do it. What’s the point of this?

I have a faculty advisory board for the Poorvu Center, and we have a calculus professor in the group, and he laughed and said, "Oh, it's a little fun for me to see you all struggling with this, because we mathematicians have had to deal with the fact that machines could do the work. This has been possible for a long time, for decades."

So we have to think about justifying the learning we’re asking students to do when, yes, a machine could do it.

It is still too early to institute prescriptive policies about how students can use technology.

At no point did Yale think about banning it. We think about how we can encourage an environment of learning and experimentation in our role as a university. This is a new technology, but it is not just a technical change; it's a moment in society that is challenging how we think about human beings, how we think about knowledge, how we think about learning and what that means.

I brought my team together and said, "Look, we need guidance." We don't necessarily have the answers, but we need to have a curated set of resources for faculty to consult. We don't have a policy that says you should use this, you shouldn't use this, or that this is the framework for using this. Our guidance to faculty is: make sure your students have a sense of how AI is relevant to the course, how they can use it, and how they shouldn't use it.

Using ChatGPT to cheat is less of a concern than what led to the cheating.

When we think about what makes a student cheat: no one wants to cheat. They are paying good money for an education. But what happens is that people run out of time, they overestimate their abilities, they get overwhelmed, something ends up being too difficult. They get backed into a corner and then make an unfortunate decision.

So I'm much more concerned about the things that contribute to that state: mental health and time management. How are we helping our students avoid getting backed into a corner, unable to do what they came here to do?

So yes, ChatGPT offers another way for people to cheat, but I think the path that leads people there is still the same. So let's work on that path.

Students may be putting their privacy at risk.

I think people have been a little concerned, rightfully so, about their students putting information into a system. Every time you use [ChatGPT or one of its competitors], you are improving the system. We have ethical questions about providing labor to OpenAI or whichever corporation it is. We don't know exactly how things are working and how inputs are maintained, managed, monitored, or surveilled over time, not to sound overly conspiratorial. If we are going to ask students to do this, we are responsible for their safety and privacy. Yale's data management policies are strict, and for good reason.

Teachers should seek guidance from their students.

Students in general are far ahead of the faculty. They grew up in a world where new technologies come and go, and they try things out. And of course, ChatGPT is the latest thing, so they are using it. They want to use it responsibly. They are asking themselves, "What is allowed? Look at all these things I could do. Am I allowed to do this?"

So the advice I gave to the faculty was: you need to try this yourself. You need at least to be familiar with what your students could be doing and thinking about in their assignments, and what this tool enables. What policies or guidance will you give students in terms of permission to use it? How might they be allowed to use it?

You don't have to do this alone. You can have a conversation with your students. You can co-create something, so why not take advantage of the expertise in your classroom?

I really think that if you're teaching, you need to recognize that the world now has AI. Students therefore need to be prepared for a world where AI will be integrated into industries in different ways. We need to prepare them.

Source/credit: MIT Technology Review