Professor Pramod Viswanath, an Electrical and Computer Engineering professor, calls his creation “Blockie.” It’s an Artificial Intelligence teaching assistant fed with lectures and notes from his advanced engineering class.
“Blockie’s just ChatGPT on steroids,” Viswanath told The Daily Princetonian.
While some Princeton professors have banned AI tools, Viswanath’s program highlights their pedagogical potential in the classroom.
Since the release of ChatGPT last fall, large language models (LLMs) have been a source of debate across higher education. ChatGPT, short for Chat Generative Pre-trained Transformer, is an LLM that relies largely on a massive repository of data, scoured from all corners of the Internet, to teach itself how to construct messages that mirror human text and speech. By making probabilistic predictions of what text or information belongs together, ChatGPT can write lines of code, summaries of texts, and essays, as well as solve math problems.
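The next-word-prediction idea can be shown with a toy sketch. This is purely illustrative — ChatGPT uses large neural networks over subword tokens, not word counts — but the core mechanism, predicting the most probable continuation from observed data, is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows
# which in a tiny corpus, then predict the most probable continuation.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```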
In January 2023, amidst the ongoing buzz of ChatGPT’s potential, Princeton’s Office of the Dean of the College and the Office of the Dean of the Graduate School sent a memo to all teaching faculty highlighting the University’s flexibility around AI tools. The memo, titled “AI & ChatGPT Guidance for Teaching,” provided guidance for how to engage with this technology in the classroom and addressed ChatGPT’s notoriety in the media.
“Despite the copious hand wringing in the media and elsewhere about this chatbot and its implications for higher education, we and our colleagues in the College and the Graduate School remain confident that our undergraduate liberal arts and graduate education programs will remain vital, vibrant, and useful in the years ahead,” the statement read.
The memo said each faculty member has free rein regarding the use of ChatGPT, but they should bear in mind the explicit academic integrity rules and collaboration policies under the University’s Honor Code and Academic Regulations.
Shortly after the memo was written, the ‘Prince’ asked professors about their initial reactions to ChatGPT. While faculty held diverse perspectives on the novel AI, there was a general consensus that the technology itself is very limited, producing “text that initially seems convincing but is ultimately BS,” as Matt Weinberg, assistant computer science professor, phrased it.
The technology is banned in some classrooms. For example, students are largely prohibited from using it in one of Princeton’s largest introductory computer science courses, COS 126: Computer Science: An Interdisciplinary Approach.
“Students in COS 126 are learning to think algorithmically. It’s important they learn these basic concepts by themselves,” said Sebastian Caldas, a new lecturer for COS 126. “If they use this tool from the beginning, they might not be able to build as much as they could if they knew what was going on.”
Under the collaboration policy in the syllabus, Princeton’s introductory computer science course permits the use of AI and ChatGPT only to “discuss concepts,” and students must “acknowledge collaboration.” It is strictly prohibited to “show your code/solutions to,” “view any code/solutions of,” or “copy any code/solutions from” AI chatbots.
Although Caldas believes reliance on chatbots in an introductory course like COS 126 can be harmful to students’ learning, he hopes the emerging generation of computer scientists he is instructing will return to ChatGPT and contribute to making it more efficient.
“There is so much you can do with this technology and for that you need to learn how to code,” Caldas said. “I want [my students] to learn ChatGPT so they can build the next generation of ChatGPT and better types of tools.”
David Malan, who teaches CS50, Harvard’s equivalent of COS 126, has taken an entirely different approach to chatbots. According to reporting by The Harvard Crimson, he created an AI chatbot similar to ChatGPT that can “find bugs in [student] code, give feedback on the design of student programs, explain unfamiliar lines of code or error messages, and answer individual questions,” approximating the role of a human teaching assistant.
The Harvard AI assistant inspired Viswanath and his teaching assistant Tianle Cai to create Blockie, a similar program for ECE 470: Principles of Blockchains. The advanced course teaches the foundational design and algorithmic principles of blockchains, distributed digital ledgers that allow for the secure recording and sharing of information such as financial transactions.
Viswanath and Cai fed ChatGPT all the lectures and assignments of the course in order to create an AI teaching assistant personalized for students taking ECE 470.
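The article does not detail Blockie’s internals, but a common way to build a course-specific assistant on top of a general model is retrieval-augmented generation: store the course materials, find the passages most relevant to a student’s question, and prepend them to the model’s prompt. A minimal sketch of that idea, with a simple keyword-overlap retriever (all names and notes below are hypothetical, not Blockie’s actual content or code):

```python
# Hypothetical sketch of retrieval-augmented prompting for a
# course assistant: pick the course note sharing the most words
# with the question, then build a prompt around it.
course_notes = [
    "Lecture 3: A blockchain is an append-only ledger of blocks linked by hashes.",
    "Lecture 7: In Rust, ownership rules determine when memory is freed.",
    "Assignment 2: Implement a longest-chain fork-choice rule.",
]

def retrieve(question: str, notes: list[str]) -> str:
    """Return the note with the largest word overlap with the question."""
    q_words = set(question.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

def build_prompt(question: str) -> str:
    """Prepend the most relevant course note to the student's question."""
    context = retrieve(question, course_notes)
    return f"Course context: {context}\n\nStudent question: {question}"

print(build_prompt("How does ownership work in Rust?"))
```

A real system would use embedding-based similarity rather than raw word overlap, but the principle — grounding the model in class material it was never trained on — is the same.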
Cai explained that while ChatGPT can easily answer questions or complete assignments for introductory computer science and programming courses, it is not sophisticated enough to reliably provide help for advanced courses like ECE 470, which teaches Rust, a programming language unfamiliar to many students.
“The main issue of ChatGPT is that it’s not familiar with our class,” Cai said. “The material and logistics of this class are not included in the knowledge of ChatGPT.”
“[In office hours] students come up with questions that are not hard to answer but they just don’t have the resources to find the answer,” he added.
Viswanath analogized the issues students were having with this “foreign” programming language with getting stuck at basic arithmetic when trying to learn complicated mathematics.
“You get stuck with lower level arithmetic; you can’t do addition or multiplication,” he explained. “Then you can’t see the big patterns, you can’t go one level up.”
Blockie has provided ECE 470 students with a personalized resource for overcoming logistical barriers in their coding assignments, assisting the human TAs by streamlining office hours.
“Hardly anyone shows up to office hours anymore, and whenever they show up, they’re asking very specific, good questions instead of syntax questions,” Viswanath noted.
While Viswanath and Cai are highly supportive of integrating AI and chatbots into the classroom, they explained that the technology is most useful when students already possess background knowledge of programming.
“Once you have the basic knowledge, you have your own judgment to determine whether [ChatGPT’s] answers are logically correct,” Cai said.
Students in the course have expressed agreement with Cai’s assessment of Blockie — it can help with efficiency and productivity, but only insofar as the student can confidently judge Blockie’s accuracy using their own knowledge.
“The best thing [Blockie] has helped me with is if I’ve divided a problem into several small parts and I give it a small task, it makes it so much more efficient, because I just copy the code from the small task and copy it into mine,” said Jonathan Jeong ’25. “Otherwise, you still have to know what you are doing for it to be helpful.”
Joy Patterson ’25, another student in the course, said that Blockie has been “invaluable” for learning the programming language Rust, but that there are more efficient LLMs available.
“We tend to use [GPT-4] over Blockie because GPT-4 is very fast, while Blockie takes a while to generate feedback,” Patterson said. “Blockie is still a super fun tool, and it’s been exciting to try it out.”
Despite recognizing some of Blockie’s limitations as a novel AI classroom tool, Patterson sees Blockie as an exciting step for integrating AI into the classroom.
“I think as LLMs become increasingly powerful, they will become a larger part of work-flows — especially in computer science,” Patterson said. “Being a part of a class that embraces LLMs feels like we are leaning into the future, exploring what CS classrooms might look like in the years to come.”
While concerns about academic integrity and cheating have arisen alongside ChatGPT’s growing popularity, Viswanath feels it’s an opportunity for improvement and change in academia.
“Some people are worried that kids will find shortcuts, but I’m sure two dozen years ago [pre-ChatGPT], someone was complaining about it,” Viswanath said. “I think ChatGPT is an opportunity for educators to reevaluate their evaluation methods. If students are taking shortcuts, then perhaps we should reevaluate evaluation methods.”
As one of the first AI teaching assistants at Princeton, Blockie still has a long way to go, but Cai hopes to expand its utility across more courses.
“We’re exploring some interactions to make it more general — so that it can be easily integrated into other classes, as well as platforms like Canvas — so that every professor can create an AI teaching assistant for their class with just one click,” Cai explained.
Viswanath believes AI now has a permanent place in academia as a pedagogical tool. He hopes to be part of this exciting new future in education.
“Believe me, there’ll be very few classes that will escape this trend,” Viswanath stated. “Every class has to have an AI assistant, I’m willing to go on record for that.”
“This is happening, there is no question about it,” he continued. “I want to be part of it.”
While Viswanath and Cai are invested in ChatGPT’s potential for engineering and computer science, the social sciences at Princeton are also building AI into the classroom.
“I’m happy for [my students] to use any kind of AI system,” said Jacob N. Shapiro, professor of politics and international affairs and director of the Empirical Studies of Conflict Project at Princeton. “They’re going to be part of the work environment, so you might as well get used to using them now.”
One of the assignments in his POL386: Violent Politics course requires students to feed ChatGPT a prompt to produce a short answer and then swap the responses with classmates to critique and edit them.
By encouraging students to engage with ChatGPT — and even requiring it as part of the course — Shapiro hopes to explore ChatGPT’s capabilities in the classroom.
“I’m interested in how people are using [chatbots],” Shapiro said. “There’s many ways to use them, it’s kind of undiscovered.”
Like Viswanath and Cai, Shapiro trusts his students to responsibly engage with ChatGPT.
“Part of this is that [Princeton students] are an unusually gifted and talented set of students,” Shapiro explained. “I feel comfortable assuming a baseline competence. If there’s a new technology or tool that requires certain new competences, why not take the opportunity?”
Shapiro also sees ChatGPT as an educational opportunity for students to build their own judgments and become more comfortable with AI.
“They need to develop the skills of distinguishing which things the AI feeds are coherent and logical, and which are nonsense, because the AI will give you both,” Shapiro said. “When students get into the workforce, there are going to be a bunch of AI tools their company uses, so if anything, the situation where AI gives something that’s nonsense is a pedagogical moment.”
His only rule when it comes to using ChatGPT is citing it as a source.
“If the student uses the model in an honest way and the model — because of the data it’s trained on — essentially regurgitates some prior text, that’s not intentional plagiarism by the student,” Shapiro said. “It only becomes an issue if the student doesn’t cite it.”
As a proponent of using ChatGPT as a pedagogical tool, Shapiro is pleased with the University’s flexibility in allowing professors to decide their own policies regarding chatbots.
“I’m very happy with the way the University has approached this, by encouraging pedagogic uses and our role in preparing you to use these things when you get out into the world.”
Shapiro has been working with machine learning for a decade in his own work, and recognizes the “dramatic, radical evolution” of these technologies manifesting in highly capable tools like ChatGPT. He believes his role as an educator is to provide his students with the opportunity to learn how to use this technology productively.
“Technology can be a danger, and it can be a productive tool,” Shapiro said.
“The challenge is harnessing it, using it for good purposes,” he added. “You are all going to be more equipped to go out into the world and figure out how to do that if you’ve had exposure and experience working with these tools in your work life.”
Valentina Moreno is a staff Features writer for the ‘Prince.’
Please direct any corrections requests to corrections[at]dailyprincetonian.com.