Since ChatGPT’s launch in November 2022, the platform has simultaneously unnerved and excited Cornellians about artificial intelligence’s academic implications.
According to the Administrative AI Task Force, which included over 30 stakeholders from across the University, Cornell should use AI for tasks including admissions application review, transfer credit evaluation and website accessibility, as outlined in the task force’s Jan. 5 report, “Generative AI in Administration.”
Overall, the report provides guidelines for responsible and effective AI use in academic and administrative contexts. AI refers to machine-based systems that accomplish tasks typically requiring human intelligence.
Artificial Intelligence in Applications
The report supports simultaneously deploying AI in central Cornell functions and providing technologies and resources for Cornell community members to begin using AI responsibly.
“If the University fails to provide safe, broad access to AI platforms and sandboxes, it is likely community members will seek out their own, possibly unvetted AI tools that could place Cornell’s data at significant risk,” the report states.
Notably, the Task Force proposes that AI tools hold significant potential to supplement human evaluation of undergraduate and graduate applications, given the “volume of applications Cornell receives.” AI could speed up the application review process, giving applicants a faster response.
However, the report states that implementing AI in admissions must be weighed against its economic cost, as this use case requires either buying technology from a vendor or developing an AI application in-house.
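To make concrete what supplementing, rather than replacing, human evaluation could look like, below is a minimal, hypothetical sketch of a human-in-the-loop triage step in Python. Nothing here comes from the report: the Application fields, the summarize_application stub and the rule that a human records every decision are illustrative assumptions.

```python
# Hypothetical sketch: AI drafts a summary, a human makes the decision.
# None of these names or fields come from the task force report.

from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    essay: str
    transcript_summary: str

def summarize_application(app: Application) -> str:
    """Stand-in for a call to an institution-vetted LLM that drafts a
    neutral summary for the human reviewer; stubbed for illustration."""
    return f"Summary for {app.applicant_id}: {app.essay[:60]}..."

def triage(app: Application) -> dict:
    """AI output is advisory only; the decision stays with a human."""
    return {
        "applicant_id": app.applicant_id,
        "ai_summary": summarize_application(app),
        "human_decision": None,  # filled in only by a human reviewer
    }

print(triage(Application("A-001", "I build robots after school...", "GPA 3.9")))
```

The design choice in this sketch is that the AI never writes to the decision field; it only prepares material that a reviewer would otherwise assemble by hand.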
Following the overturning of affirmative action, which prohibited colleges from considering race in admissions, the use of AI in admissions review has raised equity concerns, according to an article published by the University of Southern California Rossier School of Education.
The article also notes that while AI tools can potentially identify talented applicants who are traditionally overlooked in admissions, they are also prone to biases embedded in the human decisions on which they are trained.
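One standard way such bias is audited, though neither the report nor the article prescribes a particular method, is to compare selection rates across applicant groups. The sketch below, with entirely made-up data and group labels, computes per-group rates and a simple disparate-impact ratio.

```python
# Illustrative fairness check: compare selection rates across groups.
# This disparate-impact-style ratio is a common audit metric; it is not
# taken from the task force report, and the sample data is invented.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest;
    values far below 1.0 flag a disparity worth auditing."""
    return min(rates.values()) / max(rates.values())

# Made-up example data: (group, selected?)
sample = [("A", True), ("A", False), ("A", True),
          ("B", False), ("B", False), ("B", True)]
rates = selection_rates(sample)
print(rates, disparate_impact_ratio(rates))
```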
Still, Cornell is not the only university considering the benefits of AI in applicant review. According to a September Intelligent survey of 399 education professionals, half of higher education institutions already use AI in admissions.
Full Use Cases
The report divides the University’s AI technology use into three categories. Vertical/embedded AI refers to AI features of existing technologies, such as Microsoft 365 Copilot. Large language models like ChatGPT and Meta’s Llama 2 generate, summarize and translate human-sounding language. Generative AI platforms like Azure OpenAI can create content beyond text, such as images and music.
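As a rough illustration of how the second and third categories differ for a developer, the hypothetical sketch below contrasts a text-only enterprise LLM call with a generative platform call; embedded AI, by contrast, lives inside tools like Zoom or Copilot and involves no code at all. Every function name here is invented for illustration, not a real API.

```python
# Hypothetical sketch contrasting the report's second and third
# categories; all names are invented stand-ins, not real APIs.

def ask_enterprise_llm(prompt: str) -> str:
    """Category 2: a general end-user enterprise LLM, text in, text out."""
    return f"[model answer to: {prompt!r}]"  # stub response

def generate_image(description: str) -> bytes:
    """Category 3: a generative AI platform can return non-text content
    such as images; stubbed here as empty bytes."""
    return b""  # a real platform would return image data

print(ask_enterprise_llm("Summarize the transfer credit policy."))
print(len(generate_image("campus clock tower at sunset")), "bytes")
```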
The full proposed uses of each technology category are listed below:
Vertical/Embedded AI:
– Call trees and text-to-speech
– Curriculum exploration for prospective students
– Project management resource utilization
– Zoom AI companion

General End-User Enterprise LLM:
– Contract analysis
– Grant opportunities
– Internal grant review
– Sponsored research proposal preparation
– Admissions application review
– Applicant pool credentials review
– Enterprise-wide chatbot
– Recruitment documents and market targets
– Transfer credit evaluation
– Web accessibility
– Website content analysis

Generative AI Platform:
– Animation generation
– Audio/video description
– Development/coding assistant
– Document creation
Mitigating Risks and Consequences
In the report, the Task Force also emphasizes that AI should be deployed in compliance with the law and in a manner that considers potential community impacts, including discrimination and privacy breaches.
The Task Force’s outline for determining responsible AI use includes conducting system testing, consulting with diverse stakeholders, enacting contingency plans and informing the public about the use of AI in important University decisions.
The report acknowledges security concerns with AI use, including elaborate phishing campaigns (fraudulent emails or websites that attempt to obtain sensitive information). But the report also stresses the risks of an uncoordinated approach to AI, such as inappropriate AI use and damage to “Cornell’s competitive standing relative to its peers.”
According to the report, providing proper training to users and establishing a quality reporting mechanism can aid in mitigating risks to information quality and precision. The report also recommends understanding AI vendors’ carbon goals and assessing models’ energy efficiency before selecting a vendor.
As for concerns that AI may replace Cornell staff in completing routine tasks, the report identifies “integrat[ing] AI in a way that augments rather than replaces human capabilities” as a key challenge. The report suggests that “removing drudgery and enhancing individual performance can lead to a more effective and innovative workforce.”
“It is critical that AI adoption be democratized — focused on benefiting all employees,” the report states. “AI can be an asset across all Cornell service domains, embraced as an opportunity to supplement staff capabilities rather than a threat.”