Learning from higher education to unlock new affordances for education systems and institutions.
This section examines the emerging role of generative AI (GenAI), together with the techniques on which it builds and
its AI predecessors, in back-end functions of higher education, including course articulation, student transfer, advising,
admissions, and content infrastructure. Unlike instructional uses of AI, which often focus on the learner as the
end user, the systems discussed here are typically administrator- or staff-facing or are embedded into educational
platforms installed at an institutional level to produce insights, reduce task complexity, and support academic pathway
navigation. In most cases, the AI models behind the tools utilise data collected at the macro- or
meso-level as opposed to the micro-level.
Drawing on recent research, case studies, and early-stage prototypes, this chapter identifies how AI can:
• Support credit mobility and transfer prediction across institutional boundaries
• Support academic advising, such as with personalised course and major recommendations and curricular analytics
• Diagnose novel opportunities to enhance admissions and resource allocation, and
• Structure the classification, tagging, and reuse of learning content and curricular components
While not all of these tools involve generative AI as a means to create content or to interface directly with end users,
many depend on machine learning, natural language processing, and representation learning (e.g. embeddings),
technologies at the heart of generative AI, to support institutional decision-making. The chapter foregrounds macro-level institutional infrastructure as the critical site of innovation for unlocking more evidence-based, personalised,
data-informed, and ultimately student-serving higher education ecosystems. While the chapter focuses on research
carried out at the higher education level, many of the covered possibilities are relevant to the secondary school sector,
at the system rather than institutional level, as well as to support lifelong learning.
As students traverse academic pathways, their ability to have learning acknowledged when moving between segments
or systems of education can determine their ultimate academic success. In the United States, when students
move from a 2-year community college to a 4-year university, for example, agreements called course articulations
dictate how much credit will come with them and which requirements it will satisfy. Similarly, when prior learning gained
in industry, such as through a professional certificate, is counted as equivalent to institutional course credit, this is
referred to as Credit for Prior Learning (CPL). In other countries the
same issue can arise when individuals want to change study paths, to transition from a 2-year study programme to a
bachelor’s degree, or change higher education institution outside traditional study paths. This can also happen in
the international recognition of foreign degrees in the context of international student mobility, or simply professional
mobility. Demonstrating mastery of a skill from one taxonomy and then seeking acknowledgement of mastery in a
similar skill from another taxonomy requires mapping, or cross-walk between taxonomies.
These variations on credit and learning acknowledgement scenarios are critical to student success in higher education;
however, they have historically been constructed and maintained by hand, often with missing or inequitably distributed
pathways for mobility that favour credit from institutions with higher socioeconomic standing. Generative AI and the natural language processing technology behind
it could be, and increasingly is being, used to address these deficits in ways that have the potential for equitable scaling.
One promising direction to better map different types and levels of educational programmes is to identify closeness across courses with AI techniques. This involves representing course content as AI vector embeddings (see Box 11.1), enabling semantic similarity comparisons across thousands of courses. This representation can be informed both by natural language signals, such as a title and course catalogue description, and by historic enrolment data, that is, the actual course choices that individual students made within their higher education programmes. Using the latter, Pardos and Nam (2020) visualised the semantic topology of courses offered at a large public university (Figure 11.1) and queried the underlying course vector representation to reveal differences between courses. For example, when asked what the difference was between the Econometrics and Advanced Econometrics courses, the model correctly responded with “Linear Algebra.” It perhaps would not be surprising for contemporary LLMs to be able to answer this question, given their access to troves of data; however, the model from this example used only course enrolment histories, showing the effectiveness of these models with a far more constrained amount of data. Course vectors were later shown to successfully predict student course workload perceptions, and they also performed strongly on prerequisite prediction (Recall@10 = 0.70) and average-enrolment prediction (RMSE = 42.48), as described in Jiang and Pardos (2020).
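The idea of learning course vectors from enrolment histories can be sketched in miniature. The example below uses invented course names and a simple co-occurrence-plus-SVD factorisation as a stand-in for the neural embedding models used in the cited work; it shows how such vectors support similarity and "difference" queries over courses.

```python
import numpy as np

# Toy enrolment histories: each list is one student's course sequence.
# (Illustrative stand-ins; the cited work used real enrolment records.)
histories = [
    ["CALC1", "CALC2", "LINALG", "ECONOMETRICS", "ADV_ECONOMETRICS"],
    ["CALC1", "STATS", "ECONOMETRICS", "ADV_ECONOMETRICS"],
    ["CALC1", "LINALG", "ADV_ECONOMETRICS"],
    ["STATS", "ECONOMETRICS", "MICRO"],
    ["CALC1", "CALC2", "LINALG", "PHYSICS"],
]

courses = sorted({c for h in histories for c in h})
idx = {c: i for i, c in enumerate(courses)}

# Co-occurrence within a student's history acts as the "context" signal,
# analogous to word co-occurrence in word-embedding models.
X = np.zeros((len(courses), len(courses)))
for h in histories:
    for a in h:
        for b in h:
            if a != b:
                X[idx[a], idx[b]] += 1.0

# Low-rank factorisation (SVD) yields dense course vectors.
U, S, _ = np.linalg.svd(X, full_matrices=False)
k = 3
vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Semantic similarity between two courses:
sim = cosine(vecs[idx["ECONOMETRICS"]], vecs[idx["ADV_ECONOMETRICS"]])

# "Difference" query: which course vector best matches the offset
# between the advanced and the introductory course?
diff = vecs[idx["ADV_ECONOMETRICS"]] - vecs[idx["ECONOMETRICS"]]
ranked = sorted(
    (c for c in courses if c not in ("ECONOMETRICS", "ADV_ECONOMETRICS")),
    key=lambda c: -cosine(diff, vecs[idx[c]]),
)
print(sim, ranked[0])
```

With real enrolment data at scale, the nearest neighbour of such an offset can surface the prerequisite that separates two courses, as in the Linear Algebra example above.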
Historic enrolment data within institutions can also be leveraged by AI to learn and provide course recommendation
pathways using the same type of neural networks that power generative AI. Much
like a generative model can complete your sentence, a similar model applied to course enrolments can complete
a student’s course sequence to include necessary requirements and nurture budding personal interests to satisfy
electives.
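A toy analogue of such sequence completion, using a simple bigram model over invented enrolment sequences rather than the neural networks the text describes, might look like:

```python
from collections import Counter, defaultdict

# Toy enrolment sequences (illustrative; real systems train neural
# sequence models on institution-wide enrolment histories).
sequences = [
    ["CS1", "CS2", "DATA_STRUCTURES", "ALGORITHMS"],
    ["CS1", "CS2", "DATA_STRUCTURES", "DATABASES"],
    ["CS1", "DISCRETE_MATH", "CS2", "ALGORITHMS"],
    ["CS1", "CS2", "DATA_STRUCTURES", "ALGORITHMS"],
]

# A first-order (bigram) model: which course tends to follow which.
transitions = defaultdict(Counter)
for seq in sequences:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def complete(sequence, steps=2):
    """Greedily extend a partial course sequence, mimicking how a
    sequence model 'completes' a student's pathway."""
    seq = list(sequence)
    for _ in range(steps):
        options = transitions.get(seq[-1])
        if not options:
            break
        # Recommend the most common next course not already taken.
        for course, _count in options.most_common():
            if course not in seq:
                seq.append(course)
                break
        else:
            break
    return seq

print(complete(["CS1", "CS2"]))
# → ['CS1', 'CS2', 'DATA_STRUCTURES', 'ALGORITHMS']
```

The neural models described in the text play the same role but condition on the full history and on course content, allowing them to balance requirements against personal interests.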
The projections were produced by reducing course vectors to 2-D using t-SNE. The space may suggest to a dean or
other administrator where a department has a concentration of topical strength and in what areas it might collaborate
with “neighbouring” departments to fill gaps in a major or develop a major together. If another institution’s
course vectors were overlaid onto this one, the result could suggest where the institutions complement one another,
where they are aligned, and where expected alignment could be improved.
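The 2-D projection step can be sketched as follows. For brevity this uses a PCA projection computed with NumPy on random stand-in vectors; the figure described here was produced with t-SNE (e.g. scikit-learn's TSNE), which better preserves local neighbourhoods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in course vectors (e.g. 64-dimensional embeddings for 100
# courses). Real pipelines would project learned course embeddings.
course_vectors = rng.normal(size=(100, 64))

# 2-D projection via PCA: centre the data, take the top two right
# singular vectors, and project onto them.
centered = course_vectors - course_vectors.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ Vt[:2].T  # shape (100, 2), ready to scatter-plot

print(projection.shape)
```

Each row of `projection` becomes one point in the visualised topology; clusters of points then correspond to topically related course offerings.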
Beyond single institution course planning and recommendation, AI can also support the development and maintenance
of cross-institutional course equivalency models - an important enabler for student transfer between degree programs
in higher education and a facilitator for lifelong learning and the recognition of prior learning. For example, in the
United States, starting at local, more affordable community college and then transferring to a bachelor’s programme
has been one of the greatest sources of upward social and economic mobility. Similarly, in the European
Union, the European Credit Transfer and Accumulation System (ECTS) is designed to promote student mobility by
standardising how learning achievements are measured and recognised across institutions. In practice, however, the reality of exchange programs and institutional transfers often involves negotiations
for the accreditation of specific courses toward degree requirements. In both cases, course equivalency agreements
between institutions are required to allow for transfer to work as intended. Across
many higher education systems, articulation and credit transfer remain time-consuming, manual processes. Faculty
or articulation officers typically review syllabi and catalogue information to determine equivalency across institutions.
AI, particularly natural language models and course embeddings, has begun to offer data-driven alternatives and
support structures.
The same embedding models that underpin generative AI, used to represent meaning-rich relationships between
words, images, and multimodal content, can also represent relationships between courses at different institutions.
Pardos, Chau and Zhao (2019) demonstrate that machine translation techniques, built on these embeddings, can
“translate” between the course vector spaces of different colleges. These vectors, learned from students’ historical
course or programme-level enrolment patterns and course catalogue descriptions, capture latent curricular structures,
enabling the prediction of equivalences and surfacing gaps in transfer agreements. These gaps correspond to
equivalences that could have been offered given course contents and pathways but that had not been identified
or considered yet. In their proof-of-concept, the approach successfully matched courses between a two-year and a
four-year US institution and validated 65 pre-established articulations. This methodology is being piloted with 59 US
higher education institutions and four systems of higher education to explore its feasibility and utility in practice.
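The "translation" between institutional vector spaces can be illustrated with a linear mapping fitted on known articulations (anchor courses shared by both institutions). The data below are synthetic, and the least-squares map is a simplification of the machine-translation techniques used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose both institutions have embeddings for 50 "anchor" courses
# with known articulations, each in its own vector space.
dim_a, dim_b, n_anchors = 32, 32, 50
src = rng.normal(size=(n_anchors, dim_a))            # institution A vectors
true_map = rng.normal(size=(dim_a, dim_b))
tgt = src @ true_map + 0.01 * rng.normal(size=(n_anchors, dim_b))  # institution B

# Learn a linear "translation" between the two course spaces from the
# anchors, in the spirit of cross-lingual word-embedding mappings.
W, *_ = np.linalg.lstsq(src, tgt, rcond=None)

# A course from institution A can now be projected into B's space and
# matched to its nearest neighbour there, suggesting an equivalency.
query = src[0] @ W
dists = np.linalg.norm(tgt - query, axis=1)
print(int(np.argmin(dists)))
```

Nearest neighbours that do not appear in existing articulation agreements are exactly the "gaps" described above: equivalences that could have been offered but were never formalised.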
Methodologically, the educational data mining community has explored additional neural course representations
outside the transfer context. Khan and Polyzou (2024) evaluated session-based methods such as CourseBEACON
and CourseDREAM (neural architectures that recommend well-suited course bundles based on enrolment sessions)
and showed improved performance of these methods over traditional factorisation or association models. These
session-based models recommend full next-semester sets of courses by modelling (1) which courses pair well
together and (2) semester-to-semester orderings using RNN/LSTM encoders (CourseBEACON uses an explicit
co-occurrence matrix; CourseDREAM learns latent basket vectors). They improve accuracy over popularity and
sequential baselines (CourseDREAM achieves the best test-set Recall@k, at about 0.30). Similarly, Kim et al. (2025)
demonstrate that deep embeddings of course descriptions, coupled with traditional classifiers, can automate
equivalency judgments with near-perfect performance (as measured by F1 scores). Both works demonstrate how
representations of courses can be used for various institutional tasks guiding student pathways and transfer
mobility.
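Recall@k, the metric reported in several of these evaluations, measures how many of the relevant items (e.g. courses a student actually took) appear among the model's top-k recommendations. The course names below are hypothetical.

```python
def recall_at_k(recommended, relevant, k=10):
    """Fraction of relevant items that appear in the top-k recommendations."""
    top_k = set(recommended[:k])
    if not relevant:
        return 0.0
    return len(top_k & set(relevant)) / len(relevant)

# Hypothetical example: 3 of a student's 4 actually-taken next-semester
# courses appear in the model's top-5 recommendations.
recs = ["STATS2", "ML", "DATABASES", "ETHICS", "NETWORKS", "COMPILERS"]
taken = ["ML", "DATABASES", "NETWORKS", "ART_HISTORY"]
print(recall_at_k(recs, taken, k=5))  # → 0.75
```

A Recall@10 of 0.70 for prerequisite prediction thus means that, on average, 70% of a course's true prerequisites appear in its top-10 predicted list.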
While AI-assisted equivalency models can dramatically speed up articulation, adoption depends on
trust, especially among the domain experts who ultimately hold the keys to credit approvals. Xu et al. (2023)
studied algorithm aversion in higher education administrators tasked with course credit decisions, using
a 2×2 experiment with an AI-based matching platform. One factor was whether low-confidence or outlier AI
recommendations were inserted into the results or not. The other factor was whether the interface prompted users
to flag inappropriate AI recommendations or not. They found that excluding outlier recommendations improved
acceptance and productivity; however, asking users to flag recommendations reduced administrators’ acceptance
of the suggestions unless outlier recommendations were included. While the literature suggested user flagging as a means to increase adoption, it may have fostered a negative mindset in this case when users were not given
clear results worth flagging. The takeaway is perhaps that without the right implementation, even accurate AI
recommendations risk being undervalued or outright rejected in a socio-technical system.
These findings show that embedding-based methods, central to modern generative AI, are not only useful for
producing language or images, but that they can also map complex academic structures across institutions. When
combined with careful human-AI collaboration design, they can accelerate equivalency mapping, reveal hidden
curricular alignments, and reduce administrative burden while keeping human experts in control.
While many AI applications focus on student-facing outcomes, others operate behind the scenes to
power the discoverability, classification, and reuse of educational content. A growing area of impact is the
back-end organisation of large-scale resource libraries through tagging, aligning, and curating learning materials to
match institutional or state-wide taxonomies or support transition and alignment between them. The significance
of using AI for annotating and grouping educational content is that content standards are often changing over time
(e.g. the US Common Core or the Finnish National Core Curriculum for Basic Education). Recategorising content
and aligning it with new standards is expensive, but could be made substantially more cost-effective and efficient
using AI. For example, this can work for Open Educational Resources (OER). While many countries and international
organisations like the OECD and UNESCO have supported the development of Open Educational Resources, the
COVID-19 pandemic highlighted that identifying those aligned with country or jurisdictions’ curricula was not trivial,
because of a lack of domestic or international taxonomies. The strength of generative AI techniques at classifying and
mapping text (with embeddings) may help to solve, or at least mitigate, that problem.
Another example is the conversion of a mastery profile from the proprietary skill taxonomy of an intelligent tutoring system into the US Common Core State Standards, a set of agreed educational standards in English language arts and mathematics adopted across a large number of states in the United States. Intelligent tutoring systems supporting the acquisition of content and procedural knowledge in language arts, maths, science, etc., have their own “knowledge maps” that are often not specific to any particular curriculum or standards. Enabling these translations is essential to help teachers and educators using particular systems align them with local curricula. These infrastructure-oriented uses of AI may be invisible to learners, but they are foundational for enabling efficient resource retrieval, supporting instructional planning, and ensuring alignment with evolving curricular goals (Figure 11.2). Recent advances apply techniques central to generative AI, particularly embedding models, to create rich vector representations of learning resources that capture semantic relationships between content items. These embeddings support clustering, that is, the grouping of content according to common characteristics, similarity search, and cross-walking between taxonomies, while classification algorithms, often fine-tuned on top of embeddings, map resources to categories in both established frameworks and newly defined skill taxonomies. For example, when a new mathematics curriculum or taxonomy, like the US Common Core State Standards, is introduced, these methods can aid in re-mapping the estimated millions of existing open educational resources to the new taxonomy. Such methods have been deployed to support initiatives like common course numbering, enhance tutoring systems’ ability to link resources to specific knowledge components, and keep course catalogues aligned with rapidly changing programme requirements.
Research suggests that these AI-assisted systems can approach or even match non-expert human tagging performance with relatively small labelled datasets, and in some cases rival expert performance at scale. For example, Li et al. (2024) found that their approach combining embedding and classification could achieve non-expert accuracy with as few as 100 labelled examples, and near-expert accuracy with 5 000. Importantly, these models incorporated multimodal features from text, images, and videos, mirroring the multi-input capabilities of contemporary generative AI systems, and were publicly released for use with both the US Common Core and novel taxonomies. Ren et al. (2024) extended this line of work to study human-AI collaboration in taxonomy alignment. Compared to humans working alone, AI suggestions reduced tagging time by roughly 50% (p ≪ 0.01) but led to a modest decline in recall, that is, the identification of all relevant resources of a specific category (-7.7%, p = 0.267), and a substantial decline in accuracy, that is, the overall correctness of tagging suggestions (-35%, p = 0.117). Notably, the AI-alone condition performed worst, while the human-alone condition performed best for accuracy, placing the collaborative condition in between. These findings highlight a trade-off between efficiency, here speed of tagging, and precision, and suggest that while AI can accelerate large-scale taxonomy updates, quality assurance remains essential, as of now performed by humans. As educational taxonomies continue to evolve, whether through new competency frameworks, new curricula or standards, or institutional redesigns, embedding-powered alignment tools offer a scalable way to re-tag resources, identify content gaps, and maintain interoperability across systems. In doing so, they extend the same representational methods powering generative AI into the critical, though less visible, infrastructure that underpins curriculum management.
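The embed-then-classify pattern these studies rely on can be illustrated with a deliberately tiny example: toy bag-of-words vectors stand in for neural embeddings, a nearest-centroid rule stands in for the fine-tuned classifiers, and the taxonomy codes are invented.

```python
import numpy as np

# Tiny stand-in for "embed then classify": resources become vectors
# (here, toy word counts; in practice, neural embeddings) and are
# mapped to taxonomy categories with a nearest-centroid classifier.
vocab = ["fraction", "decimal", "triangle", "angle", "verb", "noun"]

def embed(text):
    words = text.lower().split()
    return np.array([float(words.count(w)) for w in vocab])

labelled = [
    ("adding fraction and decimal values", "MATH.NUMBER"),
    ("fraction word problems with decimal answers", "MATH.NUMBER"),
    ("triangle angle sums", "MATH.GEOMETRY"),
    ("measuring each angle of a triangle", "MATH.GEOMETRY"),
    ("identify the verb and noun in a sentence", "ELA.GRAMMAR"),
]

labels = sorted({y for _, y in labelled})
centroids = {
    y: np.mean([embed(x) for x, y2 in labelled if y2 == y], axis=0)
    for y in labels
}

def tag(text):
    """Assign the taxonomy category whose centroid is nearest."""
    v = embed(text)
    return min(labels, key=lambda y: np.linalg.norm(v - centroids[y]))

print(tag("a worksheet on triangle and angle practice"))  # → MATH.GEOMETRY
```

When a taxonomy changes, only the labelled examples and centroids need updating; the resource embeddings can be reused, which is what makes re-tagging large libraries tractable.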
Generative AI in higher education can involve helping personalise the guidance students receive, not just from a recommender system, but from human academic advisors. In Lekan and Pardos (2025) an advisor-facing, GPT-driven model was tested whereby first-year college students (n = 33) were asked questions about course preferences and career goals, typical of a human advising session. These responses were fed to a GPT model that, instead of giving advice directly to the students, gave major recommendations and justifications to an advisor (n = 25). The study found that academic advisors rated the suggestions of the GPT model favourably and exactly agreed with the model’s major recommendation 33% of the time. In this case, participating advisors were positive about this type of human-AI collaboration, seeing it as providing assistance and leaving them as the point of contact for students, rather than supplanting them. Research collaborations with registrars and admissions offices have begun to explore how advisors can use analytics to better support student course selection. For example, nascent work (Borchers, n.d.) with undergraduate advising integrates Big Five personality traits, such as conscientiousness and neuroticism, with multi-semester course enrolment data and finds that students high in conscientiousness or self-efficacy tend to perform well even under heavy workloads, while those lower on these traits are more likely to struggle. This suggests that advisors could move beyond general heuristics (e.g. “don’t overload”) and instead offer more individualised recommendations based on a student’s likely capacity to manage challenging course schedules. The same form of advising could also be used in high school to help students choose their higher education study programmes, or later on in life, to support their choices of lifelong learning options.
While generative AI is also explored as a means to provide or facilitate other advising and counselling services to college students (e.g. mental health), these applications are not without controversy and have shown growing pains at their current stage of development (Moore et al., 2025).
Assessment represents one of the most resource-intensive components of higher education. Generating high-quality items for standardised tests in some subjects requires significant faculty time, while evaluating and calibrating those items demands large respondent pools and psychometric expertise. Recent advances in generative AI offer institutions the opportunity to refresh assessment practices by accelerating both the production and the evaluation of large-scale item pools. Importantly, these processes often occur in the institutional “backend”, funded, managed, and maintained by campuses or system-level services, in addition to being driven by faculty or students directly. Developing assessment items is also relevant at the school level in countries with national assessments, whether they are developed by public evaluation agencies or by private companies. LLMs offer novel perspectives on automating the creation of multiple-choice and short-answer items, particularly when anchored in existing curricular material. Studies comparing LLM-generated questions to textbook-sourced questions find comparable psychometric properties. For example, Bhandari et al. (2024) report that ChatGPT-generated algebra items demonstrated difficulty and discrimination parameters statistically indistinguishable from traditional textbook items when evaluated with item response theory. Notably, the LLM-generated items exhibited slightly stronger differentiation between high- and low-ability respondents, suggesting that GenAI can produce assessment content of similar or even superior quality under controlled conditions. This holds particular promise for large lecture courses and general education programmes where instructor time is scarce and item demand is high (e.g. test banks must be regularly refreshed to ensure continued assessment validity).
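The item response theory model behind such comparisons can be stated compactly. The sketch below implements the standard two-parameter logistic (2PL) model; the parameter values are illustrative, not those reported by Bhandari et al. (2024).

```python
import math

def p_correct(theta, a, b):
    """2PL item response model: probability that a respondent with
    ability theta answers correctly, given discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical parameters: a textbook item and a generated item with
# the same difficulty but slightly higher discrimination (a steeper
# curve separates high- and low-ability respondents more sharply).
textbook = dict(a=1.0, b=0.2)
generated = dict(a=1.3, b=0.2)

for theta in (-1.0, 0.0, 1.0):
    print(round(p_correct(theta, **textbook), 3),
          round(p_correct(theta, **generated), 3))
```

"Statistically indistinguishable difficulty and discrimination" means the estimated `a` and `b` parameters of the two item pools did not differ significantly; "stronger differentiation" corresponds to a larger discrimination `a`.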
Retaining the instructor’s agency over course assessments while decreasing the time it takes to create them is not just an algorithmic matter but also a human-computer interaction design issue. New human-computer interaction research, such as work on the PromptHive tool, provides examples of placing subject matter experts in the driver’s seat of generative AI to integrate their expertise into the workflow of assessment creation (Reza et al., 2025). An instructor, for example, provides her existing assessments as a style reference as well as the new learning objectives she wants additional assessments to cover. PromptHive creates a pool of assessment items covering the learning objectives and allows the instructor and TAs to instruct PromptHive on the types of hints that should be produced to scaffold learning of the related content. The instructional team can then preview the generated hints and assessments on a subset of items or all items. The limitation here is that generative AI still hallucinates in most topic areas. Unless hallucination rates are evaluated to be 0% in the topic area, instructional staff must check every problem and hint produced before it is seen by a student. Relatedly, generative models can also address known limitations in traditional test banks. One persistent challenge is the overexposure of items, in cases where repeated use narrows the effective variance of assessments and introduces unwanted correlations between items. For example, many high-stakes, summative tests are strictly proctored with test items that are not released to the public; however, over time, test takers may socialise the contents of the test to future test takers, or test prep companies, leading to overexposure of the items if they are not changed frequently.
By creating novel assessments and even well-crafted multiple-choice distractors, large language models can diversify item pools, reducing the risk of distributional shifts when the same questions are repeatedly deployed. Yet producing new distractors at scale raises the question of quality control. Poorly written distractors (those that are implausible, misleading, or inadvertently cue the correct answer) can reduce both the fairness and the psychometric value of multiple-choice questions. Here, automated evaluation methods are beginning to complement generative approaches. Moore et al. (2023) provide evidence that such methods can systematically detect flaws in student- and AI-generated multiple-choice questions. The authors evaluated undergraduate students in introductory courses who were prompted to generate multiple-choice questions on recently learned material. Comparing a rule-based system to GPT-4 on 200 student-generated questions across four domains, they found that the rule-based approach identified 91% of item-writing flaws flagged by human annotators, compared to 79% for GPT-4. Many of these flaws involve distractor design, tempering the benefits of automated generation at scale. Hence, human expertise and quality control will remain important pillars as LLMs are used for content generation at larger scale. Beyond production, GenAI is emerging as a tool for evaluation. Item calibration, the process of estimating psychometric properties such as difficulty and discrimination, typically requires thousands of student responses. Liu et al. (2025) demonstrate that multi-agent AI models bringing together ensembles of LLMs can serve as “synthetic respondents,” producing response distributions with psychometric properties closely aligned to those of college students.
While a single LLM was not measured to exhibit abilities similar enough to the target human population, ensembles of different LLMs expand variance, yielding item parameter estimates highly correlated (> 0.8) with human-calibrated values. Augmentation strategies, such as adding LLM responses to even a small set of human respondent data, further improve alignment with exclusively human responses. These findings suggest a new institutional workflow: LLM-based calibration can complement limited student response data, reducing costs and accelerating item validation cycles. While human responses remain essential for final benchmarking, AI-assisted evaluation can substantially shorten development timelines for new assessments. Beyond opportunities and feasibility, a practical open challenge is instituting policies for the use of generative AI in assessment. Drawing from emerging frameworks proposed by Corbin et al. (2025), open issues include:
• where to set meaningful limits on AI assistance for different outcome types;
• what disclosure, attribution, and provenance practices are sufficient (e.g. prompts, drafts, and model/version logs);
• how to handle discipline-specific variation without sacrificing consistency; and
• how to mitigate workload burdens for staff while maintaining validity and fairness.
A challenge in these policies is that AI models and their capabilities will continue to change. Therefore, policies will need to be adaptive rather than static, focused on guiding principles and review mechanisms rather than fixed prohibitions. These considerations are especially important as AI is deployed in high-stakes assessment scenarios for learners, such as university admissions. For example, von Davier and Burstein (2024) discuss several practices for human involvement in AI decisions to ensure ethical, accountable, and valid use. These include ongoing human oversight of automated scoring and item generation, systematic review of algorithmic outputs for fairness and bias, engagement of diverse stakeholder groups in test development and validation, and transparent communication of AI roles and limitations to test-takers and institutions. There is also a tension between the continued adoption of rule-based approaches (e.g. college degree audits) and AI evaluation approaches, with hybrid approaches being a fruitful area for future exploration.
In the last 10 years, the field of learning analytics has increasingly expanded from student-facing dashboards and systems to analytics that improve programme evaluation, curriculum design, and course delivery in higher education (Greer et al., 2016). Although a recent review of the literature concluded that there is a lack of curriculum analytics studies investigating how these AI systems influence higher education stakeholders (De Silva et al., 2024), we summarise case studies that offer clear perspectives into how curriculum can be designed using machine learning models and AI trained on enrolment, course, and other institutional data. As generative AI becomes increasingly capable of explaining learning analytics to stakeholders (Yan et al., 2025), innovation informed by AI will increasingly shape institutional workflow practices. For instance, recent work has demonstrated how curriculum analytics can be enhanced with statistical and psychometric techniques to identify inequities in course difficulty and monitor changes over time. Baucks et al. (2024) introduced Differential Course Functioning (DCF), an Item Response Theory (IRT)-based method that controls for overall student performance while detecting systematic differences in course-specific success rates between student groups. Applied to data from over 20 000 undergraduates, the Differential Course Functioning method revealed patterns linked to disciplinary alignment and preparedness, guiding targeted interventions for students taking courses outside their major and transfer students. In a complementary study, Baucks et al. (2024) applied IRT to quantify temporal shifts in course difficulty, finding a marked downward trend during the COVID-19 pandemic and proposing IRT-adjusted pass rates to mitigate the confounding effects of fluctuating cohort performance.
Both approaches provide actionable evidence for policymakers, accreditation bodies, and student advisors aiming to improve fairness and consistency in academic programmes. Analytics have also been used to address mismatches between credit hours and actual student workload, offering an actionable basis for curriculum analytics. Credit hours, while central to degree requirements and course planning, explained only 6% of the variance in how students perceive their course workload in Pardos et al. (2023), whereas learning management system (LMS) features based on forum, assignment, and submission activity explained six times more variance (36%) in measures of time load, mental effort, and psychological stress. LMS indicators such as number of assignments and late-semester course drop ratios as well as historical course GPAs provided a more accurate reflection of the student experience, giving institutions greater confidence in these measures as a basis for action. Building on this, Borchers and Pardos (2023) developed Course Load Analytics (CLA), a predictive model that integrates LMS and enrolment features to estimate perceived workload at the course and semester level. Applied across an entire university catalogue over a full undergraduate degree duration, CLA revealed that first-semester students, particularly in STEM fields, often carry some of the heaviest predicted workloads despite low credit-hour counts (Figure 11.3), a hidden load linked to higher attrition. Such meso-level insights position CLA as a practical tool within curriculum analytics, enabling institutions to redesign programme structures, adjust course sequencing, and align workload expectations with student capacity. Because CLA’s modelling approach generalises to new courses and contexts, institutions can deploy it broadly to monitor and balance workloads, improving retention and the overall first-year experience.
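The core of a course-load-analytics model of this kind is a supervised predictor from LMS features to perceived load. A minimal sketch with synthetic data and an ordinary least-squares fit follows; the feature names, coefficients, and fit quality are illustrative, not those of the actual CLA models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for LMS features across 200 courses.
n = 200
assignments = rng.integers(2, 30, size=n).astype(float)   # assignment count
forum_posts = rng.integers(0, 100, size=n).astype(float)  # forum activity
drop_ratio = rng.uniform(0, 0.3, size=n)                  # late-semester drops

# Synthetic "perceived load" generated from the features plus noise,
# standing in for survey-based workload measures.
load = (0.2 * assignments + 0.01 * forum_posts + 5.0 * drop_ratio
        + rng.normal(0, 0.3, size=n))

# Ordinary least-squares fit (design matrix with an intercept column).
X = np.column_stack([assignments, forum_posts, drop_ratio, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)

# Variance explained (R^2) by the LMS features.
pred = X @ coef
r2 = 1 - np.sum((load - pred) ** 2) / np.sum((load - np.mean(load)) ** 2)
print(round(float(r2), 2))
```

The contrast reported above (6% of variance from credit hours versus 36% from LMS features) is precisely a comparison of such R² values across feature sets.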
Looking ahead, curriculum analytics research continues to span a wide range of curriculum-related areas, including
programme structures, course sequencing, competency attainment, workload measurement, and curriculum-employability alignment. Studies examine curriculum components from multiple angles, such as mapping prerequisite
networks, identifying instructional bottlenecks, tracking competency coverage, modelling student progression
pathways and analysing elective course selection strategies.
Recent work has also expanded curriculum analytics beyond student modelling and workload analysis to include the
automation of academic record processing through direct application of generative AI in institutional workflows. For
instance, Bhaskaran and Pardos (2025) conducted a comparative study of Optical Character Recognition (OCR) and
vision-language model pipelines for transcript evaluation, a critical task in credit transfer and course articulation. They
showed that combining OCR with semantic reasoning through multimodal models such as GPT-4o and Claude 3.7
achieved extraction accuracies above 90 percent, reducing the manual effort needed to align courses and grades
across institutions and thus facilitating transfer and degree/credit recognition domestically (and
potentially internationally).
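The downstream half of such a pipeline, turning OCR output into structured course records, can be sketched as follows. In the studied pipelines a multimodal model handles layout and semantic ambiguity; the regex here is a deliberately simplified stand-in used only to illustrate the target schema, and the transcript text and field names are invented.

```python
import json
import re

# Hypothetical OCR output for a fragment of a transcript.
OCR_TEXT = """\
FALL 2023
MATH 1A  Calculus I          4.0  A-
CHEM 1B  General Chemistry   3.0  B+
"""

# One row: course code, title, units, letter grade, separated by
# runs of two or more spaces (an assumption about the OCR layout).
ROW = re.compile(r"^([A-Z]+ \d+\w*)\s{2,}(.+?)\s{2,}(\d\.\d)\s+([A-F][+-]?)$")

def parse_transcript(text):
    """Extract structured course records from raw OCR text."""
    records = []
    for line in text.splitlines():
        m = ROW.match(line.strip())
        if m:
            code, title, units, grade = m.groups()
            records.append({"code": code, "title": title.strip(),
                            "units": float(units), "grade": grade})
    return records

print(json.dumps(parse_transcript(OCR_TEXT), indent=2))
```

The structured records are what make the later steps possible: once courses, units, and grades exist as fields rather than pixels, they can feed equivalency mapping and workload models directly.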
By converting unstructured transcript data into structured formats suitable for downstream analysis, these methods
help connect administrative processes with analytical insights in curriculum design and enable scalable pipelines for
workload modelling and course equivalency mapping. In the future, curriculum analytics research is likely to close
the gap between research applications and practical impact through the sustainable deployment of tools, as
identified by De Silva et al. (2024), by improving efficiency in syllabus, learning objective, and content generation
using large language models (Sridhar et al., 2023), and by supporting the wider deployment of university-level
initiatives such as course load analytics.
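Course equivalency mapping typically rests on comparing vector representations of course descriptions. The toy sketch below uses a bag-of-words vector and cosine similarity as a stand-in for the learned text embeddings real systems use; the catalogue entries and course codes are invented for illustration.

```python
from collections import Counter
import math

# Hypothetical receiving-institution catalogue: course code -> description.
CATALOGUE = {
    "MATH 1A": "differential calculus limits derivatives functions",
    "STAT 20": "probability sampling inference regression data",
    "ENG 10":  "composition rhetoric essay writing argument",
}

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_equivalent(description):
    """Return the catalogue course most similar to an incoming course."""
    query = Counter(description.split())
    scored = {c: cosine(query, Counter(d.split())) for c, d in CATALOGUE.items()}
    return max(scored, key=scored.get)

# An incoming transfer course described in a partner institution's terms:
print(best_equivalent("introductory calculus derivatives and limits"))
```

Swapping the word-count vectors for sentence embeddings leaves the nearest-neighbour matching step unchanged, which is why embedding quality, rather than the matching logic, tends to drive accuracy in these systems.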
While currently being developed in higher education, where course diversity is much greater than at the school level,
these techniques could also be used in middle and upper secondary education in systems where students
can choose different tracks, majors or options to obtain their high school diploma. This could either highlight the relative
difficulty and workload of different study paths or change how those paths are perceived and designed, providing more
equal opportunities for all students to progress to higher education, regardless of their subject preferences.
We see three practical reasons why AI will be increasingly adopted by relevant stakeholders in institutional
educational workflows.
First, content generation can yield strong increases in authoring efficiency that come with substantial economic
cost reduction. For instance, Reza et al. (2025) found that PromptHive, an open-source collaborative prompt-authoring interface for the OATutor adaptive tutoring system, enabled subject-matter experts to produce AI-generated maths hints of instructional quality comparable to hints authored exclusively by humans with no AI support.
The tool also reduced perceived cognitive load by half, shortened authoring time by more than twenty-fold, and
was found to be substantially more usable than the legacy authoring interface. In a controlled study with over
350 learners, the AI-assisted hints achieved student learning gains statistically indistinguishable from those of
expert-written materials, demonstrating that human-centred prompt-engineering workflows can preserve expert
control and quality standards while dramatically increasing the scalability of educational content creation. These
improvements in content generation can be useful to both (a) higher education course instructors seeking to
revise practice problems and assessment and (b) primary and secondary education instructors and content
vendors seeking to revise curricular sequences.
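The collaborative step such workflows centre on is maintaining a shared prompt template that experts refine once and reuse across many items. The template wording, placeholders, and function below are hypothetical illustrations, not PromptHive's actual interface or prompts.

```python
# Hypothetical shared hint-generation template; in a collaborative
# workflow, experts iterate on this wording once and every generated
# hint inherits the improvement.
HINT_TEMPLATE = (
    "You are a maths tutor. A student is solving:\n{problem}\n"
    "Their current step: {step}\n"
    "Write one short hint that nudges them forward without giving "
    "away the answer."
)

def build_hint_prompt(problem, step):
    """Fill the shared template for one practice problem."""
    return HINT_TEMPLATE.format(problem=problem, step=step)

print(build_hint_prompt("Solve 2x + 3 = 11", "2x = 8"))
```

Separating the reusable template from per-item content is what lets quality control scale: reviewing one prompt substitutes for reviewing thousands of individually written hints.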
Second, as many higher education institutions compete for the most capable and promising students, pressure
to adopt AI that enables better course offerings and equivalences may increase. As transfer pathways improve,
students will become more likely to select institutions that recognise a greater share of their prior learning and
minimise credit loss, directly affecting enrolment and completion outcomes. Studies have found that seamless
credit transfer is a strong predictor of degree completion among community college transfer students, and that
improved articulation coverage can reduce time-to-degree and attrition. This is also a key development for making lifelong learning a reality.
Third, AI-augmented advising can directly improve other outcomes that education institutions care
about (retention, on-time graduation, and advisor capacity), thereby improving cost effectiveness and institutional
appeal. Advisor-facing recommender and triage tools can help tailor course plans, while course workload
analytics enable programs to audit sequences, rebalance hidden load, and reduce course withdrawal and
failure where overload is detected.
In parallel, machine-learning approaches to curriculum analytics can surface when nominally identical offerings
fluctuate in difficulty over time and across groups, informing targeted redesign and quality assurance.

Across domains as varied as credit transfer, advising, admissions, and curriculum management, artificial intelligence
is becoming part of the institutional infrastructure of higher education and beyond. These applications signal a
shift in how institutions manage complexity, not by automating human decisions, but by introducing new forms of
prediction, representation, and adaptive support. Properly designed, such tools augment institutional and education
system judgment, surfacing course equivalencies that might otherwise be missed, helping advisors tailor guidance
to individual students, and enabling administrators to detect inequities or inefficiencies across programmes.
With this integration comes a need for governance and privacy frameworks that can sustain trust. New privacy-preserving and open-source large language models, such as the Swiss-developed LLaMA-Open series, illustrate how
innovation can proceed while respecting data sovereignty and transparency. Beyond regulatory frameworks, research
can contribute frameworks for AI-supported analytics, as demonstrated by seminal models for responsible data use, offering templates for institutions developing AI policies today. Rapidly evolving
technologies merit adaptive rather than static governance: principles and review mechanisms that evolve
alongside the technology.
What distinguishes this emerging wave of institutional AI is less the technology itself than the collaborative ecosystems
that enable it. Many of the most promising examples discussed in this chapter arose from partnerships among
researchers, administrators, and platform developers. Such collaborations are critical for aligning technical design
with institutional and end-user values and for empirically studying the consequences of AI adoption on efficiency,
equity, and educational quality.
Although the focus of this chapter has been on higher education, many of the same institutional affordances
extend naturally to lower levels of education as well as adult learning, where advising, assessment, and curriculum
alignment face similar pressures. Generative models can assist teachers by creating or reviewing assessment items, tagging resources across standards, and generating formative feedback, thereby reducing workload and enhancing
standardisation. It is likely that some institutional AI will converge toward a connected, data-informed educational
ecosystem that links primary and secondary education and postsecondary systems, improving personalisation,
equity, and mobility across educational pathways.