Module Management AI Agents: A targeted approach to study support in higher education
Abstract: This article explores the development and implementation of Module Management AI Agents to enhance student support and administrative efficiency in higher education. In response to the increasing complexity of module delivery and the demand for timely, accessible information, the Year One EAP team at XJTLU created three tailored agents: the Y1 EAP Module Guide, Online Lesson AI Assistant, and Y1 EAP Resit Helper. Built using XIPU AI’s retrieval-augmented generation (RAG) framework and carefully calibrated technical parameters, these agents provide accurate, document-verified responses to student queries and offer a range of other benefits to the study experience. The design process followed a cyclical model of prompt-writing, standardization, collaborative testing, and revision to ensure clarity, reliability, and alignment with institutional goals. This article outlines the value of targeted AI solutions in improving educational experiences and highlights opportunities for future refinement and integration within higher education.
 
Keywords: Module Management, AI Agents, Higher Education, Student Support

1. Introduction

The integration of artificial intelligence (AI) into higher education is transforming how institutions deliver learning, support students, and streamline administrative processes. AI’s potential to enhance personalization, efficiency, and scalability aligns with the growing demands of modern education, where adaptability and innovation are key. Universities worldwide are leveraging AI to create dynamic learning environments, ensuring that both educators and students benefit from this cutting-edge technology.

XJTLU has welcomed this trend. As outlined in its Education + AI Strategic Framework 2025-2028, the university emphasizes a symbiotic relationship between human and digital intelligence through AI-enhanced education, research, and operations (XJTLU, 2025). For instance, XJTLU has already deployed administrative AI tools like the LibAI Chatbot for library inquiries (XJTLU Library, 2025), the E-Support AI Chatbot for IT services (XJTLU E-Support, 2025), and an AI-Driven Final Year Project Allocation system to improve efficiency (Lim, 2025).

At the School of Languages, Year One English for Academic Purposes (EAP) courses at XJTLU serve over 5,000 students, employing a blended curriculum with onsite seminars and asynchronous online lessons. The complexity of managing multiple assessments, diverse task requirements, and frequent student inquiries about course content creates significant administrative demands. Here, AI Agents offer a scalable solution: providing instant, accurate responses to student queries, acting as an always-available "teacher" and ensuring efficient access to critical information. This not only reduces the burden on staff but also empowers students with timely support.

To address these needs, the Year One EAP team has developed three Module Management AI Agents using XJTLU’s XIPU AI platform: the Y1 EAP Module Guide, Online Lesson AI Assistant, and Y1 EAP Resit Helper. The development of these AI Agents aligns with XJTLU’s strategic pillars, particularly in enhancing education and operational excellence through targeted AI solutions. This article explores the theoretical foundations, methodological approach, and practical implementation of these agents, demonstrating how tailored AI solutions can elevate student support and optimize module management in higher education.

2. Literature Review
 
2.1. Student Support
 
AI agents are reshaping academic support by offering personalized, real-time assistance (Kshetri, 2025). Rather than waiting for office hours, students can pose routine questions, such as those about course concepts and deadlines, and receive immediate, tailored feedback.

Studies indicate that these AI assistants can boost academic performance. For example, Kshetri (2025) notes significant improvements in GPA among students using agentic support. By automating repetitive tasks and clarifying key principles, agents free instructors to focus on higher-order teaching activities. Ramirez and Fuentes Esparrell (2024) describe AI tutors as “self-correcting” systems that collect feedback on student interactions, enabling incremental course improvements and facilitating peer-to-peer exchanges that turn learning into a collaborative process of knowledge construction.

Moreover, the 24/7 availability of agents extends support beyond the traditional classroom (Tophel et al., 2025). With this flexibility, students benefit from the structured, step-by-step problem-solving guidance on offer, as agents encourage deeper conceptual reinforcement rather than merely supplying answers. Process automation and machine learning enable agents to make pedagogical decisions, such as selecting the next exercise, based on programmed data and ongoing performance (Ramirez and Fuentes Esparrell, 2024). This adaptive feedback loop supports knowledge acquisition more effectively than static course materials. By leveraging individual learner profiles, these systems anticipate needs and streamline workflows, resulting in increased student satisfaction during independent study time (Triberti et al., 2024).

Furthermore, personalized recommendations extend beyond academics to learner engagement, with agents suggesting supplementary readings or interactive simulations calibrated to a student’s demonstrated interests and skill gaps. Triberti et al. (2024) emphasize the role of AI in reducing cognitive load by anticipating learners’ needs, thereby fostering greater autonomy in self-directed study.

2.2. Administrative Efficiency

The same AI agents are also reshaping higher education administrative processes by automating repetitive but resource-intensive tasks and improving service accessibility. Existing research demonstrates their impact across a range of institutional functions, from general administrative tasks to enrollment support and module-level assistance.

One prominent application is the automation of institutional administrative tasks. For instance, at Universidad del Magdalena, "Tashi-Bot" was developed to automate responses to frequently asked questions regarding campus services, wellbeing, careers and enrollment procedures, demonstrating how chatbots can achieve 92% user satisfaction in handling routine administrative queries (Carlos et al., 2021). Beyond handling basic queries, Tahvildari’s (2025) GPT-based robo-advisor, implemented at Germany's International University of Applied Sciences, further illustrates AI’s capability in processing structured tasks, such as leave applications, while ensuring compliance with institutional policies.

For admissions and enrollment support in particular, AI agents streamline processes that traditionally require significant staff intervention. An agent implemented at Universidad de Las Américas shows how automating academic advising (addressing FAQs about course details and career prospects, scheduling meetings via calendar integration, and sharing documents such as course syllabi) can help private universities attract new students while freeing staff time for higher-value tasks (Villegas-Ch et al., 2021). A slightly different example is Georgia State University’s AI agent "Pounce", which assists incoming students with challenges such as financial aid and class registration. Results show that it reduced summer dropout rates by 22% and ensured smoother high-school-to-university transitions (Kshetri, 2025).

While most studies focus on institution-level applications, emerging research has started to explore the role of agents in module-specific administrative support. Q-Module-Bot (Allen et al., 2024), for instance, automates module-level routine tasks like answering course-related FAQs and providing course materials. It significantly reduces instructors’ administrative burdens without compromising support quality. The University of Sydney’s chemistry-specific AI agent further exemplifies this trend. It assists students with unit organization, deadlines, and administrative queries by delivering accurate information from course materials and directing students to support teams when needed, thereby improving responsiveness in asynchronous learning environments (Kshetri, 2025).

2.3. Technical Parameters

Not all agents are built the same. A key distinction between Module Management AI agents and many agents used for other purposes is the approach taken towards the resources and technical parameters used with the large language model (LLM). For example, a retrieval-augmented generation (RAG) system is a necessary component of module management agents: developers compile a set of tailored documents and share these with the LLM. The LLM can then retrieve pertinent documents from this knowledge base (e.g. the module handbook and assessment task sheets) in response to a particular query and generate an appropriate reply (Ren, Ma and Zheng, 2025).

The larger dataset that the LLM was trained on can then complement these specific documents. A user asking a non-tailored chatbot about an upcoming essay, for instance, may receive misleading advice that lacks module particularities such as word length and other genre criteria. Conversely, a RAG-supported model offers a tailored alternative as part of the larger system of student support mentioned above.
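To make the retrieve-then-generate step concrete, the sketch below shows one possible RAG pipeline in Python. The document contents, the TF-IDF retriever, and the generate() stub are illustrative assumptions only and do not describe the XIPU AI implementation.

```python
# Minimal RAG sketch: retrieve the most relevant module documents for a
# query, then pass them to the language model as context.
# The documents, retriever, and generate() stub are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = {
    "module_handbook.txt": "The Semester 2 essay is 1,000 words, due in Week 12 ...",
    "assessment_task_sheet.txt": "Task 1 requires an annotated bibliography of four sources ...",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the names of the k documents most similar to the query."""
    names = list(knowledge_base)
    texts = [knowledge_base[n] for n in names]
    vectorizer = TfidfVectorizer().fit(texts + [query])
    doc_vectors = vectorizer.transform(texts)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked[:k]]

def generate(prompt: str) -> str:
    """Stand-in for the hosted LLM call made by the agent platform."""
    return f"[LLM response grounded in {len(prompt)} characters of context]"

def answer(query: str) -> str:
    """Ground the model's reply in the retrieved module documents."""
    context = "\n".join(knowledge_base[name] for name in retrieve(query))
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```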
A key parameter for handling requests for such information is temperature, i.e. the degree of randomness in the model’s output (Boonstra, 2025). Lowering this parameter leads to more deterministic responses, and therefore increased accuracy. The ability to tune this behavior has powerful consequences for AI agents in higher education. In many situations, agents aim to provide variety in order to simulate a range of possible outcomes (e.g. practice reading texts with different content), but others must rely on pinpoint accuracy and consistency to ensure user confidence (e.g. in something such as assessment dates). Tophel et al. (2025) have demonstrated that the difference between a temperature of 0.1 and 0.5 can mean a decline in accuracy from 95% to 82.5%, a margin of error unacceptable for many module management applications.
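As a simple illustration, the snippet below shows how an agent might switch between a low temperature for factual, module-management queries and a higher one for activities that benefit from variety. The parameter names and values mirror common LLM APIs and are assumptions rather than XIPU AI settings.

```python
# Illustrative only: parameter names mirror common LLM APIs and do not
# describe the XIPU AI platform specifically.
FACTUAL_TEMPERATURE = 0.1   # near-deterministic: assessment dates, word counts
CREATIVE_TEMPERATURE = 0.7  # more varied: practice texts, example topics

def ask_agent(question: str, factual: bool, llm_call) -> str:
    """llm_call is any callable accepting (prompt, temperature=...)."""
    temperature = FACTUAL_TEMPERATURE if factual else CREATIVE_TEMPERATURE
    return llm_call(question, temperature=temperature)

# Example: a factual, module-management query is sent with low randomness.
# response = ask_agent("When is the Semester 2 essay due?", factual=True, llm_call=my_model)
```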

This has serious ethical implications. Student users are largely at the mercy of developers: their unfamiliarity with course content makes it difficult to judge the quality of an agent’s answers. These users place their confidence in a system providing information that is in some cases difficult to access or impossible to verify independently. The aforementioned RAG system, in combination with an appropriate temperature, goes some way towards curtailing issues related to misinformation (Thoeni and Fryer, 2025).

3. Method

In the following section, we outline the process taken to ensure our agents offer the aforementioned benefits for students and administrators while avoiding any potential technical pitfalls. We then describe three example agents and their key features.

3.1. Development Process

1. Prompt-writing

The first stage in building an AI agent involves precisely defining its role and scope. This is essential for ensuring the AI interacts effectively with users. For instance, the agent might be designed to act as a professional yet approachable guide for Year 1 English for Academic Purposes (EAP) students. Given that these students are non-native English speakers, the AI's responses must be structured in a way that is both clear and supportive. A sample prompt for defining the agent's role would look like:

"You are an AI assistant designed to help students with answering assessment-related questions. You act as a friendly yet professional guide for Year 1 English for Academic Purposes students. English is not their first language."

Another critical component is how the RAG-supported AI will use its predefined knowledge base. The AI must be able to retrieve answers from various course documents, such as syllabi and guidelines. The temperature should be set low enough that the information provided is predictable, without limiting the agent’s flexibility in how it words its responses. If the agent is unable to provide a direct answer, it should politely direct students to contact their Lecturer or Module Leader for further assistance. An example of this prompt would be:

"Use the knowledge base to retrieve answers to factual questions. If the information is unavailable, suggest contacting the EAP Lecturer or Module Leader."

2. Standardization

The standardization stage focuses on ensuring consistency in the prompts used across different AI agents. This process involves aligning the language, tone, and structure of prompts. To achieve this, a template with standardized prompts should be created, ensuring that each AI agent, regardless of its specific function, follows the same set of rules and guidelines for interaction. This helps maintain a coherent user experience. For example, the prompts should adhere to a specific format for answering questions, such as always providing concise responses with key information highlighted in bold.
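One way to apply such a template is to store the shared interaction rules once and reuse them when composing each agent's prompt, as in the sketch below. The rule wording and role descriptions are illustrative assumptions rather than the team's actual template.

```python
# A sketch of shared interaction rules reused across agents, so tone and
# structure stay consistent. The wording is an illustrative assumption.
STANDARD_RULES = (
    "Answer concisely and highlight key information in bold. "
    "Maintain a friendly yet professional tone. "
    "If the answer is not in the knowledge base, suggest contacting the "
    "EAP Lecturer or Module Leader."
)

def build_prompt(agent_role: str) -> str:
    """Combine an agent-specific role with the shared interaction rules."""
    return f"{agent_role}\n\n{STANDARD_RULES}"

module_guide_prompt = build_prompt(
    "You are the Y1 EAP Module Guide, answering questions about assessments, "
    "onsite study materials, and policy documents."
)
online_lesson_prompt = build_prompt(
    "You are the Online Lesson AI Assistant, helping students locate and "
    "review asynchronous online lessons."
)
```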

3. Testing

Testing is a collaborative phase where lecturers, working in an AI working group, test each other’s AI agents and provide constructive feedback. This stage is essential for evaluating the functionality of the agents in real-world scenarios, ensuring that they perform as expected.

Lecturers should simulate typical student queries and assess the agents' ability to respond accurately, concisely, and with the appropriate tone. They should focus on whether the AI can retrieve correct information from the knowledge base, follow the standardized prompts, and provide responses that align with course objectives. The feedback collected from this testing process is invaluable for identifying areas of improvement, such as unclear language or inconsistent responses.
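One way to make this feedback cycle repeatable is to record the simulated queries in a simple script that can be re-run after each revision, as sketched below. The queries, expected snippets, and the ask() callable are assumptions for illustration, not the working group's actual tooling.

```python
# A minimal sketch of recording and re-running the working group's test
# queries. Queries and expected snippets are illustrative assumptions.
TEST_QUERIES = [
    {"query": "How long is the Semester 2 essay?", "expect": "words"},
    {"query": "Where can I find the academic integrity policy?", "expect": "handbook"},
    # Out-of-scope question: the agent should fall back politely.
    {"query": "What will be on next year's exam?", "expect": "Module Leader"},
]

def run_checks(ask) -> list[tuple[str, str]]:
    """ask(query) -> response string; returns (query, response) pairs that failed."""
    failures = []
    for case in TEST_QUERIES:
        response = ask(case["query"])
        if case["expect"].lower() not in response.lower():
            failures.append((case["query"], response))
    return failures
```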

4. Revising

Following the testing phase, the AI agent’s design should be revised based on the feedback provided by the lecturers. This revision process focuses on refining both the prompts and the knowledge base, ensuring that the agent’s responses are accurate and meet the desired standards. Revisions may include updating the prompts to clarify instructions, expanding the knowledge base to cover additional topics, or modifying other settings to improve functionality. This iterative process allows the designer to refine the AI agent’s performance, ensuring it meets the needs of students and maintains consistency across all interactions. 

3.2. Example AI Agents

1. Y1 EAP Module Guide

Students often lament the difficulty of finding course information across several documents and webpages. To combat this, the Y1 EAP Module Guide AI agents were created to support students with module-specific queries related to assessments (e.g. task requirements), onsite study materials (e.g. skills learned in Week 8) and policy documents (e.g. academic integrity). Each agent has been tailored to a specific Year 1 EAP module and contains more than 30 student-facing knowledge base items, ensuring that there is no conflicting information. These agents aim to improve student confidence in verifying information by placing all key documents in one place and providing an efficient platform for digesting complex course content, while reducing repetitive queries to academic staff.

Key features (see Figure 1):

· Providing accurate information about specific course content
· Summarizing and simplifying complex high-stakes policies and key learning outcomes
· Directing students to the specific file(s) the information is located in
· Offering appropriate suggestions, such as when to speak to a member of staff, or how to schedule exam preparation time
 
 
Figure 1: Example of specific information from course material provided by the Y1 EAP Module Guide
 
2. Online Lesson AI Assistant

The Online Lesson AI Assistant is an intelligent support tool designed to help students maximize the resources in the Year 1 EAP blended learning curriculum.
As a key component of the program, asynchronous online lessons are delivered twice weekly via Learning Mall Core (LM), featuring two-hour sessions that complement onsite seminars. These lessons break down key language skills and concepts into bite-sized activities, ensuring focused learning while preparing students for that week’s onsite classes.
 
This agent provides accurate lesson details, including topics, weekly schedules, and skill coverage, while offering guidance on which online lessons align with specific onsite sessions or assessments. It also grants direct access to relevant LM pages across all six Year 1 EAP modules. By delivering instant, AI-driven support, this assistant helps students navigate their online lessons more effectively.

Key Features (See Figure 2):

· Locating topics and skills from all lesson activities based on student inquiries
· Supplying direct URLs to the corresponding LM lesson pages
· Encouraging students to begin reviewing after receiving lesson information
· Ensuring responses are tailored to Y1 EAP modules
 
 
Figure 2: Example of information provided by Online Lesson AI Assistant
 
3. Y1 EAP Resit Helper

This agent was designed to support Year 1 EAP students preparing for resit assessments. In the past, students who had failed a module often struggled to access, interpret, and act on essential information related to resits, such as deadlines, exam formats, and appeal policies. Academic and administrative staff, particularly Resit Module Leaders, typically had to respond to a high volume of repetitive questions, creating a considerable administrative burden.

This agent handles these recurring challenges by addressing three core needs commonly faced by students during the resit period: clarification of logistics (e.g. assessment formats and required materials); understanding of administrative procedures (e.g. eligibility and appeal rules); and access to support resources (directing students to existing revision materials on LM and to available academic support services).

By handling FAQs, this agent not only enhances students’ understanding of resits, alleviating their anxiety and encouraging proactive preparation, but also significantly reduces staff workload, enabling them to focus on more pedagogical and individualized support.

Key Features (See Figure 3):

· Providing accurate, up-to-date and document-verified information about resit logistics
· Explaining university assessment policies in clear language
· Bridging the gap between institutional resources and student access
· Encouraging proactive preparation
 
 
Figure 3: Example responses offered by the Y1 EAP Resit Helper for different queries
 
4. Implementation

To introduce these AI Agents to students, two parallel methods are employed on the module’s LM page: an Information Section and a Quick Access Section (Figure 4).
A dedicated section titled “ELC AI Agents” on the module’s LM page serves as a centralized hub for all AI Agents designed for Year 1 EAP students. Each agent is presented with a concise description of its purpose and functionality, along with a direct portal to access its interface.

Additionally, a block drawer section has been implemented for quick access. Unlike the main information section, this drawer appears as a sidebar on every page within the module’s LM site. This ensures that students can instantly access any agent at any time, regardless of where they are on the platform, simply by checking the right-hand side of the page.
 
 
Figure 4: AI Agents on Module Page

While the use of these agents is typically encouraged for self-study periods, a key strategy for implementation and engagement with these resources is to build them into learning materials. For example, the Online Lesson AI Assistant is easily demonstrated as a practical component of weekly study routines. As seen in Figure 5, Semester 2 Week 2's onsite lessons require students to apply multiple language skills acquired in Semester 1. To ensure thorough preparation, Week 2's online lesson includes a specially designed activity. This activity directs students to relevant lessons covering these essential skills and actively prompts them to review the material.
 
 
Figure 5: Integration of AI Agent and Online Lessons

5. Conclusion

In conclusion, the development and implementation of Module Management AI Agents at XJTLU represent a targeted, iterative approach to enhancing student support and administrative efficiency in higher education. The cyclical process of prompt-writing, standardization, collaborative testing, and revising demonstrates a robust methodology that enables continuous improvement and adaptability of AI solutions. Future developments could further refine the accuracy and personalization of these agents by expanding their knowledge bases and optimizing response capabilities, potentially integrating advanced analytics for improved performance evaluation. Moreover, considering the broader 'constellation' of institutional AI tools already deployed, strategic marketing and clearer integration pathways could boost awareness and engagement among students and staff. Ultimately, maintaining this dynamic cycle of development and improvement will ensure AI Agents continue to deliver impactful, responsive, and ethically sound support tailored specifically to educational contexts.
 
 
 
 
 

References: 

Allen, M., Naeem, U., & Gill, S. S. (2024). Q-Module-Bot: A Generative AI-Based Question and Answer Bot for Module Teaching Support. IEEE Transactions on Education, 67(5), 793-802. https://doi.org/10.1109/TE.2024.3435427 (Accessed: 27 May 2025)

Boonstra, L. (2025). Prompt engineering (Whitepaper). Google. https://www.kaggle.com/whitepaper-prompt-engineering (Accessed: 27 May 2025)

Carlos, H., German, S., & Dixon, S. (2021). Tashi-Bot: An intelligent personal assistant for users in an educational institution. Language. https://doi.org/10.20944/preprints202108.0380.v1 (Accessed: 27 May 2025)

Kshetri, N. (2025). Revolutionizing Higher Education: The Impact of Artificial Intelligence Agents and Agentic Artificial Intelligence on Teaching and Operations. IT Professional, 27(2), 12–16. https://doi.org/10.1109/MITP.2025.3550697 (Accessed: 27 May 2025)

Lim, E.G. (2025). ‘AI-Driven Educational Equity: Revolutionizing Project Allocation through Intelligent Predictive Models and Adaptive Algorithms’, 2025 International Conference on Artificial Intelligence and Education. Suzhou.

Ramirez, E.A.B., & Fuentes Esparrell, J.A. (2024). Artificial Intelligence (AI) in Education: Unlocking the Perfect Synergy for Learning. Educational Process: International Journal, 13(1): 35-51. https://doi.org/10.22521/edupij.2024.131.3 (Accessed: 27 May 2025)

Ren, R., Ma, J., & Zheng, Z. (2025). Large language model for interpreting research policy using adaptive two-stage retrieval augmented fine-tuning method. Expert Systems with Applications, 278, 127330.

Tahvildari, M. (2025). Implementing robo-advisory systems in virtual universities for smart student counselling. Proceedings of the International Conference on Virtual Learning, 203-218. https://doi.org/10.58503/icvl-v20y202517 (Accessed: 27 May 2025)

Thoeni, A., & Fryer, L. K. (2025). AI Tutors in Higher Education: Comparing Expectations to Evidence. https://doi.org/10.31219/osf.io/24tg7_v1 (Accessed: 27 May 2025)

Tophel, A., Chen, L., Hettiyadura, U., & Kodikara, J. (2025). Towards an AI tutor for undergraduate geotechnical engineering: a comparative study of evaluating the efficiency of large language model application programming interfaces. Discover Computing, 28(1), 76-. https://doi.org/10.1007/s10791-025-09580-8 (Accessed: 27 May 2025)

Triberti, S., Di Fuccio, R., Scuotto, C., Marsico, E., & Limone, P. (2024). “Better than my professor?” How to develop artificial intelligence tools for higher education. Frontiers in Artificial Intelligence, 7, 1329605–1329605. https://doi.org/10.3389/frai.2024.1329605 (Accessed: 27 May 2025)

Villegas-Ch, W., García-Ortiz, J., Mullo-Ca, K., Sánchez-Viteri, S., & Roman-Cañizares, M. (2021). Implementation of a Virtual Assistant for the Academic Management of a University with the Use of Artificial Intelligence. Future Internet, 13(4), 97. https://doi.org/10.3390/fi13040097 (Accessed: 27 May 2025)

XJTLU (2025) XJTLU Education + AI Strategic Framework 2025-2028. Suzhou: Xi’an Jiaotong-Liverpool University. Available at: https://www.xjtlu.edu.cn/wp-content/uploads/2025/03/XJTLU-Education-AI-Strategic-Framework.pdf (Accessed: 27 May 2025)

XJTLU E-Support (2025). Available at: https://esupport.xjtlu.edu.cn/ (Accessed: 27 May 2025)

XJTLU Library (2025). Available at: https://lib.xjtlu.edu.cn/ (Accessed: 27 May 2025)
 
 
 
 
Appendix
 
Y1 EAP Module Guide: https://xipuai.xjtlu.edu.cn/v3/agent/10968

Online Lesson AI Assistant: https://xipuai.xjtlu.edu.cn/v3/agent/10564

Y1 EAP Resit Helper: https://xipuai.xjtlu.edu.cn/v3/agent/10866
 

AUTHOR
Oliver Jarvest
Language Lecturer
English Language Centre, School of Languages, XJTLU

Kun Li
Senior Language Lecturer
English Language Centre, School of Languages, XJTLU

Jinyang Song
Associate Language Lecturer
English Language Centre, School of Languages, XJTLU

Minyue Zhou
Associate Language Lecturer
English Language Centre, School of Languages, XJTLU

DATE
11 July 2025
