Applying Multiple AI Agents in One Course: A Case Study of the PGcert 404 Module
1. Introduction

1.1 The Development of Generative AI and AI Agents in Education

The rapid advancement of generative artificial intelligence (GenAI) has significantly impacted various sectors, including education. AI-powered tools, particularly AI agents, have demonstrated considerable potential in enhancing learning experiences, automating administrative tasks, and providing personalised support to students (Fui-Hoon Nah et al., 2023; Luckin, 2018). AI agents, defined as autonomous or semi-autonomous systems capable of performing tasks that typically require human intelligence, are increasingly being integrated into educational settings to address challenges such as scalability, accessibility, and individualised learning support (Labadze et al., 2023; Roll & Wylie, 2016).

One of the critical challenges in education is the "two-sigma problem," a term coined by educational psychologist Benjamin Bloom (1984), which highlights the difficulty of providing one-to-one tutoring at scale. Bloom found that students who received one-to-one tutoring performed, on average, two standard deviations (2σ) better than those in conventional classroom instruction. This means that a student at the 50th percentile in a traditional class could rise to the 98th percentile with personalised tutoring—a transformative improvement in learning outcomes. Traditional teaching methods struggle to replicate the effectiveness of personalised instruction due to resource constraints (VanLehn, 2011). AI agents, however, offer a promising solution by simulating personalised tutoring, providing instant feedback, and assisting educators in managing large cohorts of students (Fui-Hoon Nah et al., 2023).
 
 
1.2 Introduction to PGcert 404 and Observed Challenges

PGcert 404, Using Technology to Enhance Learning and Teaching, is an optional module within the Postgraduate Certificate (PGcert) programme at Xi'an Jiaotong-Liverpool University (XJTLU). PGcert 404 students are, in fact, university educators who must allocate their limited time, outside of teaching, research, and administrative responsibilities, to acquire knowledge in pedagogy and technology-enabled instruction. The course aims to:

•    Develop academic staff's ability to plan, design, implement, and evaluate technology-enhanced learning (TEL) strategies in higher education.

•    Foster awareness of quality assurance and enhancement in educational technology, including the implications of generative AI.

•    Encourage critical reflection on the use of technology in teaching and its alignment with the Professional Standards Framework (PSF) 2023.
 

The module assessments consist of two components:

•    Forum Post and Feedback (50% weighting) – Participants submit a 1000-1500 word forum post discussing the pedagogical application of an educational technology, followed by peer feedback on at least two other posts.

•    Critically Reflective Wrap-around Essay (50% weighting) – A 1500-2000 word reflective essay evaluating the use of TEL in practice, incorporating quality assurance considerations and PSF alignment.

Despite receiving detailed guidance through video explanations and exemplars throughout the course, students often faced significant challenges in several key areas. Many struggled to fully comprehend assessment expectations, particularly when attempting to bridge theoretical concepts with practical applications. The task of writing reflective essays proved especially difficult, as students found it challenging to meaningfully connect their teaching experiences to the PSF. Additionally, learners encountered obstacles when trying to implement the course's educational technology recommendations within their specific teaching contexts, suggesting a gap between general pedagogical principles and their practical adaptation to real-world classroom situations. To address these challenges, the teaching team introduced three AI agents into the course, each serving distinct pedagogical and administrative functions.
 
 
2. Inclusion of the three AI Agents in the Course

To address the aforementioned challenges and enhance students' learning experience in PGcert 404, three specialised AI agents were implemented on two platforms: the AI virtual tutor platform available on the Learning Mall, and Coze, a popular zero-code AI agent development platform in China. These agents were designed to provide targeted support in areas where students struggled the most:

•    Virtual Teaching Assistant (TA) – A general-purpose chatbot answering course-related queries (e.g., assessment criteria and preparation, deadlines, reading materials).

•    Virtual Instructional Designer (ID) – A specialised agent assisting students in integrating technology into their teaching plans. Users can upload module specifications or lesson plans, and the AI agent provides tailored recommendations.

•    Virtual Feedback Giver – An AI tool offering formative feedback on draft submissions, helping students refine their assessments before final submission.

These agents were designed to complement human instruction rather than replace it, ensuring that students received timely support while reducing repetitive instructor interventions. The following sections will provide a comprehensive examination of the development and implementation process for the three AI agents integrated into the PGcert 404 course, outlining the technical architecture, pedagogical design considerations, and practical deployment strategies that were employed to ensure these tools effectively addressed the identified learning challenges.
 
 
2.1 The Virtual Teaching Assistant 
The Virtual Teaching Assistant AI agent was deployed using XIPU's AI chatbot platform by adding an AI tutor block (see Figure 1). The agent was configured using a prompt, a "Source of Truth" section, and a knowledge base.

 
 
Figure 1: A screenshot of the virtual teaching assistant AI agent
 
 
The "Source of Truth" section contains essential course information presented in a standardised question-and-answer (Q&A) format, including details such as module assessments, the module handbook, links to the self-paced course, and guidelines for aligning reflective writing with the PSF. This information was placed in the "Source of Truth" section rather than the knowledge base for two key reasons. First, storing it in the knowledge base could lead to unintended modifications when retrieving factual data, whereas the "Source of Truth" ensures information remains unaltered.
 
Second, the "Source of Truth" automatically generates direct links to resources on the Learning Mall page when provided with the exact file names. Additionally, the Q&A format was chosen to enhance the system's ability to interpret queries accurately and deliver precise, contextually appropriate responses. Below is a representative sample of the question-and-answer pairs provided in the section; a brief sketch of how such pairs can be represented as structured data follows the examples:
 
Q: What are the assessments for PGC404?
A: There are two assessments for PGC 404: Forum post and feedback (1000-1500 words) [50% weighting], and Critically Reflective Wrap-around essay (1500-2000 words) [50% weighting]. Please check out the Assessment Tasks section for more details. There are also a lot of useful resources to support you in writing the assessment. Please check the assessment supporting resources section.

Q: How can I link my writing to the PSF?
A: Please refer to the file "Guide on Embedding PSF Descriptors in PGC404". 

Q: What is the link to the PGcert module handbook?
A: Please provide the clickable link. The link is:
 
https://core.xjtlu.edu.cn/pluginfile.php/120594/mod_resource/content/6/PGCert%202024-25%20S2%20Handbook%20AdvanceHE%200107.pdf
 

Q: What is the link to the Active Learning and Student Engagement with Technology self-paced workshop?
A: Please provide the clickable link. The link is: https://core.xjtlu.edu.cn/course/view.php?id=1698 
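
These question-and-answer pairs are entered directly into the platform's "Source of Truth" panel, so no programming is involved. Purely to illustrate why the Q&A format supports precise retrieval, the same idea can be sketched as a small lookup table in Python; the data structure and matching function below are a hypothetical illustration, not part of the Learning Mall platform.

    # Illustrative sketch only: "Source of Truth" Q&A pairs held as data, with the
    # closest-matching answer returned for a query. The real agent is configured
    # through the Learning Mall AI tutor interface rather than through code.
    from difflib import SequenceMatcher

    SOURCE_OF_TRUTH = [
        {"question": "What are the assessments for PGC404?",
         "answer": ("There are two assessments for PGC 404: a forum post and feedback "
                    "(1000-1500 words, 50% weighting) and a critically reflective "
                    "wrap-around essay (1500-2000 words, 50% weighting).")},
        {"question": "How can I link my writing to the PSF?",
         "answer": 'Please refer to the file "Guide on Embedding PSF Descriptors in PGC404".'},
        {"question": "What is the link to the PGcert module handbook?",
         "answer": ("https://core.xjtlu.edu.cn/pluginfile.php/120594/mod_resource/content/6/"
                    "PGCert%202024-25%20S2%20Handbook%20AdvanceHE%200107.pdf")},
    ]

    def lookup(query: str) -> str:
        """Return the stored answer whose question is most similar to the query."""
        best = max(SOURCE_OF_TRUTH,
                   key=lambda qa: SequenceMatcher(None, query.lower(),
                                                  qa["question"].lower()).ratio())
        return best["answer"]

    print(lookup("What assessments do I need to complete for PGC404?"))
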
In addition to the "Source of Truth" section, a prompt was added to the agent to establish its core pedagogical function as a Socratic virtual tutor for PGcert 404, specifically designed to facilitate student development of assignment ideas through guided questioning rather than direct information delivery. Socratic questioning is a commonly used method to stimulate critical thinking by systematically challenging assumptions, clarifying concepts, probing rationale, and examining implications, ensuring students actively construct knowledge rather than passively receive it (Carey & Mullan, 2004). Rooted in classical Greek philosophy, this approach—often termed the “midwife method” (maieutics)—helps learners “give birth” to their own insights by exposing contradictions in their reasoning and refining their ideas through iterative inquiry (Paul & Elder, 2007).
 
 
The prompt carefully structures the AI agent's role to:
 
(1) initiate diagnostic questioning to understand each student's current conceptualisation of their assignment, (2) employ strategic questioning techniques that progressively scaffold critical thinking about educational technology applications, (3) provide targeted but non-prescriptive suggestions that expand conceptual possibilities while maintaining alignment with assessment criteria, and (4) when appropriate, integrate strategies from Adding Some TEC-VARIETY (Bonk & Khoo, 2014)—a resource introduced in this course that offers technology-enhanced learning (TEL) frameworks and activities—to enrich the design and delivery of instructional materials. This prompting architecture ensures the agent consistently adheres to its primary function of fostering metacognitive development rather than providing solution-based responses, thereby maintaining the course's emphasis on reflective practice and pedagogical innovation. Below is the prompt that was used:
 
 
 Role
You are a virtual tutor for PGcert 404. Your task is to assist students in brainstorming ideas for creating and enhancing their assignment work. You should adopt a Socratic conversation style. Instead of directly giving students answers, guide them to construct their own answers and simultaneously suggest additional ideas and improvements.

Skills

Skill 1: Brainstorm Assignment Ideas

1. When a student asks for help with creating or improving assignment work, first understand the topic, requirements of the assignment, and the student's current thoughts. If you already have this information, you can skip this step.

2. Through a series of questions, prompt the student to think more deeply about the assignment topic and explore more creative ideas and perspectives.

3. Based on the student's responses, put forward some supplementary ideas and improvement directions to enrich the student's assignment content.
 

Skill 2: Guided Q&A
1. When a student poses a specific question, do not provide a direct answer. Instead, ask further questions to help the student find the solution on their own.
2. While the student is answering, provide timely hints and guidance to ensure the correct thinking direction.
 

Limitations:

- Only communicate about topics related to students' assignment creation and improvement. Refuse to answer irrelevant questions.

- The conversation should follow a Socratic dialogue style, guiding students to think through questions rather than directly providing answers.

- The ideas and suggestions provided should be relevant, constructive, and in line with the assignment requirements. 

- In helping students to come up with ideas, you can draw some ideas from the book *Adding Some TEC-Variety*. Note that this book is available in the knowledge base.
 
 Restrictions

Please do not directly write the forum post for the students, you may offer some ideas or structure. 
Finally, a collection of key materials was uploaded to the knowledge base, including the module handbook, task sheets, FAQs, and supplementary reading materials. These resources enable the agent to deliver accurate and relevant responses to student enquiries.
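
The Learning Mall AI tutor is configured entirely through its web interface, so the prompt, the "Source of Truth", and the knowledge base never appear as code. If the same three-layer design were reproduced against a generic OpenAI-style chat API, the layering might look roughly like the sketch below; the client, model name, and retrieval helper are placeholder assumptions rather than the platform's actual implementation.

    # Rough sketch of the three-layer configuration (Socratic role prompt,
    # "Source of Truth", retrieved knowledge-base excerpts) mapped onto a generic
    # chat-completion API. The model name and retrieve_from_knowledge_base() are
    # hypothetical placeholders.
    from openai import OpenAI

    SOCRATIC_PROMPT = (
        "You are a virtual tutor for PGcert 404. Adopt a Socratic conversation "
        "style: guide students to construct their own answers rather than giving "
        "answers directly, and only discuss assignment-related topics."
    )

    SOURCE_OF_TRUTH = (
        "Q: What are the assessments for PGC404?\n"
        "A: A forum post and feedback (1000-1500 words, 50%) and a critically "
        "reflective wrap-around essay (1500-2000 words, 50%)."
    )

    def retrieve_from_knowledge_base(query: str) -> str:
        # Placeholder for the platform's built-in retrieval over the module
        # handbook, task sheets, FAQs, and supplementary readings.
        return "Relevant excerpts from the uploaded course documents."

    def ask_virtual_ta(client: OpenAI, query: str) -> str:
        messages = [
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "system", "content": "Source of truth:\n" + SOURCE_OF_TRUTH},
            {"role": "system", "content": "Knowledge base:\n" + retrieve_from_knowledge_base(query)},
            {"role": "user", "content": query},
        ]
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return response.choices[0].message.content

    # Example usage: print(ask_virtual_ta(OpenAI(), "How should I structure my forum post?"))
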
 
 
2.2 The Virtual Instructional Designer
 
A key learning outcome of PGcert 404 (Using Technology to Enhance Learning and Teaching) is to cultivate academic staff's proficiency in planning, designing, implementing, and evaluating technology-enhanced learning (TEL) strategies within higher education. It is imperative that they integrate technology from the initial stages of teaching planning through to design, resource preparation, classroom instruction, interaction, assessment, and post-class evaluation.
Throughout this process, they need to familiarise themselves with foundational theories of instructional design, such as those proposed by Merrill and Gagné (Branch & Kopcha, 2014), as well as strategies for student engagement. These theories must be assimilated into their daily teaching practices.

Develop the agent

To optimise time and reduce cognitive load, we developed a Virtual Instructional Designer Agent informed by Instructional Design (ID) theories and Technology-Enhanced Learning (TEL) frameworks. Leveraging the natural language processing capabilities of a Large Language Model (LLM) and a comprehensive database, this agent—built on the Coze platform—assists students in curriculum planning and helps them transform traditional teaching activities into TEL-enhanced ones. Students can engage with the agent at the outset of course design to address a critical challenge: bridging the gap between theoretical knowledge and practical application in course implementation. In addition, this exercise will help participants develop key elements of their assessment—a reflective analysis of their designed TEL teaching experience.
 
The agent is configured as a university instructional design expert through carefully crafted prompts, plug-ins, and customised documents. It is programmed with specialised skills in curriculum planning, learning outcome alignment, teaching method selection, interactive activity design, and assessment creation. Designed to guide educators with patience and precision, the agent is knowledgeable yet strictly confined to ID-related queries, ensuring focused and expert-level support while refusing questions that do not relate to ID.
 
 
Figure 2: A screenshot of the setting for the Virtual Instructional Designer Agent
 
 

Build the knowledge base for the agent

 

The documents provided to the agent's knowledge base encompass high-quality, foundational, and influential models in ID, such as:

- ADDIE Model, 
- 4C/ID Model, 
- Merrill's Principles, 
- Gagné's Methodology  
(Branch & Kopcha, 2014).
 

When managing these documents, the following points deserve attention.
 
a. Use accepted document formats: PDF, Word, PPT, and web links.

b. Exclude unrelated information from the documents.

c. Coze can divide documents by length as it reads and learns from them, but to better support comprehension, content should be tiered by theme and uploaded as separate documents (see the sketch after this list).

d. Structure the knowledge base meticulously, considering the following questions:

- What are the fundamental theories and key elements in ID?

- What are the vital practical experiences that can be shared?

- Is the content presented and organized with sufficient clarity? 
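
Coze splits uploaded documents automatically, so point (c) mainly concerns how files are prepared before upload. As a platform-agnostic illustration of tiering content by theme, the sketch below splits one long instructional-design document into separate theme files; the "## " heading convention and file names are assumptions for the sketch, not Coze requirements.

    # Illustrative pre-processing step: split a long document into theme-based
    # files before uploading them to the knowledge base. Assumes each theme starts
    # with a line beginning "## " (an assumption for this sketch, not a Coze rule).
    from pathlib import Path

    def split_by_theme(source: Path, out_dir: Path) -> list[Path]:
        out_dir.mkdir(parents=True, exist_ok=True)
        written, title, lines = [], "introduction", []

        def flush():
            if lines:
                path = out_dir / f"{title}.txt"
                path.write_text("\n".join(lines), encoding="utf-8")
                written.append(path)

        for line in source.read_text(encoding="utf-8").splitlines():
            if line.startswith("## "):          # a new theme begins here
                flush()
                title = line[3:].strip().lower().replace(" ", "_")
                lines = []
            else:
                lines.append(line)
        flush()
        return written

    # Example with hypothetical file names:
    # split_by_theme(Path("id_models.txt"), Path("knowledge_base_chunks"))
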
Test the agent

Finally, testing and debugging are essential to ensure the agent's performance. On the one hand, typical questions need to be posed to evaluate its behaviour and understanding. On the other hand, more team members can be invited to test the agent from different perspectives, followed by adjustments to the settings so that the agent better meets actual needs.
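
In practice, this testing was carried out manually through the Coze interface. If the agent were also reachable from code, the same spot checks could be scripted as a lightweight harness like the hypothetical sketch below; ask_instructional_designer() is a stand-in for whatever call the deployment provides, and the expected phrases are illustrative assumptions.

    # Hypothetical smoke test: pose typical questions and check that the Virtual
    # Instructional Designer stays within its instructional-design remit.
    TEST_CASES = [
        # (question, phrase expected somewhere in an acceptable answer)
        ("How do I align learning outcomes with assessment in a flipped class?", "outcome"),
        ("Suggest a TEL activity for a 200-student lecture.", "activity"),
        # Off-topic questions should trigger a refusal that restates the agent's scope.
        ("What is the weather in Suzhou today?", "instructional design"),
    ]

    def ask_instructional_designer(question: str) -> str:
        # Stand-in for the real platform call; returns a canned reply so the
        # harness runs end to end while the integration is being set up.
        return ("I can only help with instructional design questions. "
                "Could you tell me more about your lesson context?")

    def run_smoke_tests() -> None:
        for question, expected in TEST_CASES:
            answer = ask_instructional_designer(question)
            status = "PASS" if expected.lower() in answer.lower() else "CHECK MANUALLY"
            print(f"[{status}] {question}")

    run_smoke_tests()
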
 
 

2.3 The Virtual Feedback Giver


The Virtual Feedback Giver AI agent is also built on the Coze.cn platform, configured with the DeepSeek-V3-0324 large language model (LLM) (see Figure 3). To make the AI agent more specific to the PGC404 assignment, its settings were closely aligned with the PGC404 assessment requirements and learning outcomes in the role definition, task prompt, knowledge base, and plugins. 

 

 

 

Figure 3: A screenshot of Virtual Feedback Giver

 

The Prompt


The prompt of the AI agent includes the role definition, task steps, and constraints. To give the AI a convincing persona in its communication, it has been defined as an experienced PGC404 course instructor whose main task is proofreading the learner's draft essay. The agent collects and analyses the documents users upload against the PGC404 marking criteria and learning outcomes. To generate meaningful and well-structured feedback, the task is described in two main steps:

(1) Guide the user to submit a document and tell the AI the assignment type. (2) Proofread and give detailed feedback with evidence. The first step helps the AI quickly identify the relevant learning outcomes and criteria, because different drafts align with different requirements.

The following three scenarios are detailed in the prompt, with guidance on how to structure the feedback and example feedback for each (a minimal sketch of this routing logic follows the list): 


1.    The Forum Post only – Evaluate and give feedback based on Tasksheet 1 (Learning outcomes A and B)


2.    The Wrap-around essay with Forum post attached – Evaluate and give feedback based on Tasksheet 1 & Tasksheet 2 (Learning outcomes A, B, C and D)


3.    The Wrap-around essay without Forum post attached – Evaluate and give feedback based on Tasksheet 2 (Learning outcomes C and D)
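
Within Coze, this routing lives entirely in the prompt text rather than in code. Purely as an illustration of the logic, the mapping from submission type to marking documents and learning outcomes can be written as a small lookup; the labels below simply restate the three scenarios above.

    # Illustration of the step-1 routing described above: the submission type
    # determines which task sheet(s) and learning outcomes the feedback draws on.
    # In the deployed agent this mapping is expressed in the prompt, not in code.
    ROUTING = {
        "forum_post_only": {
            "task_sheets": ["Tasksheet 1"],
            "learning_outcomes": ["A", "B"],
        },
        "wraparound_with_forum_post": {
            "task_sheets": ["Tasksheet 1", "Tasksheet 2"],
            "learning_outcomes": ["A", "B", "C", "D"],
        },
        "wraparound_without_forum_post": {
            "task_sheets": ["Tasksheet 2"],
            "learning_outcomes": ["C", "D"],
        },
    }

    def criteria_for(submission_type: str) -> dict:
        """Return the marking documents and learning outcomes for a submission type."""
        return ROUTING[submission_type]

    print(criteria_for("wraparound_with_forum_post"))
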
Below is an example of the step prompt (the bold words are linked to knowledge base content); the checks it lists are also summarised in a short sketch after the prompt:


Step 2: Proofread and give detailed feedback with evidence
Based on the users' answer to the assignment type in step 1, analyze the uploaded essay based on each aspect and requirement for different essay types. Follow the reply framework as below: 

 

If the essay type is a forum post only:


Use the criteria in Task sheet 1 to analyse.  Check and generate feedback on all of the following items without omission: 


1. Briefly check each items: Word limit compliance, Organization clarity, Introduction effectiveness, Focus clarity, Conclusion effectiveness, Body paragraph structure, Topic sentence clarity, Paragraph coherence, and Paragraph unity. If there's a problem with any of them, point it out by citing the content with suggestions to improve it.


2. Carefully analyse the "Critical reflection" in the essay: check if the essay has strong evidence of critical reflection on teaching practice. Analyse if the reflection is deep enough, and if it considers different perspectives. Suggest ways to enhance the critical reflection. Cite the sentences from the document to help each of your suggestions. 


3. Carefully check the "Alternative approach suggestions": Check if the author has provided any alternative approaches for the teaching practice. Explain the potential benefits and drawbacks of each alternative in detail. Provide some other practical approaches based on the author's idea in the document.

4. Check Pedagogical literature use: Check the use of pedagogical literature. Analyse if the literature is relevant, properly cited, and integrated into the argument. Provide examples of how to better incorporate literature if needed. 


5. Carefully analyse and check how the author aligns the Professional Standard Framework with his teaching practice and reflections: This is a key part of the analysis. If it doesn't have the PSF 2023 embedded, please point out and suggest that the author add it, and also provide an example of how to integrate PSF. If the work has PSF embedded, refer to the PSF 2023 and Guide to embed the PSF and conduct a detailed analysis of the reference to PSF frameworks for reflection/evaluation. Identify areas where the reflection aligns well with PSF and areas that need improvement. Suggest and list some specific relevant areas that can embed certain PSF items.  


6. Check if the essay includes the "Future action justification": Look for a strong indication of justification for future action. If it's lacking, suggest how to clearly explain how the insights from the post will inform future teaching practices.


7. Carefully evaluate in detail how the work demonstrates Learning Outcome A (LO A) and Learning Outcome B (LO B) based on Marking criteria. Point out the contents for each of the two Learning outcomes. Provide in-depth comments on what is done well and what can be improved for each learning outcome. Highlight areas where the demonstration could be strengthened and offer practical suggestions for enhancement.


8. Reference requirement: Check the reference requirement in Task sheet 1 and analyse if the reference format in this document meets the requirements or not. If not, provide examples of correct formatting.


9. Summary: Compile a summary highlighting the essay's strengths and areas for improvement, with a clear instruction on how to improve.
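
The nine checks above are written directly into the prompt. Treated as data, they form a simple checklist that the generated feedback should cover in order; the sketch below shows one hypothetical way the teaching team could keep that checklist alongside the prompt and roughly verify that no item has been omitted from the agent's output. The keywords and file name are illustrative assumptions.

    # The step-2 checks for a forum-post-only submission, held as data so that a
    # reviewer can roughly confirm the generated feedback covers every item.
    # Labels paraphrase the prompt above; keywords are illustrative assumptions.
    FORUM_POST_CHECKLIST = [
        ("structure (word limit, organisation, introduction, conclusion, paragraphs)", "word limit"),
        ("critical reflection on teaching practice", "critical reflection"),
        ("alternative approach suggestions", "alternative"),
        ("use of pedagogical literature", "literature"),
        ("alignment with the PSF 2023", "psf"),
        ("justification of future action", "future"),
        ("demonstration of Learning Outcomes A and B", "learning outcome"),
        ("reference formatting against Task sheet 1", "reference"),
        ("summary of strengths and areas for improvement", "summary"),
    ]

    def coverage_report(feedback_text: str) -> None:
        """Crude keyword check of which checklist themes the feedback mentions."""
        lowered = feedback_text.lower()
        for label, keyword in FORUM_POST_CHECKLIST:
            mark = "covered" if keyword in lowered else "MISSING"
            print(f"[{mark}] {label}")

    # Example usage with a hypothetical saved output:
    # coverage_report(open("agent_feedback.txt", encoding="utf-8").read())
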


We discovered a few problems when testing the AI agent: for example, it sometimes evaluated documents that should not be accepted, and sometimes it did not follow the guidance and structure that had been set. To enhance the agent's performance, a Constraints section was added at the end of the prompt area to better control the AI-human communication process:

Constraints
1. Only provide feedback and suggestions related to the evaluation of the PGC404 assessment. Do not answer questions unrelated to this task.


2. Communicate in English.  


3. Ensure that your comments and suggestions are based on the provided guidelines and criteria. Do not introduce external or unsubstantiated opinions. Do not overlook any options in the knowledge base or criteria.


4. Provide detailed feedback on any issues found and suggest improvements when necessary. 


5. Conduct a new evaluation process for each document separately.


6. Use the plugins or tools only in Step 2.

The Large Language Model

Coze provides several LLMs to choose from (see Figure 4). After dozens of tests, DeepSeek-V3-0324 showed good analytical ability and stable performance. 


Figure 4: A screenshot of the list of LLM on Coze platform

 

 

The knowledge base setting


The knowledge base area stores documents about the PGC404 assignment requirements and marking criteria (see Figure 5). Each document in the knowledge base is linked in the prompt area so that the AI can clearly identify which document to refer to for each essay type:

 


Figure 5: A screenshot of the knowledge base documents

 

 

3. Student Response to the Implementation of AI Agents


Since implementation, the AI agents have recorded over 294 interactions. Qualitative feedback from students was largely positive, with many praising the AI agents' ability to efficiently clarify assessment expectations and deliver instant, time-saving feedback. Many found the virtual tutor powerful and expressed interest in using similar tools in their own course design. However, there were also concerns about the chatbot's effectiveness in real usage. Below are some quotes from students about the AI agents:


•    “PGC404 virtual tutor is powerful.”


•    “It is very convenient to have an AI tutor integrated into the Learning Mall platform.”


•    “I guess one interesting tool is AI tutor usage, and maybe I can use this AI tutor as a part of my homework design.”


•    “The AI chatbot is a very interesting tool. It could be used to answer certain simple questions which may be frequently asked by students. But not sure how effective it will be.”

•    “The chatbot is very useful and time-saving, I can use it to get appropriate feedback.”

 


4. Conclusion


The integration of multiple AI agents in PGcert 404 demonstrates the potential of AI to address scalability and personalisation challenges in higher education. While students responded positively, further research is needed to optimise AI-human collaboration, particularly in complex reflective tasks. 

 

 

 

 

 

 

 

 

 

References
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.

Bonk, C. J., & Khoo, E. (2014). Adding some TEC-VARIETY: 100+ activities for motivating and retaining learners online. Open World Books. http://tec-variety.com/

Branch, R. M., & Kopcha, T. J. (2014). Instructional design models. In J. Spector, M. Merrill, J. Elen, & M. Bishop (Eds.), Handbook of research on educational communications and technology. Springer. https://doi.org/10.1007/978-1-4614-3185-5_7

Carey, T. A., & Mullan, R. J. (2004). What is Socratic questioning? Psychotherapy: Theory, Research, Practice, Training, 41(3), 217.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304.

Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56.

Luckin, R. (2018). Machine learning and human intelligence: The future of education for the 21st century. UCL Institute of Education Press.

Paul, R., & Elder, L. (2007). Critical thinking: The art of Socratic questioning. Journal of Developmental Education, 31(1), 36.

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582–599.

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.

 


AUTHOR
Yiqun Sun, Educational Developer, Academy of Future Education

Wei Cui, Quality Assurance and Compliance Specialist, Learning Mall, Centre for Knowledge and Information

Yexiang Wu, Instructional Designer, Educational Development Unit, Academy of Future Education

DATE
14 August 2025
