Thesis proposal |
Researchers |
Research group |
Visual learning analytics for virtual learning environments Virtual learning environments generate huge amounts of interaction data that can be analysed and visualized in order to better understand both the teaching/learning process and users' behaviour. This analysis can be done at different levels of detail, combining data from multiple sources (services, learners' profiles, etc.) coming from one or more educational scenarios (a virtual classroom, blog, repository, etc.). Research on this topic aims to build robust models that help learners, teachers and managers fulfil their goals, detect and resolve bottlenecks in virtual learning environments, and identify and explain the most relevant reasons behind them, by means of visual learning analytics (both methodologies and tools). |
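As a first illustration, a minimal sketch of the kind of aggregation a visual learning analytics dashboard would build on: counting interactions per student and per service before plotting them. The event records and function names below are hypothetical, not part of the proposal.

```python
from collections import Counter

# Hypothetical interaction log records: (student_id, service, timestamp)
events = [
    ("s1", "classroom", "2024-03-01T10:00"),
    ("s1", "forum",     "2024-03-01T10:05"),
    ("s2", "classroom", "2024-03-01T11:00"),
    ("s1", "classroom", "2024-03-02T09:30"),
]

def activity_by_service(events):
    """Count interactions per (student, service) pair -- the kind of
    aggregate a visual analytics dashboard would plot."""
    counts = Counter((student, service) for student, service, _ in events)
    return dict(counts)

print(activity_by_service(events))
# {('s1', 'classroom'): 2, ('s1', 'forum'): 1, ('s2', 'classroom'): 1}
```

A real pipeline would read such events from service logs and feed the aggregates into an interactive visualization rather than printing them.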
Mail: jminguillona@uoc.edu
|
|
Authorship and authentication through activities in e-assessment Even though online education is a key factor for lifelong learning, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives lack the social recognition and credibility they deserve. Thus, the design of virtual assessment systems able to provide effective proof of student authenticity, authentication and authorship, as well as of the integrity of the activities, in a scalable and cost-efficient manner would be very helpful. This research line proposes to analyse how online assessment is performed in distance learning environments (using tools and resources through continuous e-assessment activities), based on the trustability of authentication and authorship and on the systems used. The activities and their evaluation are core pieces for obtaining evidence about user authentication and authorship. The e-assessment approach will thus be based on a continuous trust-level evaluation between students and the institution across curricula, also taking current online certification processes into account. |
Mail: aguerreror@uoc.edu Mail: mrodriguezgo@uoc.edu Mail: dbaneres@uoc.edu |
|
Conversational Agents and Learning Analytics for MOOCs Higher Education Massive Open Online Courses (MOOCs) transcend formal higher education by realizing technology-enhanced formats of learning and instruction and by granting access to an audience far beyond the students enrolled in any one Higher Education Institution. However, although MOOCs have been reported to be an efficient and important educational tool, there are a number of issues and problems related to their educational aspect. More specifically, there is a significant number of dropouts during a course, little participation, and an overall lack of student motivation and engagement. This may be due to one-size-fits-all instructional approaches and very limited commitment to student-student and teacher-student collaboration. This thesis aims to enhance the MOOC experience by integrating:
• Collaborative settings based on Conversational Agents (CA), in both synchronous and asynchronous collaboration conditions
• Screening methods based on Learning Analytics (LA) to support both students and teachers during a MOOC course
CAs guide and support student dialogue using natural language in both individual and collaborative settings. Moreover, LA techniques can support teachers' orchestration and students' learning during MOOCs by evaluating students' interaction and participation. Integrating CA and LA into MOOCs can both trigger peer interaction in discussion groups and considerably increase the engagement and commitment of online students (and, consequently, reduce the MOOC dropout rate). |
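The screening idea can be sketched as a simple heuristic that flags students whose participation drops, the kind of LA signal that could trigger a conversational-agent prompt. The data layout and threshold below are illustrative assumptions only.

```python
def flag_at_risk(weekly_posts, threshold=1):
    """Flag students whose forum participation falls below `threshold`
    posts for two consecutive weeks (a crude dropout-risk signal)."""
    flagged = []
    for student, posts in weekly_posts.items():
        lows = [p < threshold for p in posts]
        # two consecutive low-activity weeks -> at risk
        if any(a and b for a, b in zip(lows, lows[1:])):
            flagged.append(student)
    return flagged

weekly_posts = {"ana": [3, 2, 4], "ben": [2, 0, 0], "eva": [0, 2, 0]}
print(flag_at_risk(weekly_posts))  # ['ben']
```

A production system would replace the raw post counts with richer interaction features and a trained model, but the triggering logic would be similar.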
Mail: scaballe@uoc.edu Mail: jconesac@uoc.edu |
SMARTLEARN |
Tools for automatic assessment of technical exercises This research line aims to develop new solutions to automatically correct technical exercises from elementary computer science subjects such as programming and databases. This set of tools will make it possible to provide better feedback and reduce the assessment workload of tutors in these massive courses. Automatic assessment of algorithmic exercises is a vibrant research line. Since the beginning, research efforts have focused on assessing exercises by testing a set of inputs and comparing the obtained outputs with the expected values (Boada et al., Prados et al.). The workflow has two steps. First, we compile the code; if compilation succeeds, the resulting executable is tested against a set of expected input and output datasets. The input can come from text or formatted data files, and the output is saved in files or printed on the screen. Using this paradigm, we can assess the response of a huge variety of exercises while giving instant feedback to the student. However, in some situations the output is correct even though the algorithm is wrong, and in others the students spend more time trying to format their output to match the expected output than learning to program. Little effort has been devoted to comparing algorithm structure for automatic assessment. Our final goal is to build a tool able to compare the internal structure of two algorithms based on graph matching techniques. From this comparison, we expect to generate more precise feedback that reinforces the student's learning experience. Structural algorithm comparison will prevent students from assimilating wrong concepts in the early stages of their learning process. Furthermore, this automatic tool will help focus teachers' daily work, since it will be able to provide individual and group reports.
Prados, F. et al., Automatic generation and correction of technical exercises. In International Conference on Engineering and Computer Education 2005. ICECE 2005.
Boada, I. et al., A teaching/learning support tool for introductory programming courses. In Information Technology Based Proceedings of the Fifth International Conference on Higher Education and Training, 2004. ITHET 2004. |
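The second step of the workflow described above (running the compiled program against expected input/output datasets) can be sketched as follows. For brevity the compiled executable is modelled here as a Python callable, and the test cases are hypothetical.

```python
def run_io_tests(program, test_cases):
    """Run a submission against each (input, expected_output) pair and
    report which cases pass. `program` is modelled as a callable; in
    practice it would be the executable produced by the compilation step."""
    results = []
    for given_input, expected_output in test_cases:
        try:
            actual = program(given_input)
        except Exception:
            actual = None  # a runtime error counts as a failed case
        results.append(actual == expected_output)
    return results

# Hypothetical student submission: sum a list of integers
student_solution = lambda xs: sum(xs)

cases = [([1, 2, 3], 6), ([], 0), ([-1, 1], 0)]
print(run_io_tests(student_solution, cases))  # [True, True, True]
```

Note how this paradigm only observes outputs: a structurally wrong algorithm that happens to produce the expected outputs would still pass, which is exactly the gap the proposed graph-matching comparison aims to close.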
Mail: mmarcog@uoc.edu Mail: fpradosc@uoc.edu |
|
Enhancing educational support through an adaptive virtual educational advisor Nowadays, many systems help students to learn. Some of them aid students in finding learning resources or recommending exercises. Others aim to help the student in the assessment phase by giving feedback. Furthermore, others monitor the student's progress during the instructional process to recommend the best learning path to succeed in the course. Depending on the objectives/competencies of the subject, some features are more suitable than others.
|
Mail: dbaneres@uoc.edu Mail: aguerreror@uoc.edu Mail: mrodriguezgo@uoc.edu Mail: iguitarth@uoc.edu Mail: mserravi@uoc.edu |
|
AI, Ethics and eLearning: risks and opportunities
Artificial Intelligence (AI) driven technologies can be used to automate pedagogical behaviours within online education environments in order to support large cohorts of online students and human instructors. However, several studies have shown how, in some cases, AI-driven systems have made unfair and biased decisions with unexpected and detrimental effects. In response, the field of AI and Ethics (AIE) explores how to take ethical considerations into account in the design and integration of AI technologies, in order to ensure that their use does not lead to unexpected outcomes. These considerations are particularly important in sectors meant to provide universal services to our society, such as public healthcare and education.
Beyond ensuring that ethically-undesirable effects are avoided during the design and implementation of AI-driven learning environments, ethical AI can explore the opposite direction as well: how can AI-enhanced systems be used to achieve ethically-desirable outcomes? Throughout the history of technological advances, the deployment of new tools has created new affordances that affected how their users interacted with each other. By keeping in mind ethically-desirable goals, such as increasing collaboration and cooperation in learning activities, or fostering a sense of community among peer students in online studies, learning environments can be designed and deployed in such a way that the affordances and interactions created in that environment foster those ethically-desirable behaviours. In this sense, the field of AI Ethics can be used not only to prevent and minimize harm, but also to foresee and foster good practices.
Nevertheless, assessing ethical outcomes within an AI-based system can be a challenging task to accomplish. Even though ethical design can help foresee certain outcomes, the huge number of potential situations makes it practically impossible to anticipate, by design and in advance, every possible effect. Furthermore, the more autonomous AI systems get, the more we need to ensure that those systems have a way of taking into account the potential moral consequences of their choices. Common examples of systems that exhibit a high degree of autonomy and which may need to face complex ethical decisions include self-driving vehicles, adaptive instructional systems, or healthcare robots. The field of Artificial Morality (AM) explores, precisely, how to integrate ethical awareness and moral reasoning within the AI's decision-making procedures, just as other markers, like utility or performance, are used to guide the system's behaviour.
This research line will delve into the fields of AI and Ethics (AIE) and learning technologies to explore both the ethical risks and opportunities behind the integration of AI in education, as well as the design of AI technologies that can take ethical considerations into account when making decisions in learning environments. Addressing the challenges behind this research line is highly interdisciplinary in nature; it requires the combination of technical engineering skills, knowledge representation, reasoning and modeling about complex scenarios, a holistic understanding of the ethical and social challenges behind the field of online learning and the application of AI tools in it, as well as a way of foreseeing and understanding how changing technological affordances in learning environments will change the available interactions among their participants.
[1] Casas-Roma, J., & Conesa, J. (2021). A literature review on artificial intelligence and ethics in online learning, in Intelligent Systems and Learning Data Analytics in Online Education (eds. Caballé, S., Demetriadis, S., Gómez-Sánchez, E., Papadopoulos, P., Weinberger, A.), Elsevier. ISBN: 9780128234105, pp. 111-131. https://doi.org/10.1016/B978-0-12-823410-5.00006-1
[2] Casas-Roma, J., Conesa, J., Caballé, S. (2021). Education, Ethical Dilemmas and AI: From Ethical Design to Artificial Morality. Proceedings of the 23rd International Conference on Human-Computer Interaction, pp. 167-182. https://doi.org/10.1007/978-3-030-77857-6_11
[3] Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016) The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
[4] Misselhorn, C. (2018) Artificial morality. Concepts, issues and challenges. Society, 55(2), pp.161-169. https://doi.org/10.1007/s12115-018-0229-y
|
Mail: jcasasrom@uoc.edu Mail: jconesac@uoc.edu Mail: scaballe@uoc.edu |
SMARTLEARN |
Interactive recommendation systems for higher education enrollment
Higher education students at open / distance universities enjoy a high degree of flexibility during enrollment, which allows them to choose from a long list of subjects to complete their degree. Although this can be seen as a success of enrollment flexibility measures, it may also be the source of one of the most well-known problems in open / distance education: high dropout rates, partly caused by inadequate enrollment. In this research line we will analyze and adapt state-of-the-art recommendation systems to the particularities of the enrollment procedure, taking into account enrollment data and academic results from previous semesters but also students' preferences and personal interests. Our goal is to design and evaluate interactive recommendation systems that provide students and their mentors with support during enrollment, following a user-centered design approach.
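A minimal sketch of the kind of baseline the line would start from: user-based collaborative filtering over enrollment histories using Jaccard similarity. The subject names, data layout and function names are illustrative assumptions, not part of the proposal.

```python
from collections import Counter

def jaccard(a, b):
    """Set overlap between two enrollment histories."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_subjects(target, histories, k=2):
    """Recommend subjects that the k most similar past students took
    but the target student has not yet enrolled in."""
    scored = sorted(histories, key=lambda h: jaccard(target, h), reverse=True)
    candidates = Counter()
    for history in scored[:k]:
        for subject in set(history) - set(target):
            candidates[subject] += 1
    return [s for s, _ in candidates.most_common()]

histories = [
    ["algebra", "programming", "databases"],
    ["algebra", "statistics"],
    ["networks", "security"],
]
print(recommend_subjects(["algebra", "programming"], histories))
# ['databases', 'statistics']
```

An interactive system as proposed would additionally let the student and mentor adjust preferences and see why each subject was suggested, rather than returning a fixed ranked list.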
|
Mail: jminguillona@uoc.edu
|
LAIKA |
Tools for supporting the teaching-learning process in fully online programming courses
Learning introductory programming is considered difficult for novice students. As a result, drop-out rates in programming courses are usually high. The situation is even worse when the learning environment is fully online. Therefore, teaching programming online is a great challenge.
The main goal of this research is to design, develop and test e-learning tools that support students and instructors throughout the teaching-learning process in fully online programming courses. The topics of interest of this research include, but are not limited to, the following:
• Automatic feedback generation to support students during their learning process. Feedback can include design, functionality and quality aspects, among others.
• Mechanisms that allow teachers to provide effective proof of student authenticity and authorship of programming activities in a cost-efficient manner.
• Tools that allow teachers and students to do programming assignments/activities in the cloud, e.g. Web IDE, collaborative tools, etc.
• Automatic or manual assessment tools that support teachers while they grade programming assignments, e.g. dashboards, static analyzers, rubrics, etc.
• Tools that help students to understand programming concepts more easily, e.g. tracing/debugging tools, compiler with easy messages, contextual hints, text-based screencasts, etc.
• A new instructional design (i.e. schedule, activities, assignments, tools, etc.) that helps students to acquire programming skills.
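As an illustration of the "compiler with easy messages" idea in the list above, a minimal sketch that turns a raw syntax error into a friendlier hint. Python's built-in compile() stands in for a real compiler, and the hint wording is an assumption for illustration.

```python
def friendly_compile(source):
    """Try to compile student code and translate the raw SyntaxError
    into a beginner-friendly message."""
    try:
        compile(source, "<submission>", "exec")
        return "OK: no syntax errors found."
    except SyntaxError as err:
        hint = f"Line {err.lineno}: {err.msg}."
        # Add a contextual hint for a common beginner mistake
        if err.msg and "was never closed" in err.msg:
            hint += " Check that every '(' has a matching ')'."
        return hint

print(friendly_compile("print('hello')"))  # OK: no syntax errors found.
print(friendly_compile("print('hello'"))
```

A full tool would map many more compiler diagnostics to contextual hints and link each one to the relevant course material.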
|
Mail: dgarciaso@uoc.edu
|
LAIKA |
Cognitive-affective chatbots and automated feedback for online programming courses
In online learning environments, when students study science or technical subjects, they often get stuck trying to solve a mathematical problem or are unable to identify the source of an error in the program they are developing. This slows the pace of their learning, and they may feel isolated. Having the opportunity to interact with instructors or with other online students partially solves the lack of immediacy, helps them face loneliness and lets them feel part of an academic community. However, much remains to be done to address individual problems and provide personalized learning.
In that sense, we propose PhD proposals around the following topics:
• to improve automatic feedback in programming assignments to help students complete these assignments more effectively. The idea is to extend DSLab - a self-assessment tool for programming assignments developed by our research group - with several interactive features and to validate them through authentic online learning experiences in different subjects across degrees and levels (undergraduate and master's).
• to improve the pace of students' learning by means of a cognitive-affective chatbot integrated into a chat service. This chat service is integrated into the DSLab tool and is used to promote communication between instructors and students, and among students working on the same assignment. The chatbot should automatically answer questions related to the assignment, taking the student's emotional state into account.
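The cognitive-affective idea can be sketched as a keyword-based affect detector gating the tone of a FAQ-style reply. The FAQ entries, keyword list and function names below are illustrative assumptions; a real system would use a trained emotion classifier and the actual DSLab chat integration.

```python
FRUSTRATION_WORDS = {"stuck", "frustrated", "impossible", "give up", "lost"}

def detect_frustration(message):
    """Crude keyword-based affect detection (stand-in for a trained
    sentiment/emotion classifier)."""
    return any(word in message.lower() for word in FRUSTRATION_WORDS)

def chatbot_reply(message, faq):
    """Answer assignment questions from a small FAQ, prefixing an
    encouraging sentence when the student sounds frustrated."""
    answer = next((a for q, a in faq.items() if q in message.lower()),
                  "I will forward your question to the instructor.")
    if detect_frustration(message):
        answer = "Don't worry, this is a common difficulty. " + answer
    return answer

faq = {"deadline": "The assignment is due on Friday at 23:59.",
       "submit": "Upload your solution through the DSLab interface."}
print(chatbot_reply("I'm stuck, when is the deadline?", faq))
```

The design point is the separation of affect detection from answer retrieval: either component can be swapped for a learned model without changing the other.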
|
Mail: adaradoumis@uoc.edu
Mail: jmarquesp@uoc.edu
|
DPCS-ICSO |