
Advances in Intelligent Systems and Technologies

First International Conference on Machines, Computing and Management Technologies

Artificial Intelligence for Web-based Educational Systems

Wang Dong, School of Computing, University of Washington, Seattle, WA.


Online First : 30 July 2022
Publisher Name : AnaPub Publications, Kenya.
ISSN (Online) : 2959-3042
ISSN (Print) : 2959-3034
ISBN (Online) : 978-9914-9946-0-5
ISBN (Print) : 978-9914-9946-3-6
Pages : 055-065

Abstract


Owing to the global COVID-19 pandemic of the past two years, there has been considerable debate among academics about how learners can be taught over the web while maintaining a high degree of cognitive efficiency. Students may struggle to concentrate because of the absence of teacher-student interaction, yet online learning offers benefits that conventional classrooms do not. Adaptive and Intelligent Web-based Educational Systems (AIWES) are platforms that incorporate the design of students' online courses. RLATES is an AIWES that uses reinforcement learning to build instructional strategies. This research aggregates and evaluates existing work, model classifications, and design techniques for integrated functional academic frameworks as a prerequisite to conducting research in this area, with the aim of serving as an academic reference for related fields and helping researchers access foundational materials conveniently and quickly.
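
To make the reinforcement-learning framing concrete, the sketch below applies tabular Q-learning to the choice of a tutoring action. It is a minimal illustration under assumed definitions of state (a discrete knowledge level), action set, and reward (success on an exercise), with a toy simulated student standing in for real learner interactions; it is not the RLATES implementation.

```python
# Minimal sketch: tabular Q-learning for adaptive instruction.
# The state space, actions, reward signal, and simulate_student below are
# illustrative assumptions, not the actual design of RLATES or any AIWES.
import random
from collections import defaultdict

ACTIONS = ["show_example", "give_exercise", "review_prerequisite"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = defaultdict(float)  # (knowledge_state, action) -> estimated long-term value


def choose_action(state):
    """Epsilon-greedy selection of the next tutoring action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def update(state, action, reward, next_state):
    """One Q-learning backup: Q += alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


def simulate_student(state, action):
    """Hypothetical environment: returns (reward, next_state).

    The integer state 0..4 encodes the student's knowledge level; solving
    an exercise becomes more likely as the level rises.
    """
    if action == "give_exercise" and random.random() < 0.2 * (state + 1):
        return 1.0, min(state + 1, 4)  # exercise solved: reward and advance
    return 0.0, state  # no measurable progress this step


for episode in range(500):
    state = 0  # each episode starts with a novice student
    for _ in range(20):
        action = choose_action(state)
        reward, next_state = simulate_student(state, action)
        update(state, action, reward, next_state)
        state = next_state

print({a: round(Q[(4, a)], 2) for a in ACTIONS})  # learned values at top level
```

In a deployed AIWES, the simulated student would be replaced by real learner responses and a richer student model, but the policy-improvement loop shown here is the core of the reinforcement-learning approach the abstract describes.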

Keywords


Adaptive and Intelligent Web-based Educational Systems (AIWES), Machine Learning (ML), Reinforcement Learning (RL).


Cite this article


Wang Dong, “Artificial Intelligence for Web-based Educational Systems”, Advances in Intelligent Systems and Technologies, pp. 055-065, 2022. doi:10.53759/aist/978-9914-9946-0-5_7

Copyright


© 2023 Wang Dong. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.