Neural-symbolic computing: A step toward interpretable AI in Education


 

Bulletin of the Technical Committee on Learning Technology (ISSN: 2306-0212)
Volume 21, Number 4, 2-6 (2021)
Received September 12, 2021
Accepted October 13, 2021
Published online November 14, 2021
This work is under the Creative Commons CC-BY-NC 3.0 license.


Authors:

Danial Hooshyar 1, 2, Yeongwook Yang *3

*: Corresponding Author
1: Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea
2: School of Digital Technologies, Tallinn University, Tallinn, Estonia
3: Division of Computer Science and Engineering, Hanshin University, Osan, Republic of Korea

Abstract:

The application of machine learning in education has some challenges, including data bias, sparsity, dependency on large amounts of data, and a lack of interpretability. Further, many learning methods cannot obtain additional information out of the scope of the training data. While the application of neural-symbolic computing (NSC) has been investigated in different domains, by addressing the ambiguities in data and reasoning behind the learning outputs, these frameworks have never been studied in the educational context. We argue that integrating NSC frameworks into educational contexts can be greatly important. Thus, we propose a general framework for embedding NSC paradigm into the educational context, offering a valuable starting point for a framework of interpretable AI in education. The proposed framework can facilitate providing better transparency in learning, leading to nurturing educators, practitioners, and learners’ trust in data-based decisions. Besides, it can open a door to many scholars dealing with the issue of the lack of data and data inconsistency for obtaining accurate predictive results by improving learning and reasoning.

Keywords: Interpretability, Machine learning in education, Neural-symbolic computing, Transparency in AI

I. INTRODUCTION

Machine learning (ML) has proven successful in developing models for pattern recognition in engineering. Beyond its classical applications, (deep) neural-network-based models have shown great success in domains such as speech recognition (e.g., [1]), computer vision (e.g., [2]), and natural language processing (e.g., [3]). Education has also benefited greatly from such techniques. While the prediction of students’ performance (e.g., [4]), procrastination behaviors (e.g., [5]), and grades in online courses (e.g., [6]) are interesting applications of classical machine learning techniques in education, educational researchers have begun applying (deep) neural network techniques to improve previous state-of-the-art results (e.g., for student modeling). For instance, Piech et al. [7] modeled student learning as a knowledge tracing problem using Recurrent Neural Networks (RNNs), achieving higher predictive capability than Bayesian knowledge tracing models. The reason for such success is the data-driven nature of these approaches, with models learning from examples. Nonetheless, these data-based methods can also produce unsatisfactory outcomes or reach their limits. The most obvious scenario is a lack of sufficient data to train generalized, well-performing models. Conversely, some areas contain large amounts of heterogeneous data, which require further knowledge-description sources. More importantly, purely data-driven models may fail to meet necessary constraints, such as those dictated by learning principles or educational guidelines. During the past decade, many governmental institutes have funded the promotion of machine learning and AI, although the effectiveness of educational programs and research results can be difficult to assess due to the poor explainability and limited reproducibility of ML-based computations [8], [9].
This results in limitations in the systems’ accountability and can lead to challenges such as poor reliability and an overall lack of trustworthiness. Thus, there is an urgent need for such models to be explainable and interpretable.

These challenges have resulted in growing research around improving the accuracy of ML-based models and making them interpretable. A prospective solution is neural-symbolic computing (NSC), which aims to integrate robust neural learning and sound symbolic reasoning. The NSC framework revolves around grounding symbolic knowledge into the learning process of neural networks and translating implicit knowledge from the networks to explicit, interpretable symbolic knowledge [10], [11]. This insertion of domain knowledge aims to provide the network with an alternative or additional pre-training, whereas extraction of knowledge after the training offers an explanation of the decision. For instance, Diligenti, Roychowdhury, and Gori [12] and Karpatne et al. [13] have added logic rules and algebraic equations as constraints to the loss function of neural networks.
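The constraint-based injection described above can be sketched in a few lines of Python. The rule (mastery of an advanced skill implies mastery of its prerequisite), the penalty weight, and all values below are our own illustrative assumptions, not the cited implementations:

```python
# Sketch: injecting a logic rule into a loss function as a soft penalty,
# in the spirit of the constraint-based approaches above. The rule, the
# penalty weight, and all values are illustrative assumptions.
import math

def cross_entropy(p, y):
    # Standard binary cross-entropy for one prediction p against label y.
    eps = 1e-9
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def rule_penalty(p_prereq, p_advanced):
    # Soft form of the rule "mastery of an advanced skill implies mastery
    # of its prerequisite": penalize predicting the advanced skill as
    # more mastered than the prerequisite.
    return max(0.0, p_advanced - p_prereq)

def constrained_loss(preds, labels, lam=1.0):
    # preds/labels: (prerequisite, advanced) pairs of probabilities/targets.
    data_loss = sum(cross_entropy(p, y) for p, y in zip(preds, labels))
    return data_loss + lam * rule_penalty(preds[0], preds[1])

# A rule-violating prediction (advanced mastered, prerequisite not) is
# pushed toward a higher loss than a rule-consistent one.
assert rule_penalty(0.2, 0.9) > rule_penalty(0.9, 0.2) == 0.0
```

During training, gradient descent on such a loss trades off fit to the data against consistency with the rule, which is the mechanism the cited works exploit.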

While the application of NSC has been thoroughly investigated in different domains (such as biology, game AI, and engineering) to deal with a lack of data and data inconsistency and to improve reasoning and interpretability, such frameworks have not been studied in the educational context. Given that educational studies have frequently employed (deep) neural networks in their predictive models (e.g., [14]) and that these challenges can seriously degrade the predictive power of educational models, we argue that integrating NSC frameworks into the educational context can be greatly important. Thus, we propose a general framework for integrating the NSC paradigm into educational applications (e.g., predictive models), which could offer a valuable starting point for a framework of interpretable AI in education. Applying this framework in education would not only nurture educators’, practitioners’, and learners’ trust in data-driven decisions, but also assist researchers dealing with a lack of data and data inconsistency in obtaining accurate predictive results.

The structure of this article is as follows: Section two presents related research on machine learning in education and neural-symbolic computing; Section three puts forward the proposed framework for applying neural-symbolic computing to education; and Section four provides discussion and future work.

Fig. 1. Overall framework for integration of neural-symbolic computing in Education.

II. LITERATURE REVIEW

A. Machine Learning in Education

The use of data science in education has eased the collection and mining of educational data to unveil learning needs and patterns. This can further help to offer personalized feedback and learning paths to learners or provide them with tailored interventions and learning strategies [15], [16]. To achieve such goals, advanced computational technologies, such as machine learning and artificial intelligence, are key. Machine learning (ML), among the most rapidly growing methods at the core of data science and AI, has proven to be one of the most effective techniques for optimizing teaching and learning (e.g., [17]). ML aims to build computer systems that can automatically learn from past experience without explicit programming [18]. Employing ML in education provides new insights, recommendations, and predictions regarding individualized learning, classifying patterns and profiles, and more. Examples of ML applications in education include modeling students’ emotional reactions (e.g., [19]–[21]), knowledge, and abilities (e.g., [22]); automatic generation of hints and feedback (e.g., [23], [24]); prediction of students’ performance and grades (e.g., [25], [26]); early detection of students at risk of failure (e.g., [27]); and prediction of procrastination behaviors (e.g., [5]).

Nonetheless, there are two main challenges in applying ML in education: the unavailability of enough training data and a lack of interpretability. Depending on the approach used and the parameter being predicted, ML methods require different amounts of data to make highly accurate predictions. To develop high-performing models, many existing ML-based methods call for large amounts of training data. However, a large amount of data does not ensure a highly accurate prediction. The main issue is that, regardless of the amount of data provided for training, purely data-driven models may fail to meet necessary constraints, such as those dictated by learning principles or educational guidelines. The other challenge is the inability of most ML-based models to explain their decisions. Like other studies (e.g., [8]), we argue that these and other methods in the educational context should feature interpretability, so that educators, learners, and decision-makers can understand the reasoning behind actions or decisions. Such explanations would improve pedagogical effectiveness by helping learners understand why the system decided to recommend a specific topic, or why their answer is incorrect. Further, they can nurture educators’, practitioners’, and learners’ trust in such decisions, and bring about their willingness to follow them (e.g., [28]).

B. Neural-Symbolic Computing

Neural-symbolic computing (NSC), an active branch of AI research, combines the two most important cognitive abilities of learning from experiences and reasoning from what has been learned. To this goal, NSC reconciles the symbolic and connectionist paradigms of AI by presenting knowledge in a symbolic form and using neural networks for learning and reasoning [29]. In other words, NSC allows robust learning and inferences in neural networks, accompanied by reasoning with logical systems and interpretability offered by symbolic knowledge extraction.

Recently, researchers have developed numerous NSC methods to address the challenges of ML and AI, including explainability and a lack of training data. For instance, Tran and Garcez [30] introduced confidence rules, a new method for representing propositional formulas in deep neural networks and restricted Boltzmann machines aimed at an easier learning and reasoning process. In their approach, prior knowledge in the form of rules was translated into the network as weights, allowing the parameters or structure of the network to be regulated in order to guide learning. Their experiments on image-processing tasks revealed that the approach can improve the accuracy of deep belief networks. Hu et al. [31] presented a general distillation framework for transferring knowledge (in the form of first-order logic rules) to neural networks, wherein first-order logic constraints are integrated via posterior regularization. Their approach was similar to that of Tran and Garcez [30] in that it injects prior knowledge into neural networks’ weights. However, it differed from previous work in using a process of iterative rule distillation to systematically embed structured knowledge into neural networks’ parameters. In a different attempt, Serafini and Garcez [32] proposed a framework for learning and reasoning that translated logical statements into the loss function rather than the network architecture. Their experiments show that it can be efficiently used for knowledge-completion and data-prediction tasks. Li and Srikumar [33] introduced constraints into the architecture of neural networks by translating first-order logic into differentiable network components without additional learnable parameters. Experiments with their framework on natural language processing tasks revealed that it successfully equips neural models with additional domain knowledge during learning and prediction, especially when training data is limited. A general overview of neural-symbolic computing methods is presented by Garcez et al. [11]. Although NSC methods have been extensively studied in different fields and their effectiveness in handling a lack of interpretability and data has been frequently reported, their application in education remains limited.
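Translating logic into differentiable components rests on relaxing Boolean connectives into arithmetic over [0, 1] truth values. The sketch below uses the product t-norm, one common choice; it is a minimal illustration of the idea, not the exact construction of any of the cited works:

```python
# Sketch: relaxing a logical implication A -> B into a differentiable
# truth score, as done when compiling logic into network components.
# The product t-norm is used here; it is one design choice among several.

def soft_not(a):         # NOT A  ->  1 - a
    return 1.0 - a

def soft_and(a, b):      # A AND B  ->  a * b  (product t-norm)
    return a * b

def soft_implies(a, b):  # A -> B  ==  NOT(A AND NOT B)  ->  1 - a*(1 - b)
    return soft_not(soft_and(a, soft_not(b)))

# Fully satisfied when the antecedent is false or the consequent is true:
assert soft_implies(0.0, 0.3) == 1.0
assert soft_implies(1.0, 1.0) == 1.0
# Fully violated when A holds but B does not:
assert soft_implies(1.0, 0.0) == 0.0
```

Because `soft_implies` is built from multiplications and subtractions, its violation (one minus its value) can serve directly as a differentiable loss term or as a network component, which is the core trick behind such compilations.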

III. NEURAL-SYMBOLIC COMPUTING FOR EDUCATION: A FRAMEWORK

According to Garcez et al. [11] and Bader and Hitzler [34], neural-symbolic systems can be constructed by integrating four components: knowledge representation, learning, reasoning, and knowledge extraction. Knowledge representation deals with the translation of symbolic (background) knowledge into the network; learning and reasoning revolve around gaining additional knowledge from examples (and generalization) by the network and executing the network; and knowledge extraction involves extracting symbolic knowledge from the network. While there may be no solid consensus on the crucial components of NSC, several researchers agree that knowledge representation is the cornerstone of NSC, and that successful reasoning (or problem-solving) depends on an appropriate knowledge representation (e.g., [35]).

Inspired by the ML framework introduced by von Rueden et al. [36], this research proposes a generalizable framework for the integration of NSC in the educational context. Fig. 1 shows the proposed framework’s overall architecture. Aside from the available training data, prior knowledge of different forms (such as logic rules and knowledge graphs) is integrated into the machine learning process. Conventional ML approaches merely feed training data into a machine learning process, which produces the final hypothesis that solves the task using a model; any knowledge is used at the pre-processing stage, particularly in feature engineering. In NSC, by contrast (represented by dashed arrows in Fig. 1), the domain knowledge is pre-existent and separate from the training data, constituting a second source of information that is usually translated into the machine learning process through explicit interfaces. This knowledge can be integrated by regulating a neural network’s architecture and hyper-parameters (e.g., selecting a model structure) or by guiding the loss function (e.g., by designing an appropriate regularizer). Note that merely integrating knowledge without training data, or using prior knowledge to augment training data, does not constitute knowledge representation in NSC.

First, a translation algorithm is used to inject the symbolic knowledge into the loss function or initial architecture of the neural network. For instance, bottom-clause propositionalization can convert symbolic knowledge into propositional clauses, which are then embedded into the network by setting its weights and biases. Second, using a neural learning algorithm, the network is trained with the training data, revising the theory given by the prior background knowledge. Inductive logic programming is among the methods in neural-symbolic learning that benefit from NSC’s learning capability to automatically build a logic program from examples (e.g., bottom-up logic programming). Thereafter, reasoning can take place within the network through model-based or theorem-proving techniques, enabling the network to act as a powerful parallel system for computing the logical consequences of the knowledge or theory. The information gained through the computation that takes place during learning and reasoning can be employed to optimize the network and better represent the problem domain; thus, inconsistencies between the training examples and domain knowledge can be resolved. Third, the extraction of revised symbolic knowledge represents the result of the training. As with the insertion of rules, the extraction algorithm must be demonstrably correct, so that any extracted rule is guaranteed to be a rule of the network. Finally, an expert can analyze the extracted knowledge and decide whether to feed it back to the system, closing the NSC cycle.
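The insertion and extraction steps above can be sketched for a single propositional rule. The encoding below follows the classic idea of compiling a conjunction into a threshold unit, with an intentionally trivial extraction routine; the rule ("mastered(A) AND mastered(B) implies ready(C)") and all values are illustrative:

```python
# Sketch of the insertion/extraction cycle for one propositional rule:
# encode "mastered(A) AND mastered(B) -> ready(C)" as a threshold unit,
# then read the same rule back from the weights. Values are illustrative.

def encode_conjunction(n_antecedents, w=1.0):
    # Encode an n-ary AND as weights and a bias: the unit fires only
    # when all n antecedents are active.
    weights = [w] * n_antecedents
    bias = -w * (n_antecedents - 0.5)   # threshold between n-1 and n
    return weights, bias

def fire(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def extract_rule(weights, bias):
    # Read the rule back: an input is a necessary antecedent if switching
    # it off alone stops the unit from firing. Trivial here, since the
    # encoding is exact; real extraction algorithms must cope with
    # trained, noisy weights.
    n = len(weights)
    return [i for i in range(n)
            if fire([1] * n, weights, bias) == 1
            and fire([1 if j != i else 0 for j in range(n)], weights, bias) == 0]

weights, bias = encode_conjunction(2)
assert fire([1, 1], weights, bias) == 1   # both antecedents mastered
assert fire([1, 0], weights, bias) == 0   # C not ready otherwise
assert extract_rule(weights, bias) == [0, 1]
```

In a full NSC system, the encoded unit would be refined by gradient descent on training data before extraction, so the read-back rule can differ from the inserted one, which is exactly how the theory gets revised.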

An example of the application of the proposed framework in education is the prediction of students’ risk of failure in online learning. In such systems, extensive data on students’ patterns of online learning behavior, their frequency and intensity, their change over time, and so on are employed to model students’ learning behavior. Once the model is trained on these data, the system can predict students’ risk of failure (hopefully in time), so that educators and parents can intervene when necessary. However, concerns remain regarding the predictions of such models: “Why should I follow these specific interventions?”, “Why am I the one at risk of failure?”, and so forth. If the trained model can show the path to its decisions (e.g., a decision tree), one may observe that a specific student is tagged as at risk of failure mainly because he or she is an introvert and did not engage in chatting with peers in the learning environment. Such inaccurate decisions or predictions could have detrimental effects on some students and their parents. By including symbolic knowledge in the model (e.g., on students’ personality traits), neural-symbolic computing would eliminate the need for large amounts of data that may fail to capture the necessary educational guidelines or may yield impractical rules. It therefore offers highly accurate predictions that respect the desired educational rules or patterns with minimal domain-specific training, while also explaining its predictions.
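The risk-of-failure scenario can be made concrete with a toy sketch in which a symbolic rule about personality traits guards a (stubbed) neural score, and the decision carries its own explanation. Every feature name, threshold, and the rule itself is hypothetical:

```python
# Illustrative sketch of the risk-prediction scenario: a symbolic rule
# about personality traits guards a stubbed neural score, and the
# decision carries its own explanation. All names, thresholds, and the
# rule are hypothetical.

def neural_risk_score(features):
    # Stand-in for a trained network's output in [0, 1].
    return min(1.0, 0.2 * features["missed_deadlines"]
                    + 0.5 * (1.0 - features["chat_activity"]))

def predict_with_knowledge(features):
    score = neural_risk_score(features)
    # Symbolic knowledge: low chat activity alone must not raise risk
    # for introverted students who meet their deadlines.
    if features["introvert"] and features["missed_deadlines"] == 0:
        return 0.0, "low chat activity discounted: student is introverted and meets deadlines"
    return score, "risk from missed deadlines and low engagement"

score, why = predict_with_knowledge(
    {"introvert": True, "chat_activity": 0.1, "missed_deadlines": 0})
assert score == 0.0 and "introverted" in why
```

A real NSC system would encode such a rule into the network or loss rather than as a hard post-hoc guard, but the sketch shows the intended behavior: the symbolic knowledge both corrects the prediction and supplies the explanation.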

Another example is a teacher trying to identify suitable students for admission to the Physics Department. After training, some models may give more weight to students’ grades in different courses while ignoring the relationships between the courses and the admission guidelines. In this situation, the relationships between the subjects can be given to the model as symbolic knowledge, so that the model can reason over this knowledge to make more accurate decisions or predictions.

Finally, Fig. 2 illustrates the application of NSC to a visual question answering problem (taken from [37]). In this example, a conventional system requires a huge amount of supervision and training data to answer the question: the predictive model takes the question as input and maps it to an answer using end-to-end reasoning, ignoring the fact that concepts and reasoning are entangled. This makes it hard not only to fully solve the task, but also to transfer to similar tasks. NSC, on the other hand, can relax such challenges and learn to solve the task by bridging the gap between symbolic and sub-symbolic methods (in other words, by marrying learning and reasoning). It first employs neural learning (a convolutional neural network, or CNN) to parse the scene into a structured representation of object characteristics such as shape and color. Thereafter, it uses a recurrent neural network (RNN) to semantically parse the question and generate a symbolic program (i.e., filter and query). Finally, it performs symbolic reasoning by running the program on the structured representation, filtering for the red object and querying its shape to answer the question.
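The final symbolic-reasoning step of Fig. 2 can be sketched directly. The scene and program below are hard-coded stand-ins for the outputs of the CNN and RNN described above:

```python
# Sketch of the symbolic-reasoning step in Fig. 2: once neural modules
# have produced a structured scene and a program (both hard-coded here,
# standing in for the CNN and RNN outputs), the answer comes from
# ordinary symbolic execution.

scene = [  # structured representation the scene parser would produce
    {"shape": "cube",   "color": "red"},
    {"shape": "sphere", "color": "blue"},
]
program = [("filter", "color", "red"), ("query", "shape")]  # parsed question

def execute(scene, program):
    objects = scene
    for op, *args in program:
        if op == "filter":           # keep objects matching attribute=value
            attr, value = args
            objects = [o for o in objects if o[attr] == value]
        elif op == "query":          # read an attribute off the remaining object
            (attr,) = args
            return objects[0][attr]

assert execute(scene, program) == "cube"  # "What shape is the red object?"
```

Because the reasoning is an explicit program over explicit symbols, every answer comes with a traceable derivation, which is precisely the interpretability benefit the figure illustrates.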

Fig. 2. Solving a visual question answering task using NSC.

IV. DISCUSSION AND FUTURE WORK

Machine learning plays an important role in educational technology through cognitive and non-cognitive student modeling. However, the application of machine-learning-based models in education poses two main challenges, namely the unavailability of enough training data and a lack of interpretability. Regarding the former, although many existing ML-based methods call for large amounts of educational data for training, a purely data-driven model may not meet specific constraints, such as those dictated by learning principles or educational guidelines. The latter stresses the fact that a lack of interpretability and explainability could cost educators’, practitioners’, and learners’ trust in decisions made by machine-learning-based models. Borrowing ideas from other fields, we introduce the concept of neural-symbolic computing (NSC) to the educational context. NSC systems combine symbolic and neural models, mitigating the weaknesses of symbolic systems through neural learning and inference under uncertainty. They also bring better reasoning and explanation to neural networks by addressing their main issues, namely forgetting and difficulty with extrapolation. Employing NSC has proven to relax both of the above-mentioned challenges, as prior knowledge can be translated and injected into the network so that it learns and reasons better with less data. Conversely, extraction techniques can translate the knowledge in the network into interpretable and explainable rules. Such explainability would help educators, decision-makers, and learners understand the computation behind a decision and evaluate the system’s outputs. Employing NSC in education would also facilitate providing knowledge descriptions for application areas with large amounts of heterogeneous data (e.g., when video and audio data are tagged with ontological metadata, or in multimodal data fusion for information retrieval).

NSC can be employed in the educational context to solve tasks, such as image and video processing, natural language processing, sequential tasks, etc. In terms of natural language processing tasks, such methods can be employed for improving procedural text comprehension or relationship extraction (e.g., [38]). Regarding video or image processing tasks, NSC approaches can be used for emotional understanding or explainable video action reasoning (e.g., [39]). Regarding sequential tasks, these methods can improve the existing (deep) knowledge tracing approach, as some educational principles can be fed to the model while monitoring and tracing learners’ knowledge through sequential models such as RNNs.
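Feeding an educational principle into a sequential model, as suggested for knowledge tracing above, can be sketched as a penalty over a predicted mastery trace. The principle (mastery should not drop right after a correct answer) and its weighting are illustrative assumptions:

```python
# Sketch of feeding an educational principle into a sequential model:
# penalize knowledge-tracing outputs whose mastery estimate drops right
# after a correct answer. Principle and values are illustrative.

def principle_penalty(mastery, correct):
    # mastery[t]: estimated mastery after step t; correct[t]: 1 if the
    # answer at step t was correct. Sum of drops that follow a correct
    # answer; adding this to the training loss discourages such traces.
    return sum(max(0.0, mastery[t] - mastery[t + 1])
               for t in range(len(mastery) - 1) if correct[t + 1] == 1)

# A trace whose mastery dips after a correct answer is penalized ...
assert principle_penalty([0.4, 0.6, 0.5], [1, 1, 1]) > 0
# ... while a monotone trace is not.
assert principle_penalty([0.4, 0.6, 0.7], [1, 1, 1]) == 0.0
```

Applied to the per-step outputs of an RNN-based knowledge tracer, such a term would play the same role as the loss-function constraints discussed in Section II-B.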

Further, NSC can contribute to educational data mining and learning analytics, which often employ machine learning models to predict students’ performance, achievement in online courses, procrastination, and risk of course failure; to student modeling, which is employed in educational recommender systems and knowledge tracing; to open learner models (OLMs), which provide information on learners’ level of knowledge, motivation, and competencies (e.g., [40]); and more. For instance, NSC has the potential to improve both the accuracy of educational prediction models and their interpretability by providing a separate source of domain knowledge to the network while it is trained on the available data, and by extracting hidden knowledge from the predictive models. In a similar vein, NSC can be employed in knowledge transfer learning, extracting symbolic knowledge from one domain and transferring it to another when domain knowledge is lacking [41].

NSC methods can also relax a long-standing issue of OLMs, namely their inability to transparently explain the assessment mechanism by which learners’ knowledge is evaluated. As reported by Hooshyar et al. [40], aside from opening learner models to feedback information on learners’ knowledge and competency level, transparency or explainability could be educationally valuable in its own right. For instance, it would be pedagogically useful to transparently explain how the model derives information and the assessment mechanism it employs to infer the respective information in the domain model.

In conclusion, more research is required to provide a comprehensive framework for interpretable machine learning and AI. However, our proposed framework, which is based on empirical findings gleaned from work on NSC in other fields, has implications for the interpretability of AI models and can thus be considered a step toward interpretable machine learning and AI in education. As future work, we plan to apply an NSC-based method to educational data and evaluate its effectiveness in the educational context.

References

[1] L. Deng, G. Hinton, and B. Kingsbury, “New types of deep neural network learning for speech recognition and related applications: An overview,” in 2013 IEEE international conference on acoustics, speech and signal processing, 2013, pp. 8599–8603, DOI: 10.1109/ICASSP.2013.6639344.

[2] L. Hongtao and Z. Qinchuan, “Applications of deep convolutional neural network in computer vision,” Journal of Data Acquisition and Processing, vol. 31, no. 1, pp. 1–17, 2016.

[3] S. Lee, D. Lee, D. Hooshyar, J. Jo, and H. Lim, “Integrating breakdown detection into dialogue systems to improve knowledge management: encoding temporal utterances with memory attention,” Information Technology and Management, vol. 21, no. 1, pp. 51–59, 2020, DOI: 10.1007/s10799-019-00308-x.

[4] M. Kumar and Y. K. Salal, “Systematic review of predicting student’s performance in academics,” Int. J. of Engineering and Advanced Technology, vol. 8, no. 3, pp. 54–61, 2019.

[5] D. Hooshyar, M. Pedaste, and Y. Yang, “Mining Educational Data to Predict Students’ Performance through Procrastination Behavior,” Entropy, vol. 22, no. 1, pp. 1–24, Jan. 2020, DOI: 10.3390/e22010012.

[6] D. Hooshyar and Y. Yang, “Predicting Course Grade through Comprehensive Modelling of Students’ Learning Behavioral Pattern,” Complexity, vol. 2021, pp. 1–12, 2021, DOI: 10.1155/2021/7463631.

[7] C. Piech et al., “Deep Knowledge Tracing,” Jun. 2015, Available: http://arxiv.org/abs/1506.05908v1

[8] C. Conati, K. Porayska-Pomsta, and M. Mavrikis, “AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling,” Jun. 2018, arXiv preprint arXiv:1807.00154.

[9] H. Hagras, “Toward human-understandable, explainable AI,” Computer, vol. 51, no. 9, pp. 28–36, 2018.

[10] A. S. d’Avila Garcez, K. B. Broda, and D. M. Gabbay, Neural-symbolic learning systems: foundations and applications. London: Springer London, 2002, DOI: 10.1007/978-1-4471-0211-3.

[11] A. d’Avila Garcez, M. Gori, L. C. Lamb, L. Serafini, M. Spranger, and S. N. Tran, “Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning,” May 2019, Available: http://arxiv.org/abs/1905.06088v1

[12] M. Diligenti, S. Roychowdhury, and M. Gori, “Integrating prior knowledge into deep learning,” in 2017 16th IEEE international conference on machine learning and applications (ICMLA), 2017, pp. 920–923, DOI: 10.1109/ICMLA.2017.00-37.

[13] A. Karpatne, W. Watkins, J. Read, and V. Kumar, “Physics-guided neural networks (PGNN): An application in lake temperature modeling,” Oct. 2017. arXiv preprint arXiv:1710.11431.

[14] A. Hernández-Blanco, B. Herrera-Flores, D. Tomás, and B. Navarro-Colorado, “A systematic review of deep learning approaches to educational data mining,” Complexity, vol. 2019, 2019, DOI: 10.1155/2019/1306039.

[15] C. R. Cook, S. P. Kilgus, and M. K. Burns, “Advancing the science and practice of precision education to enhance student outcomes,” Journal of School Psychology, vol. 66, pp. 4–10, 2018. DOI: 10.1016/j.jsp.2017.11.004.

[16] O. H. Lu, A. Y. Huang, J. C. Huang, A. J. Lin, H. Ogata, and S. J. Yang, “Applying learning analytics for the early prediction of Students’ academic performance in blended learning,” Journal of Educational Technology & Society, vol. 21, no. 2, pp. 220–232, 2018.

[17] C. A. Bacos, “Machine learning and education in the human age: A review of emerging technologies,” in Science and Information Conference, pp. 536–543, 2019, DOI: 10.1007/978-3-030-17798-0_43.

[18] M. I. Jordan and T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science, vol. 349, no. 6245, pp. 255–260, 2015, DOI: 10.1126/science.aaa8415.

[19] C. Conati and H. Maclaren, “Empirically building and evaluating a probabilistic model of user affect,” User Modeling and User-Adapted Interaction, vol. 19, no. 3, pp. 267–303, 2009, DOI: 10.1007/s11257-009-9062-8.

[20] N. Bosch et al., “Detecting student emotions in computer-enabled classrooms,” in the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), 2016, pp. 4125–4129.

[21] H. Monkaresi, N. Bosch, R. A. Calvo, and S. K. D’Mello, “Automated detection of engagement using video-based estimation of facial expressions and heart rate,” IEEE Transactions on Affective Computing, vol. 8, no. 1, pp. 15–28, 2016, DOI: 10.1109/TAFFC.2016.2515084.

[22] M. Mavrikis, “Modelling student interactions in intelligent learning environments: constructing bayesian networks from data,” International Journal on Artificial Intelligence Tools, vol. 19, no. 06, pp. 733–753, 2010. DOI: 10.1142/S0218213010000406.

[23] J. Stamper, M. Eagle, T. Barnes, and M. Croy, “Experimental evaluation of automatic hint generation for a logic tutor,” International Journal of Artificial Intelligence in Education, vol. 22, no. 1–2, pp. 3–17, 2013, DOI: 10.3233/JAI-130029.

[24] L. Fratamico, C. Conati, S. Kardan, and I. Roll, “Applying a framework for student modeling in exploratory learning environments: Comparing data representation granularity to handle environment complexity,” International Journal of Artificial Intelligence in Education, vol. 27, no. 2, pp. 320–352, 2017, DOI: 10.1007/s40593-016-0131-y.

[25] Y. Yang, D. Hooshyar, M. Pedaste, M. Wang, Y.-M. Huang, and H. Lim, “Predicting course achievement of university students based on their procrastination behaviour on Moodle,” Soft Computing, vol. 24, no. 24, pp. 18777–18793, 2020, DOI: 10.1007/s00500-020-05110-4.

[26] Y. Yang, D. Hooshyar, M. Pedaste, M. Wang, Y.-M. Huang, and H. Lim, “Prediction of students’ procrastination behaviour through their submission behavioural pattern in online learning,” Journal of Ambient Intelligence and Humanized Computing, pp. 1–18, 2020, DOI: 10.1007/s12652-020-02041-8.

[27] O. Sukhbaatar, T. Usagawa, and L. Choimaa, “An artificial neural network based early prediction of failure-prone students in blended learning course,” International Journal of Emerging Technologies in Learning (iJET), vol. 14, no. 19, pp. 77–92, 2019.

[28] V. Kostakos and M. Musolesi, “Avoiding pitfalls when using machine learning in HCI studies,” Interactions, vol. 24, no. 4, pp. 34–37, 2017, DOI: 10.1145/3085556.

[29] T. R. Besold et al., “Neural-Symbolic Learning and Reasoning: A Survey and Interpretation,” Nov. 2017, Available: http://arxiv.org/abs/1711.03902v1

[30] S. N. Tran and A. S. d’Avila Garcez, “Deep logic networks: Inserting and extracting knowledge from deep belief networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 2, pp. 246–258, 2016, DOI: 10.1109/TNNLS.2016.2603784.

[31] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing, “Harnessing deep neural networks with logic rules,” Aug. 2020,  arXiv preprint arXiv:1603.06318.

[32] L. Serafini and A. S. d’Avila Garcez, “Learning and reasoning with logic tensor networks,” in Conference of the Italian Association for Artificial Intelligence, 2016, pp. 334–348, DOI: 10.1007/978-3-319-49130-1_25.

[33] T. Li and V. Srikumar, “Augmenting neural networks with first-order logic,” August 2020, arXiv preprint arXiv:1906.06298.

[34] S. Bader and P. Hitzler, “Dimensions of neural-symbolic integration-a structured survey,” Nov. 2005, arXiv preprint cs/0511042.

[35] J. Townsend, T. Chaton, and J. M. Monteiro, “Extracting relational explanations from deep neural networks: A survey from a neural-symbolic perspective,” IEEE transactions on neural networks and learning systems, vol. 31, no. 9, pp. 3456–3470, 2020, DOI: 10.1109/TNNLS.2019.2944672.

[36] L. von Rueden et al., “Informed Machine Learning–A Taxonomy and Survey of Integrating Knowledge into Learning Systems,” May 2021, arXiv preprint arXiv:1903.12394.

[37] M. Jiayuan et al., “The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision,” Apr. 2019, arXiv preprint arXiv:1904.12584.

[38] X. Du et al., “Be consistent! improving procedural text comprehension using label consistency,” Jun. 2019, arXiv preprint arXiv:1906.08942.

[39] T. Zhuo, Z. Cheng, P. Zhang, Y. Wong, and M. Kankanhalli, “Explainable Video Action Reasoning via Prior Knowledge and State Transitions,” Aug. 2019, Available: http://arxiv.org/abs/1908.10700v1

[40] D. Hooshyar, M. Pedaste, K. Saks, Ä. Leijen, E. Bardone, and M. Wang, “Open learner models in supporting self-regulated learning in higher education: A systematic literature review,” Computers & Education, vol. 154, Sep. 2020, DOI: 10.1016/j.compedu.2020.103878.

[41] D. L. Silver, “On Common Ground: Neural-Symbolic Integration and Lifelong Machine Learning,” in Proceedings of the 9th International Workshop on Neural-Symbolic Learning and Reasoning NeSy13, 2013, pp. 41–46.


This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2004868), and the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques).


Authors


Danial Hooshyar  

Received his Ph.D. degree from the Artificial Intelligence Department at the University of Malaya, Malaysia, in 2016. He worked as a research professor in the Department of Computer Science and Engineering at Korea University for nearly four years. He is currently an associate professor of learning analytics. His research interests include artificial intelligence in education, adaptive educational systems, and educational technology.



Yeongwook Yang 

Received his master’s degree in computer science education and his Ph.D. from the Department of Computer Science and Engineering at Korea University, Seoul, South Korea. He worked as a research professor in the same department for one year and was a senior researcher at the University of Tartu, Tartu, Estonia. He is currently an assistant professor in the Division of Computer Engineering at Hanshin University. His research interests include information filtering, recommender systems, educational data mining, and deep learning.