Towards Encoding Background Knowledge with Temporal Extent into Neural Networks

Han The Anh, Nuno C. Marques

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Neuro-symbolic integration merges background knowledge and neural networks to build a more effective learning system, using the Core Method to encode rules. This method, however, has several drawbacks when dealing with rules that have temporal extent. First, it demands an interface with the world that buffers the input patterns so they can be presented all at once. This imposes a rigid limit on the duration of patterns and further requires that all input vectors have the same length, which is troublesome in domains, such as language, where one would like comparable representations for patterns of variable length. Second, it does not conveniently allow dynamic insertion of rules. Finally, and most seriously, it cannot encode rules whose preconditions are satisfied at non-deterministic time points, an important class of rules. This paper presents novel methods for encoding such rules, thereby improving and extending the power of state-of-the-art neuro-symbolic integration.
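To make the baseline concrete, the Core Method referred to above translates each rule of a propositional logic program into one hidden threshold unit, with each atom's output unit acting as an OR over the hidden units of the rules for that atom; one forward pass then computes one application of the immediate-consequence operator T_P. The following is a minimal sketch of that standard construction (not of the paper's temporal extensions); all function names and the example program are illustrative assumptions.

```python
# Minimal sketch of the classical Core Method: one hidden threshold
# unit per rule, one output unit per atom. Names are illustrative.

def step(x, theta):
    """Binary threshold activation: fires iff the input reaches theta."""
    return 1 if x >= theta else 0

def make_rule_unit(pos, neg):
    """Hidden unit for a rule  head <- pos atoms, not neg atoms.

    With weights +1 on positive and -1 on negated body atoms and
    threshold |pos| - 0.5, the unit fires iff every positive
    precondition is active and no negated one is.
    """
    theta = len(pos) - 0.5
    def unit(state):
        s = sum(state[a] for a in pos) - sum(state[a] for a in neg)
        return step(s, theta)
    return unit

def tp_step(rules, atoms, state):
    """One forward pass = one application of T_P: an atom becomes
    true iff some hidden unit of one of its rules fires on `state`."""
    new = {}
    for a in atoms:
        units = [make_rule_unit(p, n) for (h, p, n) in rules if h == a]
        new[a] = step(sum(u(state) for u in units), 0.5)
    return new

# Illustrative program:  a <- b, not c.   b <- (a fact).
rules = [("a", ["b"], ["c"]), ("b", [], [])]
atoms = ["a", "b", "c"]
state = {"a": 0, "b": 0, "c": 0}
state = tp_step(rules, atoms, state)  # the fact b becomes true
state = tp_step(rules, atoms, state)  # a follows from b and not c
```

Note that every precondition here is read off a single buffered input vector at once, which is exactly the limitation the abstract points to: rules whose preconditions become satisfied at different, non-deterministic time points have no place in this static encoding.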
Original language: English
Title of host publication: Knowledge Science, Engineering and Management. KSEM 2010
Editors: Y. Bi, M. A. Williams
ISBN (Electronic): 9783642152801
Publication status: Published - 2010
Event: Knowledge Science, Engineering and Management 2010 - Belfast, United Kingdom
Duration: 1 Sept 2010 - 3 Sept 2010

Publication series

Name: Lecture Notes in Computer Science


Conference: Knowledge Science, Engineering and Management 2010
Abbreviated title: KSEM 2010
Country/Territory: United Kingdom


