Semantic and Episodic Learning to Integrate Diverse Opportunities for Life-Long Learning
2016; MODSIM; Folsom-Kovarik, J.T.; Jones, R.M.; Schmorrow, D.
The Advanced Distributed Learning (ADL) Initiative has developed the Training and Learning Architecture (TLA) with the goal of using information technology to change the paradigm of education from occasional classroom study and training to pervasive, lifelong activity. TLA views education content providers as services that produce educational content relevant to particular learning needs and contexts. A content brokering service assesses an individual learner’s current learning needs and recommends content that is suitable to those needs and also appropriate to the learner’s current situation (e.g., recommending a particular podcast when the learner is driving). Key technologies for the TLA vision are a Learning Record Store (LRS), which stores a continuously updated record of learning activity and outcomes, and a content meta-tagging language that maps particular educational tools and content to specific situations. One challenge associated with content meta-tagging is that it requires significant manual effort, especially as content and technologies change. TLA also does not yet have a capability to identify new meta-tags or relationships between existing tags, so it may miss some opportunities for effective instruction. We describe a new research effort called FLUENT (Fast Learning from Unlabeled Episodes for Next-generation Tailoring), which will learn new tags and relationships to improve the overall coverage and effectiveness of content delivery in TLA. FLUENT will use a hybrid machine-learning approach that includes episodic learning, heuristic search based on analogical mapping, and an explanation-based learning capability that uses a background knowledge base of causation in instruction to discover relationships from examples. This knowledge-based learning approach will allow effective learning in a domain where statistical learning methods would suffer from the sparse data available.
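The content-brokering idea described in the abstract can be sketched minimally: content items carry meta-tags, and a broker recommends items that cover a current learning need while satisfying the learner's situational constraints (such as audio-only delivery while driving). The class names, tag vocabulary, and matching rule below are illustrative assumptions, not the TLA meta-tagging language itself.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """A piece of educational content with hypothetical meta-tags."""
    title: str
    teaches: set    # learning objectives this item addresses
    requires: set   # situational capabilities needed to consume it

def recommend(catalog, needs, situation):
    """Return items that address at least one current learning need
    and whose situational requirements the learner's context meets."""
    return [
        item for item in catalog
        if item.teaches & needs and item.requires <= situation
    ]

catalog = [
    ContentItem("Navigation podcast", {"map-reading"}, {"audio"}),
    ContentItem("Map-plotting exercise", {"map-reading"}, {"screen", "hands"}),
]

# A learner who is driving has audio available but not a screen or free hands,
# so only the podcast is recommended.
picks = recommend(catalog, needs={"map-reading"}, situation={"audio"})
```

Under this framing, FLUENT's role would be to learn new tags and tag relationships automatically, rather than relying on the manual tagging effort the sketch assumes.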