Learning Management System Algorithm

 

 The Learning Management System Algorithm is the process used to deliver the most Actionable “Learning Packets” to the learner.

1.0 INTRODUCTION
Many educational websites offer large content libraries, but without any integration with other learning communities.

As a result, learners have to search on their own for learning packets related to their learning goals. This becomes a problem when they find thousands of results that are unsuitable or unrelated to their needs.

To solve this problem, the Atlantis Learning Network Learning Management System (ALMS) uses an “Open Standard Algorithm” that curates information from multiple sources and adds meta-data in order to present interesting, relevant, and personalized learning nuggets to the learner.

The following is a technical specification of how we do it.

Atlantis Learning Network Process

  1. Action #1 – Set Intent.
  2. Decision #1 – Is it a question or an answer?
    1. If question – Action #2 – Construct the Question and submit a Search Query to the Cloud(s).
    2. If answer – Action #3 – Construct the Answer and submit an upload request to the Cloud(s).
  3. Action #4 – “Push” or “Pull” from one of the three Clouds – the Personal Cloud, the Internet/Public Cloud, or the ALMS Cloud.
  4. Result #1 – Feedback from Action #4.
    1. Result #1.1 – Response to Action #2 – a free-form answer resulting from the Search Query.
    2. Result #1.2 – Confirmation of Action #3 – your upload was completed.
  5. Decision #2 – Did the Action from Decision #1 achieve the Intent set in Action #1?
    1. If the response from Result #1 achieved the Intent established in Action #1 – End.
    2. If the response from Result #1 did not achieve the Intent – Action #5 – Go back to Action #1.
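
A minimal sketch of this loop in Python; the `Intent` type, the stubbed Cloud round-trip, and the callback names are illustrative placeholders rather than part of the ALMS specification:

```python
from dataclasses import dataclass

CLOUDS = ("personal", "public", "alms")  # the three Clouds of Action #4

@dataclass
class Intent:
    kind: str     # "question" (leads to a Pull) or "answer" (leads to a Push)
    payload: str  # the member's question or answer text

def push_or_pull(mode: str, packet: str, cloud: str = "alms") -> str:
    """Action #4: stand-in for a real round-trip to one of the Clouds."""
    if mode == "push":
        return f"upload of {packet!r} to the {cloud} cloud confirmed"  # Result #1.2
    return f"free-form answer to {packet!r} from the {cloud} cloud"    # Result #1.1

def run_intent_loop(next_intent, achieved):
    """Drive the process: loop until Decision #2 says the Intent is met."""
    while True:
        intent = next_intent()                             # Action #1
        if intent.kind == "question":                      # Decision #1
            result = push_or_pull("pull", intent.payload)  # Actions #2 + #4
        else:
            result = push_or_pull("push", intent.payload)  # Actions #3 + #4
        if achieved(intent, result):                       # Decision #2
            return result                                  # End
        # otherwise: Action #5, back to Action #1

# Example: a single question whose first answer satisfies the intent.
print(run_intent_loop(
    next_intent=lambda: Intent("question", "what is a Learning Nugget?"),
    achieved=lambda intent, result: result is not None))
```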

Each Learning Community develops its own data, facts, conclusions, recommended actions, and Learning Tracks.

As each Learning Community develops new data, facts, conclusions, recommended actions, and/or Learning Tracks, it can choose to make that information available to other Learning Communities.

Every time a Personal Learning Community learns something its members believe can help others, it “Pushes” the packets into the Cloud.  Each Learning Community can then monitor the Cloud and determine, based on the Meta-data, whether to download the information to its own Learning Community for further analysis or storage.  The Learning Community then notifies its members that there are new data, facts, conclusions, recommended actions, and/or Learning Tracks.

The key here is that along with the data, facts, conclusions, and/or learning tracks comes the “meta-data” that provides transparency into the information, so each Learning Community can determine what to do with it.

ALMS Algorithm

2.0 Foundation

 

The suitability of the candidate recommendation approaches:

  1. Content-Based System (CBS): LNs are selected based on the correlation between the content the user is looking at and other, similar content. Examples: InfoFilter (Elkhalifa, 2004) and InfoFinder.
  2. Collaborative Filtering Systems (CFS): Recommends items or objects to a target user based on the preferences and opinions of other users with similar tastes. It employs statistical techniques to find a set of users, known as neighbours, similar to the target user. Examples: Amazon.com and eBay.com. CFS offers several methods for calculating likeness from the rating matrix; the one best suited to our Learning Nuggets is the Memory-Based Algorithm (also known as the k-Nearest Neighbour method), because it suits environments where user preferences must be updated rapidly; a minimal sketch follows after this list.  http://www.cs.carleton.edu/cs_comps/0607/recommend/recommender/memorybased.html
  3. Demographic-Based System (DBS): Uses prior knowledge of demographic information about users and their opinions of the recommended items as the basis for recommendations (Nageswara and Talwao, 2008). It aims to categorize the user by explicit personal attributes and make recommendations based on the demographic group a user belongs to, such as income, age, learning level, or geographical region, or a combination of these clusters/groups. Examples: Grundy, where people’s descriptions of themselves were used to build a user model and then predict characteristics of books they would enjoy (Rich, 1979), and free e-mail providers such as Hotmail and Yahoo, which place advertisements based on user demographic information. DBS could be used as a complementary approach in the process of recommending digital objects.
  4. Rule-Based Filtering (RBF): Filters information according to a set of rules expressing the information-filtering policy (Terveen and Hill, 2001). These rules may be part of the user or system profile contents and may refer to various attributes of the data items.
    1. Censorship: RBF is useful in the protection domain, e.g., protecting children from accessing certain materials, as with Cyberpatrol.com and Cybersitter.com (Itmazi and Gea, 2006).
    2. Spam filtering: RBF is also useful against spam e-mail, e.g., SpamAssassin <spamassassin.apache.org/> and MailEssentials <http://www.gfi.com>. In a Recommender System (RS), RBF can be used to filter the recommendation list of digital objects according to system and student rules.
  5. Hybrid Recommender System (HRS): Combines two or more recommendation techniques to gain better performance with fewer of the drawbacks of any individual one (Burke, 2002). Example systems: Tapestry (Goldberg et al., 1992), which mixed CBS and CFS; a hybrid algorithm system (Vozalis and Margaritis, 2004), which mixed CFS and DBS; and Information Lens, which combines CBS with RBF (Mackay et al., 1989).
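
Since the memory-based (k-nearest-neighbour) method is the variant singled out above, here is a minimal sketch of the idea; the rating data, the cosine-similarity weighting, and the function names are illustrative assumptions rather than the ALMS implementation:

```python
import math

# Illustrative rating data: member -> {learning_nugget: rating on a 1-5 scale}
ratings = {
    "alice": {"ln1": 5, "ln2": 3, "ln3": 4},
    "bob":   {"ln1": 4, "ln2": 1, "ln4": 5},
    "carol": {"ln2": 2, "ln3": 5, "ln4": 4},
}

def cosine(u, v):
    """Cosine similarity over the nuggets both members have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(target, nugget, k=2):
    """Memory-based CF: weight the k nearest neighbours' ratings of `nugget`."""
    neighbours = sorted(
        (m for m in ratings if m != target and nugget in ratings[m]),
        key=lambda m: cosine(ratings[target], ratings[m]),
        reverse=True)[:k]
    num = sum(cosine(ratings[target], ratings[m]) * ratings[m][nugget]
              for m in neighbours)
    den = sum(cosine(ratings[target], ratings[m]) for m in neighbours)
    return num / den if den else None

print(predict("alice", "ln4"))  # predicted rating for a nugget alice hasn't seen
```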


3.0 Flow

 

A general RS proposal:

Some considerations behind this proposal’s structure:

  • CBS is used as the primary approach because it can give comprehensive, related, and sufficient recommendations by using the objects’ attributes in the recommendation process.
  • CFS is not used as the primary approach at the start, because it becomes useful only after a critical mass of opinions has accumulated; until then it yields few or no recommendations. However, it will become the primary method once critical mass is achieved.
  • DBS and RBF are used as complementary approaches, because the demographic information of DBS and the rules of RBF are not sufficient to serve as a primary approach.
  • The recommendations appear on the ALMS Dashboard.
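
A compact sketch of that staged flow, with each stage reduced to a tiny stand-in for the algorithms the rest of this section details (all data shapes here are assumptions made for illustration):

```python
def cbs_candidates(member, library):
    # Primary stage: content match between member topics and object keywords.
    return [o for o in library if member["topics"] & o["keywords"]]

def cfs_rank(member, candidates, avg_ratings):
    # Order candidates by neighbour ratings, best first.
    return sorted(candidates, key=lambda o: avg_ratings.get(o["id"], 0),
                  reverse=True)

def dbf_filter(member, candidates):
    # Complementary stage: drop demographic mismatches (language only, here).
    return [o for o in candidates if o["language"] == member["language"]]

def rbf_filter(member, candidates, rules):
    # Complementary stage: apply system and member rules.
    return [o for o in candidates if o["id"] not in rules["blocked"]]

def recommend(member, library, avg_ratings, rules):
    """CBS -> CFS -> DBF -> RBF; the result is shown on the ALMS Dashboard."""
    out = cbs_candidates(member, library)
    out = cfs_rank(member, out, avg_ratings)
    out = dbf_filter(member, out)
    return rbf_filter(member, out, rules)
```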

 

Every Learning Packet is framed in terms of the elements of reasoning:

  1. PURPOSE or GOAL.
  2. The QUESTION it attempts to settle or the PROBLEM it attempts to solve.
  3. ASSUMPTIONS.
  4. POINT OF VIEW.
  5. DATA, INFORMATION, and EVIDENCE.
  6. BIASES.
  7. INFERENCES or interpretations by which we draw CONCLUSIONS and give meaning to data.
  8. IMPLICATIONS and CONSEQUENCES.

Algorithm of CBS: The general steps of the ALMS are:

Process #   Process Name
1           MEMBER IDENTIFIES INTENT – The member states whether they wish to “Add” information to the Library or “Receive” information from the Library. This is called either a “Push” or a “Pull.” It is the same as deciding to take a course or teach a course; or deciding to check a book out of the Library or write a book and have it put in the Library; or, finally, deciding whether you are asking a Question or providing an Answer.
1.1         If Push:
1.1.1       Begin to construct the Learning Packet (LP) that will be added to the Library.
1.1.2       Query the Personal Profile Database (PPD) to assist in constructing the LP.
1.1.3       Finish constructing the LP.
1.1.4       Publish the LP to the Library.
1.1.5       Output the list of Learning Nuggets (LNs).
1.1.6       Ask whether this achieved the learner’s intent.
1.1.7       If Yes – End.
1.1.8       If No – Return to #1.
1.2         If Pull:
1.2.1       Begin to construct the Learning Packet (LP) that will be used to query the Library.
1.2.2       Query the Personal Profile Database (PPD) to assist in constructing the LP.
1.2.3       Finish constructing the LP.
1.2.4       Query the Library.
1.2.5       Output the list of Learning Nuggets (LNs).
1.2.6       Ask whether this achieved the learner’s intent.
1.2.7       If Yes – End.
1.2.8       If No – Return to #1.
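
A sketch of Process #1 in code; the LP fields, the PPD lookup, and the keyword-overlap query are hypothetical simplifications of the steps above:

```python
def construct_lp(intent_text, profile):
    """Steps x.x.1 to x.x.3: build an LP, enriched with interests from the PPD."""
    return {"text": intent_text,
            "keywords": set(intent_text.lower().split()) | profile["interests"]}

def push(lp, library):
    """Steps 1.1.4 to 1.1.5: publish the LP to the Library."""
    library.append(lp)
    return [lp]  # the published packet is echoed back as the LN list

def pull(lp, library):
    """Steps 1.2.4 to 1.2.5: query the Library; best keyword overlap first."""
    scored = [(len(lp["keywords"] & doc["keywords"]), doc) for doc in library]
    return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score > 0]

# Example: one member pushes an answer, another pulls it back out.
library = []
push(construct_lp("intro to collaborative filtering", {"interests": {"cfs"}}), library)
print(pull(construct_lp("collaborative filtering basics", {"interests": set()}), library))
```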

 


The stage of collaborative filtering:
We use CFS as a complementary approach to organize the priorities of the recommendations. The general mechanism of CFS is based on defining subgroups (each subgroup is known as the nearest neighbours) whose preferences are similar to the active user’s; the nearest neighbours of the active student are those students who share the same institute (department, school). This stage then calculates the average of the subgroup’s ratings to order the recommendations, highest rated first.
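
A minimal sketch of that ordering step, assuming member-to-institute and rating mappings shaped as shown (names and data are illustrative):

```python
def cfs_order(active, institutes, ratings, candidates):
    """institutes: {member: institute}; ratings: {member: {ln_id: rating}}."""
    # The nearest neighbours are simply the other members of the same institute.
    subgroup = [m for m in institutes
                if m != active and institutes[m] == institutes[active]]
    def avg(ln_id):
        rs = [ratings[m][ln_id] for m in subgroup if ln_id in ratings.get(m, {})]
        return sum(rs) / len(rs) if rs else 0.0
    return sorted(candidates, key=avg, reverse=True)  # highest rated first

print(cfs_order("dina",
                {"dina": "cs", "emad": "cs", "fadi": "math"},
                {"emad": {"ln1": 2, "ln2": 5}, "fadi": {"ln1": 5}},
                ["ln1", "ln2"]))  # -> ['ln2', 'ln1']; only the cs subgroup counts
```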

 

The rating matrix: The target LMS must have a way to capture ratings by explicit methods, implicit methods, or a mixture of the two. These ratings of the digital objects are saved in the LMS database as a two-dimensional matrix, where a row holds all of one member’s ratings of the LNs and a column holds all members’ ratings of one LN (Table 1).
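
As an illustration of that layout (the member names and numbers are made up; 0 marks an unrated LN):

```python
import numpy as np

members = ["alice", "bob", "carol"]
nuggets = ["ln1", "ln2", "ln3", "ln4"]
R = np.array([[5, 3, 4, 0],    # row: all of alice's ratings of the LNs
              [4, 1, 0, 5],    # row: bob
              [0, 2, 5, 4]])   # row: carol

one_members_rates = R[members.index("bob"), :]   # a row of the matrix
one_nuggets_rates = R[:, nuggets.index("ln2")]   # a column of the matrix
```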


Table 1: Rating matrix


Fig. 5: Algorithm of the CFS stage

 

The stage of demographic-based filtering: Theoretically, the role of DBF in an LMS is to filter the incoming recommendations from the previous stage using the Member’s demographic (and personal) data that relate to their learning goals. For example, the following demographic-personal data could be relevant to education: preferred language, student specialization, study level, year, faculty, and department.


Fig. 6: Algorithm of the DBF stage

 

Language filtration, as an example, means that the active student wants all recommended digital objects in his preferred language, so any digital object in the recommendations list whose language differs from his preferred language will be deleted.

 

Algorithm of the demographic-based filtering: DBF works as follows (Fig. 6):

 

  1. Receive the list of recommended digital objects from the previous stage.
  2. Read the related demographic and personal data from the active student’s profile.
  3. Match the related fields of each digital object from the list against the fields of the active student’s profile; if the match is not positive, the digital object is deleted from the list.
  4. Finally, pass the remaining recommended digital objects to the next stage.
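
A minimal sketch of those four steps, assuming each digital object and the student profile are plain dictionaries with the demographic fields named below (all field names are assumptions):

```python
DEMOGRAPHIC_FIELDS = ("preferred_language", "specialization", "study_level")

def dbf_filter(profile, recommendations):
    """Steps 1-4: keep only objects whose related fields match the profile."""
    kept = []
    for obj in recommendations:                               # step 1
        if all(obj.get(field) in (None, profile.get(field))   # steps 2-3
               for field in DEMOGRAPHIC_FIELDS):
            kept.append(obj)           # positive match: keep the object
        # a non-positive match deletes the object from the list
    return kept                        # step 4: pass onward to RBF

# Example: an Arabic-language object is removed for an English-preferring student.
profile = {"preferred_language": "en", "specialization": "cs", "study_level": 2}
objs = [{"id": 1, "preferred_language": "en"},
        {"id": 2, "preferred_language": "ar"}]
print(dbf_filter(profile, objs))  # -> [{'id': 1, 'preferred_language': 'en'}]
```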

 

The stage of rule-based filtering: RBF filters the incoming recommended digital objects according to a set of rules, which can be found in the Member profile and in the system profile. The system administrator puts some rules in the system profile, while the Member can put his own rules in his profile.

 

We suggest the following types of rules for use in the Member profile and the system profile to filter the listed LNs (Fig. 7):


Fig. 7: Student and system rules

 

Link: The system will filter out any digital object whose link is found in the rules profiles.

Phrase or word: The system will filter out any digital object whose name, keywords, or abstract match any phrase or word found in the rules profiles.

Date: The system will not show any digital object that does not fit the date criteria.

Size: The system will not show any digital object that does not fit the size criteria.

Type: The system will not show any digital object that does not fit the type criteria.
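
One plausible shape for such a rules profile, whether at system or member level (the field names are assumptions; the specification only names the five rule types):

```python
# Illustrative rules profile; a member profile would carry the same fields.
rules_profile = {
    "links":           {"http://example.com/blocked-object"},  # exact links to drop
    "keywords":        {"spam", "off-topic"},                  # phrases/words to drop
    "min_date":        "2015-01-01",   # ISO dates compare correctly as strings
    "max_date":        "2030-12-31",
    "max_size":        50_000_000,     # bytes
    "forbidden_types": {"exe", "bat"},
}
```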

 

Algorithm of the rule-based filtering: RBF works as follows (Fig. 8). It receives the list of recommended digital objects from the previous stage, then reads the following fields of the system rules:

 

  • The field containing the link of the digital object
  • The field containing keywords
  • The fields holding the maximum and minimum dates
  • The field containing the allowed size
  • The field containing the forbidden types

 

The system deletes from the recommendations list every digital object that matches any link or keyword, as well as any digital object whose dates fall outside the minimum-maximum range.


Fig. 8: Algorithm of the RBF stage

 

It also deletes any digital object whose size is larger than the allowed size or whose type matches the forbidden types. The same rule fields are then read from the student profile and the filtration process is repeated. Finally, the recommended digital objects are prepared to be presented in a suitable way in the windows of the active student’s eCourse.
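
A sketch of that two-pass filtration using the `rules_profile` shape from above (again, the field names and matching details are illustrative assumptions):

```python
def rule_fires(obj, rules):
    """True if any rule matches, i.e., the object must be deleted."""
    text = (obj.get("name", "") + " " + obj.get("abstract", "")).lower()
    return (obj.get("link") in rules["links"]
            or any(word in text for word in rules["keywords"])
            # objects without a date fail the date window and are dropped
            or not (rules["min_date"] <= obj.get("date", "") <= rules["max_date"])
            or obj.get("size", 0) > rules["max_size"]
            or obj.get("type") in rules["forbidden_types"])

def rbf_filter(recommendations, system_rules, member_rules):
    """Apply the system rules first, then repeat with the member's own rules."""
    survivors = [o for o in recommendations if not rule_fires(o, system_rules)]
    return [o for o in survivors if not rule_fires(o, member_rules)]
```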

 

Other keywords: Content-Based System, Collaborative Filtering, Rule-Based Filtering, and Demographic-Based System.


4.0 Meta-Data

 

Meta-Data about the LPs helps Members determine the value of the LP for them.

According to Paul and Elder (1997, 2006), the ultimate goal is for the standards of reasoning to become infused in all thinking so as to become the guide to better and better reasoning.

The Meta-Data provides information designed to help the Member better assess the value of the LP for them:

  • Clarity – Is the LP consistent and concise?
  • Accuracy – Is the LP accurate? Has it been checked to see whether it is true?
  • Precision – Is it precise enough for the goal? Is it specific enough? Could you give me more details? Could you be more exact?
  • Relevance – Does it relate to the argument one is making? How does it help us with the issue?
  • Depth/Breadth – Is it too narrow or too broad?
  • Logic – Does all of this make sense together? Does what you say follow from the evidence?
  • Significance – Is this the most important problem to consider? Is this the central idea to focus on? Which of these facts are most important?
  • Fairness – Is the LP reasonable in context? Does the LP take into account the thinking of others? Is my purpose fair given the situation? Is the LP using terms in an even-handed way, or is it slanted to advance a specific argument?

    Universal Intellectual Standards

    Paul, R. and Elder, L. (2010). The Miniature Guide to Critical Thinking Concepts and Tools. Dillon Beach: Foundation for Critical Thinking Press.
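
    One way a Learning Community might encode these standards as machine-readable LP meta-data; the dataclass, field names, and 1-5 scale are hypothetical, not part of the Paul and Elder material:

```python
from dataclasses import dataclass

@dataclass
class LPMetaData:
    """Universal Intellectual Standards scored on a hypothetical 1-5 scale."""
    clarity: int        # is the LP consistent and concise?
    accuracy: int       # has it been checked to see whether it is true?
    precision: int      # specific and exact enough for the goal?
    relevance: int      # does it bear on the issue at hand?
    depth_breadth: int  # neither too narrow nor too broad?
    logic: int          # do the conclusions follow from the evidence?
    significance: int   # does it address the central question?
    fairness: int       # reasonable and even-handed in context?

meta = LPMetaData(5, 4, 4, 5, 3, 4, 4, 5)  # attached to an LP when published
```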

    Externally Generated CI

    This is the practice of aggregating datasets from multiple third-party end users and comparing the results to one’s own experience of reality. In such cases, as much as possible is known about the third party, so that the best valuation can take place.  Data collection is normally proactive and automated once consent has been given.

    Internally Generated CI

    This is the practice of aggregating CI data internally within one’s own Learning Community. In such cases, the need for anonymized data is reduced, since information gathered may not be shared with external third parties.

    Members connect to the @lantis Library and share relevant information. This data is normalized and aggregated to enable other members to benefit from one another’s experiences.

    The data is analyzed using our @lantis algorithms to learn members’ best practices, which are then shared globally in an anonymized fashion.

    The tool alerts members to potentially impending issues, as well as when their learning strays from global best practices.

    Learning monitoring tools incorporating CI do so by collecting and tagging learning events for specific members before parsing and enriching them with additional metadata.

    The learning events are then indexed into a big data platform.

    Traditional pattern analysis is forsaken in favor of alternative approaches that look at what searches members run when encountering similar learning events. The focus is on guided learning via suggestions, as opposed to attempting to identify the specific needed learning through exhaustive analysis.
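
    A sketch of that collect-tag-enrich-index flow; the event fields, the in-memory index, and the shared-tag similarity heuristic are illustrative stand-ins for a real big-data platform:

```python
index = []  # stand-in for the big-data index

def enrich(event, member):
    """Tag a learning event and enrich it with member meta-data."""
    out = dict(event)
    out["member_id"] = member["id"]
    out["community"] = member["community"]
    out["tags"] = sorted(set(event.get("tags", [])) | {"ci"})
    return out

def ingest(event, member):
    index.append(enrich(event, member))  # "indexed into a big data platform"

def suggested_searches(event):
    """Guided learning: searches other members ran on similar (shared-tag) events."""
    return [e["search"] for e in index
            if e.get("search") and set(e["tags"]) & set(event.get("tags", []))]
```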

    Open Standards

    Open Standards allow people to share all kinds of data freely and with perfect fidelity. They prevent lock-in and other artificial barriers to interoperability, and promote choice between vendors and technology solutions. FSFE pushes for the adoption of Open Standards to promote free competition in the IT market, as they ensure that people find it easy to migrate to Free Software or between Free Software solutions.

    Starting from the definition contained in the original version of the European Commission’s European Interoperability Framework (EIF), we engaged in a dialogue with various key players in industry, politics and community. In this process, the definition was reworked into a set of five points that found consensus among all those involved. The definition has subsequently been adopted by the SELF EU Project, the 2008 Geneva Declaration on Standards and the Future of the Internet, and the Document Freedom Day. A very similar set of “Open Standards Principles” was adopted by the UK Government in July 2014.

    Definition

    An Open Standard refers to a format or protocol that is

    1. subject to full public assessment and use without constraints in a manner equally available to all parties;
    2. without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
    3. free from legal or technical clauses that limit its utilisation by any party or in any business model;
    4. managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
    5. available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.

    Comment on Emerging Standards

    When a new format or protocol is under development, clause 5 cannot possibly be met. FSFE believes this is the correct behaviour in cases where technological maturity is required. In several scenarios, e.g. governmental deployment, the cost of failure can be very high.

    In scenarios that seek to promote the growth of Open Standards, strict application of the clause could prevent new Open Standards. From the view of the definition, such standards would compete directly against vendor-driven proprietary formats. In such cases, it can make sense to allow failure of clause 5 for “Emerging Standards.”

    Which treatment such “Emerging Standards” receive is largely dependent on the situation. Where cost of failure is high, only fully Open Standards should be used. Where promotion of Open Standards is wanted, Emerging Standards should receive special promotion.

    Generally speaking: Open Standards are better than Emerging Standards, and Emerging Standards are better than vendor-specific formats. The closer a format comes to meeting all points of the definition, the higher it should be ranked in scenarios where interoperability and reliable long-term data storage are essential.