Learning Management System Algorithm
The Learning Management System Algorithm is the process used to deliver the most actionable “Learning Packets” to the learner.
1.0 INTRODUCTION
Many educational websites maintain their own libraries of learning material, but without any integration with other learning communities.
As a result, learners have to search on their own for learning packets related to their learning goals. This becomes a problem when they find thousands of results that are unsuitable or unrelated to their needs.
To solve this problem, the Atlantis Learning Network Learning Management System (ALMS LMS) uses an “Open Standard Algorithm” that curates information from multiple sources and adds additional meta-data in order to present interesting, relevant, and personalized learning nuggets to the learner.
The following is a technical specification of how we do it.
Atlantis Learning Network Process
- Action #1 – Set Intent
- Decision #1 – Is it a question or an answer?
- If question – Action #2 – Construct the Question and submit a Search Query to the Cloud(s).
- If answer – Action #3 – Construct the Answer and submit an upload request to the Cloud(s).
- Action #4 – “Push” to or “Pull” from one of the three Clouds – the Personal Cloud, the Internet/Public Cloud, or the ALMS LMS Cloud.
- Result #1 – Feedback from Action #4
- Result #1.1 – Response to Action #2 – A free form answer resulting from the Search Query.
- Result #1.2 – Confirmation from Action #3 – Your upload was completed.
- Decision #2 – Did the Action from Decision #1 achieve the Intent set in Action #1?
- If the response from Result #1 achieved the Intent established in Action #1 – End.
- If the response from Result #1 did not achieve the Intent – Action #5 – Go back to Action #1.
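The intent cycle above can be sketched in code. This is a minimal illustration under stated assumptions: the class and function names (`Cloud`, `Intent`, `run_cycle`) are placeholders invented here, not part of any published ALMS API.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Action #1: the learner's stated intent (hypothetical structure)."""
    is_question: bool
    text: str

class Cloud:
    """Toy stand-in for a Personal, Internet/Public, or ALMS LMS Cloud."""
    def __init__(self):
        self.library = {}

    def search(self, query):               # "Pull": Result #1.1, a free-form answer
        return self.library.get(query, [])

    def upload(self, key, packet):         # "Push": Result #1.2, upload confirmation
        self.library.setdefault(key, []).append(packet)
        return "upload completed"

def run_cycle(cloud, intent, achieved):
    """One pass: Decision #1 -> Action #2/#3 -> Result #1 -> Decision #2.

    Returns the result when the intent is achieved; returns None to signal
    "go back to Action #1" (Action #5).
    """
    if intent.is_question:                 # Action #2: construct question, query
        result = cloud.search(intent.text)
    else:                                  # Action #3: construct answer, upload
        result = cloud.upload(intent.text, "learning packet")
    return result if achieved(result) else None   # Decision #2
```

A caller would loop on `run_cycle`, re-setting the intent whenever `None` comes back, which mirrors the Action #5 branch.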
Each Learning Community comes up with their own data, facts, conclusions, recommended actions, and Learning Tracks.
As each Learning Community comes up with new data, facts, conclusions, recommended actions, and/or Learning Tracks, it can choose to make that information available to other Learning Communities.
Every time a Personal Learning Community learns something its members believe can help others, it “Pushes” the packets into the Cloud. Each Learning Community can then monitor the Cloud and determine, based on the Meta-data, whether to download the information to its own community for further analysis or storage. The Learning Community then notifies its members that there are new data, facts, conclusions, recommended actions, and/or learning tracks.
The key here is that along with the data, facts, conclusions, and/or learning tracks comes the “meta-data” that provides transparency into the information, so each Learning Community can determine what to do with it.

2.0 Foundation
We assess the suitability of the following recommendation approaches:
- Content-Based System (CBS): Items are selected by correlating the content the user is looking at with other similar content. Examples: Infofilter (Elkhalifa, 2004) and InfoFinder.
- Collaborative Filtering Systems (CFS): Recommend items or objects to a target user based on the preferences and opinions of other users with similar tastes. CFS employs statistical techniques to find a set of users, known as neighbours, who are similar to the target user; examples include Amazon.com and eBay.com. CFS offers several methods for computing similarity from the rating matrix; the one best suited to our Learning Nuggets is the Memory-Based Algorithm (also known as the k-Nearest Neighbour method), because it works well in environments where user preferences must be updated rapidly (see http://www.cs.carleton.edu/cs_comps/0607/recommend/recommender/memorybased.html).
- Demographic-Based System (DBS): Uses prior knowledge of demographic information about users, and their opinions of the recommended items, as the basis for recommendations (Nageswara and Talwao, 2008). It categorizes the user by explicit personal attributes and makes recommendations based on the demographic group a user belongs to, such as income, age, learning level, or geographical region, or a combination of these clusters/groups. Examples: Grundy, where people’s descriptions of themselves were used to build a user model and then predict characteristics of books they would enjoy (Rich, 1979), and free e-mail providers such as Hotmail and Yahoo, which place advertisements based on user demographic information. DBS could be used as a complementary approach in the process of recommending digital objects.
- Rule-Based Filtering (RBF): Filters information according to a set of rules expressing the information-filtering policy (Terveen and Hill, 2001). These rules may be part of the user or system profile and may refer to various attributes of the data items.
- Censorship: RBF is useful in the protection domain, e.g., protecting children from accessing certain materials, as in Cyberpatrol.com and Cybersitter.com (Itmazi and Gea, 2006).
- Spam filtering: RBF is also useful against spam e-mails, e.g., SpamAssassin <spamassassin.apache.org/> and MailEssentials <http://www.gfi.com>. In a recommender system, RBF could be used to filter the recommendation list of digital objects according to system and student rules.
- Hybrid Recommender System (HRS): Combines two or more recommendation techniques to gain better performance with fewer of the drawbacks of any individual one (Burke, 2002). Example systems: Tapestry (Goldberg et al., 1992), which mixed CBS and CFS; a hybrid algorithm system (Vozalis and Margaritis, 2004), which mixed CFS and DBS; and Information Lens, which combines CBS with RBF (Mackay et al., 1989).
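As an illustration of the memory-based (k-Nearest Neighbour) algorithm the CFS description singles out, here is a minimal sketch. The rating-matrix layout (a dict of member → {item: rating}) and cosine similarity are common textbook choices for this method, not a specification of the ALMS implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (item -> rating dicts)."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    du = math.sqrt(sum(r * r for r in u.values()))
    dv = math.sqrt(sum(r * r for r in v.values()))
    return num / (du * dv) if du and dv else 0.0

def recommend(target, ratings, k=2):
    """Memory-based CF: score items unseen by `target` via its k nearest neighbours."""
    neighbours = sorted(((cosine(ratings[target], ratings[u]), u)
                         for u in ratings if u != target), reverse=True)[:k]
    scores = {}
    for sim, u in neighbours:
        for item, r in ratings[u].items():
            if item not in ratings[target]:
                # Weight each neighbour's rating by its similarity to the target.
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)
```

Because the neighbourhood is recomputed on every call, a fresh rating is reflected immediately, which is the "rapidly updated preferences" property noted above; the trade-off is more computation per recommendation than a model-based method.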
3.0 Flow
A general RS proposal: we list some considerations for this proposal's structure:
- PURPOSE or GOAL.
- Attempts to settle a QUESTION or solve a PROBLEM.
- ASSUMPTIONS.
- POINT OF VIEW.
- DATA, INFORMATION, and EVIDENCE.
- Biases.
- INFERENCES or interpretations by which we draw CONCLUSIONS and give meaning to data.
- IMPLICATIONS and CONSEQUENCES.
Process # | Process Name | Description
1 | MEMBER IDENTIFIES INTENT | The member states whether they wish to “Add” information to the Library or “Receive” information from the Library. This is called either a “Push” or a “Pull.” It is the same as deciding to take a course or teach a course; to check a book out of the Library or to write a book and have it put in the Library; or, finally, to ask a Question or provide an Answer.
1.1 | If Push |
1.1.1 | Begin to construct the Learning Packet (LP) that will be added to the Library. |
1.1.2 | Query the Personal Profile Database (PPD) to assist in constructing the LP. |
1.1.3 | Finish constructing the LP. |
1.1.4 | Publish the Learning Packet to the Library. |
1.1.5 | Output list of Learning Nuggets (LN). |
1.1.6 | Ask whether this achieved the learner’s intent. |
1.1.7 | If Yes – End. |
1.1.8 | If No – Return to #1. |
1.2 | If Pull |
1.2.1 | Begin to construct the Learning Packet (LP) query that will be submitted to the Library. |
1.2.2 | Query the Personal Profile Database (PPD) to assist in constructing the LP. |
1.2.3 | Finish constructing the LP. |
1.2.4 | Query the Library. |
1.2.5 | Output list of Learning Nuggets (LN). |
1.2.6 | Ask whether this achieved the learner’s intent. |
1.2.7 | If Yes – End. |
1.2.8 | If No – Return to #1. |
The stage of collaborative filtering: We use CFS as a complementary approach to
The rating matrix: The target LMS must have a way to capture ratings by explicit methods, implicit methods, or a mixture of the two. These ratings of the digital objects are saved in the LMS database as a table of two
The stage of demographic-based filtering: Theoretically, the role of DBF in
The language filtration as an
(Fig. 6):
The stage of rule-based filtering: RBF filters the incoming recommended digital objects against a set of rules, which can be found in the Member profile and in the system profile. The system administrator puts rules in the system profile, while each Member can put their own rules in their profile.
We suggest the following types of rules could be used in the Member profile and the system profile to filter the listed LNs (Fig. 7):
Link: The system filters out any digital object whose link is found in the rules profiles.
Phrase or word: The system filters out any digital object whose name, keywords, or abstract match any phrase or word found in the rules profiles.
Date: The system will not show any digital object that does not fit the date criteria.
Size: The system will not show any digital object that does not fit the size criteria.
Type: The system will not show any digital object that does not fit the type criteria.
The system deletes from the recommendations list every digital object that matches any link or keywords, as well as any digital object whose dates fall outside the minimum–maximum date range.
It also deletes any digital object whose size is larger than the allowed size or whose type matches the forbidden types. The same rule fields are then read from the student profile and the filtration process is repeated. Finally, the recommended digital objects are prepared for presentation in a suitable way in the windows of the active student eCourse.
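The link / phrase / date / size / type rules described above can be sketched as a simple filter. The rule-profile field names and thresholds here are illustrative assumptions, not the ALMS schema; applying the system profile first and then the member profile follows the two-pass order described in the text.

```python
from datetime import date

# Hypothetical rule profile; field names and values are assumptions for illustration.
rules = {
    "blocked_links": {"http://spam.example"},
    "blocked_words": {"casino"},
    "min_date": date(2015, 1, 1),
    "max_date": date(2030, 1, 1),
    "max_size": 10_000_000,          # bytes
    "forbidden_types": {"exe"},
}

def passes(obj, rules):
    """Apply the link / phrase / date / size / type rules to one digital object."""
    text = " ".join([obj["name"], obj["keywords"], obj["abstract"]]).lower()
    if obj["link"] in rules["blocked_links"]:
        return False                                  # link rule
    if any(w in text for w in rules["blocked_words"]):
        return False                                  # phrase-or-word rule
    if not (rules["min_date"] <= obj["date"] <= rules["max_date"]):
        return False                                  # date rule
    if obj["size"] > rules["max_size"]:
        return False                                  # size rule
    if obj["type"] in rules["forbidden_types"]:
        return False                                  # type rule
    return True

def filter_recommendations(objects, system_rules, member_rules):
    """Filter with the system profile first, then repeat with the member profile."""
    return [o for o in objects if passes(o, system_rules) and passes(o, member_rules)]
```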
Keywords: Content-Based System, Collaborative Filtering, Rule-Based Filtering

4.0 Meta-Data
Meta-Data about the LPs helps Members determine the value of the LP for them.
According to Paul and Elder (1997, 2006), the ultimate goal is for the standards of reasoning to become infused in all thinking so as to become the guide to better and better reasoning.
The Meta-Data provides information designed to help the Member better assess the value of the LP for them:
- Clarity – Is the LP consistent and concise?
- Accuracy – Is the LP accurate? Has it been checked to see if it is true?
- Precision – Is it precise enough for the goal? Is it specific enough? Could you give me more details? Could you be more exact?
- Relevance – Does it relate to the argument one is making? How does it help us with the issue?
- Depth/Breadth – Is it too narrow or too broad?
- Logic – Does all of this make sense together? Does what you say follow from the evidence?
- Significance – Is this the most important problem to consider? Is this the central idea to focus on? Which of these facts are most important?
- Fairness – Is the LP reasonable in context? Does the LP take into account the thinking of others? Is its purpose fair given the situation? Is the LP presented even-handedly, or is it slanted toward a specific argument?
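One way to make these standards machine-readable is to attach one rating per standard to each LP. The record below is a hypothetical sketch; the 1–5 scale, the field names, and the unweighted mean are assumptions for illustration, not a published ALMS meta-data schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class PacketMetadata:
    """Hypothetical per-LP ratings, one per Paul-Elder standard (scale 1-5)."""
    clarity: int        # consistent and concise?
    accuracy: int       # checked to see if it is true?
    precision: int      # specific enough for the goal?
    relevance: int      # related to the argument being made?
    depth_breadth: int  # neither too narrow nor too broad?
    logic: int          # follows from the evidence?
    significance: int   # focused on the central idea?
    fairness: int       # even-handed in context?

    def overall(self):
        """Unweighted mean across the eight standards (an illustrative choice)."""
        vals = list(asdict(self).values())
        return sum(vals) / len(vals)
```

A Learning Community could sort or threshold incoming LPs on `overall()`, or weight individual standards according to its own policy.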
Universal Intellectual Standards
Paul, R. and Elder, L. (2010). The Miniature Guide to Critical Thinking Concepts and Tools. Dillon Beach: Foundation for Critical Thinking Press.
Externally Generated CI
This is the practice of aggregating datasets from multiple third-party end users and comparing the results to one’s own experience of reality. In such cases, as much as possible is known about the third party, so that the best valuation can take place. Data collection is normally proactive and automated once consent has been given.
Internally Generated CI
This is the practice of aggregating CI data internally within one’s own Learning Community. In such cases, the need for anonymized data is reduced, since information gathered may not be shared with external third parties.
Members connect to the @lantis Library and share relevant information. This data is normalized and aggregated to enable other members to benefit from one another’s experiences.
The data is analyzed using our @lantis algorithms to learn members’ best practices, which are shared globally in an anonymized fashion.
The tool alerts members to potentially impending issues, as well as when their learning strays from global best practices.
Learning monitoring tools incorporating CI do so by collecting and tagging learning events for specific members before parsing and enriching them with additional metadata.
The learning events are then indexed into a big data platform.
Traditional pattern analysis is forsaken in favor of alternative approaches that look at what searches members are running when encountering similar learning events. The focus is on guided learning via suggestions, as opposed to attempting to identify the specific needed learning through exhaustive analysis.
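The collect → tag → enrich → index pipeline described above can be sketched as follows. The field names and the in-memory "index" are placeholder assumptions; a real deployment would use a big-data platform rather than a dict.

```python
def enrich(event, member_profile):
    """Enrich a raw learning event with additional metadata before indexing."""
    enriched = dict(event)
    enriched["member_level"] = member_profile.get("level", "unknown")
    # Tag every CI-collected event so similar events can be found later.
    enriched["tags"] = sorted(set(event.get("tags", [])) | {"ci"})
    return enriched

class EventIndex:
    """Minimal stand-in for the big-data platform: events indexed by tag."""
    def __init__(self):
        self.by_tag = {}

    def add(self, event):
        for tag in event["tags"]:
            self.by_tag.setdefault(tag, []).append(event)

    def similar(self, tag):
        """Events sharing a tag -- the basis for guided-learning suggestions."""
        return self.by_tag.get(tag, [])
```

Suggestions would then be drawn from `similar(...)` lookups (what other members searched for around comparable events), rather than from exhaustive pattern analysis.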
Open Standards
Open Standards allow people to share all kinds of data freely and with perfect fidelity. They prevent lock-in and other artificial barriers to interoperability, and promote choice between vendors and technology solutions. FSFE pushes for the adoption of Open Standards to promote free competition in the IT market, as they ensure that people find it easy to migrate to Free Software or between Free Software solutions.
Starting from the definition contained in the original version of the European Commission’s European Interoperability Framework (EIF), we engaged in a dialogue with various key players in industry, politics, and community. In this process, the definition was reworked into a set of five points that found consensus among all those involved. The definition has subsequently been adopted by the SELF EU Project, the 2008 Geneva Declaration on Standards and the Future of the Internet, and the Document Freedom Day. A very similar set of “Open Standards Principles” was adopted by the UK Government in July 2014.
Definition
An Open Standard refers to a format or protocol that is
- subject to full public assessment and use without constraints in a manner equally available to all parties;
- without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
- free from legal or technical clauses that limit its utilisation by any party or in any business model;
- managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
- available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.
Comment on Emerging Standards
When a new format or protocol is under development, clause 5 cannot possibly be met. FSFE believes this is the correct behaviour in cases where technological maturity is required: in several scenarios, e.g., governmental deployment, the cost of failure can be very high.
In scenarios that seek to promote the growth of Open Standards, strict application of the clause could prevent new Open Standards. From the view of the definition, such standards would compete directly against vendor-driven proprietary formats. In such cases, it can make sense to allow failure of clause 5 for “Emerging Standards.”
Which treatment such “Emerging Standards” receive is largely dependent on the situation. Where cost of failure is high, only fully Open Standards should be used. Where promotion of Open Standards is wanted, Emerging Standards should receive special promotion.
Generally speaking: Open Standards are better than Emerging Standards, and Emerging Standards are better than vendor-specific formats. The closer a format comes to meeting all points of the definition, the higher it should be ranked in scenarios where interoperability and reliable long-term data storage are essential.
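The ranking rule above can be expressed as a small helper. This is a toy sketch of the preference order, not an official FSFE tool; it models each candidate format as five booleans, one per clause of the definition, and treats "all clauses met except clause 5" as an Emerging Standard per the comment above.

```python
def classify(clauses):
    """clauses: list of five booleans, one per clause of the Open Standard definition."""
    if len(clauses) != 5:
        raise ValueError("expected one boolean per clause (5 total)")
    if all(clauses):
        return "Open Standard"
    # Emerging: only clause 5 (multiple complete implementations) is unmet.
    if all(clauses[:4]) and not clauses[4]:
        return "Emerging Standard"
    return "Vendor-specific format"

def rank(formats):
    """Order candidate formats by how fully they meet the definition.

    formats: dict of format name -> clause booleans.
    """
    order = {"Open Standard": 0, "Emerging Standard": 1, "Vendor-specific format": 2}
    return sorted(formats, key=lambda f: order[classify(formats[f])])
```

In a high-cost-of-failure scenario a caller would accept only the "Open Standard" class; in a promotion scenario it would accept "Emerging Standard" as well, matching the treatment described above.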