Atlantis Learning Packets (ALPS) – Communication Guide

Minimum Meta-Data

  • Contributor
  • Date

Additional Meta-Data

  • Author of the Content
  • Agree with the Content
  • Learning Category
  • Original Publisher of the Content

Learning Types

  • Answers
  • Arguments
  • Beliefs
  • Classes
  • Definitions
  • Facts
  • Lessons
  • Posts
  • Questions
  • Quotes

Learning Topics

  • Communication
  • Economics
  • Engineering/Science
  • Information Science
  • Legal
  • Political Science
  • Social Science
  • Religion/Spirituality
  • Technology

Security and Authentication: Permissions and OAuth

ALPs supports the two most common authentication mechanisms: HTTP Basic Authentication and OAuth.
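A minimal sketch of building the Basic Authentication header for an ALN Library request (the username and password here are placeholders; OAuth token flows are more involved):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build an HTTP Basic Authentication header for an ALN Library request."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}
```

The resulting dict can be merged into the headers of any HTTP request made to the Library.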

Learning Packet Transfer: Retrieval

Retrieving a collection of Learning Packets from an ALN Library and storing them locally is the best way to work with the data for reports or visualizations.

Querying the ALN Library repeatedly is less effective because it places a heavy load on the Library. Pulling the data down also allows you to run more in-depth queries, because the Learning Packet API does not query on all Learning Packet parameters (extensions, for example, cannot be queried with the Learning Packet API).

An ALN Library needs to index its data carefully, because serving these queries efficiently can be complex.

One challenge is handling relationships between Learning Packets efficiently. For example, if a search matches Learning Packet C, and Learning Packet A targets B, which in turn targets C, then all three must be returned, as described in the filter conditions for Learning Packet references. A Learning Packet reference is a way to point to another Learning Packet that is important to the referencing Packet.
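Resolving those transitive references can be sketched as a reverse-graph traversal. This is an illustrative in-memory model, not the ALN Library's actual index; `targets` maps a packet's ID to the ID of the packet it references:

```python
from collections import defaultdict, deque

def expand_matches(matched_ids, targets):
    """Expand a set of matched packet IDs to include every packet that
    (transitively) targets one of the matches.

    targets: dict mapping packet_id -> id of the packet it references.
    """
    reverse = defaultdict(set)           # target_id -> {packets that reference it}
    for src, dst in targets.items():
        reverse[dst].add(src)

    result, queue = set(matched_ids), deque(matched_ids)
    while queue:
        pid = queue.popleft()
        for src in reverse[pid]:         # walk the reference chain backwards
            if src not in result:
                result.add(src)
                queue.append(src)
    return result
```

With `targets = {"A": "B", "B": "C"}`, a query matching only C expands to all three packets, as the filter conditions require.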

When a query for Learning Packets is made to the ALN Library, it returns a batch of Learning Packet results and generates a URL, which has to remain active for 24 hours so that further Learning Packets can be pulled down.

The specification also includes guidelines about URL length and the storage of query data.
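The paged retrieval loop described above can be sketched as follows. The page shape, {"packets": [...], "more": "<url>"}, is an assumption for illustration; the real response format is defined by the spec:

```python
def fetch_all_packets(fetch_page, first_url):
    """Drain a paged Learning Packet query result.

    fetch_page is any callable that takes a URL and returns a parsed JSON
    page of the assumed shape {"packets": [...], "more": "<url>" or ""}.
    Because the 'more' URL stays active for 24 hours, a slow consumer can
    pause and resume paging later.
    """
    packets, url = [], first_url
    while url:
        page = fetch_page(url)
        packets.extend(page.get("packets", []))
        url = page.get("more", "")       # empty string means no further pages
    return packets
```

Injecting `fetch_page` keeps the paging logic testable without a live ALN Library.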

Data Quality

The ALN Library has specific responsibilities when it comes to voided Learning Packets. 

This is because @lantis Learning Packets are immutable: they cannot be deleted.

A false or inaccurate Learning Packet can only be voided. The burden is on the ALN Library to make sure that the original flawed Learning Packet is handled appropriately so that it does not skew reports and other uses of the data.

The ALN Library needs to store all ‘voiding’ Learning Packets, and when returning Learning Packet results the ALN Library must filter out any Learning Packets which have been voided. 
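A minimal sketch of that filtering step, assuming each packet is a dict with an "id" and that a voiding packet names its target under a "voids" key (both field names are illustrative, not from the spec):

```python
def filter_voided(packets):
    """Return the packets with every voided Learning Packet filtered out.

    A voiding packet is assumed to carry a 'voids' key naming the ID of
    the packet it cancels; the voiding packets themselves are stored and
    returned so the audit trail survives.
    """
    voided_ids = {p["voids"] for p in packets if "voids" in p}
    return [p for p in packets if p["id"] not in voided_ids]
```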

When a Learning Packet is voided, the ALN Library should also check whether that Packet made any changes to the Meta-data and roll those changes back.

The ALN Library checks for syntax. It has to validate all of the pieces of a Learning Packet to make sure that the data is formatted properly, that required fields are present, and that the JSON values match the requirements of the spec.
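A sketch of such a syntax check. The required field names below are assumptions drawn from the minimum meta-data listed earlier (Contributor, Date) plus the content itself; the ALPs spec defines the authoritative schema:

```python
REQUIRED_FIELDS = ("contributor", "date", "content")  # assumed minimal schema

def validate_packet(packet):
    """Return a list of syntax problems; an empty list means the packet passes."""
    if not isinstance(packet, dict):
        return ["packet must be a JSON object"]
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in packet:
            problems.append(f"missing required field: {field}")
    if "date" in packet and not isinstance(packet["date"], str):
        problems.append("date must be a string")
    return problems
```

Returning a list of problems, rather than raising on the first one, lets the Library report every defect in a rejected packet at once.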

The semantics of a Learning Packet, that is, the meaning of the data within it, are the responsibility of the content provider. Since every content provider adds its ID to the Blockchain, problems with a Learning Packet can be traced back to its creator.

Considering the volume of data flowing in and out of an ALN Library, concurrency is a major concern. The specification defines many rules and checks that an ALN Library must perform to make sure its data is not degraded by writes of stale data.

Signed Learning Packets have a special case in the ALN Library. 

They can be validated against the authority that signed the Learning Packet, without trusting the system the Learning Packet originated in.

In order for an ALN Library to validate signed Learning Packets, the ALN Library must check specific algorithms and properties in the JSON Web Signature (JWS) format. 

The implementor of an ALN Library needs to understand the JWS specification as well as the specific rules outlined in the ALPs specification.

An ALN Library can return meta information in HTTP headers only, rather than the full documents in the ALN Library. This lets a client ask whether there is a newer version of a document without downloading the full document.

This is a less load-intensive way of checking whether documents need to be downloaded from the ALN Library to support an application or report.
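This maps naturally onto an HTTP HEAD request with a conditional header. A sketch, assuming the ALN Library exposes standard ETag semantics (the spec defines the actual headers):

```python
import urllib.request

def head_request(url, etag=None):
    """Build a HEAD request asking the ALN Library for meta information only.

    If we already hold a copy, sending its ETag via If-None-Match lets the
    Library answer 304 Not Modified instead of shipping the full document.
    """
    req = urllib.request.Request(url, method="HEAD")
    if etag:
        req.add_header("If-None-Match", etag)
    return req
```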

Future-Proofing: API versioning and resource requests

Version headers are required in every response from an ALN Library to make clear which versions of the ALPs specification the ALN Library supports. The version headers use semantic versioning.

An ALN Library needs to provide an ‘about’ resource that returns JSON that identifies the versions of the specification that the ALN Library supports.
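A sketch of a client-side compatibility check against a hypothetical 'about' payload such as {"versions": ["1.0.0", "1.1.0"]}. The matching rule here (same major version, minor version at least as high) follows ordinary semantic-versioning practice and is an assumption, not a rule quoted from the spec:

```python
def parse_semver(version):
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def supports(about_json, wanted):
    """True if the Library's 'about' resource advertises a compatible version."""
    want = parse_semver(wanted)
    return any(
        parse_semver(v)[0] == want[0] and parse_semver(v)[1:] >= want[1:]
        for v in about_json.get("versions", [])
    )
```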

Credential Management

The specification does not require an ALN Library to do user management and permission management, but in order to be usable it will need to.

This includes building ways for applications to register as OAuth consumers, managing user credentials, and managing username and password combinations for basic authentication.

Managing who can talk to the ALN Library is one part; the next step is to control how much they can access, which brings a much more complicated set of decisions about security and permissions.

What is ALPs?

ALPs refers to the @lantis Learning Packet API.

The ALPs Specification defines how to structure, send, and retrieve learning packets. 

When tools adhere to the ALPs specification, learners and teachers can learn from each other more effectively.

Also, since the Learning Packet is in a “Standard Format”, people can more easily analyze and compare data from different sources and, as a result, more easily create new content by finding new patterns.

ALPs Benefits

The main benefit of ALPs is that it allows you to capture learning wherever it occurs.

It empowers you to bring learning from many different places and platforms into a single tool.

Learning is no longer limited to what occurs on a specific Learning Management System (LMS). You can now capture learning that occurs in an eLearning course, on a website, in a mobile app, in a flight simulator, in a face-to-face session, or anywhere else.

On top of this, ALPs is very flexible. For example, you can use ALPs data to determine things like:

  • Which resources people access within an eLearning course and how much time they spend viewing each resource.
  • Which actions a pilot takes in a flight simulator (control wheel movements, buttons pressed, etc.), as well as which actions the pilot takes while flying the plane itself.
  • Which notes and responses people type into an online, ALPs-enabled workbook.
  • Which potential clients a sales rep had conversations with (along with their notes about each conversation).
  • And so much more!

If the activity is occurring on a system that, at some point, has access to the internet, then you can likely track any aspect of that activity with ALPs.

Finally, ALPs data is interoperable because it is REST-based.

Just as you can upload a SCORM package to any LMS, you can send ALPs data to and from any REST-based endpoint.

This means that, when using ALPs, you are not locked to any specific software. You can bring your learning with you to new tools, platforms, and vendors. 

It also means that ALPs-enabled tools know how to deal with data they receive from other ALPs-enabled tools.

In essence, ALPs gives you one language and set of rules that you can use to learn from and teach any and all of the human learning and performance experiences that occur.

What is an ALPs Learning Packet?

ALPs data exists in the form of human- and machine-readable ALPs Learning Packets. 

Therefore, ALPs Learning Packets are the building blocks of ALPs. 

Each Learning Packet is composed of two core components: the content and the meta-data. 

The Content is any group of bits. It could be graphics or text or a video or all three. It doesn’t matter.

The Meta-Data has two core components: the “Providence” of the Content and the “Providence” of the member making the contribution.

ALPs Examples

ALPs deals with sending, retrieving, and analyzing data, but it’s up to the practitioner to use the data in meaningful ways. Let’s consider some of the different ways that you can use ALPs:

  • An adaptive eLearning assessment shows you questions based on your performance in other eLearning modules AND on the job.
  • An L&D team supporting a customer service function analyzes employee learning and performance data to determine the effectiveness of their learning programs — this helps them determine where to focus their efforts for improving their learning offerings.
  • An ALPs-enabled chatbot looks at someone’s past learning data to recommend the next learning resource that the person should access.
  • A company wants to track user-specific learning data without an LMS, so they add ALPs to their website to track which pages people are viewing, which resources they’re accessing, and which eLearning courses they’re launching.
  • A talent manager looks at a dashboard showing which employees would be good fits for which positions based on their learning and on-the-job performance data.
  • A large enterprise uses ALPs to track all of the learning and performance experiences that occur across an organization. They work with data scientists and machine learning tools for deep insights about where to focus their efforts for improvement.
  • An eLearning team uses ALPs to track very specific behaviors in their eLearning offerings, and then they use this data to improve content effectiveness and the user experience.

As you can see, ALPs is often used to track and analyze human learning and performance. ALPs data can help designers create adaptive learning experiences and make data-driven improvements to their learning offerings.

What is an ALPs Activity Provider?

ALPs activity providers (or learning record providers) are anything that can generate and send ALPs Learning Packets. For example, you may have an eLearning course that generates Learning Packets, an app that’s used during face-to-face sessions that generates Learning Packets, and an LMS that generates ALPs Learning Packets. The eLearning course, app, and LMS would all be considered activity providers in the context of ALPs.

How Does ALPs Work?

In technical terms, ALPs “works” by communicating data from an activity provider to an ALN Library via HTTP requests. To break this down further:

  1. An activity provider generates an ALPs Learning Packet, which gets sent to an ALN Library.
  2. The ALN Library receives the ALPs Learning Packet. If the Learning Packet is not compliant with the ALPs specification, then it will get rejected.
  3. A person uses analytics tools in the ALN Library to analyze the data, or the data gets forwarded to other tools or platforms for further processing and analysis.
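Step 1 can be sketched as building an HTTP POST. The "/packets" path is a placeholder for illustration; the real endpoint is defined by the ALPs specification:

```python
import json
import urllib.request

def build_send_request(library_url, packet):
    """Build (but do not send) the HTTP POST that delivers a Learning Packet.

    The '/packets' path is an assumed endpoint; the ALPs spec defines the
    actual resource paths.
    """
    body = json.dumps(packet).encode("utf-8")
    return urllib.request.Request(
        library_url.rstrip("/") + "/packets",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
```

Sending the request (for example with `urllib.request.urlopen`) would then return the Library's accept-or-reject response from step 2.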

Activity providers can also pull ALPs data from the ALN Library. This allows the learning experience to adapt and change based on the user’s previous learning and performance experiences. 

In that case, ALPs works like this:

  1. An activity provider sends a request to the ALN Library to retrieve ALPs Learning Packets.
  2. The ALN Library validates the request, then it provides a stream of Learning Packets that meet the given criteria.
  3. The activity provider programmatically changes the content that the user sees based on the Learning Packets that it receives.

In short, ALPs works by sending data to and retrieving data from an ALN Library. 

Let’s take a closer look at what an ALN Library is.

What is a @lantis Learning Network Library (ALN Library)?

So, all of this talk about ALPs and ALN Libraries, but what is an ALN Library? 

An ALN Library is a database that holds your ALPs Learning Packets. 

Since you need an ALN Library to hold your ALPs data, you cannot use ALPs without an ALN Library. They go hand-in-hand.

In addition to holding your ALPs data, ALN Libraries often include analytics capabilities. They allow you to create dashboards, generate reports, and work with your data to gain insights.

You can use multiple ALN Libraries to hold data from your different systems, but it’s a good idea to have one ALN Library that serves as the “point-of-truth” for all of your learning and performance data. 

By bringing all of this data into one ALN Library, you can look at relationships and create more meaningful, informative dashboards.

When done effectively, this will give you insights into the effectiveness of your learning programs. It will also help you identify where to focus your efforts to improve effectiveness.

ALN Library vs LMS

The @lantis Learning Network Library (ALN Library) is not to be confused with the Learning Management System (LMS). 

ALN Libraries hold your learning and performance data — they receive the ALPs Learning Packets from your ALPs content providers.

LMSs, on the other hand, handle many more tasks. They allow you to gate content behind user accounts, give different levels of access to different users, and use basic reporting capabilities.

While some LMSs have a built-in ALN Library, most do not. If you already have an LMS, you can use an ALN Library from another vendor — this is a very common workflow. Likewise, if you have an LMS that does have an ALN Library included, then you should be able to send ALPs Learning Packets from any activity providers to your built-in ALN Library.

ALPs Objects

Technically speaking, ALPs Learning Packets are JSON objects. 

JSON, which stands for JavaScript Object Notation, is a set of syntax rules that tells you how to structure your data. JSON is not unique to ALPs — it is used by most modern web applications to structure and communicate data.

Each ALPs Learning Packet is a JSON object, but the Learning Packet object is made up of smaller, more specific JSON objects (such as the “Learning Type,” “Content Providence,” and “Resource” objects).
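As an illustration, here is what such a nested Learning Packet object might look like. Every field name below is hypothetical, modeled on the objects described in this section, not taken from the spec:

```python
import json

# A hypothetical Learning Packet, shaped after the objects described in this
# section; the ALPs specification defines the real field names.
packet = {
    "learningType": "Lessons",
    "contentProvidence": {"author": "A. Author", "originalPublisher": "Example Press"},
    "creatorProvidence": {"contributor": "Member 42", "date": "2024-01-01"},
    "resource": {"mediaType": "text/plain", "content": "Photosynthesis converts light to energy."},
    "context": {"parent": "packet-123", "topic": "Engineering/Science"},
}

serialized = json.dumps(packet)  # this JSON string is what travels over HTTP
```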

Since JSON is the “language” of ALPs, this section could get quite technical. The links at the bottom of each section bring you to deep-dive articles that include many code samples (if you’re interested).

Earlier, we mentioned that ALPs Learning Packets hold content and meta-data, but they can hold much more information. Let’s take a closer look.

Learning Type Object

The “Learning Type” object identifies the type of learning: Raw Data, Facts, Beliefs, Arguments, Posts, Lessons, and Classes.

Content Providence Object

The “Content Providence” Object provides Meta-data about the Learning Content.

Creator Providence Object

The “Creator Providence” object provides Meta-data about the Creator of the Content.

Resource Object

The “Resource” object holds the Content.

Context Object

The “context” object situates the Learning Packet within the greater context of the experience as a whole. It can include information about a parent activity, a group of related activities, or even the ALPs profile that it’s adhering to.

Beyond information about activities, it can include context such as who the instructor was for the experience, which team the actor is a part of, and more.

ALPS Overview


What is xAPI?

xAPI refers to the Experience API, which was initially known as the Tin Can API or “Project Tin Can.” It is a technical specification for the Learning and Development (L&D) industry.

The xAPI specification defines how to structure, send, and retrieve learning and performance data. When tools adhere to the xAPI specification, it makes it possible for all of them to communicate with one another. Also, since the data is in the same format, it makes it easier for people to analyze and compare data from different sources.

xAPI Benefits

The main benefit of xAPI is that it allows you to track learning wherever it occurs. It empowers you to bring data from many different learning tools and systems into a single tool for analysis.

This means that tracking is no longer limited to what occurs on a Learning Management System (LMS). You can track learning that occurs in an eLearning course, on a website, in a mobile app, in a flight simulator, in a face-to-face session, or anywhere else.

On top of this, xAPI is very flexible. For example, you can use xAPI data to determine things like:

  • Which resources people access within an eLearning course and how much time they spend viewing each resource.
  • Which actions a pilot takes in a flight simulator (control wheel movements, buttons pressed, etc.), as well as which actions the pilot takes while flying the plane itself.
  • Which notes and responses people type into an online, xAPI-enabled workbook.
  • Which potential clients a sales rep had conversations with (along with their notes about each conversation).
  • And so much more!

If the activity is occurring on a system that, at some point, has access to the internet, then you can likely track any aspect of that activity with xAPI.

Finally, xAPI data is interoperable. Just as you can upload a SCORM package to any LMS, you can send xAPI data to and from any Learning Record Store (LRS). We’ll take a closer look at LRSs in a later section.

This means that, when using xAPI, you are not locked to any specific software. You can bring your data with you to new tools, platforms, and vendors. It also means that xAPI-enabled tools know how to deal with data they receive from other xAPI-enabled tools.

In essence, xAPI gives you one language and set of rules that you can use to track and report on all of the human learning and performance experiences that occur.

What is an xAPI Statement?

xAPI data exists in the form of human- and machine-readable xAPI statements. Therefore, xAPI statements are the building blocks of xAPI. Each statement is composed of three core components: an actor, verb, and object. For example:

  • Devlin (actor) read (verb) xAPI Article (object).
  • Yeo (actor) approved (verb) Project 1 (object).
  • Team A (actor) selected (verb) Choice B (object).
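The first bullet above, expressed as a minimal xAPI statement. The actor/verb/object structure follows the xAPI spec; the verb IRI and email address are illustrative placeholders:

```python
# "Devlin (actor) read (verb) xAPI Article (object)" as an xAPI statement.
# The verb IRI and mbox address below are illustrative, not canonical.
statement = {
    "actor": {"name": "Devlin", "mbox": "mailto:devlin@example.com"},
    "verb": {
        "id": "http://example.com/xapi/verbs/read",
        "display": {"en-US": "read"},
    },
    "object": {
        "id": "https://www.devlinpeck.com/xapi-article",
        "definition": {"name": {"en-US": "xAPI Article"}},
    },
}
```

Richer statements add optional properties such as `result`, `context`, and `timestamp` on top of these three required components.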

The actor, verb, and object components are required to send an xAPI statement, but you can add much more detail as needed. For example, more detailed xAPI statements can tell you the following information:

  • Devlin spent 2 minutes and 47 seconds reading the xAPI article on www.devlinpeck.com.
  • Yeo commented, “Great article, but I’m having trouble with the statements section” on Devlin’s xAPI article.
  • Team A selected Choice B on the seventh question in the CPR simulation. This is the correct answer and their current score is 100%.

We’ll cover xAPI statements in more depth in the xAPI Objects section later in this article. For now, it’s important to know that xAPI data is sent and received in the form of xAPI statements. These statements can hold specific, flexible data about any human experience.

xAPI Examples

xAPI deals with sending, retrieving, and analyzing data, but it’s up to the practitioner to use the data in meaningful ways. Let’s consider some of the different ways that you can use xAPI:

  • An adaptive eLearning assessment shows you questions based on your performance in other eLearning modules AND on the job.
  • An L&D team supporting a customer service function analyzes employee learning and performance data to determine the effectiveness of their learning programs — this helps them determine where to focus their efforts for improving their learning offerings.
  • An xAPI-enabled chatbot looks at someone’s past learning data to recommend the next learning resource that the person should access.
  • A company wants to track user-specific learning data without an LMS, so they add xAPI to their website to track which pages people are viewing, which resources they’re accessing, and which eLearning courses they’re launching.
  • A talent manager looks at a dashboard showing which employees would be good fits for which positions based on their learning and on-the-job performance data.
  • A large enterprise uses xAPI to track all of the learning and performance experiences that occur across an organization. They work with data scientists and machine learning tools for deep insights about where to focus their efforts for improvement.
  • An eLearning team uses xAPI to track very specific behaviors in their eLearning offerings, and then they use this data to improve content effectiveness and the user experience.

As you can see, xAPI is often used to track and analyze human learning and performance. xAPI data can help designers create adaptive learning experiences and make data-driven improvements to their learning offerings.

What is an xAPI Activity Provider?

xAPI activity providers (or learning record providers) are anything that can generate and send xAPI statements. For example, you may have an eLearning course that generates statements, an app that’s used during face-to-face sessions that generates statements, and an LMS that generates xAPI statements. The eLearning course, app, and LMS would all be considered activity providers in the context of xAPI.

How Does xAPI Work?

In technical terms, xAPI “works” by communicating data from an activity provider to an LRS via HTTP requests. To break this down further:

  1. An activity provider generates an xAPI statement, which gets sent to an LRS.
  2. The LRS receives the xAPI statement. If the statement is not compliant with the xAPI specification, then it will get rejected.
  3. A person uses analytics tools in the LRS to analyze the data, or the data gets forwarded to other tools or platforms for further processing and analysis.
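Step 1 can be sketched as building the POST request. The "/statements" path and the X-Experience-API-Version header are standard xAPI; the endpoint URL and credentials are placeholders:

```python
import json
import urllib.request

def build_lrs_post(lrs_endpoint, statement, auth_header):
    """Build the HTTP POST that sends one xAPI statement to an LRS.

    xAPI requires the X-Experience-API-Version header on every request;
    /statements is the standard statement resource path.
    """
    return urllib.request.Request(
        lrs_endpoint.rstrip("/") + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": auth_header,
        },
    )
```

A non-conformant statement posted this way is what the LRS rejects in step 2, typically with an HTTP 400 response.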

Activity providers can also pull xAPI data from the LRS. This allows the learning experience to adapt and change based on the user’s previous learning and performance experiences. In that case, xAPI works like this:

  1. An activity provider sends a request to the LRS to retrieve xAPI statements.
  2. The LRS validates the request, then it provides a stream of statements that meet the given criteria.
  3. The activity provider programmatically changes the content that the user sees based on the statements that it receives.
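The retrieval request in step 1 can be sketched as building a query URL. `agent`, `verb`, `since`, and `limit` are standard xAPI statement query parameters; the endpoint URL is a placeholder:

```python
import json
from urllib.parse import urlencode

def build_statements_query(lrs_endpoint, agent_mbox=None, verb=None, since=None, limit=0):
    """Build the GET URL for retrieving statements that meet given criteria.

    agent, verb, since, and limit are standard xAPI statement query
    parameters; the agent filter is passed as a JSON-encoded object.
    """
    params = {}
    if agent_mbox:
        params["agent"] = json.dumps({"mbox": agent_mbox})
    if verb:
        params["verb"] = verb
    if since:
        params["since"] = since  # ISO 8601 timestamp
    if limit:
        params["limit"] = str(limit)
    return lrs_endpoint.rstrip("/") + "/statements?" + urlencode(params)
```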

In short, xAPI works by sending data to and retrieving data from an LRS. Let’s take a closer look at what an LRS is.

What is a Learning Record Store (LRS)?

So, all of this talk about xAPI and LRSs, but what is an LRS? An LRS is a database that holds your xAPI statements. Since you need an LRS to hold your xAPI data, you cannot use xAPI without an LRS. They go hand-in-hand.

In addition to holding your xAPI data, LRSs often include analytics capabilities. They allow you to create dashboards, generate reports, and work with your data to gain insights.

You can use multiple LRSs to hold data from your different systems, but it’s a good idea to have one LRS that serves as the “point-of-truth” for all of your learning and performance data. By bringing all of this data into one LRS, you can look at relationships and create more meaningful, informative dashboards.

When done effectively, this will give you insights into the effectiveness of your learning programs. It will also help you identify where to focus your efforts to improve effectiveness.

LRS vs LMS

The Learning Record Store (LRS) is not to be confused with the Learning Management System (LMS). As discussed, LRSs hold your learning and performance data — they receive the xAPI statements from your xAPI activity providers.

LMSs, on the other hand, handle many more tasks. They allow you to gate content behind user accounts, give different levels of access to different users, and use basic reporting capabilities.

While some LMSs have built-in LRSs, most do not. If you already have an LMS, you can use an LRS from another vendor — this is a very common workflow. Likewise, if you have an LMS that does have an LRS included, then you should be able to send xAPI statements from any activity providers to your built-in LRS.

Choosing an LRS

There are dozens of LRSs on the market, and while all of them can store your xAPI statements, they come with different feature lists and price points. Since you need an LRS to use xAPI, choosing the right LRS plays a big role in successful xAPI adoption. You can view the full list of LRSs on the xAPI Adopter Registry, but I will cover some of those that I am familiar with here.

Important note: If the LRS is not included in the adopter registry, then it may not be xAPI-conformant. xAPI-conformant LRSs must pass a test suite that includes over 1300 tests, and this ensures that the xAPI statements are interoperable across systems. I do not recommend using an LRS that has not passed this test!

Learning Locker LRS

Learning Locker proclaims that they are the “most installed Learning Record Store” in the world. They have an open-source install that you can either host on a server of your own or install with one click on AWS. This configuration will run you at least $30 a month on an AWS server, but it gives you a fully functioning LRS.

They also have Enterprise offerings where they take care of all the hosting and backups. This gives you access to their xAPI-enabled tools, one of note being a GDPR-compliance tool. With servers based in Europe, this is a great option for companies in the EU who need to comply with GDPR rules and regulations.

Veracity LRS

Veracity LRS is an excellent LRS for people looking to get started with xAPI without breaking the bank. They have a great free tier plan, and unlocking additional storage or record stores is much more affordable than it is with other companies.

Their feature set is also impressive, giving you full control over the dashboards and charts that you create (even on their free tier).

Watershed LRS

Watershed LRS is another popular option. Watershed is an offshoot of Rustici Software, the company that helped develop the xAPI specification and is deeply involved with SCORM, so you know that the people behind this LRS know their stuff.

That being said, you’ll be paying accordingly if you want to unlock any of their analytics and advanced reporting capabilities. They do have a “free forever” tier, but this tier only allows you to store your data…not analyze it or create visualizations to gain insights.

The other great thing about Watershed LRS is that they integrate with Zapier, which in effect allows you to integrate your LRS with thousands of other tools. This allows you to move data to and from the other tools easily using Zapier integrations. (We’ll take a closer look at Zapier in the xAPI Tools section.)

GrassBlade LRS

GrassBlade LRS is another popular choice due to its price point and great integrations. GrassBlade integrates with Zapier too, but you need to request an invite from their support.

The most popular GrassBlade LRS configuration is with WordPress and the LearnDash plug-in. This lets people upload xAPI and SCORM packages to WordPress and generate valuable learning data just as they would on a full-fledged LMS.

With this being said, GrassBlade LRS is capable of handling xAPI statements from any activity provider.

Choosing an LRS Conclusion

The great thing about xAPI is that interoperability is at the forefront of the specification. Because of this, you can send xAPI data from one LRS to another with ease.

Many LRSs have this functionality built-in: you can automatically forward statements in LRS A to LRS B, even if LRS B is from a completely different company. This functionality makes it easy to use different LRSs for their different strengths (as well as switch LRSs completely if you ever choose to do so).

Furthermore, even if your LRS does not have analytics tools (or you’re using a free version and cannot access them), that does not mean that you cannot analyze your data. You always have the option to download your xAPI statements as spreadsheets and analyze them in a tool like Microsoft Excel or Google Sheets.
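A sketch of that spreadsheet export: flattening statements into CSV rows with Python's standard library (the chosen columns are illustrative):

```python
import csv
import io

def statements_to_csv(statements):
    """Flatten xAPI statements into spreadsheet rows (actor, verb, object, timestamp)."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["actor", "verb", "object", "timestamp"])
    for s in statements:
        verb = s.get("verb", {})
        writer.writerow([
            s.get("actor", {}).get("name", ""),
            verb.get("display", {}).get("en-US", verb.get("id", "")),
            s.get("object", {}).get("id", ""),
            s.get("timestamp", ""),
        ])
    return buffer.getvalue()
```

The resulting CSV string can be saved to a file and opened directly in Excel or Google Sheets.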

The History of xAPI

Now that you have a general idea of what xAPI is and how it works, let’s take a look at how xAPI got started.

xAPI vs. SCORM

To appreciate how xAPI came to the scene, it’s important to understand how it compares to SCORM. In 2020, SCORM is still the most popular eLearning interoperability standard. However, it has severe shortcomings.

SCORM deals with how eLearning packages are hosted on an LMS, as well as how the eLearning package and LMS communicate with one another. As the industry has recognized for some time now, most learning occurs outside the LMS — it occurs on the open web, in mobile apps, in learning games, on discussion boards, and more.

Also, even though SCORM is intended to track eLearning on an LMS, it does not give you much information. It can only track basic completion and quiz score data. (See this slide-level analytics article to learn more about the detailed eLearning activity that you can track with xAPI.)

In other words, SCORM allows eLearning courses to be deployed on multiple LMSs, but it doesn’t let you obtain specific, nuanced data from a wide array of learning and performance experiences. This is where xAPI comes in.

Due to SCORM’s inability to track these common types of learning experiences, the industry began asking for something more.

xAPI and ADL

After seeing the shortcomings of SCORM and recognizing the need for something more, the Advanced Distributed Learning (ADL) Initiative began asking the community for input on where to go next. The ADL is the same program that gave rise to SCORM, and this interest in replacing it became serious in 2010.

Soon after, the ADL awarded Rustici Software a contract to ideate the next-generation data specification for the industry.

Together, the ADL, Rustici Software, and the rest of the L&D industry worked on Project Tin Can, which is what xAPI was called before it reached its launch-ready version 1.0. The xAPI we know and love today launched officially in 2013.

xAPI Adoption

Since its release in 2013, xAPI adoption has slowly increased. Adoption usually occurs when teams and organizations find that their learning data is not meeting their information needs. When this is the case, xAPI is often the go-to solution.

Let’s consider the numbers. The most recent data is from a 2019 Learning Guild Report: The State of xAPI Adoption. This report shows that:

  • 20.7% of respondents do not know what xAPI is
  • 47.6% of respondents are interested in xAPI but have not yet used it at their organizations
  • 28.3% of respondents have used xAPI in some capacity at their organizations
  • 3.4% of respondents have decided not to use xAPI at their organizations

As we can see, the majority of participants have yet to use xAPI. However, based on these statistics and the constant buzz about xAPI in the industry, it seems that we are reaching a tipping point.

As more people learn about and begin using xAPI, the profession will likely enter a more data-driven, results-oriented era.

Barriers to Adoption

Even though xAPI brings countless benefits in terms of tracking and reporting, there are still barriers that hold organizations back from adopting it.

Lack of Awareness

Based on the survey data, it’s clear that one of the largest barriers to xAPI adoption is that people don’t know about xAPI. As more people post their xAPI case studies and share xAPI resources, we would expect this awareness to rise.

Of the people who do know that xAPI exists, many do not know exactly how to get started with it at their organizations. This article is intended to help with that: by learning these core xAPI concepts, you should be in a much better position to move forward with implementation.

LRS Price

Since you can’t use xAPI without an LRS, choosing and purchasing an LRS can be an overwhelming step in and of itself. When organizations see large LRS price tags, they sometimes shy away from the whole endeavor.

However, as discussed, many LRSs have generous free plans (and some LRSs have relatively cheap paid plans).

Technical Skill Set Required

There’s no doubt that implementing xAPI successfully today requires a technical skill set. You need someone who knows how to write code to collect the xAPI data that you need, and you need someone who can draw insights and actionable conclusions from the data.

Fortunately, there are a host of free online resources to help with this. Also, IT departments are often involved with large xAPI initiatives, and consultants like myself are available to help out with the initial and ongoing technical implementation.

xAPI’s Flexibility

Ironically, one of the barriers holding back xAPI adoption is one of its features: it’s “too” flexible. Since there is no clear road map about exactly how to implement xAPI and which statements to send when, it’s up to the organization to define how they will use it meaningfully.

For people who are unfamiliar with the ins and outs of the spec (or without a big-picture view of the data that they would like to collect), this can be overwhelming.

This is where xAPI profiles come in!

xAPI Profiles

xAPI Profiles define how xAPI should be used within a specific organization or industry. Profiles, previously known as recipes, include guidelines about how xAPI will be implemented in an organization or industry’s unique context.

Technically speaking, profiles contain concepts, statement templates, and patterns — this tells people how to define their activities, which activities are related, and a whole lot more. You can learn more about xAPI profiles in this article by Yet Analytics.

This additional structure that profiles provide not only increases interoperability (making it easier to analyze data from different tools or organizations), but it also provides a clear set of rules for how to implement xAPI for a given use case.

Organizations may create their own xAPI profiles if they are doing a large-scale rollout. In addition to this, they can draw on public xAPI profiles. The public xAPI profiles include, but are not limited to:

  • Video Profile – this profile defines how you should collect data from people watching and interacting with video content.
  • Serious Games Profile – this profile defines how you should collect data from learning games.
  • Open Badges Profile – this profile defines how you should use xAPI to work with open badges.

So, for example, if you are implementing xAPI at your organization, you may use the video profile to send statements from videos, the open badge profile to send statements relevant to badging, and an organization-specific profile for everything else.

With all of this being said, you do not have to use a profile to use xAPI. You can begin conducting xAPI experiments at your organization without an xAPI profile, but you should carefully consider the statements that you will use to track human learning and performance.

xAPI and cmi5

Perhaps the most notable and “game-changing” xAPI profile is cmi5. As we mentioned earlier, part of the reason for the relatively slow uptake of xAPI is due to its flexibility. Also, when people do get started with xAPI, it is usually in the context of eLearning. Cmi5 is perfect for this use case.

Cmi5 stands for “computer managed instruction,” and it defines how LMSs should communicate with xAPI eLearning content. Whereas xAPI as a whole is too broad and flexible to replace SCORM, cmi5 as an xAPI profile is exactly what’s replacing SCORM. It’s also often referred to as “xAPI with rules.”

Cmi5 outlines how ten specific xAPI verbs should be used to define human activity on the LMS and in eLearning modules. Furthermore, it defines how the LMS and eLearning must communicate with one another, and this leads to some amazing new possibilities.

Due to these benefits and the straightforward use case for xAPI, the US Department of Defense is in the process of adopting cmi5. Once the DoD’s transition from SCORM to cmi5 is official, we expect industry adoption to increase rapidly (especially since this is exactly what happened with SCORM). You can read more about cmi5 adoption here.

There are currently a few cmi5-enabled LMSs, such as Talent LMS, RISC Inc. LMS, and Brainier LMS. When more LMSs are cmi5-compliant, having an LRS will be the norm. This will make the barrier to working with other xAPI data much lower.

xAPI Objects

Technically speaking, xAPI statements are JSON objects. JSON, which stands for JavaScript Object Notation, is a set of syntax rules that tells you how to structure your data. JSON is not unique to xAPI — it is used by most modern web applications to structure and communicate data.

Each xAPI statement is a JSON object, but the statement object is made up of smaller, more specific JSON objects (such as “actor,” “verb,” and “object” objects).

Since JSON is the “language” of xAPI and this section could get quite technical, I am going to refrain from including any code. The links at the bottom of each section bring you to deep dive articles that include many code samples (if you’re interested).

So, earlier, we mentioned that xAPI statements can hold information about an actor, verb, and object, but they can also hold much more information. Let’s take a closer look.
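Let me make one exception to my no-code rule here, because seeing a whole statement at once makes the parts below much easier to follow. This is a minimal sketch only: the names, email addresses, and example.com URLs are made-up placeholders, and just the verb identifier is a real one from the ADL vocabulary.

```javascript
// A minimal xAPI statement: "Sally completed the Safety Course."
// All names, email addresses, and example.com IDs are hypothetical.
const statement = {
  actor: {
    name: "Sally Glider",
    mbox: "mailto:sally@example.com" // identifies the actor
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed", // unique verb identifier
    display: { "en-US": "completed" }               // human-readable name
  },
  object: {
    id: "http://example.com/activities/safety-course", // activity identifier
    definition: {
      name: { "en-US": "Safety Course" },
      description: { "en-US": "An introductory safety course." }
    }
  },
  result: {
    success: true,
    score: { scaled: 0.95 }, // 95%
    duration: "PT10M"        // ISO 8601 duration: 10 minutes
  },
  context: {
    instructor: { name: "Pat Jones", mbox: "mailto:pat@example.com" },
    contextActivities: {
      parent: [{ id: "http://example.com/activities/onboarding-program" }]
    },
    extensions: {
      // Extension keys must be IRIs; this one is made up.
      "http://example.com/extensions/browser": "Firefox"
    }
  }
};

console.log(JSON.stringify(statement, null, 2));
```

Each of the sections below (actor, verb, object, result, context, and extensions) describes one of these nested JSON objects.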

Actor Object

The “actor” object tells you who is performing the action in the xAPI statement. This object includes the actor’s name, as well as either an email address or account to identify them. It’s also important to note that the actor can be a single agent or a group.

You can learn more about the “actor” object in the xapi.com actor deep dive.

Verb Object

The “verb” object tells you which action the actor is performing. It includes the verb’s name, as well as a unique identifier (usually in the form of a URL) to differentiate the verb from any other verbs. This ensures that the meaning of the verb is the same every time that it’s used.

It’s best practice to pull the verb from an existing registry or xAPI profile. The verb’s unique identifier should also resolve to a public URL so that people can visit it to learn more about how the verb is used.

You can learn more about the “verb” object in the xapi.com verb deep dive.

Object Object

The “object” object tells you which object the actor is performing the action on. The object can be an activity, another actor, or even another statement.

The most common type of object is an activity. This is where you provide information about the activity’s type, unique identifier, name, and description.

You can learn more about the “object” object in the xapi.com object deep dive.

Result Object

The “result” object holds important information when it comes to quizzes and assessments. It can hold information about whether the person completed the activity successfully, what their current score is, what the minimum and maximum scores are, and what their responses are to each question.

The “result” object can also hold duration information, which is one of my personal favorites. You can use the duration property to see how long someone spends on a given question, slide, resource, video, and more.

You can learn more about the “result” object in the xapi.com result deep dive.

Context Object

The “context” object situates the statement within the greater context of the experience as a whole. It can include information about a parent activity, a group of related activities, or even the xAPI profile that it’s adhering to.

Beyond information about activities, it can include context such as who the instructor was for the experience, which team the actor is a part of, and more.

You can learn more about the “context” object in the xapi.com context deep dive.

xAPI Extensions

You can add “extension” objects to several different parts of an xAPI statement. These objects let you add whatever data you’d like, which ensures that xAPI can work for use cases beyond those foreseen by its creators.

This demonstrates just how flexible xAPI is: if your data doesn’t fit within one of the existing objects or properties, then you can define your own.

You can learn more about the “extension” object in the xapi.com extension deep dive.

Additional xAPI Statement Information

When your xAPI statement is received by the LRS, it will include some additional information with it, such as the time when the statement was generated, the time when the statement was stored, and authority information about the activity provider that generated the statement.

To learn more about xAPI statements and see code examples, check out my How to Write an xAPI Statement from Scratch tutorial or the xAPI Statements 101 post on xapi.com.

xAPI Tools

There are many tools on the market, both free and paid, that will make it easier for you to send and work with xAPI data. In this section, we’ll explore some of the most popular xAPI-enabled eLearning authoring programs, LMSs, and supporting tools.

eLearning Authoring Tools

Most people and organizations get started with xAPI by sending statements from their eLearning courses. This is the natural starting place because most eLearning tools can publish xAPI output with the click of a button. However, the degree of control that you have over which xAPI statements are generated varies wildly by tool. You can view this detailed authoring tool xAPI breakdown from 2018, but we’ll explore a few of the most common options here.

Articulate Storyline

You can publish an Articulate Storyline course as “Tin Can” output and host it on any xAPI-enabled LMS. Once this is done, user activity in the course will generate a hefty stream of pre-determined statements.

For example, the Storyline course will send xAPI statements to the LRS every time a user:

  • Views a slide
  • Answers a built-in question type
  • Starts the course
  • Finishes the course

It’s important to note that you do not have any control over this data stream: which statements get sent is determined by the Articulate developers. Since a statement gets generated every time someone views a slide, the volume of statements produced by the default output can quickly bloat your LRS storage.

On top of this, you have no control over the verbs that are used by the xAPI statements. If you are generating statements from many different tools, for example, then you will have a hard time keeping track of which verbs refer to which activities.

You can resolve these issues by implementing custom xAPI tracking with JavaScript, which is what I do for many of my clients. Custom solutions allow you to generate xAPI statements from the user behavior that you’re most interested in tracking. It also allows you to use the verbs that make the most sense given your xAPI profiles and use cases.

You can learn how to send custom xAPI statements from Articulate Storyline with this guide.
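To give a flavor of what such custom tracking looks like, here is a hedged sketch of a helper that builds a slide-view statement. The verb identifier is ADL’s standard “experienced” verb, but the function name, learner details, and course URL are hypothetical, and in a real course you would read the learner’s identity from the LMS or player at runtime rather than hard-coding it.

```javascript
// Sketch: build a custom "experienced" statement for a slide view.
// The verb IRI is ADL's standard "experienced" verb; the function
// name and all example.com IDs are hypothetical placeholders.
function buildSlideStatement(learnerName, learnerEmail, slideId, slideTitle) {
  return {
    actor: { name: learnerName, mbox: "mailto:" + learnerEmail },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/experienced",
      display: { "en-US": "experienced" }
    },
    object: {
      id: "http://example.com/courses/onboarding/slides/" + slideId,
      definition: { name: { "en-US": slideTitle } }
    },
    timestamp: new Date().toISOString()
  };
}

const stmt = buildSlideStatement(
  "Sally Glider", "sally@example.com", "3-1", "Welcome"
);
// In the course, you would then POST `stmt` to your LRS's
// statements endpoint (or hand it to an xAPI wrapper library).
console.log(stmt.object.id);
```

Because you control the builder, you also control the verbs, which is what keeps your data consistent across tools and profiles.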

Articulate Rise

You can also publish Articulate Rise courses as xAPI packages, but the statements you receive are very limited. Essentially, the xAPI output provides you the same start and completion data that the SCORM output provides.

To get more valuable xAPI data from Articulate Rise, you need to use custom code.

Adobe Captivate 

Adobe Captivate has similar out-of-the-box xAPI reporting to Articulate Storyline. However, Captivate does give you some control over the statements that you want generated: you get to choose whether you want to track each slide view and question response.

While turning this off may save you some storage space in your LRS, you still have little control over which statements are generated, and the data is not much more valuable than what you can already get with SCORM.

Once again, to use xAPI to spec and ensure that your data plays nicely with the data from your other tools, you would want to use JavaScript to do a custom xAPI implementation.

dominKnow One

dominKnow One has by far the best out-of-the-box xAPI reporting capabilities. It has a statement builder that allows you to trigger an xAPI statement from any action, and this is currently the gold standard for what’s possible in an authoring tool without code.

However, the statement builder has one critical shortcoming: your verb choices are limited to a curated verb list. This is okay if you’re just dipping your toes in xAPI, but for serious use cases that require you to adhere to industry and organization-specific profiles, you may need to use different verbs.

If you do need to use verbs that are not on their list, then you will have to add custom JavaScript to the course (just as you do with the other tools).

Unity

While Unity is a popular tool to use with xAPI, it does not support xAPI out of the box. It is up to the developers to add custom code that generates xAPI statements when users perform the actions that you would like to track.

Adding custom code in this scenario allows you to collect xAPI statements from games, VR simulations, AR experiences, and more.

Learning Management Systems

LMSs are another consideration when it comes to xAPI. If you want to host xAPI packages generated by an eLearning authoring tool, then you’re going to need an LMS that’s capable of hosting them.

It’s also important to note here that if you implement custom xAPI with JavaScript, then you can send xAPI statements from SCORM or HTML5 packages (without any need for an xAPI-compliant LMS).

However, if you are relying on an xAPI package generated from an authoring tool, then you should be good to go with most modern LMSs. These LMSs include:

  • Blackboard
  • Docebo
  • LearnDash
  • Litmos
  • Moodle
  • Saba
  • Talent LMS
  • Xapiapps
  • Cornerstone

You can view the full list of xAPI-enabled LMSs here.

Supporting xAPI Tools

Finally, there is a great selection of supporting tools that make it easier to send xAPI statements. Let’s take a look at several of them.

Zapier

Zapier lets you send data from one tool to another, automatically. It integrates natively with thousands of tools, including a couple of LRSs (discussed in the Choosing an LRS section). Using Zapier, you can send xAPI data to and from any tool that has a Zapier integration.

For example, every time a salesperson takes an action in Salesforce, you can fire off an xAPI statement. Likewise, you can use a Landbot.io chatbot that looks at someone’s previous xAPI data so that it knows which learning experience to direct them to next.

Even if your LRS doesn’t have a Zapier integration, you can send xAPI statements using the Zapier Code Action. The only downside of using an LRS without a Zapier integration is that you will not be able to retrieve statements from the LRS and bring that data into other tools via Zapier (like in the chatbot example above).
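Whichever route you take, sending a statement ultimately comes down to an HTTP POST to the LRS’s statements resource with Basic authentication and the version header that the xAPI specification requires. Here is a sketch of the request shape; the endpoint URL and credentials are placeholders you would swap for your own LRS details.

```javascript
// Sketch: the HTTP request an LRS expects when you send a statement.
// The endpoint URL and credentials are hypothetical placeholders.
function buildStatementRequest(endpoint, username, password, statement) {
  return {
    url: endpoint.replace(/\/$/, "") + "/statements",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Basic auth: base64 of "username:password"
      "Authorization": "Basic " +
        Buffer.from(username + ":" + password).toString("base64"),
      // Required by the xAPI specification
      "X-Experience-API-Version": "1.0.3"
    },
    body: JSON.stringify(statement)
  };
}

const req = buildStatementRequest(
  "https://lrs.example.com/xapi/",
  "key", "secret",
  {
    actor: { mbox: "mailto:sally@example.com" },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/completed",
      display: { "en-US": "completed" }
    },
    object: { id: "http://example.com/activities/safety-course" }
  }
);
// In a Zapier Code (Node.js) step you could then pass this to fetch,
// e.g. fetch(req.url, req).
```

This is also essentially what the authoring tools and wrapper libraries do for you behind the scenes.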

xAPI Wrapper

The xAPI JavaScript Wrapper makes it easier to communicate with the LRS using code. The ADL maintains wrappers like this one for a variety of programming languages, and they let you write much simpler code to send and retrieve xAPI statements. You or your developers will likely use these wrappers if you’re dealing with custom xAPI implementations.

xAPI Lab Statement Generator

The xAPI Lab is an xAPI statement generator. It lets you input each element of the xAPI statement via a convenient text input, and then it generates a full xAPI statement that you can send to an LRS.

This is a great tool to use as you learn about the different parts of an xAPI statement and what data they can hold.

xAPI Bookmarklet

The xAPI Bookmarklet lets you send xAPI statements from the webpages that you view. It adds an icon to your browser that, when clicked, sends a statement to the LRS of your choice; the statement will include details about the webpage you visited, the time it was accessed, and more.

YouTube to LRS

xAPI data from videos can provide deep insight about user engagement and content effectiveness. To make getting this data easier, ADL created a YouTube to xAPI tool that makes it possible to automatically send xAPI statements from YouTube videos via YouTube’s iframe API.

Learn More About xAPI

If you would like to learn more about xAPI, then there are some great resources at your disposal. Let’s consider a few of them.

The Full Guide to xAPI and Articulate Storyline

If you’re interested in the technical implementation of xAPI, then you’ll love my Full Guide to xAPI and Storyline. This is where I dive deep into writing your first statement, adding more detail to your statement, and sending statements from Articulate Storyline courses.

I do my best to explain the JavaScript as we introduce it, and there are even tutorials on how to query the LRS to bring xAPI data back into an eLearning course.

Check it out if you want to start sending custom xAPI statements on your own.

xAPI Cohort

TorranceLearning hosts two 12-week xAPI learning cohorts per year. During the cohort, you attend weekly webinars from xAPI experts and participate in hands-on xAPI projects with other attendees.

Xapi.com

Xapi.com has a wealth of information on xAPI. The site includes technical deep dives, beginner articles, case studies, and more. It also features xAPI-enabled prototypes and tools.

If there’s something to know about xAPI, you can probably find it on the xapi.com website.

LinkedIn Learning

Anthony Altieri has a great xAPI course on LinkedIn Learning. Similar to my Full Guide to xAPI and Storyline, this course gets technical. It’s intended for people who are going to implement xAPI by writing code themselves.

xAPI User Group

The Australian xAPI User Group is a community of L&D practitioners that contributes to the adoption of xAPI. They share information and help with xAPI implementation, so it’s a great community to get involved with if you’re interested in xAPI (especially if you’re in Australia).

Conclusion

The xAPI specification brings L&D into the data-rich world that other professions have been immersed in for some years now. xAPI data enables organizations to use a data-driven approach to their learning design, and it can help them focus their efforts on the tasks that produce the largest impact.

If you need a hand implementing xAPI at your organization for the first time or resolving a tough technical xAPI and eLearning task, then please contact me for a free consultation.

ALPS Data Levels

The Learning Community Communication Guide

Purpose of the Learning Community Communication Guide

The key to effective learning is using a structured communication framework to share information between individuals within a Learning Community and between Learning Communities.

This Guide contains the detailed specification of Learning Community Communication.

This definition consists of Learning Community Communication’s guides, roles, events, artifacts, and the rules that bind them together.

Definition of Learning Community Communication

Learning Community Communication (n): The communication framework used by individuals within a Learning Community and between learning communities.

The Learning Community Communication Framework provides the structure for effective communication by breaking communication interactions down into their basic components. This allows a learning community to effectively and creatively deliver the highest possible educational value.

Learning Community Communication is:

Lightweight
Simple to understand
Easy to master

The Learning Community Communication framework consists of Learning Community Communication Cells and their associated roles, events, artifacts, and rules.

Each component within the framework serves a specific purpose and is essential to Learning Community Communication’s success and usage.

The rules of Learning Community Communication bind together the events, roles, and artifacts, governing the relationships and interaction between them.

Learning Community Communication employs an iterative, incremental approach to optimize predictability and control risk.

Six pillars uphold every implementation of the communication framework: 1) diversity, 2) transparency, 3) inspection, 4) reciprocity, 5) adaptation, and 6) etiquette.

 

1) Diversity

An optimal learning environment requires a diverse communication network, which gives individuals the flexibility to adapt to varied opinions and views.

 

2) Transparency

The key to effective learning is “Transparency.”

The more transparent information is, the better its users can evaluate and act on it.

Transparency requires a common standard so observers share a common understanding of what is being seen.

For example:

  • A common language referring to the process must be shared by all participants; and,
  • Those communicating must share common definitions.

 

3) Inspection

Learning Community Communication users must frequently inspect Learning Community Communication artifacts and progress toward a “Learning Goal” to detect undesirable variances.

Their inspection should not be so frequent that inspection gets in the way of the work. Inspections are most beneficial when diligently performed by skilled inspectors at the point of work.

 

4) Reciprocity

High reciprocal interactions promote the dyadic exchange of information and resources among individuals, and thus ensure an optimal communication environment.

 

5) Adaptation

If an inspector determines that one or more aspects of a process deviate outside acceptable limits, and the resulting product will be unusable, the process or the material being processed must be adjusted. An adjustment must be made as soon as possible to minimize further deviation.

Learning Community Communication prescribes the following formal events for inspection and adaptation, as described in the Learning Community Communication Events section of this document:

  • Learning Planning
  • New Learning Community Communication
  • Learning Goal Review

 

6) Etiquette

High-quality communication requires participants to be polite to one another, even during disagreements. It also encourages participants to make rational arguments supported by logical explanations. High-quality discourse is important to building optimal consensus.

Learning Community Communication Roles

The Learning Community Communication roles consist of a:

  • Question Moderator
  • Vetted Contributor
  • Un-vetted Contributor

 

The Question Moderator

The Question Moderator is responsible for maximizing the value of the question and the work of the individual learning community members. How this is done may vary widely across Learning Community Communication Teams and individuals.

The Question Moderator is the sole person responsible for managing the Question Backlog. Question Backlog management includes:

Clearly expressing Question Backlog items
Ordering the items in the Question Backlog to best achieve goals and mission
Optimizing the value of the work the contributors perform
Ensuring that the Question Backlog is visible, transparent, and clear to all, and shows what the Learning Community Communication Team will work on next
Ensuring the Learning Community understands items in the Question Backlog to the level needed.

The Question Moderator is one person, not a committee. The Question Moderator may represent the
desires of the Learning Community, but those wanting to change a Question Backlog
item’s priority must address the Question Moderator.

For the Question Moderator to succeed, the entire organization must respect his or her decisions. The
Question Moderator’s decisions are visible in the content and ordering of the Question Backlog.

The Learning Community

The Learning Community consists of professionals who do the work of delivering a potentially
releasable Increment of “Done” product at the end of each Sprint. Only members of the
Learning Community create the Increment.

Learning Communities are structured and empowered by the organization to organize and
manage their own work. The resulting synergy optimizes the Learning Community’s overall
efficiency and effectiveness.

Learning Communities have the following characteristics:

  • They are self-organizing. No one (not even the Learning Community Communication Master) tells the Learning Community how to turn the Question Backlog into Increments of potentially releasable functionality;
  • Learning Communities are cross-functional, with all of the skills as a team necessary to create an Increment;
  • Learning Community Communication recognizes no titles for Learning Community members other than Contributor, regardless of the work being performed by the person; there are no exceptions to this rule;
  • Learning Community Communication recognizes no sub-teams in the Learning Community, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this
    rule; and,
  • Individual Learning Community members may have specialized skills and areas of focus, but accountability belongs to the Learning Community as a whole.

Learning Community Size

Optimal Learning Community size is small enough to remain nimble and large enough to complete significant work within a Sprint. Fewer than three Learning Community members decreases interaction and results in smaller productivity gains. Smaller Learning Communities may encounter skill constraints during the Sprint, causing the Learning Community to be unable to deliver a potentially releasable Increment. Having more than nine members requires too much coordination. Large Learning Communities generate too much complexity for an empirical process to manage. The Question Moderator and Learning Community Communication Master roles are not included in this count unless they are also executing the work of the Sprint Backlog.

 

The Learning Community Communication Master

The Learning Community Communication Master is responsible for ensuring Learning Community Communication is understood and enacted. Learning Community Communication Masters
do this by ensuring that the Learning Community Communication Team adheres to Learning Community Communication theory, practices, and rules.

The Learning Community Communication Master is a servant-leader for the Learning Community Communication Team. The Learning Community Communication Master helps those
outside the Learning Community Communication Team understand which of their interactions with the Learning Community Communication Team are helpful
and which aren’t. The Learning Community Communication Master helps everyone change these interactions to maximize the
value created by the Learning Community Communication Team.

Learning Community Communication Master Service to the Question Moderator

The Learning Community Communication Master serves the Question Moderator in several ways, including:

  • Finding techniques for effective Question Backlog management;
  • Helping the Learning Community Communication Team understand the need for clear and concise Question Backlog items;
  • Understanding product planning in an empirical environment;
  • Ensuring the Question Moderator knows how to arrange the Question Backlog to maximize value;
  • Understanding and practicing agility; and,
  • Facilitating Learning Community Communication events as requested or needed.

Learning Community Communication Master Service to the Learning Community

The Learning Community Communication Master serves the Learning Community in several ways, including:

  • Coaching the Learning Community in self-organization and cross-functionality;
  • Helping the Learning Community to create high-value products;
  • Removing impediments to the Learning Community’s progress;
  • Facilitating Learning Community Communication events as requested or needed; and,
  • Coaching the Learning Community in organizational environments in which Learning Community Communication is not yet fully adopted and understood.

Learning Community Communication Master Service to the Organization

The Learning Community Communication Master serves the organization in several ways, including:

  • Leading and coaching the organization in its Learning Community Communication adoption;
  • Planning Learning Community Communication implementations within the organization;
  • Helping employees and stakeholders understand and enact Learning Community Communication and empirical product development;
  • Causing change that increases the productivity of the Learning Community Communication Team; and,
  • Working with other Learning Community Communication Masters to increase the effectiveness of the application of Learning Community Communication in the organization.

Learning Community Communication Events

Prescribed events are used in Learning Community Communication to create regularity and to minimize the need for meetings
not defined in Learning Community Communication. All events are time-boxed events, such that every event has a maximum
duration. Once a Sprint begins, its duration is fixed and cannot be shortened or lengthened. The
remaining events may end whenever the purpose of the event is achieved, ensuring an
appropriate amount of time is spent without allowing waste in the process.

Other than the Sprint itself, which is a container for all other events, each event in Learning Community Communication is a
formal opportunity to inspect and adapt something. These events are specifically designed to
enable critical transparency and inspection. Failure to include any of these events results in
reduced transparency and is a lost opportunity to inspect and adapt.

The Sprint

The heart of Learning Community Communication is a Sprint, a time-box of one month or less during which a “Done”, useable,
and potentially releasable product Increment is created. Sprints best have consistent durations
throughout a development effort. A new Sprint starts immediately after the conclusion of the
previous Sprint.

Sprints contain and consist of the Sprint Planning, Daily Learning Community Communications, the development work, the
Sprint Review, and the Sprint Retrospective.

During the Sprint:

  • No changes are made that would endanger the Sprint Goal;
  • Quality goals do not decrease; and,
  • Scope may be clarified and re-negotiated between the Product Owner and Development Team as more is learned.

Each Sprint may be considered a project with no more than a one-month horizon. Like projects,
Sprints are used to accomplish something. Each Sprint has a definition of what is to be built, a
design and flexible plan that will guide building it, the work, and the resultant product.

Sprints are limited to one calendar month. When a Sprint’s horizon is too long, the definition of what is being built may change, complexity may rise, and risk may increase. Sprints enable
predictability by ensuring inspection and adaptation of progress toward a Sprint Goal at least
every calendar month. Sprints also limit risk to one calendar month of cost.

Cancelling a Sprint

A Sprint can be cancelled before the Sprint time-box is over. Only the Product Owner has the
authority to cancel the Sprint, although he or she may do so under influence from the
stakeholders, the Development Team, or the Learning Community Communication Master.

A Sprint would be cancelled if the Sprint Goal becomes obsolete. This might occur if the
company changes direction or if market or technology conditions change. In general, a Sprint
should be cancelled if it no longer makes sense given the circumstances. But, due to the short
duration of Sprints, cancellation rarely makes sense.

When a Sprint is cancelled, any completed and “Done” Product Backlog items are reviewed. If
part of the work is potentially releasable, the Product Owner typically accepts it. All incomplete
Product Backlog Items are re-estimated and put back on the Product Backlog. The work done on
them depreciates quickly and must be frequently re-estimated.

Sprint cancellations consume resources, since everyone has to regroup in another Sprint
Planning to start another Sprint. Sprint cancellations are often traumatic to the Learning Community Communication Team,
and are very uncommon.

Sprint Planning

The work to be performed in the Sprint is planned at the Sprint Planning. This plan is created by
the collaborative work of the entire Learning Community Communication Team.

Sprint Planning is time-boxed to a maximum of eight hours for a one-month Sprint. For shorter
Sprints, the event is usually shorter. The Learning Community Communication Master ensures that the event takes place and
that attendants understand its purpose. The Learning Community Communication Master teaches the Learning Community Communication Team to keep it
within the time-box.

Sprint Planning answers the following:

  • What can be delivered in the Increment resulting from the upcoming Sprint?
  • How will the work needed to deliver the Increment be achieved?

Topic One: What can be done this Sprint?

The Development Team works to forecast the functionality that will be developed during the
Sprint. The Product Owner discusses the objective that the Sprint should achieve and the
Product Backlog items that, if completed in the Sprint, would achieve the Sprint Goal. The entire
Learning Community Communication Team collaborates on understanding the work of the Sprint.

The input to this meeting is the Product Backlog, the latest product Increment, projected
capacity of the Development Team during the Sprint, and past performance of the Development
Team. The number of items selected from the Product Backlog for the Sprint is solely up to the
Development Team. Only the Development Team can assess what it can accomplish over the
upcoming Sprint.

After the Development Team forecasts the Product Backlog items it will deliver in the Sprint, the
Learning Community Communication Team crafts a Sprint Goal. The Sprint Goal is an objective that will be met within the
Sprint through the implementation of the Product Backlog, and it provides guidance to the
Development Team on why it is building the Increment.

Topic Two: How will the chosen work get done?

Having set the Sprint Goal and selected the Product Backlog items for the Sprint, the
Development Team decides how it will build this functionality into a “Done” product Increment
during the Sprint. The Product Backlog items selected for this Sprint plus the plan for delivering
them is called the Sprint Backlog.

The Development Team usually starts by designing the system and the work needed to convert
the Product Backlog into a working product Increment. Work may be of varying size, or
estimated effort. However, enough work is planned during Sprint Planning for the Development
Team to forecast what it believes it can do in the upcoming Sprint. Work planned for the first
days of the Sprint by the Development Team is decomposed by the end of this meeting, often to
units of one day or less. The Development Team self-organizes to undertake the work in the
Sprint Backlog, both during Sprint Planning and as needed throughout the Sprint.

The Product Owner can help to clarify the selected Product Backlog items and make trade-offs.
If the Development Team determines it has too much or too little work, it may renegotiate the
selected Product Backlog items with the Product Owner. The Development Team may also invite
other people to attend in order to provide technical or domain advice.

By the end of the Sprint Planning, the Development Team should be able to explain to the
Product Owner and Learning Community Communication Master how it intends to work as a self-organizing team to
accomplish the Sprint Goal and create the anticipated Increment.

Sprint Goal

The Sprint Goal is an objective set for the Sprint that can be met through the implementation of
Product Backlog. It provides guidance to the Development Team on why it is building the
Increment. It is created during the Sprint Planning meeting. The Sprint Goal gives the
Development Team some flexibility regarding the functionality implemented within the Sprint.
If the selected Product Backlog items deliver one coherent function, that function can be the Sprint Goal. The Sprint Goal can also be any other coherence that causes the Development Team to work together rather than on separate initiatives.

As the Development Team works, it keeps the Sprint Goal in mind. In order to satisfy the Sprint
Goal, it implements the functionality and technology. If the work turns out to be different than
the Development Team expected, they collaborate with the Product Owner to negotiate the
scope of Sprint Backlog within the Sprint.

Daily Learning Community Communication

The Daily Learning Community Communication is a 15-minute time-boxed event for the Development Team to synchronize
activities and create a plan for the next 24 hours. This is done by inspecting the work since the
last Daily Learning Community Communication and forecasting the work that could be done before the next one. The Daily
Learning Community Communication is held at the same time and place each day to reduce complexity. During the meeting,
the Development Team members explain:

  • What did I do yesterday that helped the Development Team meet the Sprint Goal?
  • What will I do today to help the Development Team meet the Sprint Goal?
  • Do I see any impediment that prevents me or the Development Team from meeting the Sprint Goal?

The Development Team uses the Daily Learning Community Communication to inspect progress toward the Sprint Goal and to inspect how progress is trending toward completing the work in the Sprint Backlog. The Daily Learning Community Communication optimizes the probability that the Development Team will meet the Sprint Goal. Every day, the Development Team should understand how it intends to work together as a self-organizing team to accomplish the Sprint Goal and create the anticipated Increment by the end of the Sprint. The Development Team or team members often meet immediately after the Daily Learning Community Communication for detailed discussions, or to adapt, or replan, the rest of the Sprint’s work.

The Learning Community Communication Master ensures that the Development Team has the meeting, but the Development Team is responsible for conducting the Daily Learning Community Communication. The Learning Community Communication Master teaches the Development Team to keep the Daily Learning Community Communication within the 15-minute time-box.

The Learning Community Communication Master enforces the rule that only Development Team members participate in the Daily Learning Community Communication.

Daily Learning Community Communications improve communications, eliminate other meetings, identify impediments to development for removal, highlight and promote quick decision-making, and improve the Development Team’s level of knowledge. This is a key inspect and adapt meeting.

Sprint Review

A Sprint Review is held at the end of the Sprint to inspect the Increment and adapt the Product Backlog if needed. During the Sprint Review, the Learning Community Communication Team and stakeholders collaborate about what was done in the Sprint. Based on that and any changes to the Product Backlog during the Sprint, attendees collaborate on the next things that could be done to optimize value. This is an informal meeting, not a status meeting, and the presentation of the Increment is intended to elicit feedback and foster collaboration.

This is a four-hour time-boxed meeting for one-month Sprints. For shorter Sprints, the event is usually shorter. The Learning Community Communication Master ensures that the event takes place and that attendants understand its purpose. The Learning Community Communication Master teaches all to keep it within the time-box.

The Sprint Review includes the following elements:

  • Attendees include the Learning Community Communication Team and key stakeholders invited by the Product Owner;
  • The Product Owner explains what Product Backlog items have been “Done” and what has not been “Done”;
  • The Development Team discusses what went well during the Sprint, what problems it ran into, and how those problems were solved;
  • The Development Team demonstrates the work that it has “Done” and answers questions about the Increment;
  • The Product Owner discusses the Product Backlog as it stands. He or she projects likely completion dates based on progress to date (if needed);
  • The entire group collaborates on what to do next, so that the Sprint Review provides valuable input to subsequent Sprint Planning;
  • Review of how the marketplace or potential use of the product might have changed what is the most valuable thing to do next; and,
  • Review of the timeline, budget, potential capabilities, and marketplace for the next anticipated release of the product.

The result of the Sprint Review is a revised Product Backlog that defines the probable Product Backlog items for the next Sprint. The Product Backlog may also be adjusted overall to meet new opportunities.

Sprint Retrospective

The Sprint Retrospective is an opportunity for the Learning Community Communication Team to inspect itself and create a plan for improvements to be enacted during the next Sprint.

The Sprint Retrospective occurs after the Sprint Review and prior to the next Sprint Planning. This is a three-hour time-boxed meeting for one-month Sprints. For shorter Sprints, the event is usually shorter. The Learning Community Communication Master ensures that the event takes place and that attendants understand its purpose. The Learning Community Communication Master teaches all to keep it within the time-box. The Learning Community Communication Master participates as a peer team member in the meeting, given his or her accountability for the Learning Community Communication process.

The purpose of the Sprint Retrospective is to:

  • Inspect how the last Sprint went with regards to people, relationships, process, and tools;
  • Identify and order the major items that went well and potential improvements; and,
  • Create a plan for implementing improvements to the way the Learning Community Communication Team does its work.

The Learning Community Communication Master encourages the Learning Community Communication Team to improve, within the Learning Community Communication process framework, its development process and practices to make it more effective and enjoyable for the next Sprint. During each Sprint Retrospective, the Learning Community Communication Team plans ways to increase product quality by adapting the definition of “Done” as appropriate.

By the end of the Sprint Retrospective, the Learning Community Communication Team should have identified improvements that it will implement in the next Sprint. Implementing these improvements in the next Sprint is the adaptation to the inspection of the Learning Community Communication Team itself. Although improvements may be implemented at any time, the Sprint Retrospective provides a formal opportunity to focus on inspection and adaptation.

Learning Community Communication Artifacts

Learning Community Communication’s artifacts represent work or value to provide transparency and opportunities for inspection and adaptation. Artifacts defined by Learning Community Communication are specifically designed to maximize transparency of key information so that everybody has the same understanding of the artifact.

Product Backlog

The Product Backlog is an ordered list of everything that might be needed in the product and is the single source of requirements for any changes to be made to the product. The Product Owner is responsible for the Product Backlog, including its content, availability, and ordering.

A Product Backlog is never complete. The earliest development of it only lays out the initially known and best-understood requirements. The Product Backlog evolves as the product and the environment in which it will be used evolves. The Product Backlog is dynamic; it constantly changes to identify what the product needs to be appropriate, competitive, and useful. As long as a product exists, its Product Backlog also exists.

The Product Backlog lists all features, functions, requirements, enhancements, and fixes that constitute the changes to be made to the product in future releases. Product Backlog items have the attributes of a description, order, estimate and value.
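A minimal sketch of these attributes as a data structure may help. The field names (description, order, estimate, value) come from the text above; the types, the example items, and the `ordered_backlog` helper are illustrative assumptions, not part of any prescribed tooling:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    description: str   # what the item is
    order: int         # position in the backlog; lower means higher priority
    estimate: float    # effort estimate, in whatever unit the team uses
    value: int         # relative business value

def ordered_backlog(items):
    """Return the Product Backlog sorted by its 'order' attribute."""
    return sorted(items, key=lambda item: item.order)

# Hypothetical example items.
backlog = [
    BacklogItem("Export report as PDF", order=2, estimate=5, value=30),
    BacklogItem("User login", order=1, estimate=8, value=80),
]

for item in ordered_backlog(backlog):
    print(item.order, item.description)
```

Since the Product Owner is responsible for ordering, a real tool would restrict who may change the `order` attribute; this sketch only shows the shape of the data.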

As a product is used and gains value, and the marketplace provides feedback, the Product Backlog becomes a larger and more exhaustive list. Requirements never stop changing, so a Product Backlog is a living artifact. Changes in business requirements, market conditions, or technology may cause changes in the Product Backlog.

Multiple Learning Community Communication Teams often work together on the same product. One Product Backlog is used to describe the upcoming work on the product. A Product Backlog attribute that groups items may then be employed.

Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items. During Product Backlog refinement, items are reviewed and revised. The Learning Community Communication Team decides how and when refinement is done. Refinement usually consumes no more than 10% of the capacity of the Development Team. However, Product Backlog items can be updated at any time by the Product Owner or at the Product Owner’s discretion.

Higher ordered Product Backlog items are usually clearer and more detailed than lower ordered ones. More precise estimates are made based on the greater clarity and increased detail; the lower the order, the less detail. Product Backlog items that will occupy the Development Team for the upcoming Sprint are refined so that any one item can reasonably be “Done” within the Sprint time-box. Product Backlog items that can be “Done” by the Development Team within one Sprint are deemed “Ready” for selection in a Sprint Planning. Product Backlog items usually acquire this degree of transparency through the refinement activities described above.

The Development Team is responsible for all estimates. The Product Owner may influence the Development Team by helping it understand and select trade-offs, but the people who will perform the work make the final estimate.

Monitoring Progress Toward a Goal

At any point in time, the total work remaining to reach a goal can be summed. The Product Owner tracks this total work remaining at least every Sprint Review. The Product Owner compares this amount with work remaining at previous Sprint Reviews to assess progress toward completing projected work by the desired time for the goal. This information is made transparent to all stakeholders.
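The arithmetic described here is a simple sum compared across Sprint Reviews. The snapshots below are hypothetical figures used only to illustrate the calculation:

```python
def remaining_work(estimates):
    """Total work remaining toward the goal: the sum of estimates for unfinished items."""
    return sum(estimates)

# Hypothetical remaining estimates recorded at three consecutive Sprint Reviews.
review_snapshots = [
    [8, 5, 13, 3, 8],  # at Sprint Review 1
    [5, 13, 3, 8],     # at Sprint Review 2
    [13, 3],           # at Sprint Review 3
]

totals = [remaining_work(snapshot) for snapshot in review_snapshots]
for sprint, (prev, cur) in enumerate(zip(totals, totals[1:]), start=2):
    print(f"After Sprint {sprint}: {cur} remaining (reduced by {prev - cur})")
```

Comparing consecutive totals shows whether work remaining is trending toward zero by the desired time, which is the assessment the Product Owner makes transparent to stakeholders.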

Various trend-based projective practices have been used to forecast progress, such as burn-downs, burn-ups, and cumulative flows. These have proven useful. However, they do not replace the importance of empiricism. In complex environments, what will happen is unknown. Only what has happened may be used for forward-looking decision-making.

Sprint Backlog

The Sprint Backlog is the set of Product Backlog items selected for the Sprint, plus a plan for delivering the product Increment and realizing the Sprint Goal. The Sprint Backlog is a forecast by the Development Team about what functionality will be in the next Increment and the work needed to deliver that functionality into a “Done” Increment.

The Sprint Backlog makes visible all of the work that the Development Team identifies as necessary to meet the Sprint Goal.

The Sprint Backlog is a plan with enough detail that changes in progress can be understood in the Daily Learning Community Communication. The Development Team modifies the Sprint Backlog throughout the Sprint, and the Sprint Backlog emerges during the Sprint. This emergence occurs as the Development Team works through the plan and learns more about the work needed to achieve the Sprint Goal.

As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team.

Monitoring Sprint Progress

At any point in time in a Sprint, the total work remaining in the Sprint Backlog can be summed. The Development Team tracks this total work remaining at least for every Daily Learning Community Communication to project the likelihood of achieving the Sprint Goal. By tracking the remaining work throughout the Sprint, the Development Team can manage its progress.
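The daily tracking described here can be sketched with a naive linear projection. Everything in this example (the function, the ideal-line comparison, and the figures) is an illustrative assumption; the guide only requires that the Development Team sum and track the remaining work:

```python
def on_track(day, remaining, sprint_days, initial_total):
    """Naive linear check: is remaining work at or below an even burn-down line?"""
    ideal_remaining = initial_total * (1 - day / sprint_days)
    return remaining <= ideal_remaining

initial_total = 40                 # total estimated work at Sprint Planning
sprint_days = 10                   # hypothetical Sprint length in working days
daily_remaining = {3: 30, 5: 18}   # remaining work observed at two Daily meetings

for day, remaining in daily_remaining.items():
    status = "on track" if on_track(day, remaining, sprint_days, initial_total) else "behind"
    print(f"Day {day}: {remaining} remaining -> {status}")
```

A real team would treat such a projection only as input to conversation at the Daily meeting, not as a verdict, since only the Development Team can judge the likelihood of achieving the Sprint Goal.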

Increment

The Increment is the sum of all the Product Backlog items completed during a Sprint and the value of the increments of all previous Sprints. At the end of a Sprint, the new Increment must be “Done,” which means it must be in useable condition and meet the Learning Community Communication Team’s definition of “Done.” It must be in useable condition regardless of whether the Product Owner decides to actually release it.

Artifact Transparency

Learning Community Communication relies on transparency. Decisions to optimize value and control risk are made based on the perceived state of the artifacts. To the extent that transparency is complete, these decisions have a sound basis. To the extent that the artifacts are incompletely transparent, these decisions can be flawed, value may diminish and risk may increase.

The Learning Community Communication Master must work with the Product Owner, Development Team, and other involved parties to understand if the artifacts are completely transparent. There are practices for coping with incomplete transparency; the Learning Community Communication Master must help everyone apply the most appropriate practices in the absence of complete transparency. A Learning Community Communication Master can detect incomplete transparency by inspecting the artifacts, sensing patterns, listening closely to what is being said, and detecting differences between expected and real results.

The Learning Community Communication Master’s job is to work with the Learning Community Communication Team and the organization to increase the transparency of the artifacts. This work usually involves learning, convincing, and change. Transparency doesn’t occur overnight, but is a path.

Definition of “Done”

When a Product Backlog item or an Increment is described as “Done”, everyone must understand what “Done” means. Although this varies significantly per Learning Community Communication Team, members must have a shared understanding of what it means for work to be complete, to ensure transparency. This is the definition of “Done” for the Learning Community Communication Team and is used to assess when work is complete on the product Increment.

The same definition guides the Development Team in knowing how many Product Backlog items it can select during a Sprint Planning. The purpose of each Sprint is to deliver Increments of potentially releasable functionality that adhere to the Learning Community Communication Team’s current definition of “Done.” Development Teams deliver an Increment of product functionality every Sprint. This Increment is useable, so a Product Owner may choose to immediately release it. If the definition of “Done” for an Increment is part of the conventions, standards, or guidelines of the development organization, all Learning Community Communication Teams must follow it as a minimum. If “Done” for an Increment is not a convention of the development organization, the Development Team of the Learning Community Communication Team must define a definition of “Done” appropriate for the product. If there are multiple Learning Community Communication Teams working on the system or product release, the Development Teams on all of the Learning Community Communication Teams must mutually define the definition of “Done.”
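One way to make such a shared definition transparent is to write it down as an explicit checklist and assess each item against it. The criteria below are hypothetical examples, not prescribed by this guide:

```python
# A hypothetical team's definition of "Done", written as an explicit checklist.
DEFINITION_OF_DONE = [
    "code reviewed",
    "tests passing",
    "documentation updated",
]

def is_done(completed_criteria):
    """An item is 'Done' only if every criterion in the shared definition is met."""
    return all(criterion in completed_criteria for criterion in DEFINITION_OF_DONE)

print(is_done({"code reviewed", "tests passing", "documentation updated"}))  # True
print(is_done({"code reviewed", "tests passing"}))                           # False
```

Because the definition is a single shared list rather than per-person judgment, everyone assesses completeness the same way, which is the transparency the text calls for.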

Each Increment is additive to all prior Increments and thoroughly tested, ensuring that all Increments work together.

As Learning Community Communication Teams mature, it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality. Any one product or system should have a definition of “Done” that is a standard for any work done on it.

End Note

Learning Community Communication is free and offered in this Guide. Learning Community Communication’s roles, artifacts, events, and rules are immutable and although implementing only parts of Learning Community Communication is possible, the result is not Learning Community Communication. Learning Community Communication exists only in its entirety and functions well as a container for other techniques, methodologies, and practices.

Acknowledgements

People

Of the thousands of people who have contributed to Learning Community Communication, we should single out those who were instrumental in its first ten years. First there was Jeff Sutherland working with Jeff McKenna, and Ken Schwaber working with Mike Smith and Chris Martin. Many others contributed in the ensuing years and without their help Learning Community Communication would not be refined as it is today.

History

Ken Schwaber and Jeff Sutherland first co-presented Learning Community Communication at the OOPSLA conference in 1995. This presentation essentially documented the learning that Ken and Jeff gained over the previous few years applying Learning Community Communication.

The history of Learning Community Communication is already considered long. To honor the first places where it was tried and refined, we recognize Individual, Inc., Fidelity Investments, and IDX (now GE Medical).

The Learning Community Communication Guide documents Learning Community Communication as developed and sustained for 20-plus years by Jeff Sutherland and Ken Schwaber. Other sources provide you with patterns, processes, and insights that complement the Learning Community Communication framework. These optimize productivity, value, creativity, and pride.

Rules and Processes