As I travel around on the Internet, read a book, have a discussion with family and friends, or watch a movie or TV, I am often presented with new information.  And, when presented with this new information, it is not uncommon for me to have some “epiphany” where I think about how this new information would help me in any number of ways.  These could range from the utterly mundane (I should use less sugar in my coffee) to the potentially career-altering (I need to learn to listen more and talk less).

Unfortunately, most of these epiphanies vanish into the “Digital Fog” almost as soon as I have them.  Far more often than I would like, whatever learning they yield disappears utterly, without a trace.

The problem is that I have no good way of storing these little flashes of insight gleaned from the day’s experiences.  I could capture them all in a list, either digital or paper-based, but a list of random reminders I’d look at only once in a while wouldn’t be terribly helpful.

That’s why I am building a Learning Community.  I don’t like the idea of losing so many insights that could have a positive influence on my life.

As a result, I’ve defined the @lantis LMS to be:

  • Dynamic
  • Context-aware
  • Personalized
  • Goal-focused
  • Rule-based
  • Transparent
  • Proactive
  • Authoritative
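To make those properties a bit more concrete, here is a minimal sketch, purely my own illustration rather than any actual @lantis design, of how a single captured insight might be modeled so that it carries the context, rule, and goal information the list above implies:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Insight:
    """One captured epiphany, stored so the system can resurface it later."""
    text: str                                               # the insight itself
    captured_at: datetime                                   # when it was recorded
    context_tags: List[str] = field(default_factory=list)   # where it applies (context-aware)
    goal: Optional[str] = None                               # the personal goal it serves (goal-focused)
    rule: Optional[str] = None                               # a trigger condition (rule-based)
    source: Optional[str] = None                             # book, conversation, movie, etc.

# Example: the "listen more, talk less" epiphany from earlier
listen_more = Insight(
    text="Listen more and talk less",
    captured_at=datetime.now(),
    context_tags=["meetings", "conversations"],
    goal="Become a better listener",
    rule="Surface this reminder before any scheduled meeting",
    source="A discussion with family and friends",
)
```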

The core of the idea is: learning is deeply personal, and it happens over the course of an ordinary day.  Many of us would be better off with a system for capturing and applying that hard-won learning.

User trust is an issue I see come up regularly in design discussions surrounding analytics tools.  There’s a paradox at the heart of building good data visualizations: in order for the visualization to accomplish anything useful, it needs to reveal something new to the user.  For example, this could mean identifying a previously unknown customer segment, revealing the extent of a trend only previously suspected, or disproving internal conventional wisdom about what customers want and do.

Data and design types tend to think hard about the soundness of the underlying concepts represented in the data—sometimes too hard, leading to a presentation that doesn’t make intuitive sense to the lay user who just has a few minutes to spend with the tool.  And his or her response typically isn’t, “Now let me think carefully about the underlying structure of the mathematical concept presented.”  It’s, “This data must be wrong.”

“The data must be wrong”: no product team wants to hear this, because it’s a conclusion that undercuts everything else that may be of value in the tool.  The user’s reaction to those surprising new insights will be to focus on the quality of the data, not on the business questions at hand.  In other words, much of the value of the analytics is lost.  Sometimes, of course, the data really is wrong.  But it’s all the more frustrating when the data is right—just not on terms users understand.

For example, I once worked on an analytics tool with a feature designed to show which job skills for a given role were typically associated with the highest salaries.  For job seekers, recruiters, and hiring managers, this sounds like a pretty useful and important question, right?  The data team who worked on this tool before me had run into a problem with the display, though: what counts as a skill, and what counts as simply a defining feature of a job?  Is Java a skill that’s part of the job of “Java engineer,” for instance, or is it just a defining feature of being a Java engineer?  The team decided on the latter, and wrote an algorithm to programmatically determine and exclude “defining skills” like these from the list.  The result?  Users looked at the feature and were confused.  Why wasn’t Java on the list of skills?  Once you think about it, the explanation makes sense: Java can’t be associated with earning a higher or lower salary because it’s a defining part of the role.  But users weren’t stopping to think about it that long.  They just assumed the tool was broken.
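The post doesn’t spell out how that algorithm worked, but one plausible heuristic, sketched here purely as an illustration (the function name, threshold, and data shapes are my assumptions, not the team’s actual implementation), is to treat any skill held by nearly everyone in the role as “defining” and drop it before ranking the rest by salary:

```python
from typing import Dict, List

def rank_non_defining_skills(
    skill_frequencies: Dict[str, float],   # skill -> share of profiles in this role listing it
    skill_salaries: Dict[str, float],      # skill -> median salary of role-holders with the skill
    defining_threshold: float = 0.9,       # hypothetical cutoff: "nearly everyone has it"
) -> List[str]:
    """Rank a role's skills by associated salary, excluding 'defining' skills.

    A skill listed by almost every person in the role (Java for a "Java
    engineer") carries no salary signal within that role, so it is filtered
    out before ranking -- which is exactly why users didn't see it.
    """
    candidates = [
        skill for skill, freq in skill_frequencies.items()
        if freq < defining_threshold and skill in skill_salaries
    ]
    return sorted(candidates, key=lambda s: skill_salaries[s], reverse=True)

# Toy data for a "Java engineer" role
freqs = {"Java": 0.98, "Kafka": 0.35, "Kubernetes": 0.40}
salaries = {"Java": 120_000, "Kafka": 140_000, "Kubernetes": 135_000}
print(rank_non_defining_skills(freqs, salaries))  # ['Kafka', 'Kubernetes'] -- no Java
```

With toy data like this, Java disappears from the output exactly as users observed: correct behavior that nonetheless reads as a bug.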

When I joined the project, we learned from this mistake and redesigned the display to present the “typical skills” associated with a job—things you might expect, like a Java engineer knowing Java—and a separate list to show high-earning skills.  That list of “typical skills” wasn’t going to offer much new insight to most users.  But it provided a level set to show “Hey, our software grasps what a Java engineer is in the same way you do.”  It built trust, so that when it came time to dive into data users didn’t already know, they were ready to focus on new insights and implications.
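Here is a rough sketch of how that redesign might look, again with hypothetical names and toy data of my own rather than anything from the real tool: instead of silently dropping the defining skills, they are surfaced as a separate “typical skills” list alongside the salary-ranked list:

```python
from typing import Dict, List, Tuple

def split_skill_lists(
    skill_frequencies: Dict[str, float],
    skill_salaries: Dict[str, float],
    defining_threshold: float = 0.9,
) -> Tuple[List[str], List[str]]:
    """Return (typical_skills, high_earning_skills) for one role.

    Typical skills are the near-universal ones users expect to see; showing
    them builds trust.  High-earning skills are the rest, ranked by the
    salary they are associated with; they carry the new insight.
    """
    typical = [s for s, f in skill_frequencies.items() if f >= defining_threshold]
    rest = [s for s in skill_frequencies
            if s not in typical and s in skill_salaries]
    high_earning = sorted(rest, key=lambda s: skill_salaries[s], reverse=True)
    return typical, high_earning

# Same toy "Java engineer" data as above
freqs = {"Java": 0.98, "Kafka": 0.35, "Kubernetes": 0.40}
salaries = {"Java": 120_000, "Kafka": 140_000, "Kubernetes": 135_000}
typical, high_earning = split_skill_lists(freqs, salaries)
print(typical)       # ['Java'] -- the familiar list that builds trust
print(high_earning)  # ['Kafka', 'Kubernetes'] -- the list that carries new insight
```

The design choice is simply to show the expected information before the surprising information, so the user and the tool agree on the basics first.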

Since then, I’ve made it a point to think about these trust-building features in analytics designs.  All too often, we may want to dismiss these elements as a waste of screen space because they don’t provide new insights.  But they are doing some other, quite important work from an overall user experience standpoint: helping users trust what the data is telling them.