This post is a follow-up to one of my talking points from this week’s keynote at the 2013 Adult Student Recruiting and Retention Conference.
We are fast approaching the top of the hype cycle when discussing learning analytics and big data. The use of data to improve student outcomes is not a new phenomenon but, as in other areas, the promise of what is now possible is staggering. The operative word in that last sentence is “promise.” While analytics are rapidly changing the business world, systematic use of student data in education is still in its infancy. (Note: I recommend reading Michael Feldstein’s recent post A Taxonomy of Adaptive Analytics Strategies for more.)
If you use Amazon to shop, you have likely seen the recommendations made just for you. Many times, these make complete sense, like when I am told that author William Gibson has a new book out that I might want to try. But occasionally I am shown a very odd item. Amazon is, as most of us know by now, mining data across millions of users and billions of transactions to look for patterns. Sometimes, these patterns are not evident from purchasing behavior alone. Some of these recommendations are laughable, and we wonder how Amazon could suggest something so ridiculous. Other times, however, it is eerie when I am shown something relevant that does not connect directly to any of my past shopping history. While sometimes helpful, it is also just a little creepy.
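Amazon’s actual recommendation systems are proprietary, but the basic pattern-mining idea can be sketched with a toy item-to-item similarity calculation. Everything below (the purchase data, the function names) is invented for illustration; real systems work at vastly larger scale with far richer signals:

```python
from math import sqrt

# Toy purchase history: user -> set of items bought (invented data).
purchases = {
    "ana":   {"neuromancer", "pattern_recognition", "snow_crash"},
    "ben":   {"neuromancer", "snow_crash", "coffee_grinder"},
    "carla": {"pattern_recognition", "neuromancer"},
    "dev":   {"coffee_grinder", "mustard"},
}

def cosine_similarity(item_a, item_b):
    """Score two items by how much their sets of buyers overlap."""
    buyers_a = {u for u, items in purchases.items() if item_a in items}
    buyers_b = {u for u, items in purchases.items() if item_b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(user, top_n=2):
    """Rank items the user has not bought by similarity to items they own."""
    owned = purchases[user]
    candidates = {i for items in purchases.values() for i in items} - owned
    scores = {c: sum(cosine_similarity(c, o) for o in owned) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("carla"))  # -> ['snow_crash', 'coffee_grinder']
```

Note how carla is recommended a coffee grinder purely because another Gibson reader bought one: exactly the kind of pattern that is invisible in her own purchase history alone, and exactly what can make a recommendation feel either uncanny or ridiculous.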
One recent story about Target’s use of customer data has gone viral and is now quite well-known. The gist of the story is that Target began sending coupons to a teenage girl who, according to her shopping habits, appeared to be pregnant. The girl’s father was outraged at the store, only to learn later that his daughter was, in fact, pregnant (Forbes version of this story). Subsequent stories explored the extent of the data retailers now collect about every aspect of their customers’ lives. Much of this data is freely given through loyalty card programs and credit card privacy agreements, but it is then cross-referenced through buying and selling by data brokers (see FTC takes aim at data brokers for a recent story). Most of us are likely unaware of the power of data brokers and how information we assume is in a silo is actually being resold, cross-referenced, compiled, and bundled for marketers. Target, like many other retailers, takes advantage of these services to learn more about customers and provide specialized offers. This is not necessarily a bad thing for consumers, but it might make us uncomfortable when we realize it is happening without our knowledge (ever wonder why you get a discount on gas with your grocery loyalty card?).
The mustard problem
Despite the amount of data Target has about me, my family, and my shopping habits, they (and other retailers) still fail in a very important way. At the moment of highest need, they cannot help me make a decision. When I am standing in front of 40 different kinds of mustard, there is no accurate guide to help me figure out what I really want. Of course, in today’s mega stores, it is not just mustard; every product seems to come in a dizzying array of choices. While I might get a coupon to nudge me toward a choice, that is about the retailer and the product manufacturer trying to influence me to maximize profit, not about helping me match my taste buds’ desire at that moment.
In the 1997 journal article “Artificial Tutoring Systems: What Computers Can and Can’t Know,” Frick outlines this exact problem. Computers are good at knowing how to do something (processing information in various ways) but not good at establishing meaning. While computers might house enough data about me to be “creepy,” they would need an incredible amount of contextual and social data at the moment of need to provide accurate guidance about which mustard I want to purchase. A lot of progress has been made since 1997, but the same basic problems remain.
Now think of the almost endless amount of digital education content (and analog instructional options), and 40 types of mustard seems like a trivial problem. We are not even doing a good job helping students personalize their learning with static web content. As the wealth of new instructional opportunities increases, so will the need to help people discover what works best for them.
While Target might know a lot about us, our schools and universities have equally vast repositories of data. Demographics, income, login patterns, healthcare issues (in the case of on-campus clinics), email volume, contact lists, web surfing and internet use (when on campus), and countless other data points are available. Note that I did not even touch on what is available through the LMS, which is interesting but only a partial snapshot of your data profile. Universities have many rules in place to isolate and segregate this data. FERPA and other laws require educational institutions to develop and follow strict data privacy standards. Our scruples and legal requirements make it hard for us to fully exploit the vast troves of data that retailers are mining, but there have to be steps in between where we are today and the “shady” back alleys of consumer data brokers.
We might not yet be at the point technologically to make perfect recommendations but we are getting closer. Taking the next steps, however, might mean using data in unusual and even “creepy” ways to determine what really works.
Frick, T. W. (1997). Artificial tutoring systems: What computers can and can’t know. Journal of Educational Computing Research, 16(2), 107–124.
Hill, K. (2012). How Target figured out a teen girl was pregnant before her father did. Forbes. Retrieved from http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
I just returned from a meeting in Seattle with distance education colleagues from several universities. Six of us traveled from the University of Wisconsin-Extension in Madison to the institution of our outstanding hosts at the “other” UW (University of Washington). We were joined by instructional designers, media developers, and distance education leaders from Georgia Tech, Northwestern, and Boston University. The meeting was a great example of how to gather a group of experts together to create a meaningful learning experience. This post is part reflection and part micro-case study.
The dean at UW-Extension had spoken with his peers about the distance learning groups in their institutions. He sent an email to see if there was interest in getting instructional designers and others in elearning departments together for a professional development opportunity. Representatives from five other institutions responded to the email. I agreed to start the ball rolling by setting up the first phone conference. We began phone conversations in September 2011 about whether the expense and effort of a face-to-face meeting would benefit our teams. The first point of discussion was about the format. I asked whether we needed yet another conference for our teams to attend. The steering group agreed that there was little value in that format because there are many conferences we can attend if we want a traditional event. I suggested that we try a loosely structured series of conversations. There would be no PowerPoint or even show-and-tell, just semi-structured conversations and networking time.
These types of events can seem a bit risky if you are used to having lots of structure and predetermined content. Will we have enough to talk about? What if our work is too different to relate? Will everyone be able to participate? This group, however, immediately thought the format would work and we brainstormed about 20 topics.
Tools for the job
I started a Google doc and we began to use it as a central repository for topic ideas, the attendee list, and logistics (hotels, meeting info, etc.). We then narrowed our 20 topics down to six, each with a 90-minute block over a day-and-a-half event. We added 45 minutes for morning coffee and bagels, an extended networking lunch, and 15-minute breaks after each session so that we had plenty of mental “white space” around the topics.
In the Google doc, we created a table with the schedule and included an extra column for volunteer moderators. At first, no one ventured to put their name in the boxes. I sent an email explaining the moderator role as a facilitator, not an expert on a topic. Moderators needed to simply come prepared to throw a few questions out as conversation starters. The column filled with moderators as the event approached. Moderators began listing topic questions in the Google doc and created spaces for notes. As the event unfolded, the doc became a crowdsourced repository of links, notes, and ideas.
The event was held in a unique setting: the Center for Urban Horticulture just off the University of Washington campus. The reasons for holding the meeting there were largely pragmatic, but there was some intent to get us out of a traditional conference space. The unique space made a big difference. People felt at ease as soon as they walked through a small garden to reach the classroom.
We started with introductions and then launched into the first conversation about LMS usage. Even though the moderator was prepared to keep the conversation going, there was no lull that required intervention. The first 90 minutes were over almost too soon, and people continued talking into the break. After the break, people were seated and ready to go, no prompting or gathering required. The next topic, mobile learning, quickly picked up steam and carried us through to lunch. An extended lunch gave us time to get to know each other and take strolls along the nature paths down to the lake.
After lunch, we discussed how our institutions were approaching the use of media in elearning courses. This was not a deep technical discussion but showed remarkably different approaches to using video, interactive media, and dealing with faculty perceptions of media use. We ended the first day by describing how we work with faculty and subject matter experts as we build online courses. In all of the first day’s topics, the differences were more interesting than the similarities but we sensed a growing shared understanding about our work.
We made informal small group plans for dinner and left for the night. Unfortunately, a few of us never made it to dinner due to a bit of a debacle with a cab company, but that is a different story. When we returned the next morning, it was almost as if we had all been working together for years.
We discussed how we cope with the “firehose of new technology” in our organizations. Working with online learning technology keeps us all on the bleeding edge at our institutions, but maintaining our innovative spirit and continuing to invest is a significant challenge with tight budgets. In the final session, we had our only breakout. People could choose to talk in depth about online learning development tools or discuss managing the distance education enterprise. We could see the progress across groups because people were taking notes for both sessions in the same Google doc.
We came back together for a few minutes before we had to go our separate ways. Numerous ideas came out about how we could all work together to solve some of our shared challenges. Some of these ideas became next steps in the Google doc. The strongest consensus was that this event was one of the more valuable professional development opportunities people had attended and that it should be held at least annually. Boston, anyone?
By the end of the meeting, our Google doc had expanded to over 15 pages of shared notes and we all gained a lot from sharing across institutions. More importantly, the format showed us that we had colleagues facing similar challenges and we could reach out when needed. I look forward to learning more with this group as a part of my extended network!
Rube Goldberg created incredibly silly contraptions to accomplish simple tasks. His work is now recognized through an annual contest in which engineering students compete to create the most complicated and creative solutions to simple problems. The official 2012 contest is approaching and has me thinking about how instructional designs can be over-engineered.
In March 2011, the University of Wisconsin-Stout team won the national competition (for the second year in a row) with a machine that completed 135 steps to water a plant. This year’s challenge is to over-complicate inflating a balloon.
What is the value of creating a complicated, albeit creative, solution to a simple problem? For the students involved, they have fun and learn about engineering, physics, and the properties of various substances. The events promote science, engineering, and the programs at the schools involved. These activities are outstanding in every way except one: the outcome. The products these machines create are irrelevant.
As we create instructional technologies and designs, how many of them are Rube Goldberg machines? Designers create complex technical schemes or media-rich products when much simpler and more elegant solutions would work. Everyone involved in such design processes learns a lot, and the contraption might be fun to watch, but is the end product worth it?
The answer is, of course, it depends. If the intended outcome is knowledge for the technologist, then such a design is a success. Similarly, if the users of the product participate in its creation, then it is also worth it (but this is a rare instance in most instructional product designs). If we want the output of the design to be the most important part of the process, then simple and elegant solutions yield better results.
Perhaps a Rube Goldberg instructional design competition would be a lot of fun: create the most complex instructional solution to teach someone a simple task…