It was time to do some significant blog updating and housecleaning and, as a part of that process, I have decided to move to RovyBranon.com. While I will keep the Situativity.org domain indefinitely, I always had to repeat the name at least 3 times before telling people, “just Google ‘Rovy’ and you will find it.”
Here is a post about the move.
Please update your feed readers and bookmarks.
Note that while the few posts in WordPress here will remain for some time, the old posts that were in Movable Type will go away soon (they have all been moved to the new site) because of the growing security risk of running an 8-year-old, unmaintained platform.
The recent announcement about Google killing its Reader product had me depressed for a few moments. RSS still forms the core of my daily intake of journals, blogs, and news sources. Twitter and G+ have their places but there are certain sources I want to make sure I see each day. Luckily, my PLN was also abuzz with the Google announcement and options began surfacing almost immediately.
The first recommendation came from a G+ comment: Feedly. I grabbed the Chrome version right away and then downloaded the iOS version. Slick interface, clean, and the makers claim they will seamlessly transition my Google Reader feeds to their platform when the execution occurs July 1. Fantastic! But then the first little issues began to appear: 1) Feedly’s Chrome app took over my browser when I would click on a Feedburner feed (attention web app makers: do not take over my browser without asking). 2) I then began to look at whether I could see the RSS source links so I could copy them into other Reader replacement options. I could not find a way to expose the original links. Perhaps you can, but it is not obvious. 3) This led me to look for export options. Even though they are killing Reader, at least Google allows you to export your data. This post indicates that export is not even on Feedly’s radar. Locking my feeds into a platform when a simple export function would be so easy to provide is not acceptable. So, scratch Feedly.
The Old Reader also looks like a viable product, and it seems to be supported by people dedicated to keeping your data accessible, but in the rush of users leaving Google Reader it was having scalability issues (since resolved). So I noted this option to try again later and kept looking.
Then I read the post by D’arcy Norman about the feed reader called Fever (D’arcy gives a great description – read it). Fever is a paid product and you have to install it on a web server. I have hosted server space for this blog, so I thought I would see if it would support the install. After running the install package, it worked like a charm. I paid my 30 bucks and began to get it configured. Wow, what a difference. In many ways it is superior to Reader.
Fever has a typical folder structure, but it also shows what’s hot based on inbound linking to an article or topic. I have a love-hate relationship with aggregator sites because they often surface interesting articles but they also inflate my unread count. In Fever, feeds like these can be labeled “sparks.” Labeling a feed as a spark means that it does not show up as unread, but it still contributes to the “heat” that topics have.
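Fever’s internals are not public, so purely as an illustration of the spark/heat idea (the function and field names here are my own invention, not Fever’s), the distinction might look something like this:

```python
from collections import Counter

def summarize(items):
    """Hypothetical sketch: spark feeds never add to the unread count,
    but the links they mention still warm up a topic's 'heat'."""
    unread = 0
    heat = Counter()  # link URL -> number of items pointing at it
    for item in items:
        if not item["is_spark"]:
            unread += 1          # only normal feeds count as unread
        for link in item["links"]:
            heat[link] += 1      # every mention adds heat, sparks included
    return unread, heat

items = [
    {"is_spark": False, "links": ["http://example.com/a"]},
    {"is_spark": True,  "links": ["http://example.com/a", "http://example.com/b"]},
]
unread, heat = summarize(items)
# unread counts only the non-spark item; link "a" is hottest with two mentions
```

The appeal of this design is that noisy aggregator feeds still inform what is “hot” without nagging you to read every item.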
The way I read and post articles is actually now more efficient than it was when I was in my Google Reader rut. So, I want to personally thank Google for getting me out of my comfort zone to look at new options. This change has me thinking again about all of my posts across various sources and whether I should be pulling it all into one place that I “own.” That’s a project for another time…
This post is a follow-up to one of my talking points in this week’s keynote at the 2013 Adult Student Recruiting and Retention Conference.
We are fast approaching the top of the hype cycle when discussing learning analytics and big data. The use of data to improve student outcomes is not a new phenomenon but, like in other areas, the promise of what is now possible is staggering. The operative word in that last sentence is “promise.” While analytics are rapidly changing the business world, systematic use of student data in education is still in its infancy. (Note: I recommend reading Michael Feldstein’s recent post A Taxonomy of Adaptive Analytics Strategies for more.)
If you use Amazon to shop, you have likely seen the recommendations made just for you. Many times, these make complete sense, like when I am told that author William Gibson has a new book out that I might want to try. But occasionally I am shown a very odd item. Amazon is, as most of us know by now, mining data across millions of users and billions of transactions to look for patterns. Sometimes these patterns are not evident from my purchasing behavior alone. Occasionally the recommendations are laughable and we wonder how Amazon could make such a ridiculous suggestion. Other times, however, it is eerie when I am shown something relevant to me that does not connect directly to any of my past shopping history. While sometimes helpful, it is also just a little creepy.
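Amazon’s actual system is proprietary and far more sophisticated, but the basic pattern-mining idea can be sketched with a toy co-occurrence model (the function names and data below are invented for illustration): items that frequently appear together in other people’s purchase histories get recommended to you.

```python
from collections import Counter
from itertools import combinations

def co_occurrence(baskets):
    """Count how often each pair of items appears in the same purchase history."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def recommend(pairs, owned, k=1):
    """Score unowned items by how often they co-occur with items you own."""
    scores = Counter()
    for item in owned:
        for (a, b), n in pairs.items():
            if a == item and b not in owned:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

baskets = [
    ["neuromancer", "count zero"],
    ["neuromancer", "count zero"],
    ["neuromancer", "snow crash"],
]
print(recommend(co_occurrence(baskets), {"neuromancer"}))  # ['count zero']
```

Even this toy version shows why odd recommendations happen: the model only knows what co-occurred, not why, so a coincidental pairing in other shoppers’ baskets can surface as a “personalized” suggestion.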
One recent story about Target’s use of customer data has gone viral and is now quite well-known. The gist of the story is that Target began sending coupons to a teenage girl who, according to her shopping habits, appeared to be pregnant. The girl’s father was outraged at the store, only to learn later that his daughter was, in fact, pregnant (Forbes version of this story). Subsequent stories explored the extent of data retailers now collect about every aspect of their customers’ lives. Much of this data is freely given through loyalty card programs and credit card privacy agreements, but it is then cross-referenced by buying and selling through data brokers (see FTC takes aim at data brokers for a recent story). Most of us are likely unaware of the power of data brokers and how information we assume is in a silo is actually being resold and then cross-referenced, compiled, and bundled for marketers. Target, like many other retailers, takes advantage of these types of services to learn more about customers and provide specialized offers. This is not necessarily a bad thing for consumers but might make us uncomfortable when we realize it is happening without our knowledge (ever wonder why you might get a discount on gas with your grocery loyalty card?).
The mustard problem
Despite the amount of data Target has about me, my family, and my shopping habits, they (and other retailers) still fail in a very important way. At the moment of highest need, they cannot help me make a decision. When I am standing in front of 40 different kinds of mustard, there is no accurate guide to help me figure out what I really want. Of course, in today’s mega stores it is not just mustard; every product seems to have a dizzying array of choices. While I might get a coupon to persuade me to make a choice, that is about the retailer and the product manufacturer trying to influence me to maximize profit, not about helping me match my taste buds’ desire at that moment.
In the 1997 journal article “Artificial Tutoring Systems: What Computers Can and Can’t Know,” Frick outlines this exact problem. Computers are good at knowing how to do something (process information in various ways) but not good at establishing meaning. While computers might house a lot of data about me, to the point of being “creepy,” they need an incredible amount of contextual and social data at the moment of need to provide accurate guidance about which mustard I want to purchase. A lot of progress has been made since 1997, but the same basic problems remain.
Now think of the almost endless amount of digital education content (and analog instructional options) and 40 types of mustard seems like a trivial problem. We are not even doing a good job helping students personalize their learning needs with static web content. As the wealth of new instructional opportunities increases, the need to help people discover what works best for them will also increase.
While Target might know a lot about us, our schools and universities have equally vast repositories of data. Demographics, income, login patterns, healthcare issues (in the case of on-campus clinics), email volume, contact lists, web surfing and internet use (when on campus), and countless other data points are available. Note that I did not even touch on what is available through the LMS, which is interesting but only a partial snapshot of your data profile. Universities have many rules in place to isolate and segregate this data. FERPA and other laws require educational institutions to develop and follow strict data privacy standards. Our scruples and legal requirements make it hard for us to fully take advantage of the vast troves of data retailers are mining but there have to be steps in between where we are today and the “shady” back alleys of consumer data brokers.
We might not yet be at the point technologically to make perfect recommendations but we are getting closer. Taking the next steps, however, might mean using data in unusual and even “creepy” ways to determine what really works.
Frick, T. W. (1997). Artificial tutoring systems: What computers can and can’t know. Journal of Educational Computing Research, 16(2), 107–124.
Hill, K. (2012). How Target figured out a teen girl was pregnant before her father did. Forbes. Retrieved from http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
I will give the closing keynote at the Adult Student Recruiting and Retention Conference next week in Madison. While the talk this year is new, those who were there last year might remember this video.
I like videos that show early-stage technological innovation. This 1981 KRON news clip is one of my favorites because in 2 minutes and 17 seconds it captures the essence of disruptive change. The visionaries were right about the big picture (that people could one day receive all their news over the computer) but they underestimated how disruptive the technology would be (“we will not make much money but we will not lose much either”). You can almost hear a slight tone of sarcasm in the newscaster’s voice as she wraps the story by describing the cost of the internet.
“Of the 2000-3000 computer owners in the Bay area…”
Many technology enthusiasts were likely predicting the end of the papers as we knew them even in 1981. They were right, of course, but not for almost 30 years. Many elements had to be in place before the technology had a systemic impact. Beyond just access to low cost internet and the need to have computers in millions of homes, new modes of publishing, blogging, citizen reporting, and an information sharing ecosystem had to have time to develop.
When disruptive change happens, it takes a relatively long time. When that change results in a pink slip for your position, however, it can seem like it happened overnight. Evidence of the disruption that has hit the news industry is now in the (electronic) news almost every week. New Orleans recently became the largest U.S. city to lose daily newspaper service. It took many years for disruptive change to play out and, while this video might seem quaint in 2012, it makes me wonder what changes are happening in our own higher education backyards that will seem obvious in 2042.
I just returned from a meeting in Seattle with distance education colleagues from several universities. Six of us traveled from the University of Wisconsin-Extension in Madison to the institution of our outstanding hosts at the “other” UW (University of Washington). We were joined by instructional designers, media developers, and distance education leaders from Georgia Tech, Northwestern, and Boston University. The meeting was a great example of how to gather a group of experts together to create a meaningful learning experience. This post is part reflection and part micro-case study.
The dean at UW-Extension had spoken with his peers about the distance learning groups in their institutions. He sent an email to see if there was interest in getting instructional designers and others in elearning departments together for a professional development opportunity. Representatives from five other institutions responded to the email. I agreed to start the ball rolling by setting up the first phone conference. We began phone conversations in September 2011 about whether the expense and effort of a face-to-face meeting would benefit our teams. The first point of discussion was about the format. I asked whether we needed yet another conference for our teams to attend. The steering group agreed that there was little value in that format because there are many conferences we can attend if we want a traditional event. I suggested that we try a loosely structured series of conversations. There would be no PowerPoint or even show-and-tell, just semi-structured conversations and networking time.
These types of events can seem a bit risky if you are used to having lots of structure and predetermined content. Will we have enough to talk about? What if our work is too different to relate? Will everyone be able to participate? This group, however, immediately thought the format would work and we brainstormed about 20 topics.
Tools for the job
I started a Google doc and we began to use it as a central repository for topic ideas, the attendee list, and logistics (hotels, meeting info, etc.). We then narrowed our 20 topics down to six, each with a 90-minute block, over a 1 1/2-day event. We added 45 minutes for morning coffee and bagels, an extended networking lunch, and 15-minute breaks after each session so that we had plenty of mental “white space” around the topics.
In the Google doc, we created a table with the schedule and included an extra column for volunteer moderators. At first, no one ventured to put their name in the boxes. I sent an email explaining the moderator role as a facilitator, not an expert on a topic. Moderators needed to simply come prepared to throw a few questions out as conversation starters. The column filled with moderators as the event approached. Moderators began listing topic questions in the Google doc and created spaces for notes. As the event unfolded, the doc became a crowdsourced repository of links, notes, and ideas.
The event was held in a unique setting: the Center for Urban Horticulture just off the University of Washington campus. The reasons for holding the meeting there were largely pragmatic, but there was some intent to get us out of a traditional conference space. The unique space made a big difference. People felt at ease as soon as they walked through a small garden to reach the classroom.
We started with introductions and then launched into the first conversation about LMS usage. Even though the moderator was prepared to keep the conversation going, there was no lull that required intervention. The first 90 minutes was almost over too soon and people continued talking into the break. After the break, people were seated and ready to go, no prompting or gathering required. The next topic, mobile learning, quickly picked up steam and carried us through to lunch. An extended lunch gave us time to get to know each other and take strolls along the nature paths down to the lake.
After lunch, we discussed how our institutions were approaching the use of media in elearning courses. This was not a deep technical discussion but showed remarkably different approaches to using video, interactive media, and dealing with faculty perceptions of media use. We ended the first day by describing how we work with faculty and subject matter experts as we build online courses. In all of the first day’s topics, the differences were more interesting than the similarities but we sensed a growing shared understanding about our work.
We made informal small group plans for dinner and left for the night. Unfortunately, a few of us never made it to dinner due to a bit of a debacle with a cab company, but that is a different story. When we returned the next morning, it was almost as if we had all been working together for years.
We discussed how we cope with the “firehose of new technology” in our organizations. Working with online learning technology keeps us all on the bleeding edge at our institutions, but maintaining our innovative spirit and continuing to invest is a significant challenge with tight budgets. In the final session, we had our only breakout. People could choose to talk in-depth about online learning development tools or discuss managing the distance education enterprise. We could see the progress across groups because people were taking notes for both sessions in the same Google doc.
We came back together for a few minutes before we had to go our separate ways. Numerous ideas came out about how we could all work together to solve some of our shared challenges. Some of these ideas became next steps in the Google doc. The most popular sentiment was that this event was one of the more valuable professional development opportunities people had attended and that it should be held at least annually. Boston anyone?
By the end of the meeting, our Google doc had expanded to over 15 pages of shared notes and we all gained a lot from sharing across institutions. More importantly, the format showed us that we had colleagues facing similar challenges and we could reach out when needed. I look forward to learning more with this group as a part of my extended network!
Rube Goldberg created incredibly silly contraptions to accomplish simple tasks. His work is now recognized through an annual contest in which engineering students compete to create the most complicated and creative solutions to simple problems. The official 2012 contest is approaching and has me thinking about how instructional designs can be over-engineered.
In March 2011 the University of Wisconsin-Stout team won the national competition (for the second year in a row) with a machine that completed 135 steps to water a plant. This year’s competition is to over-complicate inflating a balloon.
What is the value of creating a complicated, albeit creative, solution to a simple problem? For the students involved, they have fun and learn about engineering, physics, and the properties of various substances. The events promote science, engineering, and the programs at the schools involved. These activities are outstanding in every way except one: the outcome. The products these machines create are irrelevant.
As we create instructional technologies and designs, how many of them are Rube Goldberg machines? Designers create complex new technical schemes or media-rich products when much simpler and more elegant solutions will work. Everyone involved in such design processes learns a lot and the contraption might be fun to watch but is the end product worth it?
The answer is, of course, it depends. If the intended outcome is knowledge for the technologist, then such a design is a success. Similarly, if the users of the product participate in its creation, then it is also worth it (but this is a rare instance in most instructional product designs). If we want the output of the design to be the most important part of the process, then simple and elegant solutions yield better results.
Perhaps a Rube Goldberg instructional design competition would be a lot of fun: create the most complex instructional solution to teach someone a simple task…
the platform. The simple ability to “follow” fellow students and faculty even after a class semester ends opens many new learning opportunities. Pearson gave a few new details about the social elements during the design partner meeting, including taking steps to integrate with Google+.
Examples of how to build on social
Pearson developers showed off some nifty innovations created during a previously held two-day internal hackathon. While these innovations may or may not make it into the formal OpenClass product roadmap, a few were notable and showed the potential of the platform: 1) an integration of Google+ Hangouts to create video office hours from within OpenClass (very slick); 2) a collaborative multiple-choice test-taking tool that requires students to work in teams to answer questions; 3) a badge system that gives rewards for a variety of activities (e.g. % of people you have interacted with in the class).
The other major feature is still in the works and is currently called the Exchange. An over-simplified view is to think of the Exchange as an app store for learning content. As design partners, we got to see some Exchange wireframe mockups. Creating a one-stop shop for learning content is far more complex than creating an app store for a monolithic software platform. The vision is that it will provide a simple interface across open source and paid repositories of content. As faculty build their courses using content from the Exchange, they will see a running tally of what students would have to pay (if they select any paid resources). Students will have options on whether to pay for content when they log in (e.g. I own the paper text and do not want the digital one in the course).
Pearson knows that gaining and maintaining trust in the Exchange means that they must be egalitarian and transparent in how content is listed. This is a tricky line to walk because, unlike Apple and iTunes, they also own a content business. Faculty ratings of content and open discussion forums will help build trust but attempts to highlight content (like most other digital stores do) will be difficult to navigate while avoiding the appearance of bias. Other design-related challenges include how much information to include for each offering (peer-review status, device compatibility info, evaluation data, etc.), what types of media to include (e.g. video file formats, proprietary players/readers, Flash), and rights management.
Beyond the technical and design questions, there are also potential institutional challenges. The long-term vision of the Exchange includes the ability to accept content submissions from any individual. Similar to the Apple App Store, each person can determine whether he or she will charge for the materials on a per student basis. I suspect this will bring many long-simmering questions about digital course ownership to the fore – especially the first time a faculty member creates a 99¢ math video that a giant community college system decides is a part of their core curriculum.
I think OpenClass is a bold product. In an area where innovation has been very slow and incremental, it offers a chance to rethink the LMS. The challenges are immense and there are no guarantees they can all be overcome. In these two brief posts, I did not touch on the possibilities of global scale and analytics, the host of open APIs, and the deep integration with Google (and soon other providers) but the social features and Exchange lead me to believe that investing time in a robust pilot is worth it to see where this goes.
Note: I do not promote or endorse any product on behalf of my employer. These are my own opinions.
Last week I was fortunate to participate in the design partner meeting for Pearson’s OpenClass LMS. At the University of Wisconsin-Extension, we are early in our OpenClass pilot but I was interested to hear what others had experienced.
It was a remarkable couple of days. Putting more than 20 people from different types of institutions in the same room to have in-depth conversations about what next generation learning technology infrastructure should look like was worth the trip to Denver. Conversations ranged from high-level technical (what should the SIS API enable?) to more broad philosophical issues (should the system encourage a move away from rigid course structures?).
I was impressed with the candor from everyone about the potential and the challenges of building a globally-scalable learning technology platform. OpenClass is available but still in beta and has some kinks to be worked out. Everyone agreed that the basics needed to “just work” before a more substantial rollout could happen at their institutions. What constitutes the basics, however, was different depending on partner needs.
I believe Pearson is facing a classic “Innovator’s Dilemma” (Christensen) as they launch OpenClass into a mature market. Mature markets have developed expectations about how products should function. The power users of mature technologies expect products that offer rich features that have grown over time. As these products add more features, they also become more expensive and harder for novices to use. This is generally the case in the LMS/CMS market. A disruptive innovation often offers less or different functionality but can be significantly cheaper than the more mature products. In the case of OpenClass, the cost drops to zero (at least in terms of licensing and hosting).
OpenClass does not have the feature depth of more mature learning management systems (at least the features we have come to expect). For example, there are fewer quiz options, the forums are not as robust, and the ability to customize roles is limited. This is not to say that the OpenClass features are not capable but, if you are used to being able to configure each role in 100 different ways, OpenClass does not have that kind of flexibility out of the box. The key is to look at how OpenClass is different and where it is better than current LMSs. What does it offer that other LMSs do not? It uses a much more social approach to learning, has a fantastic interface, and the potential for content sharing on a global scale (I will talk a bit about the Exchange in Part 2).
The big question is whether Pearson can capture enough of the mature market while pushing into new directions. Given that 3000 institutions/organizations are at least kicking the tires, it seems like they have a good shot at getting a solid user base.
We are continuing to move forward with our pilot at UWEX and are challenging ourselves to think differently about course design and structure to take advantage of the platform’s strengths.
I split this reflection into two parts and I will talk more specifically about the current and future capabilities that are reasons we are moving forward in Part 2.
NOTE: This post was originally shared to my Google+ stream as a part of an ongoing conversation. Head over there to comment.
I have been thinking about the ongoing conversation related to the $10K Bachelor’s degree that Myk Garn held at the SREB Technology Cooperative and Barry Dahl has continued on G+ over several posts. The higher ed value/cost proposition conversation is also going on in many other contexts.
One problem we have is the fixation on a bachelor’s degree. Universities are right to protect the meaning of a degree to include all of the associated experiences that people expect from college. I think this justified protectiveness is part of the problem when we start talking about “cheap” degrees. Most everyone would agree that a bachelor’s degree represents more than a simple assessment of skill level.
My question starts here: we have a GED for high school. It does not imply that the person went through the whole high school experience but shows that they have certain, specific knowledge outcomes that might be expected of a high school graduate.
What about a BDE? A BDE is a Bachelor’s Degree Equivalency assessment program that demonstrates an expected college knowledge competency level without implying the rest of the college experience. Such programs might open doors for adults, AND protect the significance of earning a bachelor’s degree.
Perhaps the BDE is combined with other micro-certifications or industry-specific credentials to help people show they have a baseline technical knowledge level and also have the general ability to write, think critically, etc. at a college level.
To be successful I think a BDE would have to be:
Valid and reliable (no small task)
Delivered by reputable organizations/universities
Accepted by employers (this would take time to happen – just like the GED)
Transparent in every way it is designed (both to those taking the assessment and employers)
Are there states/institutions with similar programs today? Would this concept make a difference in the ongoing conversation? Interested in Friday out-of-the-box thoughts…