The Information Age of Things

Background
After the dissolution of the Institute of Learning Innovation, the EMMA project was taken over by the University of Naples (Federico II), and Prof. Gráinne Conole, Dr. Brenda Padilla and I were contracted to create and run two MOOCs on the EMMA platform. I was responsible for “21st Century Learning” – a MOOC about innovation in pedagogy and related technologies since the turn of the century.

The discussions amongst participants generated all manner of questions. One in particular set me thinking. During the week on Virtual Reality, the question arose as to how different terms should be defined. The accepted definitions for these, and various related terms, have varied over the years, but I want to try to pin down some definitions of my own – ones that relate to the experience, independent of any technology:

artificial reality
virtual reality
virtual environment
virtual world
immersive vs not immersive
augmented reality
avatar
avatar vs first-person view
telepresence

A contemplation
Let’s start with the question, “So, what’s wrong with ordinary reality?”

And I suppose the answer is, “Nothing.” Except that it’s a bit restrictive. I can’t fly, or teleport, I don’t have x-ray vision, I can’t swim to the bottom of the sea or visit Mars, I can’t shrink myself to an ant and explore a forest, I can only run so far without having to have a good lie down… you get the idea.

Now it may be that I have an over-active imagination and want to do lots of unreasonable things, but I don’t think so, because the last few hundred years have seen people using technology to extend their reality in all manner of ways. A technique described as early as the 16th Century, and popularised in the Victorian era as the stage illusion “Pepper’s Ghost”, places a sheet of glass at 45 degrees across the stage, so that the audience sees both the main stage through the glass and a reflection of a person or object off-stage, which can appear to float in mid-air.

As soon as film was invented, people started to create images that weren’t real, such as the Cottingley Fairies of 1917–1920 – photographs of paper cut-out fairy shapes that many believed to be real. And that was just the start of our love affair with the unreal. So many images we see nowadays have been adjusted, or “photoshopped”, that people are starting to question whether we may now have strayed too far from reality.

In 1937 the Walt Disney Studios created its first feature-length animated film (with a quarter of a million hand-drawn images…): Snow White and the Seven Dwarfs. The first feature-length animation made entirely from computer generated imagery (CGI) was Toy Story in 1995. Even “ordinary” movies contain so many “special effects” that they are a long way from the reality that existed during their making.

Humans, it seems, are not satisfied with reality – at least, humans in modern Western society. But why? Damien Walter wrote an interesting article, leading to the question: “Do our fantasy worlds help us to escape, not from reality, but from our own limitations?”  Maybe so.

And now the escapists – that is, all of us – have technology so advanced that, not so long ago, it would have been indistinguishable from magic (to misquote Sir Arthur C. Clarke), and so basic that in 100 years we will be called “primitive”.

So, let’s start at the other end and work backwards. What form will our escape from reality take in 100 years’ time? OK, I have no idea. So let’s try a related question. What form would we like this escape from reality to take in 100 years’ time?

I’m sitting in an armchair, in a room at home. It’s a nice home, but really, I’ve sat in it quite a bit since I moved here. Wouldn’t it be nice if I could change the appearance of the walls at the touch of a button? No, at a voice command. “Make the walls light blue”, and now the walls look as though a painter’s been in. Except there’s no painter, and no paint, and if I turn off the piece of technology that’s making the walls look blue they’ll jump back to being cream.

What is this technology, you may ask? Well, we’re still 100 years in the future, and it hasn’t been invented yet, but please bear with me a bit longer…

How about this: I don’t like walls at all. Let’s remove them. “Remove walls, create a country scene”. So, now there’s a coffee table in front of me, and then a sofa, and then a hedge, some trees, and a field of sheep. A robin flies down and perches on the edge of the coffee table. It feels like I’m sitting in the countryside. Of course, I know the hedge is actually a brick wall, so I’m not going to try and jump over it, and the sheep won’t wander into the room because, well, they are virtual sheep, computer generated, and the computer knows that small birds are pretty and sheep are, well, annoying.

I call my friend, and we have a chat. She’s sitting opposite me, on the sofa. Except she’s not of course. She’s sitting on her own sofa miles away, but it feels like she’s here, and – from her perspective – I’m sitting in a chair at her house. Of course, I can’t hand her a coffee, because she’s only virtually here – a telepresence – but that’s ok, because it feels as though she’s here, which is the main thing.

I’ve tried to create definitions that would fit this scenario just as effectively as today’s technology, so that the definitions describe the experience, not the technology.

artificial reality – coined by Myron W. Krueger, this is the broadest definition: anything that appears to be real – completely real and present, genuinely mistakable for a real, ordinary, physical experience – but is actually an illusion. We don’t really have the technology to do this yet, but we will, and sooner than one might think. The experience doesn’t have to be computer-generated, although that is our current best technology. The obvious fictional example is Star Trek’s “holodeck”, but I don’t want to get hung up on technology – we don’t need any specific technology for these definitions; they are about the experience, not the hardware.

virtual reality – artificial reality that is not completely convincing – it is apparent that something is amiss, or the presentation requires some imagination. So, for example, anything you need a screen to see, or some sort of head-mounted display that’s sufficient to remind you that something odd is going on, or where objects don’t behave as real objects – pixellated, juddery, not fully believable. The experience is not “real” in some noticeable way. The big thing is that humans are very adaptive, and can gain a lot from experiencing reality in a non-realistic form – from feature films to Second Life.

virtual environment – enough objects presented in virtual reality to make it feel as though you are in a location, together with the ability to move around that location. So, not just a virtual teapot, but a whole room. Or countryside. Or Mars. A virtual environment can include other people that one may interact with, usually represented by avatars – characters that don’t necessarily look exactly like the actual person (or look completely different, such as a cat). (I’ll talk more about avatars in a while.)

virtual world – a versatile virtual environment that appears to be, or is effectively, infinite. So, not just the one virtual space, but numerous different spaces, and usually the ability to create new spaces with virtual objects at will. Coming back to artificial reality, if technology ever reaches the point where it can create a virtual world indistinguishable from the real one (i.e. you can physically walk around and interact with non-existent things) then I would call that an artificial world.

immersive vs not immersive – this is really subjective, and depends on whether someone using this sort of technology reaches the point where they suspend disbelief and act – within reason – as though the experience is real. Clearly, this is easier with some experiences (e.g. a modern racing car simulator) than others (e.g. the “Pong” video game), but it also depends on the person as to whether they make the necessary imaginative leap from a virtual experience to a real one. (Of course, by my definition, an artificial experience wouldn’t require any imaginative leap, so would, by definition, be immersive – unless, that is, the person knows it’s artificial and deliberately treats it as not real).

augmented reality – this is where the real world and virtual (or artificial) reality are both present at the same time, like the robin on the coffee table. Microsoft’s HoloLens actually comes remarkably close to this (as we saw in the MOOC), despite still needing a lot of development.

avatar – a computer-generated character that represents a person in a virtual environment. As in the book Snow Crash, the representation of people in virtual spaces will move forward significantly when actual facial expressions can be represented accurately in the virtual. Linden Lab’s Project Sansar promises to go some way towards this. In the long run, I see avatars becoming increasingly realistic, to the point where they look like real people – although, like today’s avatars, not necessarily like the actual person.

avatar vs first-person view – in my opinion, following an avatar all the time is a hangover from virtual reality’s gaming past. In some of the interviews for the SWIFT project (where we created and trialled a virtual genetics lab in Second Life) I asked participants if they felt they identified with the avatar (a concept that seemed important at the time). Generally, I got the impression that participants thought this to be an odd question. Participants generally had the feeling that they themselves were doing the experiment – not the avatar, and not something they watched on video. Indeed, two of the interviewees said that the avatar just got in the way (obscured their view of the lab bench). So I would not refer to “first person view”, any more than I think of my reality sitting here typing as “first person view” – it’s just how I see the world, how I’ve always seen the world. As we move forward towards artificial reality, I think it’s time to leave the notion of viewpoint behind and just think about reality – what I see is how it is.

telepresence – the experience of being somewhere in the real world other than where one is. Telepresence often refers to some form of videoconferencing, such as Edward Snowden’s TED talk, but also extends to control of distant robots with various amounts of realism. From the perspective of other people at the remote location, the robot or video screen represents the person engaged in telepresence. The film “Avatar” imagines a highly sophisticated version of telepresence. From the experiencer’s perspective, telepresence is similar to virtual (or even artificial) reality, but the big difference is that the environment in which they find themselves is actually a real environment and, unlike virtual or artificial reality, may contain physical people, animals, etc. The difference is important: in virtual (or artificial) reality other people are not actually present so cannot be physically harmed, whereas the environment the telepresent person is in is real, and real harm can occur to the people in it (but not to the telepresent person, of course).

Conclusion
I think, over time, the need to distinguish between these forms of reality will diminish. We will get used to the idea of virtual objects and experiences being part of our lives, to a greater or lesser extent. A virtual robin on the coffee table will seem ordinary – it will just be a robin and a coffee table. Probably some slang term will appear for virtual objects. Maybe something like “Don’t bother feeding the robin, it’s holo”. There will be shops full of models walking around displaying the clothes, changing instantly to match what a computer decides nearby shoppers might like, and the shoppers will know that there are no models, but it will seem normal. There will be parks, and art galleries, and sports arenas that are just empty concrete spaces, but we will only know that if we ignore the trees and paintings and action in front of us, and stop and think.
It will be The Information Age of Things.

Paul

Dr. Paul Rudman, April 2016, Leicester, UK
paulrudman.net

 

Why I don’t use a smartphone

It’s 2014, I’m a learning technologist, and I don’t have a smartphone. Why? It’s all about real estate.

Randi Boice image on Flickr

You see, when the iPad first came out, I knew it made sense for me. It hit the sweet spot of portability, compact size, battery life, and functionality, so I could do *most* of what I wanted to do on a computer, but anywhere and anytime. I bought (well, my dear husband bought it for me for Christmas) the first-generation 3G iPad. That iPad has now been replaced by an iPad mini (wifi only), but I have a mifi device which gives me connectivity pretty much anywhere I can’t get wifi.

Meanwhile, everyone else was buying smartphones. But why would I need to be poking around on a tiny keyboard and looking at a tiny screen when I had my iPad? What about actual phone calls, you may ask? Well, I do have a little Nokia which cost me 15 quid at a local shop and which I use for a grand total of 5 quid per month. That 5 quid, plus the 10 quid monthly I pay for my mifi, comes to 15 quid a month, and all the devices were purchased outright, so I’m not paying them off monthly.

There are occasions when it seems like I might need a smartphone. For example, Instagram: I have never done Instagram. I do enjoy taking photos with my iPad and my camera, and I have accounts on Flickr and Pinterest for some of these. The same goes for Snapchat — I have not done that one, as it seems to demand a smartphone. Another time I felt out of it without a smartphone was when I used Google Glass. Google Glass wanted to pair with a smartphone, or at least an official 3G-enabled tablet. My iPad with a mifi did not qualify. I had to borrow my boss’s iPhone to test Google Glass (slightly awkward).

One last thing: I’m a woman. I carry a bag with me pretty much all the time. Some of my clothes don’t have good pockets. So carrying an iPad mini is no problem — it’s with me all the time. Even when I go to a fancy party, I can fit my iPad mini and mifi into a small bag. So I guess it’s a bit goofy that I carry 3 devices (iPad, mifi, dumb phone) instead of one… but I think I’m ahead in terms of money, and I have the real estate which I like. What do you think? Will I have to bite the bullet and get a smartphone?

Terese Bird
Learning Technologist, University of Leicester

How are 1st year Medical undergraduates using iPads?

Since autumn 2013, the University of Leicester Medical School has been issuing each first-year undergraduate with an iPad. It is the first UK medical school to implement this kind of initiative.

Why did we do this? One of the main drivers was to solve the problem of having to print paper workbooks, which cost too much and then constantly need to be updated. Our solution was to simply format the workbooks into a nice shape for the full-size iPad, save as pdf, and distribute to the students via Blackboard. We then instructed the students to buy Notability (an app which reads pdfs and allows note-taking) and download the Dropbox cloud storage app, which works together with Notability and many other apps. We also instructed the students to bring their iPads to every class session. And then we watched what happened!

By their own report in several surveys, the students mainly used the iPads to read and annotate their workbooks, and to follow along in lectures, annotating onto pdf lecture slides. But it was beyond that use where things got interesting. Students worked together in small group sessions, discussing and drawing on paper, then at the end of the session photographing their notes and developing them individually later in personal study. Students created study groups on Facebook, where they shared documents and held discussions. Students created flashcards of the names of muscles, for example, and tested their knowledge individually and in groups. Below is a list of apps that students mentioned in a survey as being useful, along with a brief description of what each app does or how students reported using it. After that is a further brief list of reported learning activities with their iPads. I hope to report further developments on this initiative as it matures.

List of apps mentioned by Year 1 Medical students and how they use them

Notability – read, take notes on workbooks and lecture notes

Dropbox – storage space, keep their notes, serves as a go-between between some apps

Brainscape — create your own flashcards

Essential Skeleton – anatomy app

Teach Me Anatomy – anatomy app

YouTube – search for educational video

Essential Anatomy – anatomy app

Calendar – calendar syncs with university calendar

iMessage – direct message each other

Facebook – study groups

Adobe Reader – read workbooks and ebooks offline

Pages – opens Word docs, edit, save back as Word; create Word docs

Numbers – spreadsheets (to graph numbers and share in discussion with others)

MB Anatomy – anatomy app

Visible Bodies – anatomy app

Resuscitation – virtual patient simulator

OSCE skills – exam prep app

Anatomy Quiz – anatomy quiz

Simple Mind – mind map

Penultimate – handwriting app, saves into Evernote

Other interesting ways they are using their iPads:

Use iPad as a second screen – look at lecture slides on iPad, type notes on laptop

Use stylus and Penultimate to draw what they saw in dissection room

Use iMessage to share pictures and diagrams with other group members

Use Numbers to graph questions in group work

Terese Bird, Learning Technologist, University of Leicester

What learning materials are downloaded in China Part 2

The University of Leicester launched its iTunes U channel in April 2013. Since then, we have been interested to see which countries are viewing and downloading our material. Last week, Apple decided to feature our Study Skills collection on the front page of iTunes U. Not only has this attracted many more people to view and download our material (thanks, Apple!), but it has also produced a shift in which countries are downloading it. Usually, either the UK or the USA is the number 1 country viewing and downloading University of Leicester material. But today, 7 March 2014, the picture is the following:

Visitors by Country, to University of Leicester iTunes U Channel, 7 March 2014

So basically, China has taken over as the number 1 country downloading our material. This happened once before, when Apple featured a different collection of ours, Model Organisms in Biomedical Research. It seems that people using iTunes U in China really respond to what is displayed on the front page of iTunes U.

Visitors by Device, 7 March 2014

The above breakdown of which devices people are using to download our iTunes U materials is very interesting. The last time we had a featured collection, the majority of downloads were to Windows computers running iTunes. Today, it’s dominated by iPads and iPhones. Last year, Apple included China as one of the countries receiving early shipments of the iPad Air and both models of the iPhone 5, and has been working hard to prioritise China in its plans for the future. I am idealistic enough to believe that higher education is not quite the same sort of consumer product as iPads. But it is good to be able to see which of our learning materials people in other countries such as China are enjoying, and hopefully this can inform future developments.

Terese Bird, Learning Technologist & SCORE Research Fellow, University of Leicester

Cool webinars for Open Education Week 2014

This year, Open Education Week falls from 10 to 14 March 2014. What is Open Education Week, I hear someone ask? Open Education Week raises awareness of the open education movement and its impact on teaching and learning worldwide. Open education encompasses notions of open educational resources (OER), open courses such as MOOCs, and other open practices.

Because the Institute of Learning Innovation is working on the EU-facilitated eMundus project, we are doing a special themed webinar on Friday, 14 March, from 11am until 12 noon GMT. Our webinar is one of a series showcasing aspects of the eMundus project, which is (among other things) mapping out institutional partnerships in open education, such as universities which accept MOOC credit for transfer, and the OER University. Our Friday webinar will look at the pedagogies of MOOCs. Check out the poster below for more cool webinars you can join during Open Education Week. With special thanks to Athabasca University for facilitating our whole series of webinars!

OER benefits for enrolled students

The open education movement has often focused on explaining the benefits of open educational resources (OER) and other open education initiatives to people beyond the reach of formal education — those who cannot afford it, who live too far away from schools, who cannot access formal education for any number of reasons. But in addition, current students benefit from the use of OER. This article by CK-12 Foundation gives good examples of how American schools are making OER work for students, largely through saving money on textbooks.

The Manufacturing Pasts project (video above) was funded by JISC to digitise artefacts from Leicester’s industrial past and mash them up into learning materials. I had the privilege of working on the project. Now, a year on since the project ended, I can see that the work we did is benefiting current students in ways we did not expect. For example, I was recently helping to teach a digital media session in the University of Leicester Museum Studies department. The students are putting together museum displays with sound and video installations augmenting the photos and physical items. When we directed them to MyLeicestershire.org.uk and the Manufacturing Pasts collection, and told them these were all CC-licensed, there was an audible sigh of relief that they did not have to hunt for copyright permissions as they must for other items.

Another way OER and open practice benefit currently-enrolled students is in the way some universities are launching MOOCs designed to help their own students. The University of Northampton, for example, has launched and is continuing to develop a MOOC teaching academic skills (referencing, handling feedback, writing) — with a version for undergraduates and a version for postgraduates. These MOOCs require only about 2 hours weekly and are offered to students who have been accepted to the university, as well as any student who has already begun to study. Academics who were already teaching these skills to smaller groups of students have put together the online materials. It is a bit early to conclude how well these MOOCs will help students. I will check back with Northampton in a few weeks as I continue to gather stories of how open educational practices can help, and are helping, students currently enrolled at the participating institutions. Please comment if you have such a story.

Terese Bird, Learning Technologist & SCORE Research Fellow, Institute of Learning Innovation, University of Leicester

A Pedagogical Look at MOOCs

As a part of Open Education Week 2014, Professor Gráinne Conole and I plan to hold a webinar (details to be announced shortly; watch this space) on the topic of A Pedagogical Look at MOOCs. This webinar is not simply a University of Leicester production; it will be part of the EU-funded eMundus project, one task of which is to map out patterns of open educational partnerships between institutions around the world. An example of such a partnership would be the OER University, or a university accepting some form of credit for successful completion of a MOOC.

Our webinar will take a pedagogical look at MOOCs in the following way: first we choose 5 MOOCs, each corresponding to a primary learning approach taken in the MOOC. Then we map each MOOC against 12 dimensions identified originally by Gráinne in her blog post “A New Classification for MOOCs” (with thanks to Stephen Downes for identifying the last two dimensions (Downes, 2010)). Below is my initial attempt, having chosen only 2 MOOCs so far: the Open University Learning Design MOOC (OLDS), which I identify as constructivist, and the original George Siemens connectivist MOOC. Many thanks to Paul Rudman for his input on this mapping exercise as well.

One obvious question is: how does one pedagogically categorise a MOOC? Another big question: how are we defining these dimensions, and what would constitute Low, Medium, or High for each one? I am interested in your views on these and other questions — please comment! I include the webinar abstract at the end of this post.

[Chart: initial mapping of the two MOOCs against the 12 classification dimensions]
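To make the structure of the mapping exercise concrete, here is a minimal sketch in Python of how such a classification record might be kept. This is not part of the original webinar materials: the dimension names below are paraphrased from Conole’s (2013) classification and the ratings shown are illustrative placeholders, not our actual results.

# Minimal sketch: one Low/Medium/High rating per classification dimension.
# Dimension names are paraphrased from Conole (2013); ratings are placeholders.

DIMENSIONS = [
    "openness", "massiveness", "use of multimedia", "degree of communication",
    "degree of collaboration", "learning pathway", "quality assurance",
    "amount of reflection", "certification", "formal learning",
    "autonomy", "diversity",
]

VALID_RATINGS = {"Low", "Medium", "High"}


def classify(name, ratings):
    """Check that every dimension has a valid rating and return the record."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    invalid = {d: r for d, r in ratings.items() if r not in VALID_RATINGS}
    if missing or invalid:
        raise ValueError(f"{name}: missing {missing}, invalid {invalid}")
    return {"mooc": name, "ratings": ratings}


# Placeholder ratings only -- the real mapping is shown in the chart above.
olds = classify("OLDS (constructivist)", {d: "Medium" for d in DIMENSIONS})
cck08 = classify("CCK08 (connectivist)",
                 {**{d: "Medium" for d in DIMENSIONS},
                  "autonomy": "High", "diversity": "High"})

for record in (olds, cck08):
    print(record["mooc"], "->", record["ratings"]["autonomy"], "autonomy")

Writing the mapping down this way simply forces the two questions above to be answered explicitly: which dimensions are being used, and what counts as Low, Medium, or High for each.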

Webinar Abstract: As the number and variety of free online courses and MOOCs increases, it becomes more important to be aware of their differing pedagogical approaches. After initial attempts to categorise MOOCs as cMOOCs and xMOOCs (roughly, C for connectivist and X for EdX-style), it became clear that a more nuanced categorisation was needed, especially when considering a course’s primary learning approach. Taking Conole’s 12-dimensional MOOC classification (Conole, 2013) and choosing 5 learning approaches often used in elearning (Mayes & De Freitas, 2004; Bird & Conole, 2013), we categorise 5 MOOCs as an exploratory exercise for this webinar. Does this exercise display clues to the direction of MOOCs and free online courses in general? Are there any warning signals which we as educators should note? In the context of the eMundus project, does this classification help quality officers make decisions in open educational practice, for example about accepting credit for a completed MOOC?

Bird, T., & Conole, G. (2013). From e-learning to m-learning. Singapore. Retrieved from http://www.slideshare.net/tbirdcymru/from-elearning-to-mlearning

Conole, G. (2013). A new classification for MOOCs. e4innovation Blog. Retrieved January 25, 2014, from http://www.e4innovation.com/?p=727

Downes, S. (2010). Fairness and equity in education. Huff Post Education.

Mayes, T., & De Freitas, S. (2004). JISC e-Learning Models Desk Study, Stage 2: Review of e-learning theories, frameworks and models.

 Terese Bird, Learning Technologist and SCORE Research Fellow, Institute of Learning Innovation, University of Leicester

Reflections on Student Views of Lecture Capture

The University of Leicester has been piloting two lecture capture systems (Echo360 and Panopto) since autumn 2013. I have been working on evaluating the systems and the use of lecture capture at our university generally. I thought the wider community might be interested in our preliminary findings on student views particularly, and some reflections.

Running for President on a platform of lecture capture

Here are some questions from the online survey given to students, along with their responses:
1. Did you listen to/view at least one of the recorded lectures?

Yes: 81%        No: 19%

Asked why they didn’t view any, students gave answers such as “Didn’t feel it was required because I made notes during all my lectures.” One student said they make their own audio recording so don’t need a recording made by the university, thereby underlining their perceived need for some kind of recording.

2. How many times did you listen to/view the lecture?

Once: 70%         2-5 times: 23%       More than 5 times: 7%

3. Did you attend the lecture?

Yes: 90%            No: 10%

4. If you did not attend, did the fact that you could watch the recording later influence your decision not to attend?

Three students said that yes, it did influence their decision not to attend. It is still very early days in our university’s foray into lecture capture. Will students stop attending lectures because they know the lectures are being recorded? The 3 students answering yes to this question represent 4% of the students who completed the survey. Another interesting note is that one of the participating lecturers commented in an interview (I paraphrase), “I have so many students in my lecture that I actually don’t mind if they don’t attend and just watch the recording.” I would say this lecturer’s view is far from typical, and yet this is not the only time I have heard this point of view.

5. For what reason(s) did you listen to/view the recorded lecture? More than one answer could be chosen.

Exam revision:  27 

To make sure I understood everything: 40

To go over something I did not understand: 46

I did not attend the lecture and wanted to catch up: 9

To catch up details I missed the first time: 1

6. How important is it for the recording to be made available to you quickly after the lecture?

90% answered somewhat or very important. When asked how long after the lecture should the recording be made available, the vast majority answered: within 24 hours.

What strikes me is that these students really value lectures, and they want to go over the materials covered in them again and again. I saw this in my work with lecture capture at Bangor University as well; students like lecture capture because they like lectures. I close with a couple of student comments about lecture capture, which again reinforce how much value students place on the lecture:

“Listening to the same lecture more than once helps to refresh the memory and aid in my better understanding. I could listen again at my own pace.”

“It’s a great review tool. When in lecture taking notes it’s difficult to take everything in; reading the provided text helps, but being able to go back to a lecture for clarification is priceless.”

Terese Bird, Learning Technologist, Institute of Learning Innovation, University of Leicester

on Twitter: @tbirdcymru

Confessions of a PhD student (15): “I feel a bit empty inside as my PhD is ending”

I have recently submitted my PhD thesis. After almost 4 years, it is ready. I have finished. The literature review, the methodology, the data collection and analysis, the discussion, the conclusions – everything is done. Long hours of hard work have culminated in a 266-page document.

It felt strange handing it in. It is not the final step of this journey, as I still have to wait for my viva voce examination. But it is so close to the end that I cannot help feeling a bit empty inside. An important period of my life is ending. My stay in the United Kingdom is almost over.


This is my most liked picture on Facebook. I was impressed by the amount of support and good wishes I received.

It is time to look back and reflect on what I have learned. Throughout my studies, I have met many interesting people, who have shared with me their experience and knowledge. I have learned about technologies, pedagogical practices, research methodologies and more.

Unquestionably, the person who has contributed the most to my academic development has been my supervisor. We have worked together on a weekly basis. He is one of the most intelligent people I have ever met. I am grateful to have him as my mentor, my academic father. From him I have learned many lessons, including:

  1. Write properly. I knew this one before starting my PhD. But now I am better at it. A great idea/finding is nothing if expressed blandly.
  2. Use diagrams. Figures give readers a break from the text. They help those who just want to skim through your writing learn your main points.
  3. Choose your fights. I hate it when someone wants to use their “authority” to make me do something I do not want to do (e.g., unnecessary changes in my work). When I am in a situation like that, my first impulse is to argue and stand my ground. My supervisor taught me to keep calm and find the easiest way to solve the problem. Is it worthwhile to spend time discussing trifles? Usually, it is not. I have learned that now.

Is this really over? I want to think that this is not the end, but a new beginning. I will continue doing research, writing, learning… I will keep in contact with the people I have met and maybe even collaborate with them. New projects await. A new path lies ahead.  A new journey will start.

MOOCing for learning or MOOCeting for earning?

At the ALT MOOC SIG gathering in Southampton on 6 November, we were assured by Helena Gillespie, of University of East Anglia, that MOOCing is definitely a verb. I’d like to add a new one to the ever-increasing glossary of the MOOCosphere: MOOCeting. It is perhaps best explained by the image below:

Language of the MOOCosphere, G. Witthaus

It is clear that there have been two strands of MOOCs developing for some time now, and this distinction is often couched in the language of xMOOCs vs cMOOCs. Having previously carried out research into the Open Educational Resources university (OERu), which is dedicated to widening participation in higher education, I have become familiar with the language used to describe and bring into being a means of enabling everyone, everywhere, to get a fully accredited degree from a recognised institution by learning from openly licensed content on the Web. The key concepts that are at the root of the discourse here are: enabling massive numbers of learners with limited financial resources to get an accredited higher education qualification; reusing existing course materials; providing a basic level of support for learners to access the resources and navigate their way through them; disaggregating the provision of content, teaching and assessment for the benefit of the learners; providing assessment and accreditation at cost, and ensuring sustainability of the process.

The emerging discourse about the MOOCeting version of MOOCs is, as the name implies, informed and dominated by institutional Marketing Departments. The primary question seems to be, “How will this benefit the institution?” Answers are speculative at this stage, but tend to centre on notions of “expanding our global footprint” and ultimately recruiting fee-paying students by “converting” MOOC students into “real” students. To do this, the strategy is to develop MOOCs with substantial amounts of new, glossy materials, particularly video content. The quality of the content, in terms of both academic quality and high-tech multimedia quality, is seen as critical to the success of the project.

One thing both MOOC strands seem to agree on is that the MOOC explosion is innovative. Ultimately, the two strands may move closer to one another on the other dimensions too: the apparent side-effect of institutional marketing might bring unexpected but valuable benefits to institutions that are not explicitly seeking it, and the apparent side-effect of widening participation might turn out to be an important factor in the ultimate success of the MOOCs that are aimed at recruiting students with deeper pockets.

I have created a slide presentation containing more of my thoughts on MOOCs and some random factoids from these recent conferences:

Blog post by Gabi Witthaus