Free Open Access Medical Education

For some years now, I have noticed that medical educators approach learning innovations in their own unique way. I first became aware of medical education happening in virtual worlds and simulations, such as Coventry's virtual maternity ward and St George's paramedic training, both in Second Life.


Damian Roland argues for the use of social media in a 26 June debate held at University of Leicester

Our own University of Leicester brought medical students into a virtual genetics lab as a way of offering additional training in genetic testing. Dr Rakesh Patel and his team developed a Virtual Ward (still running today), in which students can visit virtual patients and practice making a diagnosis. When I tweeted about these kinds of initiatives, I would receive replies using the hashtag #meded or #virtualpatient.

But last year I began to see a new one on Twitter: #FOAMed — Free Open Access Medical Education — or just #FOAM — Free Open Access Meducation. I began to follow people like Anne Marie Cunningham (@amcunningham), Natalie Lafferty (@nlafferty), and Damian Roland (@Damian_Roland), among others who, as medics and medical educators, see the value in medical education of social media, of blogs, and of crowd-sourced sites of medical questions and answers such as gmep.org. Meanwhile, Rakesh was coming up with ideas thick and fast: why not tweet and record the Nephrology SpR Club conference this past April, and the TASME meeting at De Montfort University this past May? And so I did!

Then Rakesh and Damian got the bright idea to debate the motion: “This house believes that medical educators must use social media to deliver education.” The debate took place on 26 June at the University of Leicester, and I was able to live-stream and record it, as well as join in the Twitter discussion. There were several remote participants, including one from Canada, in addition to the approximately 20 attendees face-to-face at the Medical School. Not only did the debate spark real interest and a sense of challenge among those present (many of whom seemed to be new to the ideas of FOAMed and social media), but the discussion also continued on Twitter for a good couple of days, as the images below show. You can listen to and watch the video of the debate here.

Screenshots of the Twitter discussion following the debate

Now the ASME Annual Scientific Meeting is happening in Edinburgh, and Rakesh, Natalie, and others are presenting a workshop on FOAM. My name is on the presenters' list as well, and although I cannot attend, I shall be eagerly watching for tweets from the conference. I have come to see, especially through the eyes of my medic colleagues, that Free Open Access Meducation is a better education than closed: better because a wider network gives access to more information, better because open platforms reach more learners than closed ones, better because openness encourages interdisciplinary sharing and learning… the list of benefits goes on.

Terese Bird, Learning Technologist and SCORE Research Fellow

Virtual world training in 30 minutes

An interesting question arose from my ALT-C talk last week. It was basically “How can you use Second Life for teaching when it takes two hours to learn how to use it?”
Which isn’t really a question, of course. It’s a statement. Along the lines of “It takes my students two hours to learn to use Second Life”.

So, here's a question in reply: Do you expect your students to be able to use MS Word? Yes? Including MailMerge? Macro programming? I suspect not. They probably just need basic formatting. Maybe headings. An index for the really advanced. And it's the same with learning to use Second Life. Thirty minutes' training is all that's needed for most learners in Higher Education.

The key is to consider training as part of the overall design. Here’s what we did for SWIFT.
1) Define the Learning Objectives. For our second lab, these were to practice evaluating experimental results and to learn the connection between theory and practice.

2) Design activities that will best support those Learning Objectives. In our second lab, the activity was to work through a sequence of experimental steps and results, answering questions about procedure, interpreting results and seeing animations of molecular processes at critical moments.

SWIFT learner's avatar showing virtual lab and HUD and animation

3) Design the environment necessary for those activities. We created individual lab benches with replica equipment, and a Head-Up Display that acted as the automated guide.

4) Define the SL competencies necessary to accomplish those activities. So,

a) Walk – well enough to position the avatar in one place
b) Close the sidebar
c) Touch (click on) objects
d) Chat
e) Zoom the camera in on one spot
f) Put on / remove a lab coat
g) Attach the HUD

Now, most of these only need to be done once, and some will already be understood (like clicking on things), so there's no need for lots of practice. All that learners really need to be good at is zooming the camera. So the 30 minutes is something like 10 minutes for the easy things, 10 minutes for the lab coat and 10 minutes for the camera.

Visitors in the SWIFT training area

5) Create or adapt a training area suitable for learning and practicing those skills (and only those skills, so the training area may need adjusting for different groups). There are many training areas in SL, some better than others. Ours is here. Basically, the avatar needs to be constrained until the learner can walk properly, instructions must be very clear to all, and tasks must be in a logical progression. We have adjusted our training area over the last 12 months using observation and in-world interviews and questionnaires.

And that’s it! We don’t teach them how to run, fly, IM, search, teleport, build, offer friendship, use weapons, drive vehicles … there’s quite a list, and if they choose to continue using SL in their own time and outside of the University island they will probably want to use many of these. And they may need MailMerge in MS Word for running their own business…

So, ask learners new to SL to sign up for an SL account on the web site in advance. Then in the class, when they first use SL, ask them to enter the location of your training area at the SL login screen (so they don’t wander round some public place) and the half-hour training will pretty much run itself. (Yes, really, you just need someone hovering to help the occasional student who uses existing knowledge or expectation in place of the instructions.) We would expect similar success with OpenSim implementations, but can’t speak from experience with these.

How well the actual lesson goes depends on many things, from what’s to be learned and how that’s represented in the virtual world, to how well the environment is built and how motivated the students (and teacher) are. Some things can be learned well in virtual spaces, others not. Some virtual world use is embarked upon with enthusiasm, some not. What we can say with some certainty though, is that SL training need not be a problem.

Paul Rudman,
BDRA

Why using virtual worlds for teaching just got easier

One of the Frequently Asked Questions about using virtual worlds as a teaching and learning environment is: “How much does it cost to prepare a learning environment?”

Last week, Linden Lab (makers of the Second Life (SL) virtual world software) added a new feature: “Mesh”. On the face of it, this could lower the cost of building suitable teaching environments within SL, but like most new features it's hard to predict just how useful it will be. So I decided to try it out...

We are in the process of setting up the third SWIFT experiment in SL, and we need to create a simple building with some visual interest. We settled on an Egyptian-style pyramid. Until now, the standard (and almost only) way to build in SL was using “prims” – simple shapes one materialised (or “rezzed”) and manipulated within SL. Creating our pyramid with prims would look something like this (you add the “texture” – image of stone blocks or whatever – later):

With Mesh, you design objects first, using any of a number of free or commercial programs, and then import them as objects into SL. Apart from now having a choice of tools to use, there is one huge advantage: because the object is built out of the lines you draw rather than out of solid 3D shapes, you only need to think in terms of what you see, not component shapes that you have to imagine.

For example, in the picture above, I’m creating a pyramid out of triangular things. For a Mesh object, I can create it with lines, like this:

I'm using the free Google SketchUp program (that character is not an avatar, it's just a 2D drawing, there to – I assume – give a sense of scale). Other programs are available, such as Blender (better, but not so easy to learn) and Maya (if you have a big budget!). SketchUp took a few hours to learn, but now I could recreate the pyramid in a few minutes – much quicker than using the building tools in SL.

Then it's a simple matter to export the shape as a file and import it into SL... and, voilà! A 3D pyramid in SL.
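To make the idea concrete, here is a minimal sketch (my own illustration, not part of the SWIFT build) of what a mesh pyramid actually contains underneath: five corner points plus the faces that join them. SL itself imports mesh as COLLADA (.dae) files, normally exported from SketchUp or Blender; the simpler Wavefront OBJ format is used below purely to show the vertices-and-faces idea.

def write_pyramid_obj(path="pyramid.obj", base=10.0, height=7.0):
    # Five vertices: the four base corners and one apex ("what you see")
    half = base / 2.0
    vertices = [
        (-half, -half, 0.0), (half, -half, 0.0),
        (half, half, 0.0), (-half, half, 0.0),   # base corners
        (0.0, 0.0, height),                      # apex
    ]
    # Faces list vertex indices (1-based in OBJ): one square base plus four triangular sides
    faces = [(1, 4, 3, 2), (1, 2, 5), (2, 3, 5), (3, 4, 5), (4, 1, 5)]
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write("v {} {} {}\n".format(x, y, z))
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")

write_pyramid_obj()  # writes pyramid.obj, viewable in Blender or similar

The whole shape is described by what you can see, rather than by the solid building blocks you would otherwise have to imagine and assemble.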

For the first SWIFT experiment I created a virtual PCR machine which, as I recall, took a whole afternoon to build out of textured prims. Even though much of the time was taken in preparing the textures (images taken in the real lab), drawing it using lines would definitely have been easier than shaping individual blocks, and the level of detail possible would have been greater too.

Mesh is still new, and there will be drawbacks for a while (there seems to be a bug that doesn’t let me walk to the far corner inside the pyramid, for example). Nonetheless, I’m really very impressed by the possibilities Mesh has to offer. I’m sure it won’t be so long before all the OpenSim grids support Mesh too.

So, if you were thinking of using virtual worlds for teaching and learning, things just got easier!

Paul Rudman,
BDRA

Why Google plus will fail

Before I began my first degree in Psychology, I read a book about how friendships work. It goes like this:

“The development of friendship occurs through the skills of partners in revealing or disclosing their attitudes first and later their personalities, inner characters and true selves. This must be done in a reciprocal manner, turn-by-turn, in a way that keeps pace with revelations and disclosures made by the partner” (Duck, 1983, p. 67)

When the first big social network (Facebook) began, it was based on the idea of a college yearbook (name, photo, personal information). That's fine for a yearbook, because everybody who reads it will likely be part of the same social network. In the real world, even this “basic” information can vary radically according to who we interact with.

For example, I have no single photo suitable for everyone I know. Work colleagues expect a professional photo (at a desk with Second Life running); personal friends want something more about me (Capsule hotel, Tokyo, 2001); Second Life friends expect, well, an avatar…

Then there’s my name. Surely that’s consistent? Well, again, the same three groups probably expect, respectively, Dr Rudman, Paul, PD Alchemi. Think about it. What does your boss call you? Your mother? Your partner at 1am?

Revealing one’s full name and work identity could be way too much of a leap for a new social acquaintance. A photo that somehow reveals religious or political views could be a shock for work colleagues who may have assumed something completely different. It’s not that these things are “secret”, just that they need to be shared appropriately.

We are all at some stage in the friendship forming process with each of our “friends”. For some it will be a temporary stage as we move forward. For others it will be a stage from which we prefer not to move further forward. But whatever the stage, we need to be careful not to jump too far ahead, not to reveal something which that particular friendship is not yet ready for.

Google plus – Google’s foray into the world of social networking – allows people to be allocated to “circles”, i.e. groups for filtering information. It’s a significant step forward, but alas, I suspect it will not be enough. Not all friends are equal. Some can be told about the club last night, some can’t; some can know about holiday exploits, others cannot. There needs to be some form of categorisation system that matches up individuals and information, so people can slowly move from strangers to “inner circle” – or not, as desired.
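As a rough sketch of what such a system might look like (purely my own illustration – the names, stages and posts below are invented, and this is not how Google plus circles actually work), imagine each contact sitting at some stage of the friendship-forming process, and each piece of information carrying the minimum stage needed to see it:

FRIENDSHIP_STAGES = ["stranger", "acquaintance", "colleague", "friend", "inner circle"]

# Each contact's current stage in the friendship-forming process
contacts = {"boss": "colleague", "mum": "inner circle", "sl_friend": "acquaintance"}

# Each item of information carries the minimum stage allowed to see it
posts = [
    {"text": "New project paper published", "min_stage": "acquaintance"},
    {"text": "Photos from the club last night", "min_stage": "friend"},
    {"text": "Thinking of changing jobs", "min_stage": "inner circle"},
]

def visible_to(contact, post):
    # A post is shown only to contacts who have reached its minimum stage
    stage = FRIENDSHIP_STAGES.index(contacts[contact])
    return stage >= FRIENDSHIP_STAGES.index(post["min_stage"])

for name in contacts:
    print(name, "sees:", [p["text"] for p in posts if visible_to(name, p)])

Moving someone “forward” is then just a matter of updating their stage, rather than re-sorting them into a handful of fixed circles.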

Google plus's Twitter-esque facility for one-sided friendships – “following” people, or putting strangers in one's circles – is another good move, but without a new system for controlling who sees what, it's just Twitter@Google.

One complaint about Google plus is that it won’t let people create an account for their avatar. An avatar is a mechanism for social relationships, whether Google like it or not. We recently saw the beginnings of a social network for avatars in Second Life. It’s pretty rudimentary at present, but it will probably survive, maybe even thrive, because it partly fills this gap.

The fundamental problem is one of revealing personal information appropriate to the depth of each social relationship. *Everything* needs to be tailored to the people who will receive it. Everything you post, your name, your photo, the other people in your network – who they are, what they represent and what they post – all say a lot more than most people realise. All can damage the delicate sequence and balance of a social relationship.

Circles were a great idea, but they just don't go far enough. There needs to be a finer-grained definition of who should know what. Like a leaking bucket, it's not the bucket that needs fixing, it's the leak.

And that’s why Google plus fails to improve on Facebook and Twitter, and ultimately will fail to become the new dominant social network.

Comments please…

Paul Rudman, BDRA

Duck, S. (1983) Friends, for life: the psychology of close relationships. Brighton: Harvester Press.

Kindling

Some years ago, I was surprised to discover that anyone could resell their books on Amazon. (Until then, I had assumed that selling on Amazon was only open to businesses.) Two things happened as a result of this revelation:

1) I became much more sceptical about buying books not sold by Amazon itself
2) I started selling my own second-hand books

Then came eBooks, and the Amazon Kindle store. My first assumption went along the lines of “well, you can't sell second-hand eBooks, so everything here must be sold by Amazon”. Right? Of course not. I soon discovered that anyone can sell any text on the Amazon Kindle store.

So again, my perception of the store’s reliability dropped, albeit for a different reason.

It strikes me that virtual worlds suffer from the same kind of problem, only in reverse. When the first contemporary public virtual world (Second Life) was launched, anyone could create their view of a desirable world. And thousands did. Some creations were beautiful, some were downright weird. The press, obviously, couldn’t resist poking fun at some of the public spectacles.

Given time, the virtual world “publishers” came along and created spaces intended to be useful to large numbers of people (rather than one person's idea of a useful world), built using evidence-based design. There are many such places now; in our case, it's a laboratory for teaching and learning laboratory skills in genetics.

Now that stories of weird goings-on in virtual worlds are yesterday's news, virtual worlds appear to have “had their day”, but this is not so; they have merely “had their 15 minutes of fame”. Many virtual worlds are now available, some of them a good alternative to Second Life, and many organisations are developing successful educational and other professional spaces.

If the Kindle was meant as kindling for eBooks, Second Life has done the same for virtual worlds. I’m looking forward to seeing both become the roaring success they deserve.

Paul Rudman, BDRA

Celebrating Machinima

Last weekend, our SWIFT video saw its thousandth view. That's not a lot in YouTube terms, but as a dissemination method for an academic project like SWIFT, it's really quite impressive. Compare this to a conference presentation: a thousand people in the audience is virtually unheard of. A paper in a high-impact journal is good, but, like a presentation, it only reaches an academic audience in that area. Nowadays, funders want a lot more interest generated for their money.

I first noticed the possibilities for videos filmed within virtual worlds like Second Life (known as “machinima”) when I saw the “Falling Woman Story”, an excellent insight into the reality that can inhabit a virtual world:

That was barely 18 months ago, yet things have moved on very quickly. Linden Lab (Second Life's creators) now run a regular “Month of Machinima” showcase. New techniques are being developed, new standards. Machinima is becoming a separate art form. Here's an example from the first “Month of Machinima”:

It may be time to rethink ideas about project dissemination. There are now numerous technological ways to raise awareness of an academic project. They all have their place.

In my opinion, a website is a good “shop window” to which people can be referred, a “clearing house” for project information. A blog is about engaging people in dialogue within the project's area, and needs to be used in conjunction with replying to other people's blogs in the area. Twitter is good for generating interest in current activities – what's going on NOW – or for sharing a kind of “stream of consciousness” about the project and its area. Facebook is good for creating and maintaining working relationships between individuals in an area. And video seems to be good for engaging people in the concept of a project, generating interest and getting people talking.

Oh, and it’s good to write some papers too…

Paul Rudman, BDRA

The icy winds of change

Social networking seems quite good at providing random, but surprisingly serendipitous, information.  Recently, academia.edu told me that someone had searched for a book chapter I co-authored a few years ago (Rudman et al. 2008). It included a paragraph of predictions for the future of elearning, and I began wondering how accurate our predictions were, even after only three years.

At the time, I was thinking some 10 years ahead. In fact, the future has arrived faster than I expected. We predicted two areas of technological development that would impact on learning. The first was the growth of personal, mobile technologies; we described a number of functionalities – communication by audio and text, sharing of media, GPS – and we were on the right path. What we didn't see was how effectively these functions would all be joined together in one device (the iPhone and Android phones) through the “app”. The second prediction was of data storage moving from individual devices to centralised servers (or “clouds”). This is happening too. I have, for example, recently redirected my personal email to a new Gmail account, rather than the previous combination of hired server and Outlook, and now have access to email on my mobile too.

With hindsight, I would say that we were correct in our predictions, albeit a little conservative. My work here at the BDRA in creating and evaluating a learning space in the virtual world of Second Life suggests that the future is much more exciting than we had dared hope! We were, in fact, closer with an earlier paper (Vavoula et al. 2007), where we used part of a science fiction story from the 1960s to illustrate the future possibilities for technology-enhanced learning. The story is by Brian Aldiss and was written for a children's science annual; it describes a world, thirty years in the future, where children learn through guided project work rather than formal education.

The winter of 1963 – a suitable year for an Antarctic experience... (© Copyright Richard Johnson)

“…It was a simple thing to do. Many of the parts of the miniputer were synthetic bio-chemical units, their ‘controls’ built into Jed’s aural cavity; he ‘switched on’ by simple neural impulse. At once the mighty resources of the machine, equal to the libraries of the world, billowed like a curtain on the fringes of his brain…Its ‘voice’ came into his mind, filling it with relevant words, figures, and pictures. … ‘Of all continents, the Antarctic has been hardest hit by ice.’ As it spoke, it flashed one of its staggeringly vivid pictures into Jed’s mind. Howling through great forests, slicing through grasslands, came cold winds. The landscape grew darker, more barren; snow fell.” (Aldiss, B. 1963)

What we are finding with virtual worlds is that the “user's” experience is remarkably real, setting in play relevant emotional responses and remaining in memory in many ways as though the experience had been real. Aldiss's portrait of a virtual trip to Antarctica could be achieved today using virtual world technology.

“Bio-chemical” elements aside, if you were to take today’s virtual worlds back in time to 1963, I venture to suggest that Aldiss would agree we have already achieved his vision.

Paul Rudman, BDRA

Aldiss, B. (1963) The thing under the glacier. In C. Pincher (ed.), Daily Express Science Annual No. 2. Norwich: Beaverbrook Newspapers Ltd.

Rudman, P. D., Sharples, M., Lonsdale, P. and Meek, J. (2008) Cross-context learning. In L. Tallon and K. Walker (eds.), Digital Technologies and the Museum Experience: Handheld Guides and Other Media. Lanham, MD: AltaMira Press.

Vavoula, G. N., Sharples, M., Rudman, P. D., Lonsdale, P. and Meek, J. (2007) Learning Bridges: a role for mobile learning in education. Educational Technology Magazine, XLVII, 33–36. New Jersey: Educational Technology Publications, Inc.

The sun sets on our successful e-learning conference

Our Learning Futures Festival 2011, entitled Follow the Sun, was a 48-hour global e-learning conference, co-hosted by the Australian Digital Futures Institute and presented consecutively from three countries: the UK (Leicester), the USA (Seattle) and Australia (Toowoomba).

Screenshot of Adobe Connect 8
Terry Anderson delivering his keynote during North America Day 1

Watch the recording

The conference had over 280 delegates and speakers from more than 25 countries. Keynotes, papers, workshops and debates were streamed through the online conferencing platform Adobe Connect 8, provided by sponsors CollaborATE UK.

Screenshot of debate about the lecture

With Gabi moderating, Donald Clark, Jim Morrison and Stephen Downes debate the usefulness of the traditional lecture in higher education during North America Day 2

Watch the recording of Donald Clark and Jim Morrison

Watch the recording of Stephen Downes

Events were also held in the 3D environment of Second Life. These sessions were beamed – with full sound capability – into the Adobe Connect platform by sponsors Let’s Talk Online. This allowed delegates who were new to Second Life to see the environment in real time and engage through transferred text chat with the avatar delegates.

Screenshot of Second Life projected into Adobe Connect
Scott Diener looks at virtual worlds during Australia Day 1

Watch the recording

The conference Moodle site for delegates was hosted by USQ Communities (University of Southern Queensland, Australia). It featured a successful gallery of posters submitted by delegates, and gave them the opportunity to engage in asynchronous discussion around the sessions.

More than 25 hours of live events were generated by Follow the Sun. As well as the very significant financial savings and a tiny carbon footprint, a key advantage of this kind of online conferencing is the ability to record all streamed events.

Within two-and-a-half hours of the conference finishing, the recordings of all sessions were made available on the festival’s public site:

http://tinyurl.com/followthesun

We worked very hard to give our delegates an enriching experience, and are very grateful for their appreciative comments, which included the following:

“I found the conference an eye opener and I want to be in the frontline in being a practitioner and champion of the power of e-learning in my career as a teacher and have decided to proceed to study for Masters degree in e-learning. I hope to integrate these trends in e-learning in creating powerful and rich environment to learn almost anything that is simple or complex.”

“I heard about these conferences last year when visiting the BDRA. Despite the obvious excitement people felt I had no idea just how fantastic it could be. This has been extraordinary and probably the best single experience in my 10 years in HE. So much breadth and depth and such a sense of connectedness. Congrat’s to all for conceiving and delivering this event. I’ll spend the next 6 months processing it and pointing our people to the content.”

[On Twitter] “@lff11 The technical aspects were handled expertly and the level of engagement impressive. Well done one and all you rock! Bring on lff12”

“I was particularly impressed by the programme and how you and the moderators succeeded in drawing in farflung contributors. All in all, LFF was a credit to all involved.”

“The conference was fantastic!”

The Media Zoo
PhD students Brenda and Ali (with Natalia in Denmark and Tony in Canada) deliver their paper from the Media Zoo during UK Day 2

Watch the recording

I plan to blog in greater detail in the coming months about how this innovative and exciting conference was planned and executed.

Simon Kear

Keeper of the Media Zoo

A very real experience

I was at a club last weekend (in Second Life, of course…). It’s not that I have a particular liking for dancing puppets, but I do like meeting people from around the world, and a Second Life club is really rather good at facilitating that.

It was an ordinary club night: DJ, friends, friends of friends, strangers. Around me, a conversation began along the lines of:
“How’s your arm?”
“Still sore”

When I enquired what had happened, her reply was well beyond what I expected.

It seems she had just returned from a visit to Japan. To the east coast, north of Tokyo. Her cuts and bruises came from the earthquake. Her broken arm came from holding on to someone to stop them falling. She succeeded. Then they were hit by the water. Eventually, the Japanese army rescued them.

It was far, far worse than that description.

It is my good fortune that I live in the UK and wasn’t involved in the disaster, only hearing second or third-hand stories from commentators and videos. Listening to it first-hand suddenly pulled me into a reality I hadn’t been expecting. I used to imagine that the Japanese people understood earthquakes, that they would be fine. I don’t think they expected this one.

Paul Rudman
BDRA

Avatar or Invisible Man?

When I joined the SWIFT project, I was already an experienced Second Lifer. I had seen numerous people arrive in Second Life for the first time, with something like half of them staying and enjoying the experience, while the others never returned. Over time, I developed a hypothesis that there were two things that “hooked” people into returning:

1) those who stayed formed friendships of some kind during their first visit

2) those who stayed were interested in their avatar as a second identity, spending time and money on creating a “look”

Thus, for the first SWIFT experiment, we incorporated a significant amount of avatar personalisation into the Second Life training part of the experiment.

For the second experiment, we did less of this, mainly because it was just too time-consuming for the students to spend a whole hour on learning to use a piece of software that they may only use once.

And an interesting thing happened. When I interviewed the students afterwards, it seemed that the avatar wasn't particularly relevant to their experience. In fact, one person would have been happy not to see the avatar at all. So why the difference?

I'm thinking that it's because the need for purpose is being satisfied in a different way. For “recreational” use, Second Life is, on the face of it, quite poor; one “arrives” somewhere in-world, and, well, that's about it! It's not a “game” – there's nothing to “do” – so unless you meet someone interesting it seems a very lonely place. For our experiment, though, there is something specific to do. We have a virtual genetics lab, and one can perform “simulated” experiments. That is the purpose – in fact, almost the “game play”.

Which brings us to the question of identification with the avatar. If one is in Second Life with a definite purpose, and it’s neither necessary nor useful to socialise, the avatar doesn’t really have a role. It just, as one participant said, keeps standing in the way of something you’re trying to look at.

If this is the case, then it’s really good news for SWIFT. If the avatar proves to be unimportant for the learning situation we are creating, then we could reduce the training time significantly.

We'll be reporting on this in our next paper...

Paul Rudman, BDRA