Carbon Footprint of Spam

Some of you may be aware that for the last nine months or so we have been doing some initial exploratory research into the environmental sustainability of teaching & learning through the GECKO project. The project report of our findings is in its final draft and will be made available soon, but I was interested to read about a new study recently carried out for the computer security company McAfee by ICF. According to the study, some 62 trillion spam emails are sent each year, wasting 33 billion kilowatt-hours (kWh) of power. Most of that energy is wasted at our computers, as we sift through and delete messages in search of the genuine ones!

These are just some of the findings from the report:

  • Spam filtering can reduce the energy wasted by up to 75 percent
  • Spam filtering is the global equivalent of taking 3.1 million cars off the road
  • The environmental impact of the spam generated in a year is equivalent to driving around the Earth 1.6 million times
  • The annual energy used to transmit, process and filter spam is equivalent to the electricity used in 2.4 million homes

The study looked at the energy expended to create, store, view and filter spam in countries including Australia, Brazil, Canada, China, France, Germany, Japan, India, Mexico, Spain, the United States and the United Kingdom. The study calculated the average greenhouse gas emissions associated with a single spam message at 0.3 grams of CO2.
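
Out of curiosity, the headline equivalences can be sanity-checked with a quick back-of-envelope calculation. The sketch below (in Python, purely illustrative) uses only the figures quoted above; the derived numbers are my own arithmetic, not additional findings from the report.

```python
# Back-of-envelope arithmetic using only the figures quoted from the report.
SPAM_PER_YEAR = 62e12        # 62 trillion spam emails sent each year
ENERGY_KWH = 33e9            # 33 billion kWh of power wasted annually
CO2_PER_SPAM_G = 0.3         # 0.3 grams of CO2 per spam message
FILTER_SAVING = 0.75         # spam filtering saves "up to 75 percent"

total_co2_tonnes = SPAM_PER_YEAR * CO2_PER_SPAM_G / 1e6   # grams -> tonnes
energy_per_spam_wh = ENERGY_KWH * 1e3 / SPAM_PER_YEAR     # kWh -> Wh, per message
filtered_saving_kwh = ENERGY_KWH * FILTER_SAVING

print(f"Total emissions: ~{total_co2_tonnes / 1e6:.1f} million tonnes of CO2 per year")
print(f"Energy per spam message: ~{energy_per_spam_wh:.2f} Wh")
print(f"Potential saving from filtering: ~{filtered_saving_kwh / 1e9:.0f} billion kWh per year")
```

On those numbers, a single spam message costs roughly half a watt-hour; it is the sheer volume – tens of trillions of messages – that turns a trivial per-message figure into millions of tonnes of CO2.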

“We’ve been talking about spam for a long while, and we wanted to bring a quantifiable environmental impact,” said David Marcus, Security Research and Communications Manager at McAfee. He then went on to say, “Spam is bad for the environment as well as for your productivity.”

The report is clearly aimed at providing another reason for adopting McAfee spam filtering products, but it could also provide more ammunition for those of us wanting to take action against spam and improve the environment at the same time. I understand how hard it is to calculate accurately the carbon emissions of the many parameters and variables involved. However, if we put aside any doubts about the accuracy of the study for a moment and focus on the issue here – raising our awareness of the impact of our actions on the environment – then I do not think that is such a bad thing!

Matthew Wheeler

Keeper of the Media Zoo

Brainteaser “Guess the Time”

Hello again!

I hope you had a good time over the holidays and that any Easter egg sugar rush is starting to wear off. As the weather these past few days here in Leicester was as glorious as it always is on Bank Holidays, i.e. overcast and rainy, I don’t seem to have done much apart from reading and eating. So, given the choice of reporting on these two activities, I will be briefing you on the reading, starting with a little brain-teaser. Below I have listed the features of a computer operating environment – can you guess when this environment dates from? Is it a description of a promising start-up to which a bunch of clever IT wizard kids hope to attract venture capital and repeat the success of Facebook? Or is it out there now, competing with Windows, Firefox, Facebook, Elluminate and the rest, advertising how ingeniously and innovatively it integrates features facilitating collaboration, social networking, cloud computing and hypermedia publishing? Or is it a blast from the past? Read and try to guess:

The NLS features:

  • 2-dimensional display editing
  • in-file object addressing, linking
  • hypermedia
  • outline processing
  • flexible view control
  • multiple windows
  • cross-file editing
  • integrated hypermedia email
  • hypermedia publishing
  • shared-screen teleconferencing
  • computer-aided meetings
  • context-sensitive help
  • distributed client-server architecture
  • universal “user interface” front-end module
  • multi-tool integration
  • protocols for virtual terminals
  • remote procedure call protocols
  • compilable “Command Meta Language”

Ready with your estimates? Check your guess here:

http://en.wikipedia.org/wiki/NLS_(computer_system)

What does this example bring to a discussion of the future? The future of technology, the future of technology-enhanced learning and the future of learning? Let’s discuss… Next time I will tell you more about the system described above and a challenging conference that we attended with some BDRA colleagues in Oxford, called The Shock of the Old.

Sandra Romenska, BDRA

15th April 2009

Toujours la change: new champagne in new bottles

While I was still working in the Institute of Educational Technology at the Open University in the 1990s, I led the development of the Institute’s distance-taught MA in Open and Distance Education. It was an exciting time, as the university was trying to go more global: we had students on the MA from many countries. And the programme was going online: the first courses for the MA were not entirely online but did incorporate email, discussion fora, use of web sites and electronic submission and return of assignments. The development created quite a few problems for the university’s systems. I was not very popular as I ‘pushed at the envelope’, as the Americans say. Could my students in Ulaan Bator (Outer Mongolia) and Argentina pay in sterling? With great difficulty. What if the Internet ‘lost’ a student’s assignment en route from Japan? Could the final exam be sat, offline, in Hong Kong? How should we handle students who wrote pages and pages for the primitive blog, and others who never surfaced because they were totally daunted?

I chaired the course team for H801 Foundations of open and distance education, a 60-point nine-month course first presented in 1997 with about 40 students guided by three tutors. The team drew quite heavily on existing text material, including some from the University of South Australia, with which we had a materials exchange agreement. In fact, we soon discovered the course was seriously overloaded: our estimates of the time needed by students were too low, and we made the necessary changes for the second presentation. Indeed, we may have been learning more about distance education than some of our students! They were drawn from many institutions, mostly ones teaching on campus.

You may be interested to hear that last February a very different MA course team and tutors launched the first presentation of H800 Technology-enhanced learning: practices and debates. H800, another 60-point nine-month course, has exceeded its target recruitment by more than 20, with a record number of over 120 students beginning a packed programme of activities and interaction. One distinctive feature of the design is the use of Elluminate – a new tool for most tutors and students. Elluminate tutorials play a major role early in the course, and students also have the option to use the tool for their own study groups. Early feedback has brought largely positive reports on the Elluminate tutorials, and has commended the clarity of the H800 activities – which enable students to explore key issues and practices in the field of technology-enhanced learning. Later in the course, students will use social networking tools such as Delicious and Twitter.

Doubtless the university’s systems are being stretched again as H800 brings new challenges, but that is how innovation goes ahead. Toujours la change (have I remembered the right French saying?): this is new champagne in new bottles.

David

My SONY ebook reader

When I say ‘my Sony ebook reader’, I don’t actually own one yet. The one I’m using now is a Sony reader belonging to the DUCKLING project. Anyway, I ‘have’ an ebook reader in my care, and it is in my drawer right now.

First impressions: it’s light, thin and, most importantly, good looking and fashionable, with its brown leather cover. I watched a YouTube video the other day about the Sony ebook reader PRS-700 (a higher-specification model; mine is the PRS-505). The Sony 700 looks just cool! It has new features such as:

  • Disconnect with the cover
  • Easy text-size changes from S, M, L and XL to XXL (the PRS-505 offers three options: S, M and L)
  • Search, highlight and add notes to the highlighted text
  • Built-in LED light – easy reading in the evening

More importantly, it has a touch-screen display, so you can zoom in, move around and turn pages with a pen or a finger. However, these new and enhanced features don’t come cheap! A Sony 700 costs more than £300, almost double the price of a Sony 505!

I read my ebook on my train journey. The e-ink does work better in strong sunshine and in dark conditions (when the train went under a bridge or through a tunnel) than reading from a laptop screen. The interface is quite easy to use: turning pages, changing text size and navigating the menus are all simple. I even found my favourite detective book by Agatha Christie on my ebook reader! However, just when the story was getting interesting, I got a message from the device: “to continue reading, visit www.waterstones.com/eBooks to download the full version.”

I experimented with some PDF files on my ebook reader later on. It was easy, and all the PDFs displayed appropriately, including one with Chinese characters. However, one of my colleagues, Nichola, found some problems when handling PDFs on the reader. She found that when you create a PDF from an original Word file, you need to complete the ‘document properties’ box, filling in the ‘Author’ and ‘Title’ fields as a minimum, otherwise your PDF shows up with a meaningless entry on the reader. I didn’t come across this problem; it might be because I’m using Word 2007, whereas with Word 97–2003 you might run into this frustration.
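
For anyone who hits the same problem with PDFs they have already created, the missing metadata can also be added after the fact. Here is a minimal sketch using the pypdf Python library (my choice of tool, not one mentioned in this post; the file names and the Title/Author values are hypothetical examples):

```python
# Minimal sketch: copy an existing PDF and stamp Title/Author metadata onto it,
# so the reader has something meaningful to display in its book list.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("study_notes.pdf")      # hypothetical source file
writer = PdfWriter()
writer.append(reader)                      # copy every page across unchanged

writer.add_metadata({
    "/Title": "DUCKLING study notes",      # shown as the book title
    "/Author": "Ming Nie",                 # shown as the author
})

with open("study_notes_tagged.pdf", "wb") as f:
    writer.write(f)
```

The same effect can of course be achieved by filling in the document properties in Word or Acrobat before creating the PDF, as Nichola suggests.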

Another drawback I found is that the ebook reader doesn’t allow you to create a separate folder for your PDFs, so all the PDFs I uploaded got mixed in with the pre-loaded books. As a student, I might want to separate my study material from my books for entertainment, or just to sort my study materials into different categories. No doubt these features will improve with time once their utility goes beyond basic reading.

Ming Nie 10 April 2009

The weebly, the wiki and the webquest: an experiment in rapid, collaborative e-learning authoring

One way to create e-learning courses is to appoint armies of instructional designers, Flash scripters, Java programmers, graphic designers, audio-visual experts and quality testers, and have them monitored and chivvied along by Gantt-chart-wielding project managers as they churn out high-level specs, low-level specs, scripts, storyboards, prototypes and so on. In many cases, this is the way to go. Sometimes, however, something a bit simpler, faster and cheaper is needed. As Thiagi (my favourite rapid-instructional-design guru) points out: ‘There is a built-in bias toward overkill in the conventional [instructional design] process. The obsession – for doing it right the first time through painstaking analysis and planning, for pleasing all the people all the time through incorporating everyone’s inputs and feedback, and for attempting perfection through several rounds of testing, revision, and retesting – violates the Pareto principle. Much time (and other resources) can be saved by focusing on critical content and key steps and producing a lean instructional package. Improvements to this core package can be added gradually after it is implemented.’

Last weekend, with my student hat on, I stumbled upon the simplest, fastest, cheapest way I know to do collaborative e-learning authoring. In the context of a course on creating educational web environments which I am doing through USQ, a motley team of three of us (in Fiji, Kuwait and England) produced a very useable, 40-hour online course in the space of a week, with most of the work being done over the period of a single weekend. A process model emerged, which can be summarised as: get a couple of writers (maybe an instructional designer and a subject matter expert), have them go straight to the Web with the first draft, and then have them collaboratively build the learning programme ‘live’. That’s it. Simple. Fast. Cheap. Effective if your writers know what they’re doing. Several Web 2.0 tools make this scenario possible now – even desirable… but more about that in a moment.

First, a bit of background on how the model arose: Our task was to collaboratively create a WebQuest, which is ‘an inquiry-oriented lesson format in which most or all the information that learners work with comes from the web’. (See http://webquest.org.) The concept of the WebQuest was popularised by Bernie Dodge from San Diego State University in the mid-1990s, and is even more relevant today in view of the increasing availability of open educational resources on the Web, allowing course designers to focus on designing the learning process rather than creating the content. It can be used at all levels of education and training – our focus was on work-based training.

My WebQuest team started the assignment late, so we were forced to come up with a process for rapid design and development of our WebQuest site. This is how we did it:

1. Through our initial discussions on USQ’s Moodle discussion forum, we agreed on the basic parameters for our WebQuest – the target audience (health and safety officers in a company), the topic (instructional design – how to create a health and safety induction course for employees) and the task (learners were to produce a report for the Board of Directors outlining how they proposed to develop the health and safety course). We also agreed on the stages and steps that the WebQuest would be divided into.

2. Having recently discovered the wondrous, WYSIWYG, free weebly programme for website building, I decided to create a weebly site for our WebQuest, with page headings reflecting the learning stages and steps we had agreed on. By going straight to the Web with only the most tenuous outline of the course we were developing, we were implementing the notion of rapid instructional design in the extreme. If there are any Gantt-chart wielding project managers out there, they might be throwing up their hands in horror at this scenario, arguing that you can’t build the house before you’ve built the foundation. But Web 2.0 tools such as the weebly are so sophisticated now that you can add and remove web pages or completely alter the navigational structure of the site with just a few clicks and no knowledge of html at all (although you can view and tweak the html if you want to) – almost like modifying the foundation of your house after you’ve started building it, if you decide to move a wall or change the angle of your roof.

3. To organise the work of the WebQuest design team, we set up a work-allocation-wiki in the USQ Moodle forum, with a table indicating all the pages that needed to be developed for the WebQuest site, and who was planning to do which ones, along with a short ‘status report’ for each page. We updated this frequently – sometimes several times a day.

4. All three of us in the WebQuest design team had the password to access the weebly, so we could take turns to go in and add content to the sections we had committed ourselves to completing. This worked particularly well with the person in Fiji being in a different time zone – it was always exciting to see what she had added while I was sleeping!

5. We used the blog within the weebly to summarise the changes we had made, keeping this updated on a regular basis, so that the rest of the authoring team was continuously informed of developments. (Bearing in mind that the weebly can go into the public domain from day one, and can be viewed by anyone who has the URL, this has significant implications for participation and collaboration by stakeholders. The blog could also be used for commentators to add their feedback as the work progresses.) The blog would probably be deleted or hidden when the WebQuest was piloted with learners, although the transparency of the authors’ process might be of interest to some learners – especially as in our case, the subject of the WebQuest was instructional design.

6. In addition to the weebly, we created a wiki using the free pbwiki, to enable our WebQuest learners to summarise key points as they worked through the WebQuest. The wiki was also to be the main point of reference and collaboration for learners in the final stage of their WebQuest, when they were required to jointly produce a report to the Board of Directors, reflecting how they would apply their knowledge of instructional design in an authentic context.

The final product was a website containing a very structured process for learners to learn about instructional design by exploring selected resources that are freely available on the Web, with the help of some carefully scaffolded questions. Trainers’ notes were also provided. It’s not perfect and there are plenty of ways in which it could be improved with the addition of more time and resources, but in the hands of a capable online facilitator/trainer, it could provide a stimulating and useful learning experience for the learners. Much more so, I suspect, than some of those content-heavy, ‘electronic page-turner’ type courses that some commercial companies spend fortunes, and many months, on producing.

By Gabi Witthaus

First off the blocks? Implications of the revised approach to HEFCE’s strategy for e-learning (2009)

As the University of Leicester moves from the implementation of its first e-learning and pedagogical innovation strategy (2005-2008) through to the final phase of consultations and approvals for the new learning innovation strategy (2009-2012), HEFCE has recently published a document entitled Enhancing learning and teaching through the use of technology: A revised approach to HEFCE’s strategy for e-learning (2009).

This post scans the ‘2009 approach’ document and highlights some areas of variance from HEFCE’s earlier strategy for e-learning (2005), and how these variances might impact on our work in e-learning and learning technologies – both the R&D work that we do from project to project and our strategic function in charting courses for the use of technology in learning, teaching, assessment and research within the institution.

According to the (thankfully, brief) publicity material and (unfortunately, limited) media coverage accompanying the launch of HEFCE’s 2009 approach, the document ‘focuses on enhancing learning, teaching and assessment through the use of technology’. Some of it draws upon the published 2005 strategy but also reflects on ‘how technology can support individual institutions in achieving (some of) their key strategic aims’.

No surprises there … as the focus is exactly where it was in the 2005 strategy (i.e. on enhancing learning, teaching and assessment through the use of technology), but what is new is the ‘also reflects’ bit.

Renewal of the cycle of any strategy is crucial and this reflection comes at an opportune time. Consider this – whilst the 2005 strategy did not attempt to define e-learning, the 2009 approach acknowledges that e-learning is used as shorthand for the ‘array of technological developments and approaches’ in use throughout the sector.

The 2009 approach also takes specific cognizance of the great diversity of uses of ICT. New and emerging technologies, according to this approach, clearly provide exciting opportunities for enhancement and innovation in learning opportunities on the campus, within the workplace or at home.

HEFCE thus aims to build the revised framework to focus on the broader opportunities offered through the use of technology, rather than solely concentrate on specific issues like distance learning.

The 2009 approach also commits HEFCE to continue working with partners, particularly JISC and the HE Academy, to support institutions in enhancing learning, teaching and assessment through the use of technology. This appears to be a clear vote of confidence in the research-support function enabled by JISC and the HE Academy and, for beneficiaries of funding for research and development projects, augurs well for the availability of future funding for researchers in the field.

The 2009 approach – in a pluralist vein rather than as a prescriptive blueprint – while explicitly acknowledging that technology has a fundamental part to play in higher education, also emphasizes that individual institutions can have different strategic missions and could (nay, should) perhaps also use technology differently and innovatively – in ways that are in synch with their institutional contexts and in pursuit of their own strategic goals. The key implication here is that ‘institutions need to consider how to invest (HEFCE’s) block grant appropriately’.

A ‘block grant’, as readers will be aware, is HEFCE-speak for the means of distributing funding for learning and teaching which HE and FE institutions can use to support their aims and objectives. The block grants contain targeted allocations, which are designed to recognize the additional costs of priority areas such as widening participation and part-time provision.

This is over and above the ‘capital funding’ for learning and teaching, research and infrastructure that HEFCE distributes by formula as conditional allocations to be used by HEIs to invest in supporting infrastructure. HEFCE normally announces the capital funding for learning and teaching, and research together so that institutions can plan their buildings and equipment spending requirements effectively.

How, then, does the University of Leicester’s soon-to-be-published learning innovation strategy (2009-2012) sit within HEFCE’s ‘2009 approach’? The strategic imperatives that Leicester enshrines (e.g. ‘leading the UK in terms of innovation in teaching and learning through the application of e-learning’) do appear to be the basis for the learning innovation strategy (2009-2012), with its stated aims of:

  • continuing the promotion of pedagogical innovation,
  • increasing the deployment of technologies in pursuit of enhanced student learning experiences, and 
  • enabling research into e-learning in a way that directly addresses business opportunities and imperatives.

Fulfilment of the learning innovation strategy’s other aspirations – viz. providing equivalent and enhanced learning and support experiences for all Leicester students, and a framework that develops and extends the range of services and approaches already in place to deepen the understanding and deployment of learning technologies within the University – would, however, necessitate substantial investment from the institution. Surely a targeted intervention from the University’s block grant would enable implementation of specific aspects of the strategy.

Over to the powers that be… or the powers that ‘block’, as advantage lies with one who is first off their ‘blocks’!

- Jai Mukherjee / 8 April 2009

Achieving greatness: Can we rely on our genes, or is it all about hard work?

I wanted to focus my blog entry around an issue that I’ve read a lot about lately, and would be interested to hear what other people think. In our working lives, we all strive to be ‘successful’. Some of us have clearly defined routes to success, perhaps following a route that is planned out for us or pre-determined by the field we work in. For example, achieving the grades we ‘require’ at school, attaining a degree, going through higher education, perhaps working towards Chartership.


For others the route to success is very different: some people appear to be ‘blessed’ with a natural talent which, in itself, pre-determines and guides their future career. A ‘talented’ footballer; an ‘expert’ musician; a ‘born’ mathematician.


So what does it take to be great? Are some people naturally talented? Or are success and ‘greatness’ all about hard work and practice?


I recently stumbled upon two books that have got me thinking about these issues. One is called “Outliers” and the other is “Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else.” These books focus on the idea that being successful is not about our DNA or our genetics; it’s about hard work, determination and perseverance. I like this approach: it gives me hope that anyone can become successful or achieve greatness if they put their mind to it, work hard and learn from the mistakes they make along the way. Even the great Albert Einstein claimed, “It’s not that I’m so smart, it’s just that I stay with problems for longer”.


Within these books there is certainly evidence for this way of thinking, taking some of the world’s ‘great performers’ and considering their childhoods and experiences. The suggestion is that these people didn’t just effortlessly get where they are today: they spent hours and hours practising, dedicating themselves to attaining these skills.


But what about ‘child prodigies’? The psychological literature seems full of these case studies; anomalies; outliers. Are children with seemingly inexplicable skills ‘undeniable evidence’ of a ‘natural talent?’ And what about patients who suffer from neurological damage and, when they recover, have developed an ability or talent that they never had before? Similarly, the talent of autistic savants surely implies that there is some genetic reason for these areas of ‘genius’?


To finish with the (hopefully thought-provoking) words of René Descartes: “It’s not enough to have a good mind; the main thing is to use it well.”


Kelly Barklamb

E-books: It’s All About Timing

When I was a child, I remember skateboards arriving on my street in the mid 1970s as the latest fad. Some kids could ride them, most couldn’t (I didn’t even get close). Then skateboards seemed to fade for a decade or so, only to re-emerge in a much, much bigger way in the early 1990s with – I think – the West Coast surf/mountain bike/snowboard/grunge culture.

I may have got this cultural sequence and referencing wrong, but the point is, as with many things in life, the timing appears to be crucial. It’s the same with e-books.

Nine or ten years ago, Glassbook, MS Reader, Rocket eBook and many others were battling it out for universal acceptance. Poor revenue models (does anyone remember Stephen King’s ‘just-leave-a-buck’ system?), a lack of useful content (out-of-copyright material is out of copyright – i.e. free – for a reason, but hats off to the excellent Project Gutenberg nevertheless), the unwillingness of publishers to do anything other than dabble, and the dotcom crash all ensured e-books remained peripheral.

But now they’re back. Why will it be different this time around, especially in higher education? I would say that there are at least five good reasons.

First, the largest global online bookseller (see Amazon’s Kindle) and one of the largest technology manufacturers (see Sony’s Reader) are now key players. Publishers will trust both of these companies. Second, there is an acceptance that all traditional media companies (music, news, publishing) have to produce new revenue models not based on rigid digital rights management (DRM). Third, there is an increasing willingness (albeit tentative at the moment) among academic publishers to release useful material in digital form. Fourth, the public has realised that the emergence of the e-book in no way signals the death of the printed book. (How can it? The paperback surely is an example, in design terms, of the perfect combination of form and function.) And finally, there are the technological developments in educational infrastructure and the changing expectations of students.

So what does this mean for the future? Well, I genuinely believe that we are not far away from a situation where a student on a wireless-enabled campus at the end of a lecture is able to connect to that university’s Amazon storefront, check his or her credit balance, buy and download the chapter, section or pages recommended by the lecturer (not the whole book, mind), and then read, annotate and bookmark this in the coffee shop five minutes later. That student may even click on the hyperlinked references in the text’s bibliography and purchase additional ‘knowledge chunks’.

In effect, rather than buying several large printed textbooks, many of whose chapters will never be consulted, a student, during his or her degree, will construct a bought library of many of these high-quality knowledge chunks, each of which is highly specific to the course of study.

The hardware – whether a bespoke device, iPhone or netbook – is fun but, despite the whiz/wow factor, really not that important. Neither is the format of the material (PDF or HTML/XML), although I prefer the latter as it is far more powerful and long-lasting.

But connectivity, mobility and, most crucially of all, content are absolutely key. Excellent standards in the first two are already with us; for the third, it’s time for the academic publishers to climb back on their skateboards. They may tumble once or twice, but they really cannot afford to miss this potential market, DRM or not. In our world of VLEs and PLEs, the gains will be enormous.

Simon Kear

Social Notworking

Before you all jump in with comments about my spelling, don’t worry – I have not misspelt the title of this posting; it is simply a play on words. For today, trusted readers, I’m talking about a new phenomenon known as ‘Social Notworking’, a term I suspect will be included in the next edition of the Oxford English Dictionary!

It appears that students at Bournemouth University have been complaining that access to computers has been reduced because fellow students are hogging the machines to check their Facebook and Twitter accounts. There is a call for certain computers at Bournemouth to be specifically marked for academic use only. Interestingly, the debate has rumbled on, with some university sources defending social networks because they are also being used for legitimate academic purposes.

I find this scenario particularly interesting given the growing support for Personal Learning Environments (PLEs) and cloud computing – is this another ‘greying’ of the boundaries which technologies always appear to cause? Or is it that the growth of technology adoption is outpacing our understanding of its potential, and is therefore easily frowned upon?

I personally find Facebook and LinkedIn excellent ways of keeping in touch with large numbers and various cohorts of people from all aspects of my life; I also enjoy reading people’s statuses and seeing the kinds of things that are happening in others’ lives, where they are in the world and the issues they are reflecting on.

Perhaps you can share your experience of social networking and we can discuss the positive and negative aspects to help us clarify the situation for the future?

Matthew Wheeler
Keeper of the Media Zoo

The Little Boy

Many of us involved in teaching will be familiar with this poem by Helen Buckley. If you know it, I invite you to revisit it. If you don’t, then please read it. All of it.

The first time I read the poem was many years ago. An inspirational teacher I had during my MEd used it years later. Today, in the world of Web 2.0, iPhones, Second Life, Twitter and ‘GoogleLife’, The Little Boy seems to me as current as ever. I invite colleagues to ‘read’ this poem having perhaps replaced the setting, the characters and the technologies involved. Maybe we can come up with a version of this poem called ‘the little lecturer’?

Alejandro Armellini
