Benchmarking e-learning
Benchmarking is a management tool that has been applied in many areas of business, but it is only in 2005-06 that there has been immense growth in its application specifically to university use of educational technology, initially in New Zealand (Marshall, 2005), then in Europe including the UK under the auspices of the Higher Education Academy http://www.heacademy.ac.uk/benchmarking.htm, and most recently spreading to the US.

Benchmarking e-learning is now seen in the UK as a key enabler of change in universities - some 40 universities and university-level colleges http://www.heacademy.ac.uk/eLBPhaseOne.htm, around one quarter of all relevant UK institutions, are now starting work on this, with a further 12 having recently completed a pilot exercise http://www.heacademy.ac.uk/4779.htm.
It is possible to trace some early work on benchmarking e-learning in universities back to 1996 - on dimensions of virtuality in virtual universities - but the first work under the name of benchmarking appears to have been in the 1999-2002 era on the BENVIC project http://www.benvic.odl.org/ and some specific benchmarking activities in English colleges.
History
In addition to the above remarks, for a more general history of e-learning, including key dates of benchmarking e-learning events, see the History of virtual learning environments.
ACODE
ACODE is the eponymously named benchmarking scheme under development by the Australasian Council on Open, Distance and E-Learning, whose web site is at http://www.acode.edu.au/. Development of the scheme started in 2004 as a pilot project. There is a project page with a useful history and several key documents. The scheme is now available in draft form while it awaits final external peer review.
It is a criterion-based system in which criteria (divided into eight main benchmark areas) are scored on a 1-5 scale with the help of scoring statements. It takes a relatively wide view of e-learning, ensuring linkage with general learning and teaching, with IT and with staff development processes. The use of the word "alignment" in several criterion scoring statements suggests that it has been influenced by the MIT90s approach described elsewhere in this article.
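For illustration only, here is a minimal sketch in Python of the shape of such a criterion-based system; the area name, criterion name and scoring statements are hypothetical, not drawn from the ACODE draft:

```python
# Minimal sketch (not from the ACODE documents) of a criterion-based scheme:
# each criterion sits in one of the eight benchmark areas and carries
# scoring statements guiding the assessor to a score on the 1-5 scale.
# The area, criterion and statement texts below are hypothetical.

from dataclasses import dataclass

@dataclass
class Criterion:
    area: str                           # one of the eight benchmark areas
    name: str
    scoring_statements: dict[int, str]  # score (1-5) -> descriptor

    def validate_score(self, score: int) -> int:
        if score not in self.scoring_statements:
            raise ValueError("score must be one of the defined levels 1-5")
        return score

training = Criterion(
    area="Staff development",                # hypothetical area
    name="Training in e-learning pedagogy",  # hypothetical criterion
    scoring_statements={
        1: "No systematic training available",
        2: "Ad-hoc training on request",
        3: "Training programme exists but uptake is patchy",
        4: "Training embedded in staff development processes",
        5: "Training aligned with institutional strategy and regularly reviewed",
    },
)

print(training.validate_score(4), "->", training.scoring_statements[4])
```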
Other information
- There are ACODE press releases of 18–19 May 2006 and 16–17 November 2005.
- A potential ACODE scheme is mentioned in a presentation by Paul Bacsich in November 2005 at the University of Sydney, significant because Bacsich is a benchmarking analyst also cited in one of the ACODE press releases.
- An ACODE statement of January 2006 notes that in "December 2005: Dr Stephen Marshall (Victoria University of Wellington) surveyed members on how well supported students are with regard to IT support and access to helpdesk". His institutional affiliation and email address make it clear that this is the same Dr Stephen Marshall who authored the e-Learning Maturity Model methodology for benchmarking e-learning.
BENVIC
BENVIC is a methodology for benchmarking e-learning developed under an EU project, also called BENVIC (in full, Benchmarking of Virtual Campuses), in the era 1999-2001. There is a project web site still at http://www.benvic.odl.org/ – but it has not been updated since February 2002. The BENVIC consortium was led by UOC, the Open University of Catalonia, and had a strong set of partners (including University College London in the UK). However, for various reasons, including the retirement of key staff, the work does not seem to have continued – or at least web searches indicate that follow-up work is not evident.

The BENVIC system has eight core meta-indicators. These are:
- Learner Services
- Learning Delivery
- Learning Development
- Teaching Capability
- Evaluation
- Accessibility
- Technical Capability
- Institutional Capability
All of them, with the exception of Accessibility, are the kind of top-level groupings found in other methodologies. UK readers should also note that Accessibility here means not only accessibility in the narrow sense of SENDA but also covers many aspects of Widening Participation.
Each of these eight meta-indicators is associated with a range of assessment measurements (indicators) which enables BENVIC users to carry out an initial benchmarking diagnostic. The assessment measurements are of three types:
- structural measurements
- practice measurements
- performance measurements.
There are in total 72 structural and practice indicators – rather more than in many systems, but fewer than in some others. Whether or not this is an issue depends crucially on how difficult the indicators are to score and whether they are all "compulsory".
Indicators (other than the performance indicators, which are metrically based, i.e. numeric) are scored on a scale of 0-2. Many other systems use a scale of 1-5, but there is a natural mapping of 0 to 1, 1 to 3, and 2 to 5.
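That mapping is linear: a score x on the 0-2 scale corresponds to 2x + 1 on the 1-5 scale. A minimal sketch, purely illustrative since BENVIC itself defines no such conversion:

```python
def benvic_to_five_point(score: int) -> int:
    """Map a BENVIC 0-2 indicator score onto a 1-5 scale.

    Uses the natural linear mapping suggested above:
    0 -> 1, 1 -> 3, 2 -> 5, i.e. 2 * score + 1.
    """
    if score not in (0, 1, 2):
        raise ValueError("BENVIC structural/practice scores are 0, 1 or 2")
    return 2 * score + 1

assert [benvic_to_five_point(s) for s in (0, 1, 2)] == [1, 3, 5]
```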
There is a more detailed analysis of BENVIC on the web.
CHIRON
CHIRON is an EU-funded project (under the Leonardo programme) whose aim is "to develop reference material presenting and analysing research outcomes, experiments and best practice solutions for new forms of e-learning, based on integration of broadband web-, digital TV- and mobile technologies for ubiquitous applications in the sector of non-formal and informal life-long learning". There is a CHIRON web site at http://semioweb.msh-paris.fr/chiron/. They tend to use the phrase "u-learning" rather than "e-learning", where "u" denotes "ubiquity".

As part of this brief, CHIRON appears to be developing a benchmarking methodology. This is specifically referred to in Work Package 7.
There are 11 criteria, divided into a total of 216 indicators. The criteria are as follows:
- 01 Goals and Objectives of the course (12 indicators)
- 02 Institutional Support (14 indicators)
- 03 Course Development (50 indicators)
- 04 Course Structure (12 indicators)
- 05 Course Content (25 indicators)
- 06 Teaching/Learning (19 indicators)
- 07 Student Support (18 indicators)
- 08 Faculty Support (4 indicators)
- 09 Evaluation and Assessment (24 indicators)
- 10 Accessibility (26 indicators)
- 11 Language (12 indicators)
Most of the indicators are best described as specific and rather detailed e-learning standards and guidelines (for example on house style, usability, etc). The remaining few are drawn from a range of sources, including from the Quality on the Line criteria developed in the late 1990s by the Institute for Higher Education Policy in the US.
ELTI
ELTI is the name of one of the methodologies that was trialled in 2006 in the UK Higher Education Academy Benchmarking Pilot, by three universities:
- University of Bristol
- University of Hertfordshire
- University of Wales Institute, Cardiff
The version of ELTI on which the trials were originally based is the revised 2003 JISC version held at the JISC site http://www.jisc.ac.uk/index.cfm?name=project_elti. The document ELTI Audit Tools is the most directly relevant to what is commonly accepted as benchmarking, especially section 1 on so-called "Institutional Factors"; however, at least one of the pilot sites took a broader view.
The UK view of ELTI is given in the entry for ELTI in the Higher Education Academy Wiki.
Further details
The ELTI audit was originally developed as part of a JISC project and was designed to inform the process of embedding learning technologies, to assist in developing appropriate institutional structures, culture and expertise, and to encourage cross-boundary collaboration and groupings.
The ELTI approach focuses on:
- 3 general areas for exploration: Culture, Infrastructure and Expertise
- 12 key factors, 4 in each area
- up to 10 indicators per factor, agreed so as to reflect the institutional context
- indicators expressed as positive statements, assessed on a 1-5 scale but optionally accompanied by qualitative comments (see the sketch below).
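For illustration, a minimal sketch of how that three-level structure (areas, factors, indicators) might be held in code; the factor and indicator wording is placeholder text, since the actual ELTI factors are defined in the JISC audit tools:

```python
# Sketch of the ELTI audit structure: 3 areas x 4 factors, up to 10
# indicators per factor, each indicator a positive statement scored 1-5.
# Factor and indicator texts below are placeholders, not the ELTI wording.

elti_audit = {
    "Culture": {
        "Factor 1 (placeholder)": [
            {"statement": "Senior managers champion e-learning", "score": 3,
             "notes": "a qualitative comment may accompany the score"},
            # ... up to 10 indicators agreed per factor
        ],
        # ... 3 further factors in this area
    },
    "Infrastructure": {},   # 4 factors, elided here
    "Expertise": {},        # 4 factors, elided here
}

def factor_ok(indicators: list[dict]) -> bool:
    """Check a factor: at most 10 indicators, each scored on 1-5."""
    return len(indicators) <= 10 and all(1 <= i["score"] <= 5 for i in indicators)

assert all(factor_ok(indicators)
           for factors in elti_audit.values()
           for indicators in factors.values())
```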
During the e-Learning Benchmarking Pilot, the three institutions used the tools, producing their own alterations and contextualised indicators.
The three institutional blogs are at:
- University of Bristol blog
- University of Hertfordshire blog
- University of Wales Institute, Cardiff blog.
Further reading for UK HE newcomers to the ELTI framework
- Higher Education Academy e-Learning Benchmarking Project Consultant Final (Public) Report, Peter Chatterton, September 2006 - available at http://elearning.heacademy.ac.uk/weblogs/benchmarking/wp-content/uploads/2006/09/Chatterton-final-public-report20060904_publish.doc. In this crisp 13-page report the main section of relevance to methodology is Section 3 (especially pages 7–8).
- Benchmarking e-learning: Embedding Learning Technologies Institutionally (ELTI) - a series of Reports from the University of Hertfordshire. This is a comprehensive and rather daunting set of reports with a mass of information. From the narrow standpoint of methodology the most useful are:
- Guide to Proforma Spreadsheets, available at http://perseus.herts.ac.uk/uhinfo/library/s77652_3.pdf. This is an introduction to the next document.
- UH Benchmarking Report - Annexe (of spreadsheets), available at http://perseus.herts.ac.uk/uhinfo/library/r77422_3.pdf. This has a series of Excel spreadsheets including a comprehensive revision of the ELTI audit tool with amended questions (including some new questions and many revised questions) and scoring statements.
- A department-based revision of the ELTI survey available via the University of Bristol blog or directly at http://www.survey.bris.ac.uk/ltss/elti.
eMM (e-learning Maturity Model)
The phrase "eMM" is the commonly used abbreviation for the longer phrase e-learning Maturity ModelE-learning Maturity Model
The E-Learning Maturity Model in software engineering is a model to assess the capability of e-learning processes.- Overview :eMM is a quality improvement framework based on the ideas of the Capability Maturity Model and Software Process Improvement and Capability dEtermination methodologies...
.
The E-Learning Maturity Model (eMM) is a quality improvement framework based on the ideas of the Capability Maturity Model (CMM) and SPICE (Software Process Improvement and Capability dEtermination) methodologies. The underlying idea that guides the development of the eMM is that the ability of an institution to be effective in any particular area of work depends on its capability to engage in high-quality processes that are reproducible and can be extended and sustained as demand grows.
The eMM provides a set of thirty-five processes, divided into five process areas, that define a key aspect of the overall ability of institutions to perform well in the delivery of e-learning. Each process is selected on the basis of its necessity in the development and maintenance of capability in e-learning. All of the processes have been created after a rigorous and extensive programme of research, testing and feedback conducted internationally. Capability in each process is described by a set of practices organised by dimension.
The eMM supplements the CMM concept of maturity levels, which describe the evolution of the organisation as a whole, with dimensions. The five dimensions of the eMM are:
- Delivery
- Planning
- Definition
- Management
- Optimisation
The key idea underlying the dimension concept is holistic capability. Rather than the eMM measuring progressive levels, it describes the capability of a process from these five synergistic perspectives. An organization that has developed capability on all dimensions for all processes will be more capable than one that has not. Capability at the higher dimensions that is not supported by capability at the lower dimensions will not deliver the desired outcomes; capability at the lower dimensions that is not supported by capability in the higher dimensions will be ad-hoc, unsustainable and unresponsive to changing organizational and learner needs.
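One way to picture an eMM assessment, then, is as a grid of processes against the five dimensions, with a capability rating in each cell. A minimal sketch under stated assumptions - the dimensions are the eMM's, but the process names and the 0-4 rating scale are illustrative only:

```python
# Sketch: an eMM-style assessment as a grid of processes against the five
# dimensions. The dimensions are the eMM's; the process names and the 0-4
# rating scale here are illustrative assumptions, not eMM definitions.

DIMENSIONS = ["Delivery", "Planning", "Definition", "Management", "Optimisation"]

capability = {
    "Hypothetical process A": {"Delivery": 3, "Planning": 2, "Definition": 1,
                               "Management": 1, "Optimisation": 0},
    "Hypothetical process B": {"Delivery": 1, "Planning": 0, "Definition": 0,
                               "Management": 0, "Optimisation": 0},
}

def weak_dimensions(process: str, threshold: int = 2) -> list[str]:
    """Dimensions of a process rated below the threshold.

    Echoes the holistic point above: capability must be present across
    all five dimensions, not just one, to be sustainable.
    """
    return [d for d in DIMENSIONS if capability[process][d] < threshold]

print(weak_dimensions("Hypothetical process A"))
# ['Definition', 'Management', 'Optimisation']
```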
Full details of eMM can be found at the master site http://www.utdc.vuw.ac.nz/research/emm/index.shtml. Updates and discussion appear on the eMM Blog.
Version 2 of eMM has changed considerably from Version 1 of 2003, as noted at http://www.utdc.vuw.ac.nz/research/emm/VersionTwo.shtml.
Note that the eMM and associated documentation are licensed under a Creative Commons Attribution-ShareAlike 2.5 License.
The eMM is being trialled in the Higher Education Academy Benchmarking Pilot by the University of Manchester. Additional projects applying the eMM are underway, supported by the Scottish Funding Council in Scotland and by ACODE in Australia. Development and application of the eMM in New Zealand was supported by the New Zealand Ministry of Education Tertiary E-Learning Research Fund.
E-xcellence
E-xcellence is an EU-funded project run by EADTU (the European Association of Distance Teaching Universities) with the assistance of 12 other partners. It has a project web site at http://www.eadtu.nl/e%2Dxcellence/.

E-xcellence started in January 2005 and is due to conclude in November 2006, with a major launch at the EADTU conference in Tallinn, Estonia, 23–24 November 2006. Originally E-xcellence was not envisaged as a benchmarking methodology but as a quality monitoring tool; however, about a year into the project there was a shift in emphasis, and benchmarking is now one of the aims envisaged for E-xcellence. In fact there are three orientations of the methodology:
- Assessment tool (at both institutional and programme level) (i.e. benchmarking)
- Quality improvement tool (internal quality care system)
- Accreditation tool
Expected outcomes include:
- List of criteria for "good" e-learning (i.e. setting standards of excellence and indicators for validation)
- Manual on good practices (a web-based guide)
- Quality assurance system (internal validation based on the standard of excellence)
- Reports on pilots which will test the validation approach
- Establishment and training of a visitation team both for quality assurance and for accreditation (to be seen as distinct procedures)
There is little public information so far but it is believed that the basis of the E-xcellence benchmarking methodology is as follows:
- Based on criteria, with (at the time of writing) 20 threshold criteria and a further 30 excellence criteria
- Each criterion is a bundle of indicators
- Criteria are not yet scored, but this may be added by the time of the final release
- The threshold" level can be determined by a self-audit, but the "excellence" level requires in addition a visit from an expert team.
IQAT
IQAT - pronounced "eye-cat" - is a benchmarking and quality enhancement methodology developed by Hezel Associates, a well-known firm of e-learning consultants, in conjunction with a number of university partners. The methodology was formally launched in June 2006. It describes itself as "a web-based tool to track and benchmark institutional data systematically across time and among peer institutions".
The development was undertaken in partnership with NUTN, the National University Telecommunications Network, and with sponsorship from Cisco Systems. NUTN has for some time had a major interest in quality, and more recently benchmarking, as demonstrated for example by the topics and speakers at its 2006 conference, which included a launch presentation of IQAT. One of the leading organisations in NUTN is Michigan State University, which has a leading role in organising an upcoming (October 2006) conference on quality in Beijing, China, entitled the First International Forum on Online Education: Quality Assurance, with a range of quality and benchmarking experts as speakers.
It is a commercial and proprietary tool. For additional information, inquiries should be directed to Hezel Associates.
MASSIVE
MASSIVE is an EU-funded project coordinated by the University of Granada - the web site is at http://cevug.ugr.es/massive/.

The prime aim of the project is to design a model of support services for European universities engaged in e-learning. This is being done in a cooperative way involving a large network of organisations - the members of the MASSIVE consortium - together with stakeholders not directly participating in the project, who liaise via a Strategic Advisory Committee. The university members of MASSIVE include:
- Universidad de Barcelona
- University of Bergen
- Budapest University of Technology and Economics
- University of Edinburgh
- EuroPACE and its member universities
Student involvement is handled via ESIB, the National Unions of Students in Europe.
A key outcome of MASSIVE is to promote a peer review evaluation approach, based on models widely tested in the university partners. Via Peer Review Visits, those in charge of the best support-service practices will help each university to refine and improve its support services for e-learning. At this point the project becomes very similar to a benchmarking project.
MIT90s
The MIT90s framework has been used by the University of Strathclyde (one of the 12 institutions in the Higher Education Academy Benchmarking Pilot) to assist in structuring its approach to benchmarking e-learning. For details of its use in Phase 1 see the Higher Education Academy Benchmarking Wiki entry for MIT90s.
The framework was developed by Michael Scott Morton as part of the work of the "MIT90s" initiative, which flourished at MIT in the early 1990s. Michael Scott Morton is now Professor Emeritus at the MIT Sloan School of Management. The work is cited under various names: it is correctly entitled The Corporation of the 1990s: Information Technology and Organizational Transformation, edited by Michael Scott Morton with an introduction by Lester Thurow (Oxford University Press, USA), published February 1991, ISBN 0-19-506358-9.
There has been some confusion over the correct name of the initiative and the framework. Readers will find "MIT 90s", "MIT90" and even "MITs 90" in various references.
The MIT90s framework has been central to a number of JISC and related studies (including from DfES) on adoption and maturity. In UK post-16 e-learning terms, probably the most successful use of this is in the RAIIE study led by David Nicol of the University of Strathclyde, funded by JISC during the period April 2003 to May 2004, which produced a final report A Framework for Managing the Risks of e-Learning Investment. There is also a substantial strand of work in Australia associated with the names of Philip Yetton, Anne Forster, Sandra Wills and others, some of it funded by DETYA. This also used the concept of strategic alignment described below.
The MIT90s initiative developed several companion pieces of work, of which two are from Venkatraman: transformation levels and strategic alignment. Professor N. Venkat Venkatraman is now Professor of Management at Boston University School of Management (and also a visiting professor at London Business School) but was at the Sloan School of Management at MIT during the period of relevance. His work on IT-induced business reconfiguration is Chapter 5 (pages 122-158) of the main book referred to above.
The Venkatraman thesis is that business use of IT passes through five levels, differing in both the degree of business transformation and in the range (and amount) of potential benefits. The levels are:
- Localised exploitation
- Internal integration
- Business process redesign
- Business network redesign
- Business scope redefinition.
Levels 1 and 2 are called evolutionary levels – levels 3, 4, and 5 are called revolutionary levels.
This has been applied to educational systems, especially in the schools sector, by Becta and DfES.
In passing, it is interesting that this is one of the first cases where a 5-point scale was used in a context akin to benchmarking.
The MIT90s framework could have relevance to e-benchmarking frameworks. In particular, the latest version (2.0) of the Pick&Mix methodology uses the MIT90s framework for tagging its criteria; a beta description of the 2.0 release is now available.
Further reading
In addition to the work from the University of Strathclyde - see their benchmarking blog for details - there are four main papers/reports that newcomers to the MIT90s framework are advised to read:
- Wills, Sandra, "Strategic Planning for Blended eLearning", paper presented to the IEE conference ITHET06, Sydney, July 2006, available at http://ro.uow.edu.au/asdpapers/36/. This is a succinct description of an application of the MIT90s approach in a university.
- Yetton, Philip et al., Managing the Introduction of Technology in the Delivery and Administration of Higher Education, DEETYA, available at http://www.dest.gov.au/archive/highered/eippubs/eip9703/front.htm. If you have time to read just one longer report, read this one.
- Uys, Philip, Towards the Virtual Class: key management issues in tertiary education, PhD dissertation, Victoria University of Wellington, 2000, available at http://www.globe-online.com/philip.uys/phdthesis. See in particular chapters 2 and 10. This dissertation describes the value of the MIT90s framework in structuring and analysing a large implementation action research programme – and the heuristics derived. It is, along with the work of Wills, one of the few examples of serious feedback into the MIT90s framework.
- Segrave, Stephen, Holt, Dale and Farmer, James, “The power of the 6three model for enhancing academic teachers’ capacities for effective online teaching and learning: Benefits, initiatives and future directions”, Australasian Journal of Educational Technology (AJET) 21(1), 2005, available at http://www.ascilite.org.au/ajet/ajet21/segrave.html. This is a good read useful as a modern confirmation of the relevance of the MIT90s framework.
The original MIT90s book is:
- Scott Morton, Michael S. (ed), The Corporation of the 1990s: Information Technology and Organizational Transformation, Oxford University Press, 1991, ISBN 0-19-506358-9. It is still an excellent and quite fast read.
To fill in further details readers are also referred to:
- Pennell, Russ, and Wills, Sandra, “Changing horses in mid-stream: a new LMS plus improved teaching”, Ausweb06, available at http://ausweb.scu.edu.au/aw06/papers/refereed/pennell/paper.html.
- Venkatraman, N. and Henderson, J. C. "Strategic alignment: Leveraging information technology for transforming organizations", IBM Systems Journal Vol. 32, No. 1, 1993, available at http://domino.watson.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/b0d32b9975af5a2e85256bfa00685ca0?OpenDocument.
A book of great interest for those interested in seeing a comprehensive framework based on MIT90s is:
- Ford, Peter, et al., Managing Change in Higher Education: A Learning Environment Architecture, Society for Research into Higher Education and Open University Press, Buckingham, 1996, ISBN 0-335-19792-2 (hardback). This describes the OPENframework, developed by ICL in part based on MIT90s thinking, and its application to IT-driven change management. It was popular in JISC circles, in particular the MLE Steering Group, but there is no information on the web about its actual use in specific universities.
OBHE
OBHE is an eponymous benchmarking methodology run by OBHE, the Observatory on Borderless Higher Education. The Observatory is a joint initiative of ACU, the Association of Commonwealth Universities, and Universities UK, the association of all UK universities. The Observatory is now 100% funded by subscriptions and consultancy, and has over 130 institutional subscribers from more than twenty countries. It offers a wide range of services, of which benchmarking is one. Within the benchmarking offering is a range of sub-offerings, of which one is deployed for the Higher Education Academy clients.
The OBHE methodology is a collaborative benchmarking methodology in which a group of institutions get together and jointly agree relevant areas of interest (in this case, within the e-learning space) and, in a later phase, look for good practices. A succinct description of the variant of their methodology used for the Higher Education Academy Benchmarking Pilot is here.
The methodology goes back to benchmarking work done for European and Commonwealth universities in the 1990s. A related study to the Higher Education Academy Pilot is the work done for OECD to produce the report E-learning in Tertiary Education: Where Do We Stand? As reported in the Proceedings of the OECD Conference on Post-Secondary E-learning, "This was based on a detailed, qualitative survey of current e-learning practice in 19 institutions of higher education in North America, South America, Europe and the Asia-Pacific region. The information gathered through this survey was complemented by quantitative data collected by OBHE from the 500 members of the Association of Commonwealth Universities and of Universities UK."
Pick&Mix
Pick&Mix (in the past called "Pick & Mix" with spaces) is the name of one of the methodologies being trialled in the Higher Education Academy Benchmarking Pilot, by three universities:- University of ChesterUniversity of ChesterThe University of Chester is a public research university located in Chester, United Kingdom. The University, based on a main campus in Chester and a smaller campus in Warrington, offers a range of foundation, undergraduate and postgraduate courses, as well as undertaking academic research.Chester...
- University of LeicesterUniversity of LeicesterThe University of Leicester is a research-led university based in Leicester, England. The main campus is a mile south of the city centre, adjacent to Victoria Park and Wyggeston and Queen Elizabeth I College....
- Staffordshire UniversityStaffordshire UniversityStaffordshire University is a university with its main campus based in the city of Stoke-on-Trent, and with other campuses in Stafford, Lichfield and Shrewsbury.- History :...
The version of Pick&Mix on which the trials were originally based is version 1.0, first described in a public domain document from the ALT-C 2005 conference and refined slightly to version 1.1 for the Higher Education Academy. During the pilot, after the criterion-setting phase, this was updated to version 1.2.
The release being offered to HEIs for Phase 1 is version 2.0. A beta version of this is described here.
Version 2.0 has benefited substantially from input from the pilot user group of the Universities of Chester, Leicester and Staffordshire, who share in the moral rights of authorship. Released versions of Pick&Mix are in future to be put into the public domain via a Creative Commons license (as were version 1.0 and a summarised literature search).
Pick&Mix was first developed in 2005 after an extensive literature search to suit the needs of Manchester Business School
Manchester Business School
Manchester Business School is the largest department of the University of Manchester in Manchester, England. According to Bloomberg Business Week's ranking of the world's best business schools the MBS MBA is ranked third in the world...
for a comparative methodology for benchmarking e-learning, and a beta version of 1.0 used for a study of 12 comparable institutions to Manchester Business School. The full study is and remains confidential to MBS but a presentation of the highlights was made at the University of Sydney in November 2005.
Further information on Pick&Mix including a range of presentations and papers, and material on related methodologies, within the "critical success factors" tradition of benchmarking can be found here.
The recognised abbreviation for Pick&Mix (e.g. as used in tables) is "PnM", although "P&M" is sometimes seen also.
Specific countries
New Zealand
New Zealand is using the eMM methodology.
United Kingdom (UK)
In universities, the Higher Education Academy is deploying a small number of methodologies. See their benchmarking page for an entry point to material.
The Higher Education Academy benchmarking blog can be accessed at http://elearning.heacademy.ac.uk/weblogs/benchmarking/.
The Higher Education Academy benchmarking wiki is also available - note that it is oriented to material relevant to UK higher education.