Embedding assessment in course design: the case of a research methods course

In my last post on designing a new approach to research methods within a master's course I adopted a framework based on the use of threshold concepts. The focus of this theoretical approach is on aiding students to move between three stages in their understanding of ‘threshold’ concepts, together with attendant knowledge and skills. This is a transition between:

  • pre-liminal
  • liminal
  • post-liminal

Part of the difficulty involved in assessing student understanding and learning within this framework is the complexity involved in making explicit the level of prior learning at the beginning of the module and the subsequent trajectory which individuals follow as they transcend each level. Land and Meyer (2010) argue that assessment should focus on the conceptual difficulties which students face whilst accepting that these are a natural part of the learning process. As such, they argue:

‘… if, as we maintain, the transformations occasioned by threshold concepts are important, and require a rather different way of looking at the curriculum, then it follows that such transformations will require a more nuanced and generative model of assessment to help us purposefully identify variation in progress and understanding between individual learners.’ (Land and Meyer, 2010: 63)

They make clear that each individual will move through the process of ‘liminal shift’ at different rates and in different ways, and therefore it is important that assessment is used to guide and establish, for both teacher and student, what the difficulties and emerging understandings are within their thinking and learning. This requires both a framework for understanding learning and also a way of attempting to make that learning visible to both teacher and student. One example which is given as a medium for achieving this is the use of concept mapping, drawing on the work of Kinchin and Hay (2000) (another very detailed consideration of concept mapping is that developed by Novak, 1998). Again, Land and Meyer (2010) suggest that the use of tools such as concept mapping allows for four distinct advantages in relation to making student understanding and learning explicit:

  1. they allow us to uncover what each student knows rather than attempting to anticipate this as might be the case in any test scenario where questions are created by the tutor thereby bounding and anticipating ‘correct answers’;
  2. they allow students to demonstrate what knowledge they possess, and importantly how they have arranged that knowledge within their own minds;
  3. they become a narrative on the developing understanding of the student rather than a series of basic sequential snapshots;
  4. they allow us to see which concepts remain resistant to change within the minds of students, whilst at the same time understanding how interrelations between concepts may have changed within the students’ thinking.

Importantly, the use of tools such as concept mapping encourages the introduction of self-explanation theory, which involves dialogue with the self to encourage the externalising of learning (for further consideration see Chi et al (1989) http://chilab.asu.edu/papers/ChiBassokLewisReimannGlaser.pdf, and for some approaches to developing self-explanation see Hausmann et al (2009) http://www.lrdc.pitt.edu/pubs/Abstracts/HausmannSelf-Explanation.pdf). Underlying all of this is the idea of understanding learning through the lens of variation theory. The Theory of Variation is based on the work of Marton and Booth (1997), which argues that there is no single way to understand, experience or think about a particular phenomenon, an argument based upon a phenomenographic tradition. Tong (2012: 3) emphasises that:

‘In learning, individual students make sense of new concepts in different ways, according to their existing understandings and frameworks of knowledge. This requires teachers to engage closely with their students to grasp the variations in understandings and knowledge so they can take account of this diversity in structuring the learning activities in a lesson (Marton and Tsui, 2004).’

Therefore, importance is given to the ‘object of learning’, which might be a particular concept or area of knowledge. Once identified, the object of learning needs to be understood in relation to its critical features, i.e. the particular characteristics which differentiate it from any other object of learning. Consultation with students through some medium is important here so as to understand the variation in prior learning and understanding which exists before teaching begins. Having understood both student conceptualisations and the features of the particular object of learning, activities can be designed which emphasise the features which are important in aiding student understanding. Ways in which this can be developed include making comparisons through contrast, separation, generalisation and fusion. In relation to assessment, the central aspect here is the ability of both teacher and student to externalise learning, thereby allowing networks of knowledge and understanding to be interrogated so as to allow for targeted and appropriate pedagogic approaches and subject content. It is in this way that assessment can begin to take the form of an embedded and constant element of learning rather than a staged snapshot.

By bringing together threshold concepts, variation theory and the principle of assessment as learning, more practical ways of tracking student understanding become possible. Approaches might include regular concept mapping, the use of reflective diaries, the development of portfolios and the possibility of more technologically led capture of self-explanation. One example of this might be the use of screen capture technology such as Jing (http://www.techsmith.com/jing.html) or Screencast-O-Matic (http://www.screencast-o-matic.com/). This follows on from the work of Carl Simmons at Edge Hill University, who has already made very positive use of video feedback to students. Students can be asked to create a PowerPoint slide or other artefact explaining a particular object of learning within, or at the end of, a session, including a narration captured whilst the screen capture software is running, which is then sent to the tutor. This allows for the capture of levels of conceptual difficulty and understanding as they occur, while the students are involved in their learning, and embeds the notion of self-explanation. The advantage of this approach is that each reflection is a maximum of five minutes long (Jing only allows five-minute captures), thereby allowing tutors to judge levels of misconception or emergent understanding very quickly. If linked to the use of simple podcast feedback on the part of the tutor, very rapid support can be offered in a rich feedback environment. In addition, where a large proportion of students are obviously still struggling with the concepts and knowledge involved, the tutor gains a more nuanced understanding of where the problems exist and can therefore adjust the features within the object of learning to allow more targeted teaching in future sessions.

The use of tools for capturing the trajectory of student learning allows individuals to reflect upon those areas where they have begun to feel more confident whilst also highlighting areas where understanding is still partial at best. It also allows for more in-depth and nuanced discussions between individual students and tutors, as there is a richer vein of evidence for understanding student learning, misconceptions and potential next steps.

By bringing together threshold concepts, attendant bodies of knowledge and systems based on assessment as learning and variation theory, a clearer, deeper and more critical framework can be developed in support of student learning. If ‘progression’ is seen from the perspective of trajectories from pre-liminal to post-liminal states, then informal assessment becomes more focused on discussions concerning concepts, knowledge, and the relational aspects between them. This then gives a far more focused approach to regular feedback to students from tutors and also offers a clear framework for ever more critical approaches to self-explanation. In addition, where formal assessment is required, in our own case through the production of a conference poster and a formal written assignment, assessment outlines can use the same language as that adopted within the wider course, as can the feedback provided; in this sense, whilst still essentially summative, these formal assessments become a far more obvious extension of the course itself.


Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Hausmann, R., Nokes, T., VanLehn, K., & Gershman, S. (2009). The design of self-explanation prompts: The fit hypothesis. Proc. 31st Annual Conference of the Cognitive Science Society. pp. 2626–2631.

Kinchin, I. & Hay, D. (2000) ‘How a qualitative approach to concept map analysis can be used to aid learning by illustrating patterns of conceptual development.’ Educational Research, 42(1), 43-57.

Land, R. and Meyer, J.H.F. (2010) ‘Threshold Concepts and Troublesome Knowledge (5): Dynamics of Assessment.’ In Threshold Concepts and Transformational Learning, J.H.F. Meyer, R. Land and C. Baillie (eds), Rotterdam: Sense Publishers, 61-79.

Marton, F. & Booth, S. (1997). Learning and awareness. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.

Marton, F. & Tsui, A.B.M. (2004). Classroom discourse and the space of learning. Mahwah, NJ: Lawrence Erlbaum Associates, Inc., Publishers.

Novak, J.D. (1998) Learning, creating and using knowledge: Concept maps as facilitative tools in schools and corporations. Abingdon: Routledge.

Tong, S.Y.A. (2012) ‘Applying the Theory of Variation in Teaching Reading.’ Australian Journal of Teacher Education, 37(10), 1-19 (accessed at http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=1800&context=ajte)


(Re)designing a research methods module for an education master’s

Over the past five years the MA that I have taught on has grown and gone from strength to strength. However, whilst most modules have become increasingly enriched, one module stands out as being a constant issue: research methods. In a full-time, campus-based course, research methods has a number of potential problems to overcome. It can sometimes feel somewhat segregated from the rest of the course, and it is often the first time our students (many of whom are international) have completed a module solely focused on research methods. We’ve tried a number of different approaches, mainly focused on alternative ways of developing an in-course research project. However, I think there are wider issues in developing research methods within an education master’s degree, centring on:

  • Differences between undergraduate experiences, particularly due to the different disciplines students have pursued and the epistemological traditions of different national backgrounds
  • How master’s degree provision maps onto doctoral studies. I think that master’s research methods courses are often ‘doctoral-lite’ with lots of breadth but less depth.
  • How emerging understanding is captured in practical application
  • How a core of knowledge, understanding and skills can be developed which are well embedded in groups which have a huge diversity of prior learning and cultural diversity with respect to research in general

This can lead to the development of a ‘Cook’s Tour’ approach where there is a list of content to get through which sketches out the vast majority of the research methods ‘oeuvre’ but which also leaves little time for real engagement and understanding. So how might a course be reconsidered and on what lines? This depends on what we want students to get from the experience of a research methods course. Do we just want them to have lots of knowledge, in which case the Cook’s Tour might be satisfactory, if of limited applied use, or do we want something different? I am going to argue for three main aims in a research methods course at master’s level:

  1. A clear and critical understanding of the ‘core’ of research methods at a conceptual level
  2. An emerging understanding and application of these concepts at a practical/applied level
  3. A good foundation on which to build at doctoral level whilst also providing a practical foundation for application beyond the academy for those who don’t further their studies.

This has led me to reconsider the development of a course using the idea of threshold concepts, explained as:

‘A threshold concept can be considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress. As a consequence of comprehending a threshold concept there may thus be a transformed internal view of subject matter, subject landscape, or even world view. This transformation may be sudden or it may be protracted over a considerable period of time, with the transition to understanding proving troublesome. Such a transformed view or landscape may represent how people ‘think’ in a particular discipline, or how they perceive, apprehend, or experience particular phenomena within that discipline (or more generally).’ (Meyer and Land, 2003: 1)

We are attempting to support individuals in becoming informed researchers, in transforming the way they think and act in relation to concepts such as evidence, argument and method. This cannot occur, I would argue, by simply presenting information; it can only occur through deep conceptual engagement. It is the difference between learning about research and becoming a researcher. One caveat is that, I would argue, no one-year course will enable an individual to become an expert researcher; it will, at best, allow them to ‘cross the threshold’ into the world of research. Beyond this lies a huge journey to grow and gain critical experience and understanding of the complexities by carrying out research over a long period of time. Becoming an expert researcher is a difficult, sometimes painful, and slow process.

In deciding the core threshold concepts on which to base a course, Kiley and Wisker (2010) offer a starting point. They set out a series of threshold concepts they believe are core to ‘learning to be a researcher’ (the title of their paper) through doctoral studies. They set out the following key concepts:

  • Argument
  • Theory
  • Frameworks
  • Knowledge creation
  • Analysis
  • Paradigms

These generally map onto a list which I had created before reading this paper, after attending a Higher Education Academy social sciences conference which focused in part on research methods pedagogy.

[Table: core threshold concepts]

With these core threshold concepts as the backbone of the course, the knowledge which is covered becomes the explication of those concepts. However, the concepts and the knowledge which overlays them need to be made concrete and applied, and therefore research tools also need to be covered so as to allow the practical enactment of the emerging understanding which students gain. In a sense, the application of their learning is central to gauging, for them and for us, the degree to which they have managed to grapple with the ‘troublesome knowledge’ (Perkins, 1999) of research methods.

In an earlier post (https://hereflections.wordpress.com/2014/05/12/thinking-about-course-innovation-and-evaluation-i/) I made it clear that I see curriculum, assessment and pedagogy as inseparable. And over coming posts I will consider the implications of this model for both assessment and pedagogy, but in terms of the curriculum element, the above argument leads me to a conceptualisation of a research methods curriculum as:

[Figure: conceptualisation of a research methods curriculum]

Consequently, the basic curriculum map for the redesigned course is that given below:

[Figure: outline curriculum map for the redesigned research methods course]

There are no longer any two-hour seminars, one of the main features of the current course. Instead, we have decided to have mainly whole or multiple days at points across the year. This is to allow time to engage in different ways and at length. It also allows more time for student engagement with texts and activities beyond the seminar room. Some core work will also be developed online as a way of revisiting and strengthening core threshold concepts such as epistemology and methodologies. Part of the research tools work in January will be used to develop a small-scale research project which students will need to pilot. What is central to the thinking here, however, is that the ‘breadth’ of the course has been diminished to make way for depth and space to consider, tackle and play with troublesome knowledge and the core foundations of research methods. It is designed to create liminal spaces of difficulty and challenge, allowing students to play and grapple with ideas as a way of helping them understand the nature of the thresholds they are attempting to cross and aiding them in doing so. However, it is essential that approaches are found to link the various elements together explicitly to give a holistic picture; we need to guard against the insight from Eraut (2008, quoted in Land and Meyer, 2010: 75):

‘People always think that if you go into enough detail about something you’ll nail it. But you never can, and you lose sense of the whole context in which that something makes sense. You lose the big picture.’

It is important that in grappling with threshold concepts, we don’t lose sight of the bigger picture. This is where ‘assessment’ becomes important, and I am keen to develop an ‘assessment as learning’ model (see Dann, 2002) which attempts to

‘follow the movie of the personal journey rather than look at snapshots of it.’ (Land & Meyer, 2010: 65)

But more of that in the next post.


Dann, R. (2002) Promoting Assessment as Learning: Improving the Learning Process. Abingdon: Routledge.

Eraut, M. (2008) Research into Professional Learning: Its implications for the professional development for teachers in Higher Education. Unpublished seminar paper. HEDG Annual Conference, Madingley Hall, Cambridge, UK.

Kiley, M. and Wisker, G. (2010) ‘Learning to be a researcher: The concepts and crossings.’ In Threshold Concepts and Transformational Learning, J.H.F. Meyer, R. Land and C. Baillie (eds), Rotterdam: Sense Publishers, 399-414.

Land, R. and Meyer, J.H.F. (2010) ‘Threshold Concepts and Troublesome Knowledge (5): Dynamics of Assessment.’ In Threshold Concepts and Transformational Learning, J.H.F. Meyer, R. Land and C. Baillie (eds), Rotterdam: Sense Publishers, 61-79.

Meyer, J.H.F. and Land, R. (2003) Threshold Concepts and Troublesome Knowledge: Linkages to Ways of Thinking and Practising within the Disciplines. Enhancing Teaching-Learning Environments in Undergraduate Courses, Occasional Report 4, May 2003. Accessed at: http://www.colorado.edu/ftep/documents/ETLreport4-1.pdf [last access: 25/5/2014]

Perkins, D. (1999) ‘The many faces of constructivism.’ Educational Leadership, 57(3), 6-11.



Thinking about course innovation and evaluation – II

As I suggested in the first part of this reflection (https://hereflections.wordpress.com/2014/05/12/thinking-about-course-innovation-and-evaluation-i/), developing a new course is a complex task, and deciding how to design and evaluate it is not easy. As I said towards the end of the last post:

‘..as I try to develop some form of evaluative research to help us understand the course we are developing, there appear to be some important lessons which we need to take into account:

  • Any research into curriculum, pedagogy, learning and assessment can only ever give us partial, but nevertheless useful, insights.
  • We need to research at different levels of activity, in terms of both scale (from the (sub-) individual to the community) and time (from individual sessions to the course as a whole).
  • We need to create ‘thick descriptions’ from the data which allow us to consider the complexity of the systems involved.
  • Any insights need to be seen as descriptive, rather than predictive, in nature. We can use data to consider how we might continue to evolve the course, but we can’t assume that these changes will automatically work. In addition, the results gained might be useful for tutors in other contexts, but again can only point to possible areas for enquiry, they are not a recipe to be followed.’

The second and third bullet points are crucial in designing an evaluative framework which will give us a useful ‘window’ on what is happening and where we can develop the course as we move forward. In considering the idea of researching at different scales of activity, we need to consider aspects of both time and space. In the short term, say over a session, the tools we use will necessarily be different from those which we use to consider the longer-term emergence of student learning. This might lead to some tools focusing on small numbers of students in a single session, coupled with larger-scale consideration of how a whole group evolves over the year. Two examples of these types of research already exist.

Design-based Research (DBR)

Barab and Squire (2004: 1) set out a rationale for design-based research (http://www.gerrystahl.net/teaching/winter12/reading3a.pdf) when they state that:

‘Learning sciences researchers investigate cognition in context, at times emphasizing one more than the other but with the broad goal of developing evidence-based claims derived from both laboratory-based and naturalistic investigations that result in knowledge about how people learn. This work can involve the development of technological tools, curriculum, and especially theory that can be used to understand and support learning. A fundamental assumption of many learning scientists is that cognition is not a thing located within the individual thinker but is a process that is distributed across the knower, the environment in which knowing occurs, and the activity in which the learner participates. In other words, learning, cognition, knowing, and context are irreducibly co-constituted and cannot be treated as isolated entities or processes.’

They stress the importance of context in understanding pedagogy and the need to design pedagogic contexts and approaches as a way of helping understand the processes involved better. The approach is based on the development of new theories, artifacts and practices which have been trialled in authentic settings. They list the main features of design-based research as being:

  • a process which develops new theories of teaching and learning
  • a process which relies on intervention to test a new approach
  • a process which evolves in an authentic context (e.g. the seminar room)
  • a process which is iterative

In this way, each iteration is meant to alter particular elements of the process to allow for the testing of, and experimentation with, individual factors. Some elements of this approach I find slightly problematic, whilst accepting the overall thrust of the methodology. In a naturalistic setting it is extremely difficult, if not impossible, to select and vary elements of the process in such a simplistic way as this suggests. As was outlined in the first post, if we are interested in the complexity of the relationships between actors (Ellis and Goodyear, 2010), then to attempt to hold some elements constant whilst testing others collapses the natural complexity of what you are researching and may lead to inaccurate insights; it seems a dangerously reductive stance.

However, the overall notion of design-based research, based on developing new insights through iterative cycles of intervention taking place in an authentic context is a good one, particularly with the focus on curriculum and technological tools. Therefore, in beginning to develop a research/evaluative approach to a new research methods course, the following seem to be a useful starting point:

  • a focus on the validity and utility of a curriculum innovation
  • understanding how that innovation develops through iterations of evaluation, analysis and intervention

In ensuring that the approach does not become reductive, two additional practical elements may help us in capturing the emergent properties of the course system:

  • a range of data capture techniques to understand the system from different perspectives
  • the inclusion of direct student participation in discussing ideas for change (building on earlier work we have completed in a Lesson Study project with international master’s students http://leicls.weebly.com/emergent-thinking.html )

This then suggests a methods approach which covers the following data capture:

  • a staged, mixed-methods approach which considers experience/perceptions, artefacts of learning and outcomes through:
      • questionnaires
      • interviews
      • documents (e.g. session resources, student notes, summative assignments)
      • reflective activities

However, to gain more detailed insights we need to understand activity at the level of the individual session.

Design Experiment (DE)

Oshima et al (2003: 107-108) developed a similar approach to DBR, again focusing on the complexity of authentic educational settings:

‘Once researchers leave their laboratories, they face complex learning contexts. In the real classrooms, they are often confused by the complexity of activities that students engage in, and cannot imagine how the new tools and knowledge they have been developing can be used to improve practice.’

Their research was executed through work focusing on the science curriculum, carried out with elementary school teachers. Because the teachers were already conversant with the use of Lesson Study, the researchers used this as a medium for their research, working alongside the teachers. However, they stress the difference they perceive between their DE approach and Lesson Study,

‘The difference between the lesson study and the design research, we suggest, is that teachers usually base new lesson plans on their experiences of previous practice, rather than on theoretical or disciplinary knowledge in the learning sciences… Our task was to figure out how teachers’ existing practice-based knowledge could be integrated with our disciplinary knowledge in the learning sciences, by means of collaboratively designing for the classroom.’ (Oshima et al, 2003: 111)

This outline is in many ways similar, but at a different scale, to DBR. As with DBR, whilst it offers the notion of insights at the level of the session, it also holds implicitly the wider context; how does the session sit within the wider curriculum? How can the insights gained from designing one session inform wider practice?

As with DBR, the design experimental approach seems somewhat reductive in character. Take an area of theory, design an intervention around it, and then test it in a naturalistic setting whilst attempting to minimise or eradicate particular factors. If instead, we rely on the wider idea of Lesson Study as an approach to capturing some of the activity and learning which emerges over the course of a session, we can take a multiple methods approach which we have outlined elsewhere in previous projects (http://leicls.weebly.com/he-projects.html ) namely:

  • the recording of planning and evaluation meetings before and after a session
  • the retention of all written documents and resources for analysis
  • observation (including video capture) of the session
  • stimulated recall interviews with students who have been observed in the session

A synthesis and extension?

Considering the potential utility of DBR and DE is useful as it helps focus on the idea of collecting data at different scales as well as restating the need for iterative interventions to help gain insights for potential new ideas. However, if we characterise the changes in the course and its participants as a process of emergence then the reductive character of these methods becomes restricting. We are not carrying out an ‘experiment’, albeit in a naturalised setting, instead we are attempting to gain insights into a complex process and system which is itself emergent over time. Therefore, we can take some of the general ideas of these approaches and synthesise them into an approach which attempts to meet the four criteria set out at the end of the last post and the start of this.

[Figure: outline of the evaluative research framework for the research methods course]

The diagram above gives an outline of what a ‘curriculum development’ evaluative research framework might look like. It includes the iterative process at two levels, both at the level of the sessions themselves (Lesson Study) and at the level of the course more generally (at the end/beginning of each cycle, roughly every 5-6 weeks). By operating at two levels, course and session, there is a genuine opportunity to develop a thick description concerning the emergence of the course system. This should allow us to create a detailed, if incomplete, description of the changing nature and dynamics of the course, which will be invaluable for our own development but may also present insights and ideas which could be useful as starting points for others.


Barab, S. & Squire, K. (2004) ‘Design-Based Research: Putting a Stake in the Ground’ The Journal of the Learning Sciences, 13(1), 1-14.

Oshima, J.; Oshima, R.; Inagaki, S.; Takenaka, M.; Nakayama, H.; Yamaguchi, E. & Murayama, I. (2003) ‘Teachers and Researchers as a Design Team: changes in their relationship through a design experiment using Computer Support for Collaborative Learning (CSCL) technology.’ Education, Communication and Information, 3(1), 105-127.


Thinking about course innovation and evaluation – I

In a previous post I outlined the potential that design-based research (DBR) might have in curriculum change and the development of a new research methods course. Since then a number of ideas have started to emerge, the result of reading and reflection, but more importantly the result of discussing initial insights from research I’ve been involved in with two colleagues, one from the School of Education, the other from our English Language Teaching Unit.

The focus of course development for next year is a new research methods module which we hope to design and resource over the next two or three months, ready to trial in the autumn. But this has led to a critical question: how do we understand the experience, impact and issues inherent in developing a new course? This is a complex issue, and as a result this is the first of two posts on the ideas I’m grappling with in relation to this question. An important insight for me, in terms of researching the changes we intend to make, comes from a review of the link between education and complexity offered by Kuhn (2008). In her work she states that:

‘A complexity approach acknowledges that all levels of focus, whether this is the individual, class, school, national or international associations, reveal humans and human endeavour as complex, and that focussing on one level will not reduce the multi-dimensionality, non-linearity, interconnectedness, or unpredictability encountered.’ (Kuhn, 2008: 174)

If we are to begin to gain any impression of what is happening with respect to student experience and learning as they navigate their way through the course, we need to be able to gain insights at different levels of activity and over different time periods. To begin to think simultaneously about the development of the course and how we might understand and evaluate it, we need to consider a basic set of principles or a model through which we might start to build a course framework.

Shepard (2000) wrote a paper focusing on the changing role of assessment in classroom cultures in the USA which was influential in much of my own early research. My interest in this paper stemmed from the way in which curriculum, learning (for which I use ‘pedagogy’ here) and assessment are brought together as a single, coherent whole.

[Figure: the curriculum-pedagogy-assessment system]

Since starting to use this model I have come to see all three elements as intimately linked. No one element can be seen as more important than the other two as they are inherently symbiotic. How I develop my curriculum links to how I see the pedagogy which emerges through that curriculum. In addition, when designing the course and the pedagogies which will be used, both need to inform the assessment framework which will be used. To attempt to subordinate any element to any other will restrict and skew the educative process. However, whilst this helps to develop a more holistic view of education, there is obviously something missing from the framework: the tutor(s) and the students, all of whom I see as being at the confluence of the three elements above. The level of complexity that this model portrays makes any evaluation or insight difficult to capture if it is to be of any real worth in helping to identify the positives and issues as a new course evolves.

This is where I believe that we have to accept that the educative process is, to a degree, opaque. We cannot observe and analyse the processes which come together to make the course in their entirety; this much seems obvious given the quotation from Kuhn above highlighting the multi-level and interdependent nature of the systems involved. Therefore, all we can do is capture as rich a picture as possible. It is also important from a complexivist perspective to resist easy, but inherently unhelpful, false dichotomies produced in an attempt to reach simplistic understandings. As Ellis and Goodyear (2010: 16) state:

‘It is important to avoid polarised thinking that makes apparently simple but logically indefensible contrasts: between ‘the new’ and ‘the traditional’, between cognitive and cultural, technical and human, etc. Indeed, as we will try to show, adopting a perspective that foregrounds relationships rather than differences turns out to yield clearer insights into a number of thorny issues about the place of e-learning in the student experience.’

The situation becomes even more complex because the activity, learning and experience within the course are not static. As students begin to learn, both individually and together, and as tutors begin to make sense of the new course, discuss ideas and issues with students, and use formative processes from work and discussion to alter the curriculum and pedagogy as they occur, the whole nature of the system constantly shifts. And this only describes the complexity of the seminar room; master’s students do much of their learning beyond formal learning settings, leading to the idea of ‘learning ecologies’ discussed in detail by Ellis and Goodyear (2010).

The complex processes of change which the above description highlights are illustrated through the concept of emergence. Mason (2008) describes emergence as the result of systems where the level of complexity leads to the occurrence of new, and often unexpected, properties and behaviours, i.e. the whole becomes greater than the sum of the parts. Therefore, behaviours and outcomes are not easily, if at all, predictable, and any evaluation cannot assume that one set of data collected early in a course will be a good predictor of the elements and outcomes of the course at the end of the year.

So as I try to develop some form of evaluative research to help us understand the course we are developing, there appear to be some important lessons which we need to take into account:

  • Any research into curriculum, pedagogy, learning and assessment can only ever give us partial, but nevertheless useful, insights.
  • We need to research at different levels of activity, in terms of both scale (from the (sub-) individual to the community) and time (from individual sessions to the course as a whole).
  • We need to create ‘thick descriptions’ from the data which allow us to consider the complexity of the systems involved.
  • Any insights need to be seen as descriptive, rather than predictive, in nature. We can use data to consider how we might continue to evolve the course, but we can’t assume that these changes will automatically work. In addition, the results gained might be useful for tutors in other contexts, but again can only point to possible areas for enquiry, they are not a recipe to be followed.

In the second post I will outline a research framework which I currently think might help us gain useful insights based on some of the ideas above.


Ellis, R.A. & Goodyear, P. (2010) Students’ Experiences of E-Learning in Higher Education: The Ecology of Sustainable Innovation. New York: Routledge.

Kuhn, L. (2008) ‘Complexity and Educational Research: A critical reflection’ in Complexity Theory and the Philosophy of Education, Mason, M. (ed.) Chichester: Wiley-Blackwell, 169-180.

Mason, M. (ed.) (2008) Complexity Theory and the Philosophy of Education. Chichester: Wiley-Blackwell.

Shepard, L. A. (2000) ‘The Role of Assessment in a Learning Culture.’ Educational Researcher, 29(7), 4-14.



Collaboration and pedagogic literacy

The idea of collaboration as a core element in the development of pedagogic expertise is relatively recent within an HE context. In Shulman’s (1993) short paper on pedagogic solitude he highlights the fact that collaboration between academics is commonplace – in the case of research. However, when it comes to pedagogy he argues that most activity is carried out behind closed doors. His ideas around the concept of the Scholarship of Teaching and Learning are meant as an antidote to this. More recently, Hargreaves and Fullan (2012) have developed the concept of Professional Capital as a framework for professional and pedagogic growth. They start from the position of arguing that there are three messages from educational research evidence which need to be considered when developing pedagogic practice:

  1. Teaching like a ‘pro’ means continuously inquiring into and improving one’s own teaching.
  2. Teaching like a ‘pro’ means planning and improving teaching, often as part of a wider professional team.
  3. Teaching like a ‘pro’ means being part of the wider teaching community and contributing to its development.

From a consideration of these ideas emerges a simple ‘equation’,

Professional Capital = Human Capital + Social Capital + Decisional Capital

  • Human Capital – the valuable knowledge and skills that can be developed in teachers. One example might be the idea of Pedagogic Content Knowledge (Shulman, 1986, 1987), where explicit consideration is given to how to ensure the use of the most appropriate pedagogies for explaining and teaching particular subject knowledge within any given context.
  • Social Capital – quality and quantity of interactions between teachers to understand and develop pedagogic understanding and insights.
  • Decisional Capital – the opportunity to make authentic and professional decisions about pedagogic approaches.

The argument made within their work (Hargreaves and Fullan, 2012) is that collaboration between teachers can have a major positive impact on pedagogic practice. By adding Shulman’s (1993) argument that pedagogic practice has to be communal, so as to allow others to discuss and evaluate the claims which are made for improving practice, a cycle of inquiry, reflection, discussion and evaluation begins to emerge as a basis for collaborative work.

Little (1990) identifies a spectrum of collaborative practice from ‘weak’ collaboration, based on the exchange of ideas and anecdotes, through a sharing of materials and strategies, to a ‘strong’ form of collaboration which involves joint work including planning, teaching and inquiring together. However, some forms of collaboration can have a negative impact. Again, Hargreaves and Fullan identify collaborative problems such as ‘Balkanisation’, where separate and competitive groups begin to develop within an organisation, which can lead to a lack of communication and a temptation to vie for power. Collaboration is not necessarily a universal good.

Understanding the development of professional learning through collaboration has tended to occur through the lens of social learning theories, the most often used being Situated Learning (Lave and Wenger, 1991) and Communities of Practice (Wenger, 1998). In the latter, collaboration is a process of creating and sustaining professional and cultural norms, often through shared language and practices. When new members join a community, they start at a ‘peripheral’ position, slowly migrating towards the core as they become inculcated into the dynamics of the community. In Communities of Practice, mutual engagement, interaction and thinking together (Wenger, 1998) are all important concepts. Wenger (2000: 227) describes such collaboration as ‘doing things together, talking, producing artefacts’ to encourage and develop shared meaning. The result of engagement and working together is a ‘joint enterprise’ (Wenger, 1998: 73), making it possible to produce shared resources or a ‘shared repertoire’ (Wenger, 1998: 73). However, collaborative activities are complex. As I’ve suggested in an earlier post, we must never lose sight of the fact that individuals will collaborate for different reasons and will take different insights from collaborative activities; we need to avoid the narrowing impact which ‘group think’ can bring.

So where do these insights take us? Collaboration can be a potent element in the growth of pedagogic understanding. Our own research on Lesson Study (Cajkler et al 2013, 2014; Wood & Cajkler, 2013a, 2013b) is producing a growing body of evidence that teachers find the opportunity to work together over a period of time in authentic problem-solving situations a liberating and positive experience. Our own experience of conducting cycles of Lesson Study on our practice with international master’s students has led not only to small-scale insights concerning points of learning and teaching in specific contexts, but has also led to much more fundamental and larger-scale reflections on curriculum, research and academic practice. However, I would argue that the literature and our evidence also suggest that certain characteristics of collaborative working might be important if it is to be truly useful:

  • Authenticity. The collaboration needs to emerge amongst practitioners who have genuine reasons to work together. Imposing collaboration from outside, in terms of group membership and pedagogic focus, is likely to inhibit any growth in pedagogic understanding and practice.
  • Decisional capital. Linked to authenticity, the group needs to have a level of freedom to make professional decisions about the direction and development of their work. Again, the imposition of restrictive external frameworks will stunt its usefulness. An example might be the advocacy of a pre-determined view of ‘excellent’ pedagogic practice – surely this must be for the group to decide through discussion, experimentation and reflection?
  • Time and emergence. If a group is to generate new insights into their pedagogic work, they need time to discuss, plan, execute and evaluate. Time constraints, particularly those which require some form of identifiable outcome in a limited timeframe (for example, improvement of results by one additional ‘level’ over a half-term or semester), may well collapse the opportunity for true pedagogic growth. I still find it amazing how often we are told by more senior managers, during endless meetings which focus on data, policies and paperwork, that it is difficult to make time for collaborative work. There needs to be an understanding that policies and data are marginal to effecting pedagogic change; change can only come from a sustained focus on transforming practice through authentic collaborative endeavour, a process of emergent insight best engaged with over long periods of time.
  • Sharing and evaluating. As Shulman (1993) made clear, collaborative pedagogic work is of little additional use if it is not then shared more widely; otherwise it will merely lead to pedagogic Balkanisation rather than pedagogic solitude – hardly a transformative step forward. Time needs to be made to share ideas and reflections from collaborative work regularly. This ensures that positive insights are made more widely available within a ‘pedagogic community’, but also allows for scrutiny, ‘positive critique’ and the sustained emergence not only of pedagogic literacy but also of research literacy and research practice.
  • The relationship between the group and the individual. As highlighted earlier in this post, individuals within a collaborative group will bring to, and take from, the process different insights. This should be expected, as they may have different values, attitudes and philosophies, different beliefs concerning pedagogy, and they will almost certainly be at different points in the growth of their pedagogic literacy. As such, they will integrate different ideas into their emergent practice as a consequence of working with others.

Collaboration is essential to aiding the growth of pedagogic literacy, emerging through professional engagement and the use of research (both literacy and practice) to inform and generate pedagogic insights. However, merely stating that collaboration is essential without thinking about the dynamics and links to other elements of teacher growth does not ensure that it is a positive process for change.


Cajkler, W., Wood, P., Norton, J. & Pedder, D. (2014) ‘Lesson study as a vehicle for collaborative teacher learning in a secondary school.’ Professional Development in Education. (http://dx.doi.org/10.1080/19415257.2013.866975)

Cajkler, W., Wood, P., Norton, J. & Pedder, D. (2013) ‘Lesson Study: towards a collaborative approach to learning in Initial Teacher Education?’ Cambridge Journal of Education, 43(4), 537-554.

Hargreaves, A. and Fullan, M. (2012) Professional Capital: Transforming Teaching in Every School. New York: Teachers’ College Press.

Lave, J. & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.

Little, J.W. (1990) ‘The persistency of privacy: Autonomy and initiative in teachers’ professional relations.’ Teachers College Record, 91(4), 509-536.

Shulman, L.S. (1986) ‘Those who understand: Knowledge growth in teaching.’ Educational Researcher, 15, 4-14.

Shulman, L.S. (1987) ‘Knowledge and teaching: Foundations of the new reform.’ Harvard Educational Review, 57, 1-22.

Shulman, L.S. (1993) ‘Teaching as Community Property: Putting an End to Pedagogical Solitude.’ Change, 25 (6), 6-7. (http://www.iub.edu/~tchsotl/part4/shulman%20community%20property.pdf)

Wenger, E. (1998) Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press.

Wenger, E. (2000) Communities of Practice and Social Learning Systems. Organization 7(2), 225-246.

Wood, P. & Cajkler, W. (2013a) ‘Understanding Learning – Exploration of the use of Lesson Study as an approach to developing learning with International Masters students’ European Conference on Educational Research, Istanbul, 10-13 September 2013

Wood, P. & Cajkler, W. (2013b) ‘Fast to Slow: Encouraging exploratory dialogue through the use of Lesson Study’ Third International Conference on Value and Virtue in Practice-Based Research Influencing Policy through Enhancing Professionalism, York, 9-10 July 2013

Some initial thoughts on restructuring and reconceptualising research methods in education master’s courses

Over the past eight months I’ve been involved in a lesson study project with two colleagues which has focused on understanding the experiences and learning of international master’s students on an MA in International Education. We’ve been looking specifically at issues relating to study skills, which has led us to focus on sessions on assignment writing, academic presentations and dissertation planning.

Researching these three seminar sessions has generated a large amount of data: video footage, around 20 hours of meeting and interview recordings, captures of in-session student interaction, and documentation including lesson plans, observation notes and student notes. We have only just started to look at this data in detail – a task which will take several months. However, our own meeting notes and discussions about future research into research methods pedagogy have led to some preliminary thinking about master’s level work.

One question which has started to emerge centres on the nature of progression in research methods provision from undergraduate through master’s level work to doctoral study. What is distinctive about master’s level study beyond any summary QAA criteria? To what extent is master’s level research provision really just a ‘thin’ version of doctoral study? Is such an approach sustainable and reasonable? These are questions for future posts as a new approach to our master’s research methods provision emerges. However, one issue has already started to resolve itself: the role of ‘study skills’.

Our lesson study research this year has led us to question the idea of ‘study skills’ on two counts. Firstly, study skills tend to be seen as relatively generic in nature, covering, for example, critical reading and critical writing: ‘skills’ which are first made explicit in a ring-fenced set of sessions before being used within the work of the wider course, often in an unpredictable and organic form. Secondly, the nature of such provision can lead to study skills being seen as somewhat remote from the work of the remainder of the course. This is not to suggest that such skills are seen as irrelevant, but they are often located in a marginal position. As a result, the format of a master’s course might end up looking somewhat like that below:


[Figure: traditional master’s course structure]

The ‘main’ course is the subject content often in the form of core material followed by specialist modules and then a dissertation. A research methods module often sits parallel to this together with a somewhat remote study skills module.

However, master’s level ‘study skills’ should be seen as central to postgraduate study, as they encompass crucial capabilities. We have identified five basic areas:

  1. Critical reading
  2. Critical writing
  3. Speaking, i.e. presentational skills etc
  4. Listening, i.e. involvement in discussion and critical engagement with the presentations of others
  5. Immersion into a research community

These are all core skills required for interrogating and engaging with research. Therefore, we are beginning to think that study skills should be replaced by the concept of ‘research literacy’. Research literacy would still cover the five elements given above but would be positioned at the core of a master’s degree rather than as an add-on, and would work alongside the emergence of an understanding and application of research methods. Both of these strands would then work together with the subject content, which is obviously also central to developing an understanding of education. Therefore, rather than a course composed of a series of allied but separate blocks, it becomes a mutually supportive set of strands, such as that shown below.

[Figure: master’s course as mutually supportive strands]

This means that critical reading and writing would be considered early in the course, and subject content work would make explicit links to them, so that in all elements of the course students would begin to understand the ‘anatomy’ of educational research: both how it is carried out and how it is reported. This then provides a solid basis for beginning to learn the ‘language’ of research, which in turn lays the foundations for developing competence in grappling with research as a practical pursuit (research methods). By bringing these three strands together more consciously, we wish to test the degree to which there is a greater opportunity for the emergence of a more critical and holistic view of how to engage with research (research literacy), and to develop opportunities to carry out well-planned research (research methods) informed by a developing insight into, and understanding of, subject content. What this might look like in practical terms will be the focus of future posts.