Thinking about course innovation and evaluation – II

As I suggested in the first part of this reflection (https://hereflections.wordpress.com/2014/05/12/thinking-about-course-innovation-and-evaluation-i/), developing a new course is a complex task, and deciding how to design and evaluate it is not easy. As I said towards the end of that post,

‘..as I try to develop some form of evaluative research to help us understand the course we are developing, there appear to be some important lessons which we need to take into account:

  • Any research into curriculum, pedagogy, learning and assessment can only ever give us partial, but nevertheless useful, insights.
  • We need to research at different levels of activity, in terms of both scale (from the (sub-) individual to the community) and time (from individual sessions to the course as a whole).
  • We need to create ‘thick descriptions’ from the data which allow us to consider the complexity of the systems involved.
  • Any insights need to be seen as descriptive, rather than predictive, in nature. We can use data to consider how we might continue to evolve the course, but we can’t assume that these changes will automatically work. In addition, the results gained might be useful for tutors in other contexts, but again can only point to possible areas for enquiry, they are not a recipe to be followed.’

The second and third bullet points are crucial in designing an evaluative framework which will give us a useful ‘window’ on what is happening and on where we can develop the course as we move forward. In thinking about research at different scales of activity, we need to take account of both time and space. In the short term, say over a single session, the tools we use will necessarily differ from those we use to consider the emergence of student learning over the course as a whole. This might lead to some tools focusing on small numbers of students in a single session, coupled with larger-scale consideration of how a whole group evolves over the year. Two examples of these types of research already exist.

Design-based Research (DBR)

Barab and Squire (2004: 1) set out a rationale for design-based research (http://www.gerrystahl.net/teaching/winter12/reading3a.pdf) when they state that:

‘Learning sciences researchers investigate cognition in context, at times emphasizing one more than the other but with the broad goal of developing evidence-based claims derived from both laboratory-based and naturalistic investigations that result in knowledge about how people learn. This work can involve the development of technological tools, curriculum, and especially theory that can be used to understand and support learning. A fundamental assumption of many learning scientists is that cognition is not a thing located within the individual thinker but is a process that is distributed across the knower, the environment in which knowing occurs, and the activity in which the learner participates. In other words, learning, cognition, knowing, and context are irreducibly co-constituted and cannot be treated as isolated entities or processes.’

They stress the importance of context in understanding pedagogy, and the need to design pedagogic contexts and approaches as a way of better understanding the processes involved. The approach is based on the development of new theories, artifacts and practices which have been trialled in authentic settings. They list the main features of design-based research as being:

  • a process which develops new theories of teaching and learning
  • a process which relies on intervention to test a new approach
  • a process which evolves in an authentic context (e.g. the seminar room)
  • a process which is iterative

In this way, each iteration is meant to alter particular elements of the process to allow particular factors to be tested. Some elements of this approach I find slightly problematic, whilst accepting the overall thrust of the methodology. In a naturalistic setting it is extremely difficult, if not impossible, to select and vary elements of the process in such a simplistic way as this suggests. As was outlined in the first post, if we are interested in the complexity of the relationships between actors (Ellis and Goodyear, 2010), then attempting to hold some elements constant whilst testing others collapses the natural complexity of what is being researched, and may lead to inaccurate insights; it seems to be a dangerously reductive stance.

However, the overall notion of design-based research, based on developing new insights through iterative cycles of intervention taking place in an authentic context, is a good one, particularly with the focus on curriculum and technological tools. Therefore, in beginning to develop a research/evaluative approach to a new research methods course, the following seem to be useful starting points:

  • a focus on the validity and utility of a curriculum innovation
  • understanding how that innovation develops through iterations of evaluation, analysis and intervention

In ensuring that the approach does not become reductive, two additional practical elements may help us in capturing the emergent properties of the course system:

  • a range of data capture techniques to understand the system from different perspectives
  • the inclusion of direct student participation in discussing ideas for change (building on earlier work we have completed in a Lesson Study project with international master’s students: http://leicls.weebly.com/emergent-thinking.html)

This then suggests a methods approach covering the following forms of data capture:

  • a staged, mixed methods approach which considers experience/perceptions, artefacts of learning and outcomes through:
      • questionnaires
      • interviews
      • documents (e.g. session resources, student notes, summative assignments)
      • reflective activities

However, to gain more detailed insights we need to understand activity at the level of the individual session.

Design Experiment (DE)

Oshima et al (2003: 107-108) developed a similar approach to DBR, again focusing on the complexity of authentic educational settings:

‘Once researchers leave their laboratories, they face complex learning contexts. In the real classrooms, they are often confused by the complexity of activities that students engage in, and cannot imagine how the new tools and knowledge they have been developing can be used to improve practice.’

Their research focused on the science curriculum and was carried out with elementary school teachers. Because the teachers were already conversant with Lesson Study, the researchers used it as a medium for their research, working alongside the teachers. However, they stress the difference they perceive between their DE approach and Lesson Study:

‘The difference between the lesson study and the design research, we suggest, is that teachers usually base new lesson plans on their experiences of previous practice, rather than on theoretical or disciplinary knowledge in the learning sciences… Our task was to figure out how teachers’ existing practice-based knowledge could be integrated with our disciplinary knowledge in the learning sciences, by means of collaboratively designing for the classroom.’ (Oshima et al, 2003: 111)

This outline is in many ways similar to DBR, but operates at a different scale. As with DBR, whilst it offers the notion of insights at the level of the session, it also implicitly holds the wider context in view: how does the session sit within the wider curriculum? How can the insights gained from designing one session inform wider practice?

As with DBR, the design experiment approach seems somewhat reductive in character: take an area of theory, design an intervention around it, and then test it in a naturalistic setting whilst attempting to minimise or eradicate particular factors. If, instead, we rely on the wider idea of Lesson Study as an approach to capturing some of the activity and learning which emerges over the course of a session, we can take a multiple-methods approach which we have outlined elsewhere in previous projects (http://leicls.weebly.com/he-projects.html), namely:

  • the recording of planning and evaluation meetings before and after a session
  • the retention of all written documents and resources for analysis
  • observation (including video capture) of the session
  • stimulated recall interviews with students who have been observed in the session

A synthesis and extension?

Considering the potential utility of DBR and DE is useful, as it helps focus on the idea of collecting data at different scales, as well as restating the need for iterative interventions to help generate insights for potential new ideas. However, if we characterise the changes in the course and its participants as a process of emergence, then the reductive character of these methods becomes restricting. We are not carrying out an ‘experiment’, albeit in a naturalistic setting; instead, we are attempting to gain insights into a complex process and system which is itself emergent over time. Therefore, we can take some of the general ideas of these approaches and synthesise them into an approach which attempts to meet the four criteria set out at the end of the last post and the start of this one.

[Diagram: RM research outline]

The diagram above gives an outline of what a ‘curriculum development’ evaluative research framework might look like. It includes the iterative process at two levels: at the level of the sessions themselves (Lesson Study) and at the level of the course more generally (at the end/beginning of each cycle, roughly every 5-6 weeks). Operating at these two levels, course and session, offers a genuine opportunity to develop a thick description concerning the emergence of the course system, and should allow us to create a detailed, if incomplete, account of the changing nature and dynamics of the course. This will be invaluable for our own development, but may also offer insights and ideas which could be useful as starting points for others.
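To make the two-level structure of the framework a little more concrete, the sketch below shows one hypothetical way the resulting data corpus might be organised for analysis. It is purely illustrative and not part of the framework itself: the names (Cycle, Session, DataItem) and the example entries are my own assumptions, not a prescribed implementation.

```python
# A minimal, illustrative sketch (assumptions, not part of the original framework)
# of how course-level cycles and session-level Lesson Study data might be organised.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataItem:
    source: str        # e.g. "questionnaire", "interview", "video observation"
    description: str

@dataclass
class Session:
    week: int
    lesson_study_data: List[DataItem] = field(default_factory=list)

@dataclass
class Cycle:
    number: int                                   # course-level cycle, roughly every 5-6 weeks
    sessions: List[Session] = field(default_factory=list)
    course_level_data: List[DataItem] = field(default_factory=list)

# Hypothetical example: one cycle containing a single observed session
cycle_one = Cycle(
    number=1,
    sessions=[
        Session(
            week=3,
            lesson_study_data=[
                DataItem("video observation", "recording of the seminar session"),
                DataItem("stimulated recall interview", "interviews with observed students"),
            ],
        )
    ],
    course_level_data=[
        DataItem("questionnaire", "end-of-cycle perceptions survey"),
    ],
)

print(len(cycle_one.sessions), len(cycle_one.course_level_data))  # -> 1 1
```

Nesting the session-level (Lesson Study) material inside course-level cycles simply mirrors the two scales described above, which may make it easier to assemble a ‘thick description’ that moves between them.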

References

Barab, S. & Squire, K. (2004) ‘Design-Based Research: Putting a Stake in the Ground’ The Journal of the Learning Sciences, 13(1), 1-14.

Oshima, J.; Oshima, R.; Inagaki, S.; Takenaka, M.; Nakayama, H.; Yamaguchi, E. & Murayama, I. (2003) ‘Teachers and Researchers as a Design Team: changes in their relationship through a design experiment using Computer Support for Collaborative Learning (CSCL) technology.’ Education, Communication and Information, 3(1), 105-127.

 
