Thinking about course innovation and evaluation – I

In a previous post I outlined the potential that design-based research (DBR) might have in curriculum change and the development of a new research methods course. Since then a number of ideas have started to emerge, partly as the result of reading and reflection, but more importantly as the result of discussions with two colleagues, one from the School of Education and the other from our English Language Teaching Unit, about initial insights from research I’ve been involved in.

The focus of course development for next year is a new research methods module, which we hope to design and resource over the next two or three months, ready to trial in the autumn. But this has led to a critical question: how do we understand the experience, impact and issues inherent in developing a new course? This is a complex issue, and as a result this is the first of two posts on the ideas I’m grappling with in relation to it. An important insight for me, in terms of researching the changes we intend to make, comes from a review of the link between education and complexity offered by Kuhn (2008), who states that:

‘A complexity approach acknowledges that all levels of focus, whether this is the individual, class, school, national or international associations, reveal humans and human endeavour as complex, and that focussing on one level will not reduce the multi-dimensionality, non-linearity, interconnectedness, or unpredictability encountered.’ (Kuhn, 2008: 174)

If we are to gain any impression of what is happening with respect to students’ experience and learning as they navigate their way through the course, we need to be able to capture insights at different levels of activity and over different time periods. To begin to think simultaneously about the development of the course and how we might understand and evaluate it, we need to consider a basic set of principles, or a model, through which we might start to build a course framework.

Shepard (2000) wrote a paper, influential in much of my own early research, focusing on the changing role of assessment in classroom cultures in the USA. My interest in the paper stemmed from the way in which curriculum, learning (here I use ‘pedagogy’) and assessment are brought together as a single, coherent whole.

[Figure: the curriculum, pedagogy and assessment system]

Since starting to use this model I have come to see all three elements as intimately linked. No one element can be seen as more important than the other two; they are inherently symbiotic. How I develop my curriculum links to how I see the pedagogy which emerges through that curriculum, and both the course design and the pedagogies used need to inform the assessment framework. To subordinate any element to another would restrict and skew the educative process. However, whilst this helps to develop a more holistic view of education, there is an obvious omission from the framework: the tutor(s) and the students, both of whom I see as sitting at the confluence of the three elements above. The level of complexity this model portrays makes any evaluation or insight difficult to capture in a form that is of real worth in helping to identify the positives and the issues as a new course evolves.

This is where I believe we have to accept that the educative process is, to a degree, opaque. We cannot observe and analyse in their entirety the processes which come together to make the course; this much seems obvious given the quote from Kuhn above, highlighting the multi-level and interdependent nature of the systems involved. Therefore, all we can do is capture as rich a picture as possible. It is also important, from a complexivist perspective, to resist easy but inherently unhelpful false dichotomies which attempt to produce simplistic understandings. As Ellis and Goodyear (2010: 16) state:

‘It is important to avoid polarised thinking that makes apparently simple but logically indefensible contrasts: between ‘the new’ and ‘the traditional’, between cognitive and cultural, technical and human, etc. Indeed, as we will try to show, adopting a perspective that foregrounds relationships rather than differences turns out to yield clearer insights into a number of thorny issues about the place of e-learning in the student experience.’

The situation becomes even more complex because the activity, learning and experience within the course are not static. As students begin to learn, both individually and together, and as tutors begin to make sense of the new course, discuss ideas and issues with students, and use formative insights from work and discussion to alter the curriculum and pedagogy as the course unfolds, the whole nature of the system constantly shifts. And this only describes the complexity of the seminar room; master’s students do much of their learning beyond formal learning settings, leading to the idea of ‘learning ecologies’ discussed in detail by Ellis and Goodyear (2010).

The complex processes of change which the above description highlights are illustrated by the concept of emergence. Mason (2008) describes emergence as the result of systems where the level of complexity leads to the occurrence of new, and often unexpected, properties and behaviours; in other words, the whole becomes greater than the sum of the parts. Therefore, behaviours and outcomes are not easily, if at all, predictable, and any evaluation cannot assume that a set of data collected early in a course will be a good predictor of the elements and outcomes of the course at the end of the year.

So, as I try to develop some form of evaluative research to help us understand the new course, there appear to be some important lessons which we need to take into account:

  • Any research into curriculum, pedagogy, learning and assessment can only ever give us partial, but nevertheless useful, insights.
  • We need to research at different levels of activity, in terms of both scale (from the (sub-) individual to the community) and time (from individual sessions to the course as a whole).
  • We need to create ‘thick descriptions’ from the data which allow us to consider the complexity of the systems involved.
  • Any insights need to be seen as descriptive, rather than predictive, in nature. We can use data to consider how we might continue to evolve the course, but we can’t assume that these changes will automatically work. In addition, the results gained might be useful for tutors in other contexts, but again they can only point to possible areas for enquiry; they are not a recipe to be followed.

In the second post I will outline a research framework which, based on some of the ideas above, I currently think might help us gain useful insights.

References

Ellis, R.A. & Goodyear, P. (2010) Students’ Experiences of E-Learning in Higher Education: The Ecology of Sustainable Innovation. New York: Routledge.

Kuhn, L. (2008) ‘Complexity and Educational Research: A critical reflection’ in Mason, M. (ed.) Complexity Theory and the Philosophy of Education. Chichester: Wiley-Blackwell, 169-180.

Mason, M. (ed.) (2008) Complexity Theory and the Philosophy of Education. Chichester: Wiley-Blackwell.

Shepard, L. A. (2000) ‘The Role of Assessment in a Learning Culture.’ Educational Researcher, 29(7), 4-14.
