Using Evidence from Student Work for Program Improvement

Note from the Director of the CTL:

Dear Colleagues:

As many of you, like us at the CTL, are continuously working to analyze and improve the quality of your programs, I thought I would take this opportunity to share some ideas and recent research aimed at improving graduate programs through a focus on student learning. As in all of our work at the CTL, we strive to examine our programs from the perspective of scholarship on student learning and academic “best practice.” To that end, I have recently been reviewing new research and major disciplinary discussions on student learning and graduate program improvement. I hope this brief overview might be of help as you formulate your own strategies. I would also welcome the opportunity to meet with you at any time to discuss specific program goals or other questions of student learning and program improvement.

As you may know, an important body of research in higher education on the design and assessment of academic programs focuses on the use of student work as data to inform program improvement. Unlike traditional standardized assessments, this more scholarly approach is, I believe, more consistent with our concerns as doctoral supervisors and faculty directors of graduate programs. Essentially, student work of the sort we are accustomed to examining for evidence of intellectual development can also be used as direct evidence to help us identify aspects of our programs that are working well and, conversely, areas that may need to be revised. [1]

Taking this approach allows us to start with the program’s established learning goals or objectives, and then examine a sample of advanced student projects to identify trends relative to one or more of the program’s goals for student learning. Examples of such student projects include:

  • Master’s theses
  • Major projects
  • Comprehensive examinations
  • Doctoral dissertations
  • Capstone course projects
  • Dissertation defenses
  • Portfolio evaluations of student work
  • Disciplinary conference presentations
  • Peer-reviewed submissions for publication

How programs choose to approach the analysis of student work obviously will vary across disciplines and programs, both in accordance with individual program objectives and in relation to discipline-specific protocols. But I would like to suggest several general principles that might be useful.

Guiding Principle: Keep it simple.

One of the most compelling research findings in looking at student learning and program improvement is that meaningful change is more likely to result from targeting a very few student learning objectives and collecting relatively small amounts of data in any given year. Some of the underlying principles of continuous quality improvement of student learning at the program level are:

  • Target one or two high-priority student learning outcomes: “A program does not have to assess every outcome every year to know how well it is doing toward the attainment of student outcomes.”[2] Program faculty often find it useful to start by identifying the two or three highest-priority outcomes (knowledge or skills that students graduating from the program should develop and be able to demonstrate). Programs can then target their inquiry about how the students are “measuring up” in relation to these outcomes, one at a time, over several successive years. Keep in mind that the purpose is not to collect massive amounts of data; the purpose is simply to gather enough evidence from student work to identify trends that can be used to inform improvements at the program level.
  • Collect data selectively: You do not have to collect data on every student in every course. Because the purpose of this inquiry is continuous improvement of the program, and not an evaluation of individual students or individual courses, you might, for example, collect information from only two or three upper-level courses where your targeted outcome is “covered.”
  • Examine student projects using a simple rubric or a list of two or three statements that describe evidence of exemplary learning. In looking at a master’s thesis for evidence of student learning, to give an example, the program might decide to target two or three features that demonstrate mastery of specific program goals: a) “Applies sound research methods/tools to problems in an area of study and describes the methods/tools effectively;” b) “Communicates research clearly and professionally…” [in written form appropriate to the field]; and c) “Has demonstrated capability for independent research in the area of study, applying substantial expertise in that area and to making an original contribution to it.” [3] (A brief sketch of how such rubric scores might be tallied appears after this list.)
  • Conduct an annual program faculty meeting to discuss the evaluative evidence, analyze what it means for the program, and define any next steps.
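
For programs that record rubric scores in a simple spreadsheet, even a few lines of code can turn those scores into the kind of trend summary described above. The sketch below, written in Python, is purely illustrative: the file name rubric_scores.csv, the column names, and the 1–4 scoring scale are my own assumptions for the example, not part of any established assessment protocol.

    import csv
    from collections import defaultdict

    # Hypothetical input file: one row per sampled student project, each
    # targeted criterion scored on a 1-4 scale, e.g.:
    #   project_id,methods,communication,independent_research
    #   thesis_01,3,4,2
    CRITERIA = ["methods", "communication", "independent_research"]

    def summarize(path):
        """Return the mean score for each targeted criterion across projects."""
        totals = defaultdict(float)
        count = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for criterion in CRITERIA:
                    totals[criterion] += float(row[criterion])
                count += 1
        return {c: totals[c] / count for c in CRITERIA}

    if __name__ == "__main__":
        # One summary line per criterion, suitable for the annual
        # program faculty meeting.
        for criterion, mean in summarize("rubric_scores.csv").items():
            print(f"{criterion}: mean score {mean:.2f}")

A single mean score per criterion, tracked over successive years, is often enough evidence for the annual faculty meeting to see whether a targeted outcome is improving.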

As a final note, let me again invite you to meet to discuss this process or other aspects of teaching and learning that might interest you. Please email me directly or contact the CTL if you would like to set up an individual or small-group meeting with me or with one of my faculty colleagues at the CTL. We welcome the opportunity to learn more about the specific activities of your programs and to share other aspects of our teaching and research with you.

Sincerely,

Sally Schwager, Director

[1] Direct assessment refers to measures of student learning “that require students to display their actual knowledge and skills (rather than report what they think their knowledge and skills are). Because direct assessment methods tap into students’ actual learning (rather than perceptions of learning), they are often seen as the preferred type of assessment.”

Shamima Ahmed, “The MPA Capstone Course: Multifaceted Uses and Potentialities in Program Assessment,” Teaching Public Administration (accessed September 14, 2015): 6, http://tpa.sagepub.com/content/early/2014/08/14/0144739414542714

For major professional initiatives and discussion of using direct student evidence to improve programs in various disciplines, see Chris M. Golde and George E. Walker, eds., Envisioning the Future of Doctoral Education: Preparing Stewards of the Discipline, Carnegie Essays on the Doctorate (San Francisco: Jossey-Bass, 2006); and the Carnegie Initiative on the Doctorate (CID) Collection (accessed September 14, 2015): http://gallery.carnegiefoundation.org/cid/

[2] Accreditation Board for Engineering and Technology (ABET), “Continuous Quality Improvement of Student Learning,” Module 4, p. 4 (accessed April 14, 2016): http://www.abet.org/network-of-experts/for-current-abet-experts/refresher-training/module-4-quality-improvement-of-student-learning/

[3] For a simple rubric, see the example from Duke University’s Graduate Program in Biomedical Engineering, “Sample Evaluation Rubric: Master’s Thesis” (accessed May 5, 2016): http://bme.duke.edu/sites/bme.duke.edu/files/BME-Rubric-MS-ThesisDefense.pdf

See also “Rubric for Evaluating PhD Dissertation and Defense” (accessed April 16, 2016): http://www.units.miamioh.edu/celt/assessment/grads/Dissertation_and_Defense.pdf