Revising or creating a program learning assessment plan, Module 3: Drafting an assessment plan

Part of a series of modules supporting UD faculty who are developing a program assessment plan.

Module goals

After successfully completing this module, you will be able to:

  • Describe common characteristics of good program learning assessment plans
  • Help colleagues reflect on the role that students and other constituents can play in program learning assessment
  • Collaborate with colleagues to draft a program learning assessment plan

Steps and recommended actions

1. Compare available examples of program learning assessment plans to develop a shared understanding of good plans

Recommended actions: Review examples of program learning assessment plans, including some located by CTAL (Ohio State University Women’s, Gender and Sexuality Studies BA, University of Florida Music BM, and University of South Carolina Marine Science BS) and those you collected while completing module 1. Explicitly note the shared characteristics of the plans you collectively determine are good.


It is critical that faculty teams engaged in this work have a shared understanding of what makes a particular learning assessment plan a “good” plan. Purposefully examining examples and discussing specific parts of them that team members believe are especially good (“we should include something like that!”) or bad (“we have to avoid including anything like this!”) can be a very helpful way of establishing this shared understanding.

During these discussions, it is critical that team members explain why they believe particular plans or parts of these examples are especially good or bad. Identifying the commonalities in these “why” statements lets you generalize beyond this collection of specific examples – those generalizations are the ideas you can follow when you revise or create your own plan.

2. Review the criteria and descriptions in CTAL's program learning assessment plan rubric

Recommended action: Compare your individual and collective understanding of the characteristics of good program learning assessment plans with the criteria and descriptions in CTAL’s program learning assessment plan rubric.


Just as you and your colleagues have collected examples and reviewed relevant literature and resources to determine common characteristics of “good” program learning assessment plans, so too have your colleagues in CTAL. We have drawn from a large pool of interdisciplinary literature and many examples to develop our own set of characteristics that a really good program learning assessment plan should possess:

  • Practiced: Assignments that faculty grade should provide students and faculty with evidence of student learning and not be rote, administrative activities unrelated to learning.
  • Organized and systematic: Assessments of and within programs are the responsibility of all faculty in the program, not the sole responsibility of a small number of faculty members or one individual.
  • Aligned: The assessments that occur in the program must be clearly aligned with the educational goals of the program, including those that are imposed from outside of the program (e.g., General Education goals, goals required by program accreditation).
  • Meaningful: Data collected from assessments and used to provide feedback about the effectiveness of a program are only useful if faculty routinely analyze those data.

We have summarized these characteristics in a rubric for evaluating program learning assessment plans. This rubric includes further explanation of each criterion as well as a set of descriptions for what each criterion looks like in a very good assessment plan.

In addition to reviewing this rubric, it is also helpful to compare it with the set of characteristics and other notes that you and your colleagues compiled when examining materials and examples specific to your program and discipline. Is there anything missing from the CTAL rubric that should be added for your specific program? Was there anything that you didn’t see in your materials and examples that the CTAL rubric has brought to light?

3. Review the common components of a program learning assessment plan

Recommended action: Review the common components of a program learning assessment plan – collecting evidence, analysis, reflection and strategy, and reporting – and discuss how your program will address each component.


Once the initial process of submitting program educational goals and assessment plans is complete, programs will be responsible for regularly assessing their program educational goals. This process includes at least four distinct steps: collecting evidence, analyzing that evidence, reflecting on what your program has learned and deciding how to move forward, and reporting the results of program educational goal assessment.

Collecting Evidence

  • What evidence would be convincing for faculty in this program?
  • How will the evidence be collected?
    • How feasible is it to collect necessary student artifacts for assessment of program educational goals?
    • Are these student artifacts capable of being stored for up to seven years?
    • Who will be responsible for compiling and storing evidence?
    • Who will be responsible for assessing student artifacts?
    • Is there data that is already collected – employment data, licensure pass rate, etc. – that could be used?
  • Does your program need to develop new direct or indirect measures of student learning?
  • Is the workload for the continued collection and reporting upon assessment information sustainable?
  • Will some evidence require multiple years to collect (e.g., student papers written in the sophomore and senior years)?

The assessment plan should not only include a plan for collecting evidence but also plans for systematically storing, evaluating, and reporting upon direct and indirect sources of evidence tied to distinct program educational goals. Because assessment information must be collected and reported on a regular, rolling basis, existing evidence that students are meeting the program educational goals – drawn from the courses they take in the program – will be incredibly beneficial for the continued assessment of those goals. Thus it is important to locate existing sources of both direct and indirect evidence to minimize the time and resources it might take to develop new measures. While direct evidence of student learning is stronger than indirect evidence alone, the strongest assessment evidence combines direct and indirect measures.

Examples of direct and indirect sources of evidence:

Program educational goal: Students will be able to assess nutritional status of individuals in various life-cycle stages and determine nutrition-related conditions and diseases by applying knowledge of metabolism and nutrient functions, food sources, and physiologic systems.

  • Direct: Clinical exam scores, capstone papers
  • Indirect: Surveys or interviews with employers of program graduates

Program educational goal: Improve critical ability to read and interpret primary texts, and to analyze and write effectively about philosophical problems.

  • Direct: e-Portfolios, reflective essays
  • Indirect: Focus groups, student surveys

Program educational goal: Students will develop a skill set and research record such that they can secure employment at universities, federal agencies, private companies, or non-governmental organizations where they can apply the skills and knowledge acquired during their time in the program.

  • Direct: Publication in peer-reviewed journals, masters and doctoral theses
  • Indirect: Employment outcomes, exit interviews

Analysis

Data are only meaningful if meaning is made of them. Regardless of how you collect the data, you should employ analytical methods that are convincing for you and your colleagues. These are most likely the methods commonly employed in your scholarly work or otherwise familiar from your discipline.

  • What methods of data analysis will be understood and well-received by faculty in this program?
  • Who will carry out the analyses?
  • When will the analyses be conducted?
  • Can or should this be done by a team?

Analyzing evidence

Although we strongly recommend using student artifacts that have already been graded and thus do not need to be evaluated again, sometimes programs choose or are required to collect additional direct and indirect evidence of student learning (papers, reading reflections, portfolios, capstone projects, surveys, interviews, focus groups) that needs to be evaluated or analyzed differently. A few tips can help guide the process:

  • Whenever possible, use rubrics to analyze how well evidence of learning meets PEGs. CTAL’s website has a collection of AAC&U VALUE Rubrics you can use as templates or starting points.
  • Collaborate when appropriate. By creating and analyzing “collective portfolios” of student work, such as essays from a variety of courses, small groups of faculty can assess how well students are meeting PEGs using content analysis or scoring rubrics.
  • It’s not all about the numbers. Responsible and appropriately rigorous qualitative analysis of student learning can be informed by observations or content analysis of student work and interpreted using narrative analysis.

Reflection and strategy

  • When and how will the results be shared with program faculty for discussion and reflection?
  • Who will coordinate, schedule, moderate, and take notes of that discussion? (This does not have to be just one person!)
  • Who needs to be involved, and to what extent do they need to be involved?

Reporting

In a practical sense, some of the “when” and “how” of reporting is being provided by the university: assessment information will be reported each fall beginning in 2023 using a newly developed webform. But other important questions remain, particularly who will compile and submit this summary each year.

As a reminder, here are the questions that were recommended by the Task Force on Learning Goals and Assessment for inclusion in the collection tool/form:

  • Which educational goal(s) have you elected to assess?
  • Briefly describe the evidence collected to evaluate student learning related to the above goal(s).
  • Describe the results of your review.
  • Describe the process you used to disseminate these results to your program faculty and the process used to facilitate discussion to determine curricular changes, actions, etc.
  • What changes are you making, if any, as a result of your assessment? Such changes may be at the course, program, or department level.
  • What type of support or resources, if any, are needed to make the changes at the course, program, or department level?

4. Consider the role of students and other constituents as partners in assessment

Recommended action: Determine if students and other key constituents (e.g., employers, alumni) can play a substantive, meaningful role in analyzing assessment information and making recommendations for action.


In the past decade, many scholarly investigations into academic assessment have documented the importance, power, and usefulness of meaningfully involving students in program-level assessment of learning. Students can provide critical context when making sense of the information that has been collected, particularly in terms of the student experience in the courses or other learning experiences being examined. If these students are broadly representative of the diverse groups and experiences in the program, they can help ensure that assessment practices are valid and equitable across student populations, characteristics, and experiences.

Meaningfully including students on appropriate program- or department-/school-level committees engaged in assessment is one common approach to this work. The most impactful collaborations go further, including students on the teams analyzing assessment data, making recommendations for action and follow-up, writing summary documents and reports, and presenting and discussing assessment results with relevant faculty, students, staff, and others. Some scholars and practitioners (e.g., Healey, Flint, & Harrington, 2014) explicitly refer to students as valuable partners in teaching and learning, a description that illustrates their intended role as active and equal participants in those processes. The programs that have travelled furthest down this path have empowered students to conduct their own assessments of their program or a specific aspect of it, balancing scholarly independence with guidance provided by faculty mentors.

In addition to students, other key constituents are sometimes included in program learning assessment plans not only as sources of information but as partners in planning assessment, making sense of information, and recommending follow-up actions and changes. Employers and alumni are among the most common constituents included in program learning assessment, commonly by including assessment reports and discussions in meetings of advisory boards. It is worth considering, however, whether including some of them earlier in the process would be helpful for you and informative for them.

5. Draft a program learning assessment plan for your program

Recommended action: Use the available worksheet with guiding questions to draft a revised or new program learning assessment plan for your program.


Now that you have examined what is already published about your program in the academic catalog, compared examples of program assessment plans, reviewed discipline-specific materials, discussed the common components of an assessment plan, and developed a common understanding of the characteristics of good assessment plans, it’s finally time to revise or create a learning assessment plan for your program. We have created a worksheet for this important work with some helpful guiding questions.

Resources