Warner School of Education Assessment Plan

Last updated August 26, 2021

Mission Statement

At the Warner Graduate School of Education and Human Development, we believe that education can transform lives and make the world more just and humane. This vision informs our teaching, research and service as a research school of education.

Program Goals

PREPARE practitioners and researchers who are knowledgeable, reflective, skilled and caring educators, who can make a difference in individual lives as well as their fields, and who are leaders and agents of change;

GENERATE and disseminate knowledge leading to new understandings of education and human development, on which more effective educational policies and practices can be grounded;

COLLABORATE–across disciplines, professions and constituencies–to promote change that can significantly improve education and support positive human development.

Our diverse work in each of these domains is informed by the following underlying beliefs: the improvement of education is in pursuit of social justice; development and learning shape and are shaped by the contexts in which they occur; the complexity of educational problems requires an interdisciplinary and collaborative approach; and best practices are grounded in research and theory, just as useful theory and research are informed by practice.

Assessment at the Warner School

At the Warner School, we believe that assessment is a critical component of student learning. Courses have carefully designed formative and summative assessments specific to the learning goals set for the course. Warner programs have developed an assessment system that includes the articulation of a conceptual framework and targeted proficiencies, as well as major summative assessments of students’ achievement of these proficiencies by graduation. Assessment data are systematically collected and analyzed as part of our commitment to maintain and continue to improve the high quality of our programs.

Warner Assessment Principles and Practices

To ensure that Warner students are evaluated fairly, accurately, consistently, and without bias, we have taken the following steps:

  • Assessment is aligned with course goals and overall program standards.
  • Multiple assessments are used at different points in each course.
  • We provide multiple ways for candidates to demonstrate their proficiencies by employing multiple forms of assessment across the program, including various kinds of performance assessments, self-evaluations, expert opinions based on long-term observations, surveys, and portfolios (as documented in each program's main assessment system document).
  • Assessment involves multiple evaluators, including a combination of internal evaluators (candidates, Warner faculty, university supervisors) and external evaluators (site supervisors, employers), as documented in each program's main assessment system document.
  • With the exception of doctoral candidates, for whom assessment is more individualized and holistic and feedback is provided in detailed narrative form for each individual (counseling doctoral candidates are assessed per CACREP standards), key assessments are evaluated using program-specific rubrics that reflect the proficiencies we have set as targets for candidates in each program, which in turn align with professional and state standards.
  • Most main assignments in a core (i.e., required) course are also evaluated using a rubric, which is provided to candidates in advance to communicate the instructor's expectations and to ensure consistent grading.
  • Candidates are given the opportunity to revise many of their assignments to ensure that they achieve mastery.
  • Each course syllabus identifies how the grade for the course will be determined, explicitly providing the weight assigned to specific assignments and criteria.
  • As assignments are given throughout each course, the course instructor can use the information gathered to both (a) provide timely feedback to candidates about their progress in the course and take specific actions when needed, and (b) make adjustments in their teaching to maximize candidates’ learning opportunities.
  • Every time a course is offered, it has been our faculty’s practice to review and revise the course activities, readings and other assignments, and assessment rubrics, based on past performances.
  • A final course grade of F, or two final grades of C, will result in a candidate's withdrawal from the program.
  • Each program area conducts a review of the progress of their doctoral candidates at specific points in their program; candidates who are not showing good progress receive a letter from their advisor and program chair articulating the specific concerns raised as well as conditions that need to be met in order to continue in the program. Candidates who fail to meet these conditions within the specified time period are involuntarily withdrawn from the program.

Assessment Data Collection, Analysis, and Use at Warner

Assessment at the Warner School is a significant part of our culture and routine. The majority of degrees/programs have formal assessment systems structured to be consistent with the requirements of our major professional accreditations (AAQEP and CACREP) and NYS teaching, counseling, and administrative certification requirements. At its core, Warner's assessment system has four main components: a conceptual framework; targeted proficiencies; how targeted proficiencies are addressed; and key summative assessments. The few programs that do not fall under an accrediting association or state certification jurisdiction build and maintain assessment processes modeled on these core processes.

Data Collection, Analysis, and Reporting Processes

Student Assessment Data – Formative:
  • Data collection: Students’ assessment data from formative assessments taken as part of their coursework are collected by the course instructor at specific points in each course, as indicated in each course syllabus. These assessment data, however, are neither collected nor recorded centrally.
  • Data analysis, reporting and use: Assessment data are analyzed by the course instructor–both on an individual basis to provide feedback to each student and plan remediation if needed, and by looking at results across the class to identify trends that may suggest the need for modifications in the course design or delivery. Assessment results are communicated to each student so that they can benefit from the feedback to inform learning. In many cases, students who do not perform satisfactorily in a formative assessment are given the opportunity to revise/redo and resubmit the assignment, so as to gain and demonstrate mastery of the content assessed.
Student Assessment Data – Major Summative Assessments:
  • Data collection: Major summative assessments (also referred to as key assessments) used to make decisions at key transition points and/or required for accreditation are either embedded in specific required courses or collected at major transition points. These assessment data are collected by the faculty member via Taskstream, a data management system for the collection, storage, and reporting of key assessment data for all of our accredited programs.
  • Data analysis, reporting and use: As in the case of formative assessments, whenever a major summative assessment is part of a course, the course instructor will review the results and communicate them promptly to the student so that they can benefit from the feedback to inform learning and/or progress in the program. In addition, the information provided by these data is used by the student’s advisor and other program faculty to inform the decision associated with the related transition point. Assessment data from key assessments are recorded in Taskstream, which is designed to generate reports that summarize the scores received with respect to each rubric and/or standard (reporting both the average scores and the number of students who received a specific score for each rubric); reports can be generated by semester or academic year, cumulative over time, and for different subgroups of students. These reports are made available to program faculty and administrators to inform their annual program review, and whenever needed for the compilation of accreditation reports.
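The kind of per-rubric summary these reports provide (average score plus the number of students at each score level) can be sketched in a few lines of Python. This is an illustrative sketch only: the rubric names and the 1–4 score scale below are hypothetical, not drawn from Taskstream.

```python
from collections import Counter
from statistics import mean

# Hypothetical key-assessment rubric scores on a 1-4 scale,
# keyed by rubric name (both invented for illustration).
scores = {
    "Rubric A: Content Knowledge": [4, 3, 4, 2, 3],
    "Rubric B: Reflective Practice": [3, 3, 4, 4, 4],
}

def summarize(rubric_scores):
    """Return the average score and score-frequency counts per rubric."""
    return {
        rubric: {
            "average": round(mean(vals), 2),
            "counts": dict(sorted(Counter(vals).items())),
        }
        for rubric, vals in rubric_scores.items()
    }

for rubric, summary in summarize(scores).items():
    print(rubric, summary)
```

The same grouping could be applied by semester, academic year, or student subgroup by first partitioning the score lists before summarizing.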
Student Assessment Data – Alumni and Employer Surveys:
  • Data collection: We are moving from administering AAQEP and CACREP program alumni and employer surveys every 3-5 years to every 1-2 years to ensure on-going feedback from our alumni and external stakeholders. We also survey the counseling site supervisors after the practicum/internship is complete (once per year for the previous cohort). To maximize the information received, we use electronic data collection. We supplement the survey data with information gathered from annual or biannual program-specific advisory board meetings composed of local practitioners, including employers, field supervisors, alumni, and other external partners.
  • Data analysis, reporting and use: Electronic surveys are sent via e-mail, which then automatically provides summaries of the data. Survey results are distributed to program faculty and appropriate administrators and posted on our website. These data can be used, along with other information, to inform annual program reviews as well as to compile information for accreditation and other reports, as needed.
Unit Assessment Data – Individual Courses:
  • Data collection: Course syllabi are collected at the beginning of each semester. Course evaluations are collected at the end of each course using a form that is the same for all Warner courses. The form is administered electronically through the University’s AEFIS system to ensure confidentiality; in order to maximize the possibility of honest and unbiased responses, an electronic link to the course evaluation form is sent via email to students during the second to last week of the semester and reminders are sent every few days after that for two to three weeks. Course instructors also are encouraged to provide time during their last class session of the semester for students to complete the course evaluation.
  • Data analysis, reporting and use: At Warner, course syllabi are provided to the Faculty Support Office where they are available for review; each course syllabus is posted on the Warner Intranet so that it can be more easily accessible to all faculty and administrators. Warner course evaluation results are shared with the course instructor (after all course grades have been turned in), chair, and the Associate Dean of Graduate Studies; each chair examines course syllabi and course evaluations as part of the yearly evaluation of the faculty they supervise, as well as to make decisions about re-hiring specific adjunct instructors.
Unit Assessment Data – Instructional Programs:
  • Data collection: Course syllabi and course evaluations are collected each time a course is taught (see details above). In addition, counseling students evaluate the site supervisors for the practicum/internship. We also use summary reports of students’ assessment data at key transition points, as generated by our assessment database. Additionally, chairs informally collect (and act on as needed) feedback shared spontaneously by faculty, students and staff.
  • Data analysis, reporting and use: So far, these data have been used both to make minor modifications in courses and assessments (mostly in response to problems identified in anecdotal feedback and course evaluations) and to inform major program reviews initiated as a result of accreditation or new NYS regulations requiring re-registration of specific programs. Programs hold an annual meeting where the data listed above are examined by the faculty to identify the potential need for program changes. If a need for major program change is identified, program area faculty are charged with making recommendations to the program chair, who forwards these recommendations to Warner’s school-wide academic policy committee (APC) for review. The committee provides input and brings the recommendations to the entire faculty for discussion and approval.

How Assessment Data are Shared with Students, Faculty, and Other Stakeholders

  • Each student has access to their assessment results.
  • The course instructor has access to each student’s assessment data collected in the course, as well as the summaries that can be produced by our database, and to the course evaluations.
  • Faculty in each program have access to summaries of assessment data for all students in the program, as well as aggregate results of alumni and employer surveys–which are provided to them at specific times, or upon request.
  • The Associate Dean of Graduate Studies and Associate Dean for Academic Affairs have access to all course evaluations. Program chairs have access to course evaluations for their program faculty and adjuncts teaching in their program.
  • Summaries of student assessments are shared in official program reports, as required by guidelines for program reviews.
  • Summaries of data for all key assessments and course evaluations for all required courses in AAQEP and CACREP programs are prepared as exhibits, and thus are accessible to accreditors and for state review.
  • Other stakeholders are given access to specific subsets of aggregate assessment data, as appropriate.

Information Technologies Used in Assessment

Because of the different nature, sources, and use of the assessment data described so far, Warner uses a combination of databases:

  • All students’ demographic information, course grades, and other key academic information reported in the transcript are maintained in the University-wide implementation of Workday Student–which generates transcripts and reports for the entire university.
  • The Warner School also has a school-wide database (Central 360) that allows us to store and manipulate additional data that are important for internal reporting, planning, and decision-making. In addition, Warner utilizes SLATE to capture prospect and applicant information. Students’ electronic files and programs of study are among the data collected and stored in the Central 360 system.
  • Warner utilizes a unit-wide database (Taskstream, by Watermark) specifically to record and report students’ assessment data for key assessments. Course instructors complete Taskstream rubrics for all key assessments, as previously stated.

Use of Assessment Data for Improvement

Assessment data are used to motivate and inform improvement at a variety of levels: individual students’ academic performance, individual faculty teaching performance, individual courses/clinical practice experiences, instructional programs, and the school as a whole.

Use of Assessment Data to Make Improvements at the Individual Student Level

Data collected from a specific assessment are shared with the student; given the detailed rubrics used in each of the key assessments and at key transition points, and their direct relationship with standards and/or other proficiencies targeted by the program, this information provides immediate feedback to the student–as well as the instructor–about potential deficiencies that need remediation. In particular, students who received an insufficient score in a specific assessment, or who want to improve their score, are usually allowed to revise and resubmit that assessment (following certain guidelines and limitations). As instructors review the set of assessment data collected from all students in their course, this provides valuable information to identify potential weaknesses in the design of the course and suggest ways to improve future implementations of the course.

Use of Assessment Data to Make Improvements at the Individual Instructor Level

Similarly, because course evaluation data are shared with each instructor and their program chair shortly after the semester is over, these data are used to improve the future design and delivery of the course–especially when considered in conjunction with the summaries of students’ performance in the key assessments related to the course. As the chairs, the Associate Dean of Graduate Studies, and the Associate Dean for Academic Affairs also receive copies of course evaluations every semester, if they notice any problem or consistent student dissatisfaction, the chair meets with the course instructor to discuss the problem and determine agreed-upon solutions, and then monitors the implementation of these decisions through on-going discussions.

Use of Assessment Data to Make Improvements at the Course/Clinical Practice Level

As described in the previous section, student assessment data, course evaluations, and when available, additional feedback gathered through instructor-developed surveys, are used by instructors to informally evaluate the quality of their course/internship design and teaching practices and make decisions about possible changes on an on-going basis. These decisions are the prerogative of the course instructor, provided that the proposed changes still comply with the basic course goals and official course descriptions agreed upon by the program faculty. When changes in course/internship goals or in the official course/internship description seem to be called for, however, the entire program faculty need to discuss and approve the proposed changes, and these program decisions need, in turn, to be examined by Warner’s APC to ensure school-wide consistency in meeting professional, state, and institutional standards and quality expectations.

Use of Assessment Data to Make Improvements at the Instructional Program Level

Information gained from reports of students’ assessment data at key transition points, course syllabi and course evaluations, employer and graduate surveys, as well as anecdotal feedback received from students, staff and colleagues, and observations made by faculty in the program, are all used by program faculty and the chair to identify whether any minor adjustments or major changes in the programs are needed. Minor adjustments (i.e., changes in specific assessments and/or courses) can be decided and implemented with the approval of the program faculty. If, instead, major changes are called for, a program review is initiated, with program faculty reviewing the program and making recommendations for improvement; these recommendations are discussed first at the department/program level with the chair, then brought to the attention of the appropriate school-wide committee (APC) for review, and finally discussed and approved by the full Warner faculty.

Use of Assessment Data to Make Improvements at the School Level

Each administrative office within Warner routinely uses assessment data and reports to make forecasts that help with planning, and to make decisions about how to best use our limited resources. Data related to key metrics identified in our respective strategic plans are used to monitor whether we are making progress as expected towards our strategic goals, and/or if changes in the plan are called for.