
Guidebook Data: Chapter Four



How to Use Student-Level Data to Improve Postsecondary Student Outcomes

July 2015

While it is very important for colleges to track and report how student cohort groups are doing—such as the graduation rate of the Class of 2015, the retention rate of African-American students, or remediation outcomes among first-generation students—it is just as important to have robust student-level data. When colleges can see that an individual student is starting to struggle, that information can trigger interventions to help the student navigate academic, financial, or personal barriers and make continuous progress through their program.

Data on individual students can help at various points, including: course selection or enrollment in supportive programming at the start of a semester; progression through individual courses and sequences of coursework to meet requirements for transfer or a program of study; and counting credits and awarding degrees appropriately when students leave an institution.

A variety of tools make it easier to collect these data. Institutions increasingly use online systems to share information more quickly and widely, freeing up person-to-person communication between students and faculty or advisors for more personalized, in-depth conversations. Through the Gates-funded initiative to support Integrated Planning and Advising Services (IPAS), more institutions are using online tools holistically, enabling greater collaboration across offices and programs to support student success. Below are a few examples of postsecondary student-level data tools:

Educational Planning Tools. Do your institutions want to provide students with customized education plans, based on their educational objectives as well as a recommended time frame for completing their goals? Educational planning tools suggest degree-appropriate courses and ensure students are taking their classes in the correct sequence. This helps students accumulate credits more efficiently, saving them both time and money. Transfer students, who often struggle to integrate their past credits into their current education plan, may find these tools particularly useful.

Early Alert Systems. Do your institutions want to help students who are at risk of not completing by intervening early and often? Early alert systems include online warning tools that identify students who are at risk of veering off track due to potential roadblocks like falling grades, missing gateway classes, insufficient course loads, or erratic attendance. Once identified, these students receive an alert with a suggested course of action. These systems can also employ risk-based statistical models that identify at-risk students based on continuously updated information about their financial, personal, and academic variables. Often, academic advisors, faculty, and support staff are notified if their students are at risk of failing or falling behind, enabling them to intervene in a timely manner. Research has shown that this kind of “intrusive advising” increases students’ likelihood of persisting in college and graduating on time.

Degree Audit Systems. Do your institutions want to make sure students receive their degrees in a timely manner? These systems provide students and their advisors with information about degree requirements. They also help monitor students’ progress toward earning their degrees, ensure that students receive credits and degrees they are eligible for, and locate potential degree earners—individuals who left school before completing and may be targeted by outreach campaigns to come back and finish.
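The rule-based triggers described under Early Alert Systems above can be sketched in a few lines of code. The thresholds, field names, and flag labels below are invented for illustration and are not drawn from any particular product:

```python
# Hypothetical sketch of the rule-based checks an early alert system might run.
# All thresholds and field names here are illustrative assumptions.

def early_alert_flags(student):
    """Return a list of risk flags for one student record (a dict)."""
    flags = []
    if student["gpa"] < 2.0:
        flags.append("falling grades")
    if student["credits_enrolled"] < 12:
        flags.append("insufficient course load")
    if student["absences"] > 3:
        flags.append("erratic attendance")
    if student["missed_gateway_course"]:
        flags.append("missing gateway class")
    return flags

# A student triggering two of the four rules
at_risk = early_alert_flags({
    "gpa": 1.8,
    "credits_enrolled": 9,
    "absences": 1,
    "missed_gateway_course": False,
})
print(at_risk)  # ['falling grades', 'insufficient course load']
```

In practice, each returned flag would route to a notification for the student's advisor along with a suggested course of action, as described above.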

This section of the guidebook features interviews with leaders at a west-coast community college and an east-coast four-year university who share details about their student-level early alert and educational planning tools and their effect on school staff and students. In addition, we provide a brief degree audit manual to help community leaders understand whether their local postsecondary institutions might be ready to adopt IHEP’s Project Win-Win degree audit model to identify near-completers and students who left without receiving the appropriate degree, improving both individual and institutional outcomes. Finally, this chapter ends with a list of additional resources where you can find more information on postsecondary student-level data tools.

Shasta County, Calif.: How to Support Postsecondary Student Success through an Early Alert Advising System and an Educational Planning Tool

  • Kevin O’Rorke, Ph.D., Vice President of Student Services and Dean of Students, Shasta College
  • Kate Mahar, Ed.D., Project Director for Shasta College Community Partnership for Attainment initiative and Associate Dean of Foundational Skills and Adult Education, Shasta College

IHEP spoke with Kevin O’Rorke and Kate Mahar from Shasta College, a community college serving the rural region of Redding in northern California. Their institution has used an early alert advising system since 2008, which impacts not only the information and support services to which students have access, but also the perspectives of faculty on the value of tracking students’ progress. They are currently developing an automated educational planning tool to help students with degree planning, freeing up time for counselors to focus on other important areas of success in meetings with students. Read this interview to learn how to shop for software systems, how to bring faculty and counselors on board with these new tools, and the importance of establishing strong connections with IT departments.


IHEP: What kind of early alert system do you have in place at Shasta College?

During the fifth and tenth week of each semester, we send out an early alert notice to faculty. They go into their electronic grading book and put an X next to any student who may be struggling in a course. We get that list of students, and those students get a letter from the counseling department; some receive personal phone calls from our counseling staff telling them their instructor identified them as struggling. Counselors then work with the students to get them into the tutoring lab and figure out the barriers to succeeding in class. We’ve had this system in place since 2008.

IHEP: What were the college’s primary goals when you decided to start developing and using this early alert system?

We certainly wanted to increase our retention. We also wanted a more proactive counseling department where we would reach out to students. We wanted to move away from the “prescriptive” model of counseling, like the old joke says, “Take these two classes and see me next year.” We wanted to collaborate more with the faculty and let them know how we could support them and to identify what struggles our students were experiencing. Were they related to finances, time, academics, or life circumstances? We wanted to collect more information.

IHEP: You’re also working on developing an automated educational planning tool at the college. Tell us about that tool and the goals you have in mind.

We recently implemented a degree audit system. Now we’re moving toward automating an educational planning tool so students will be able to do degree shopping online. We’re planning on releasing it next year. Students have an audit tool to run a grade check, but we can move toward identifying the student’s end plan, where they want to transfer, and telling students what classes to take. In California, there isn’t a single state system; colleges have to send transcripts back and forth. Course numbering and course titles are different across institutions as well.

The idea is that students will be able to update their educational plan automatically, and our counselors will be able to spend a lot more time with students doing something other than developing education plans by hand. They would be able to work with them on career counseling, personalized assessment, time management skills, and other aspects of student success. Counselors are excited to have more time to talk about these things; it’s more professionally rewarding for counselors to use their master’s degree-level skills for something other than just going through the course schedule with student after student. The students, meanwhile, will hopefully develop a good relationship with their counselors. They also won’t have to schedule an appointment and drive in to see a counselor for degree planning if they can just open up this tool online.


IHEP: Who was involved in setting up the early alert system?

The counselor coordinator at the time and I [Dr. O’Rorke] brought the idea of an early alert system to the Matriculation Committee, now called the Student Success Committee. It included faculty members, administrators, and staff. We had instructional coordinators from the English and Math departments on the committee, and during department meetings and meetings with the Academic Senate, they explained what the system was and promoted it. IT people worked on configuring the screen in our student information system. There was also someone from the Institutional Research Office on the committee.

The counseling department worked on the letter going to struggling students and on how to follow up with students who received letters. When faculty mark a student as struggling in our system, this triggers a letter from our office notifying the student that a faculty member is concerned about their performance and encouraging them to contact the instructor or a counselor. In addition, the counseling office personally contacts each student to respond to individual circumstances and needs.

After a semester we had the system up and running, but it took a long time to grow.

IHEP: Let’s talk more about that process of growth. What was it like bringing faculty on board to use the early alert system?

The faculty weren’t jumping on board and using the system right away, but eventually it caught on. Our goals were to increase the number of faculty using it, and we hit that goal. We talked for a while about whether the system should just be for our basic skills classes, and we decided to open it up for everybody to use. We did have to make it clear to the students that this system was voluntary among faculty, otherwise some students thought that if they were failing they would automatically get a notice about it from any class. The Academic Senate was key in making sure faculty were on board with participating.

This can’t be a top-down issue. The faculty are the ones who are going to have to be the driving force behind using an early alert system. You can create an awesome system, but if they are not on board, they are not going to use it. It’s important to get some of the key faculty members with a lot of influence at the college to help develop and share it, and this will vary by institution. If departmental coordinators are assigned through an administrator or there’s no faculty vote, you may have to step back and see who the leaders are in the Academic Senate. If you can get the basic skills faculty on board, you’re going to hit a large percentage of students. Finally, if you’re getting some resistance from current faculty, then I would go right to recent faculty hires and ask them to make it part of their orientation in the sections they will teach.

IHEP: Who has been involved in working on the automated educational planning tool?

The primary driver behind this tool has been the counseling department. Before we could use the education planning tool, we had to build the degree audit system over the last couple of years, which has been a collaboration between Admissions and Records, Counseling, and instructors. The Instruction Council has been charged with working with faculty to identify key courses and course sequences so they can work with counselors to make sure everything is clear for students in the system. They are trying to look at it as a four-semester sequence for everything and that has led to a fantastic conversation about when we offer different courses and why we have some prerequisites in the spring and fall.

We have paid for some programming and IT support. The counselors went through demos of different software programs. We finally selected the one we wanted and purchased it, and now we’re in the process of building it. Our IT Department and Student Services are merging every day. It’s almost becoming one department, no longer separated in our daily operations. We paid for an IT position to be housed within Student Services to help us work on this, because we learned that it is easier to hire someone with expertise in IT and then teach them about Admissions and Records for the position, rather than to hire someone with expertise in transcripts and then teach them about IT. In order for the content and user interface to make sense to students and instructors, it was imperative that our IT person was part of Student Services.

IHEP: How do you gain buy-in for the education planning tool?

The counselors are the people who are going to be using it on a day-to-day basis; they need to have the most input on what tools they’ll use and what they’ll look like. We offered them options. That probably helped develop a feeling of ownership for them, along with the idea that the tool would sink or swim based on their input and buy-in.

This can’t be a top-down issue. The faculty are the ones who are going to have to be the driving force behind using an early alert system. You can create an awesome system, but if they are not on board, they are not going to use it.


IHEP: What technological resources were needed to put the early alert system into place?

We made the change in the existing electronic grading book, the My Shasta system. We just added the fifth and tenth weeks to enable instructors to mark if students were struggling at those points. It was low to no cost, aside from some IT programming time.

IHEP: What were some of the leading qualities you looked for when choosing a software system for your education planning system?

We use Colleague, commonly called Datatel, by Ellucian. We wanted a web interface that anybody could use. It needed to be mobile-friendly, since a lot of students have mobile phones and may access the tool through their phone rather than a desktop computer. We wanted something portable and easy to read and understand. It was also important for it to be able to incorporate transfer units, so if students transferred from another institution or had college credit to bring in, that would also be posted on their plan.

Institutions shopping for a system may also consider customization issues. The Datatel system gets updated annually, and any customizations we make as an institution get erased and have to be redone with the new version. But they have representatives who they send out to all their colleges to work with the institution’s IT department. A lot of colleges just use the overall framework rather than doing too many customizations. Colleges might also choose to use a home-grown system that they can customize however they want, but the downside is that it will eventually grow beyond capacity and will need to be replaced by a larger system. We’ve seen that happen at many institutions, and it’s a lot of work for schools to switch their student information systems.

IHEP: What kind of investment does the education planning tool require?

It is costly. The philosophy here has been that we are going to take some money that would typically have been used to provide staffing and counselors and invest it in technology that supports our counselors’ efforts. We shouldn’t underestimate how challenging those conversations may be, because not everyone agrees with that decision. A counseling group may have an idea about what’s best for counselors, and administrators will think of what’s best for administration. But it all boils down to what is best for students, and that will usually break the tie.

[The tool] needed to be mobile-friendly, since a lot of students have mobile phones and may access the tool through their phone rather than a desktop computer. We wanted something portable and easy to read and understand. It was also important for it to be able to incorporate transfer units.


IHEP: What has the response been to the early alert system at Shasta College?

One of the biggest benefits of the early alert system was that it opened up the conversation with our faculty about their responsibility in gauging how their students are doing early on in the semester and paying attention to that moving forward. We had some faculty who were assessing their students once with a mid-term and once at the end of the class, who would say they couldn’t use the early alert system because they weren’t getting information on student grades until mid-semester. That opened up a conversation about pedagogy and whether we actually should only be assessing students twice per course, or whether we needed to have assessments earlier in the semester.

There was an old philosophy that students “have a right to fail”—that this is an institution of higher learning and faculty are not going to “babysit” them. That view has subsided. When we hire new faculty now, we generally want them to be enthusiastic about tracking their students’ progress.

The students seem to appreciate the early alerts. A lot of it depends on whether they are connecting with their counselor and whether they enjoy working with them. Our retention rate is getting better, which could be tied to various things. It’s not just that we have an early alert system, but that we have been making pedagogical changes, including earlier assessments, for students in the classroom.

Looking Forward

IHEP: Do you see the early alert system being used differently in the future?

I think we can expand it, and it would be helpful to ask the faculty for a little bit more in-depth information when they submit the early alert. Rather than receiving a generic alert, we could drill down into why the alert was created, so we know what we could provide to students to remedy the situation. The faculty could tell us if the student is struggling because of excessive absences, or if they are attending class every time but still failing, or being disruptive.

IHEP: Would you like to offer any last words of wisdom to CPA communities seeking to use new tools like these to promote a culture of student success?

When we introduce new tools, we go into it with the understanding that there will never be complete consensus. It’s important to set ground rules up front, where everybody has a voice and will be recognized, but there are some things we need to move forward on even without full consensus. Moreover, I [O’Rorke] kept using the word “pilot.” That’s the most important word I would suggest to other schools to use. Many times people will dig their heels in, either in support of or in opposition to something. When using the term “pilot,” we’re saying we’re going to take a look at it, go in with an open mind, and then evaluate it and determine if we want to do it. I think that saves some heartache down the road.

One of the biggest benefits of the early alert system was that it opened up the conversation with our faculty about their responsibility in gauging how their students are doing early on in the semester… There was an old philosophy that students “have a right to fail”—that this is an institution of higher learning and faculty are not going to “babysit” them. That view has subsided.

Philadelphia, Pa.: How to Use a Risk-Based Statistical Model to Improve Postsecondary Student Retention

  • Peter Jones, Ph.D., Senior Vice Provost for Undergraduate Studies, Temple University

IHEP spoke with Peter Jones from Temple University to learn about the risk-based statistical model he has developed to help improve student retention. Jones describes how he utilized data the university already had to build a risk instrument that would identify students most at risk of not completing their postsecondary education. He also discusses the importance of building an academic advising corps to respond to these data and help direct students towards the supports that best meet their specific needs. Read this interview to learn how this risk instrument works and how you could develop a similar one with the data you already have at your disposal.


IHEP: What was your main goal in building a risk-based statistical model at Temple University?

I have actually been developing risk models for application in my field of criminal justice. If you have a particular outcome in mind, and you have a specific population that you need to provide services to but a fixed capacity to do so because of resources, then the “risk/needs principle” applies: if you can identify those people who are most at risk, rather than spreading resources evenly across an entire population, you can be much more strategic and focused. Not only do you identify the population, but you identify the factors that might be associated with the outcome.

It is pretty well documented in the literature that empirically based risk models significantly outperform clinical models. The challenge at Temple was not, of course, to reduce reoffending as in the field of criminal justice, but to improve retention. But the same logic applies. We have a fixed capacity to intervene, so we really need to know which students we need to intervene with the most and what types of interventions would be most appropriate to their specific risk-based needs.


IHEP: Could you please explain who was involved in the development of this model?

The initial development of the model involved myself and a senior director of institutional research (IR) working together. I first had to present a concept paper to the provost to convince him that this was worth doing, and after he agreed that it was, I worked with IR to create a dataset.

One of our main problems was that Temple had a bunch of different systems that didn’t talk to each other. So just getting these data together would have been a huge task. We had one set of data in student registration, another set was in admissions, another set in housing, another set of data in finance, etc.

To begin with, we had to manually put those together. Fortunately, while we have been doing this, the university changed to a new student information system, Banner, which essentially unifies all of these data systems. We can now download all these data easily as one dataset, which has made this much more feasible going forward.


IHEP: How were you able to convince the provost this was a worthwhile tool to pursue?

[In my concept paper], I argued that we would save the university a significant amount of lost money. I made estimates of what the value would be of being able to prevent the attrition of a certain number of students. If a student dropped out in the middle of the freshman year, we would lose one semester of tuition, and if they were in university housing, which most freshmen are, we would possibly lose one semester of housing. We’d basically have an empty bed.

I then argued that every student who drops out is essentially a loss of the admissions investment in each student, which we averaged out at about $700 per student. Add that to the potential loss of our financial aid commitment to a student.

So when you add those costs together, especially for out-of-state students, and you start thinking that, of the 500 students in the highest risk category, it wouldn’t be too difficult to try to prevent the attrition of perhaps 100 or 200 of them—then you are talking about literally millions of dollars. And that gets people’s attention pretty fast.
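The arithmetic behind this argument is easy to reproduce. In the sketch below, only the $700 admissions figure comes from the interview; the tuition and housing amounts are invented placeholders that each institution would replace with its own numbers:

```python
# Back-of-the-envelope version of the revenue argument in the interview.
# Tuition and housing figures are illustrative assumptions, not Temple's.

tuition_per_semester = 8_000   # assumed per-semester tuition
housing_per_semester = 4_000   # assumed; one semester of an empty bed
admissions_investment = 700    # per-student admissions cost cited above

loss_per_dropout = tuition_per_semester + housing_per_semester + admissions_investment

for retained in (100, 200):
    print(f"Retaining {retained} students saves about ${retained * loss_per_dropout:,}")
```

With these placeholder numbers, retaining 100 to 200 of the highest-risk students lands in the low millions of dollars, which is the scale of the argument made to the provost.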

IHEP: Did you have to bring anyone else to the table to support this tool?

At the same time that I was pitching the idea to the provost, I was also pitching the idea to the deans and to the advising directors.

The College of Liberal Arts (CLA) was the first to say they’d try this out with me.

We focused on the undecided undergraduate population, because this advising unit (Division of University Studies, or DUS) reports directly to me. Then I asked all the advisors in each unit (CLA and DUS) to volunteer. They didn’t have to participate if they didn’t want to. And we had discussions about what that should be like, and we agreed on two basic principles: 1) Neither central administration nor I would dictate what the interventions were like—that should be done at the college level; 2) the minimum composition of those interventions should be five confirmed contacts with the student during each semester, with three of those contacts in person and the other two by phone or e-mail.

Every advisor in both units agreed to participate. Each was given a caseload, and by the end of the first year in both of them, we had more than halved the attrition rate. When we were able to share that information back with the deans and the provost, the immediate decision was made that we would need to scale this up university-wide.


IHEP: What data did you use to develop this model?

We started off by developing a freshman risk model, and the outcome measure was trying to predict retention to second year. At the time, we lost 16 percent of the students in the first year, and we wanted to know if we could predict who those students were going to be.

We did not develop any additional datasets. If you can develop a reasonable risk model with the data that you have already available, it may not be worth the effort to go creating new datasets just to improve your predictive ability by a marginal amount.

The datasets that we had available to us told us about students’ high school performance, the major they were coming into, gender, race, financial information, housing, and if they were in-state or out-of-state. We also had a lot of self-reported data from the freshman survey, including their parents’ education, attitudinal questions about their drive to attend Temple in particular and their engagement with the university before they got here. Engagement, commitment, and connection are all key features of risk modeling with regard to retention.

We did not develop any additional datasets. If you can develop a reasonable risk model with the data that you have already available, it may not be worth the effort to go creating new datasets just to improve your predictive ability by a marginal amount.

IHEP: Can you tell us how you used these data to develop the risk instrument?

We used basic multiple regression models to identify what subsets of variables would predict the outcome of a student remaining until the second year. With the next evolution of the model, we realized that we needed to replace the single snapshot at the beginning of the freshman year with two models—one from freshman fall and another from freshman spring—because by spring we know their fall performance—the grades they got, the number of credits they registered for, whether they dropped any courses, and the mid-semester evaluations.
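The shape of this model-fitting pipeline can be sketched with a logistic fit on synthetic data. The features, dataset, and fitting details below are all invented for illustration; the actual Temple models and variables are not described at this level of detail:

```python
# Minimal sketch of fitting a retention model, assuming a small set of
# numeric predictors. The data here is synthetic; a real model would use
# the institutional variables described in the interview.
import numpy as np

rng = np.random.default_rng(0)

# Invented features, e.g. [HS GPA, fall credits, fall GPA], standardized;
# label: retained to the next semester (1) or not (0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.8, 0.5, 1.2])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Fit a logistic model by plain gradient descent
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted retention probability
    w -= 0.1 * X.T @ (p - y) / len(y)     # logistic-loss gradient step

# Flag the students with the lowest predicted probability of returning
probs = 1.0 / (1.0 + np.exp(-(X @ w)))
highest_risk = np.argsort(probs)[:20]
```

The advising intervention would then be targeted at the `highest_risk` list rather than spread evenly across the whole cohort, which is the risk/needs principle described earlier in the interview.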

So now we’ve got two risk models predicting retention to the next semester, and we repeated both models for the two semesters of their sophomore year. Then we decided to also focus on the first and the second semester of a transfer student’s time at Temple.

So we went from one model to six models, and we recognized that a student who was at risk in the first model may no longer be at risk in the second model. A student, for example, who is identified as at-risk starts to work with academic advisors. So we recognized that students can move in and out of target populations, and that’s where we are now with a new model.

We also changed the methods from regression models to an approach called “configural analysis,” where we are now putting together time series datasets so that we can advance the modeling from six snapshots of a student to a real-time dynamic model that will essentially track a student from the day they arrive right through the first two years. To give you an example, a student who comes in who is not high-risk may in the sixth or seventh week get a mid-semester evaluation from the faculty reporting that things are not going well. Or we may hear from financial services that the student has come in and asked for more money because the father has lost a job. That combination of factors may push the student into a very high-risk category, and so rather than wait for the beginning of the spring semester to do a re-evaluation of risk, the risk model will change immediately, and that will come to our attention. And the intervention needs to begin as quickly as possible. Now that’s where we want to get, but we’re not there yet.
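The real-time re-scoring Jones describes could look something like the sketch below. The event names, weights, and the additive scoring rule are all hypothetical assumptions, not part of Temple's model:

```python
# Hypothetical sketch of event-driven risk re-scoring: instead of
# re-evaluating risk once per semester, each incoming signal updates the
# student's score immediately. Weights and event names are illustrative.

EVENT_WEIGHTS = {
    "negative_midterm_evaluation": 0.25,
    "financial_emergency": 0.30,
    "course_drop": 0.15,
}

def updated_risk(baseline, events):
    """Combine a semester-start risk score with in-semester events, capped at 1."""
    score = baseline
    for event in events:
        score += EVENT_WEIGHTS.get(event, 0.0)
    return min(score, 1.0)

# A student who entered as low risk is pushed into a high-risk band
# mid-semester by a bad midterm evaluation plus a financial emergency
score = updated_risk(0.10, ["negative_midterm_evaluation", "financial_emergency"])
print(score)  # 0.65
```

Crossing a risk threshold would immediately surface the student to an advisor, rather than waiting for the spring-semester re-evaluation.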

IHEP: Can you explain how the risk-based statistical model works?

So what happens is, if you are starting with a freshman, the amount of information that you know about that student is restricted to a set of variables about their high school record. Regression models try to identify variables that have predictive power across the entire population, but these will have little value when applied to small subsets of a population.

What we need is a model that allows us to identify risk factors that differentiate for subsets of the population, so we decided not to use generic regression models and instead use dendrograms, or configural analysis. This method searches through the various predictors, identifies which predictor is the best one on its own at predicting the outcome, and then selects that variable.

But let’s say that variable is broken into three different groups. It then asks for each one of those subgroups, “What’s the next best predictor?” So the model becomes fairly complex quite quickly because the combination of predictors might be different for different subgroups of the population.

Let’s take an example: The very first predictor for freshmen in their fall semester was Pell Grant receipt. The overall attrition rate for that semester is around eight percent. The Pell predictor then divided into four further categories: 1) no Pell eligibility, 2) students who had full Pell, 3) recipients who got a very small amount of Pell, and 4) students who were getting between about $1,200 and $2,800 in Pell. For this last group, families were poor enough to qualify for substantial Pell, but they didn’t have enough disposable income to cover the shortfall between Pell and the true, full cost of education. For these students, the attrition rate was much higher than for the other subgroups, at almost 18 or 19 percent.

And then for each of those Pell sub-groups, other predictors come into play as you keep developing the model. In the end, you might have a model that says, for “no Pell eligibility” students, the next best predictor is whether or not they intended to work. Then that also breaks down into three further groups: those who do not intend to work, those who are going to work less than 20 hours, and those who are going to work more than 20 hours, and so on. The idea is that the model keeps on breaking predictors and groups down until, in the end, you get some fairly well-defined small groups.

The point is, with this model, you may find that even though the non-Pell-eligible population, as a whole, only has an attrition rate of six percent, there could be subgroups in the same population where the attrition rate is significantly above the overall base rate.
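The "best predictor first" step of this partitioning can be sketched as follows. The predictor names, the toy data, and the unweighted scoring rule are all invented for illustration; production implementations of recursive partitioning use more careful split criteria:

```python
# Toy sketch of the partitioning logic described above: for each categorical
# predictor, measure how far its subgroups' attrition rates deviate from the
# base rate, pick the best predictor, and recurse within each subgroup.
# Data and predictors are invented; the spread metric is unweighted for brevity.

def attrition_rate(rows):
    return sum(r["dropped"] for r in rows) / len(rows)

def best_split(rows, predictors):
    """Pick the predictor whose subgroups deviate most from the base rate."""
    base = attrition_rate(rows)
    def spread(p):
        values = {r[p] for r in rows}
        return sum(abs(attrition_rate([r for r in rows if r[p] == v]) - base)
                   for v in values)
    return max(predictors, key=spread)

rows = [
    {"pell": "none",    "works_20h": False, "dropped": False},
    {"pell": "none",    "works_20h": True,  "dropped": True},
    {"pell": "partial", "works_20h": False, "dropped": True},
    {"pell": "partial", "works_20h": True,  "dropped": True},
    {"pell": "full",    "works_20h": False, "dropped": False},
    {"pell": "full",    "works_20h": True,  "dropped": False},
]
print(best_split(rows, ["pell", "works_20h"]))  # pell
```

In this toy data, splitting on the Pell category separates the attrition rates far more than the work-hours variable does, so it becomes the root of the tree; the same search would then be repeated inside each Pell subgroup, exactly as the interview describes.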

As we learn more from semester to semester, we change the model, and that’s the logic behind it. Risk models should not be generic. I know that people have the opportunity to go and purchase risk models from vendors, but they should be very cautious about that. What works at one institution may not work at another. They should change over time as you learn more about your population.
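The stepwise logic described above (find the best single predictor, split the population on it, then repeat within each subgroup) can be sketched in a few lines of Python. Everything below is an illustrative assumption, not Temple’s actual model: the data are synthetic, the field names (`pell`, `work`) are invented, and a simple “largest gap in attrition rates” criterion stands in for whatever statistical test the real configural analysis uses.

```python
# Minimal sketch of a stepwise "configural" risk tree: at each node, test
# each candidate predictor, keep the one whose groups differ most in
# attrition rate, then recurse into each group. Synthetic data throughout.
import random

def rate(rows):
    """Attrition rate of a group of (features, left) records."""
    return sum(left for _, left in rows) / len(rows)

def spread(rows, field):
    """Gap between the highest and lowest attrition rates among the
    groups this field creates, plus the groups themselves."""
    groups = {}
    for feats, left in rows:
        groups.setdefault(feats[field], []).append((feats, left))
    rates = [rate(g) for g in groups.values()]
    return max(rates) - min(rates), groups

def grow(rows, fields, min_size=50):
    """Recursively pick the best single predictor and split on it."""
    node = {"rate": round(rate(rows), 3), "n": len(rows)}
    if len(rows) < min_size or not fields:
        return node
    best = max(fields, key=lambda f: spread(rows, f)[0])
    gap, groups = spread(rows, best)
    if gap < 0.05 or len(groups) < 2:   # stop when subgroups look alike
        return node
    rest = [f for f in fields if f != best]
    node["split_on"] = best
    node["groups"] = {v: grow(g, rest, min_size) for v, g in groups.items()}
    return node

# Synthetic cohort mirroring the pattern in the interview: partial-Pell
# students attrite most, and heavy work hours add further risk.
random.seed(0)
students = []
for _ in range(1000):
    pell = random.choice(["none", "partial", "full"])
    work = random.choice(["none", "<20 hrs", ">20 hrs"])
    p = 0.06 + (0.12 if pell == "partial" else 0) \
             + (0.04 if work == ">20 hrs" else 0)
    students.append(({"pell": pell, "work": work}, random.random() < p))

model = grow(students, ["pell", "work"])
print(model["split_on"])  # the best first predictor on these data
```

With these synthetic probabilities, the first split lands on the Pell variable, and each leaf of the resulting tree is one of the “fairly well-defined small groups” the interview describes, with its own attrition rate.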

IHEP: How much does this modeling cost to develop? Did you engage a third party vendor?

There is no cost to the creation of the risk models, other than people’s time. We chose not to engage a third party vendor. Many third party vendors typically provide a dashboard. They look at the data you’ve got and provide you with a risk model, but you still have to provide the guts of the model. Otherwise, they will give you somebody else’s logic, and that doesn’t help. There is no way that you can avoid developing an institution-specific risk model that is validated and tested, and it has to change each year.

There was one area that did cost us some money. I realized from the outset that the front line of the intervention needed to be the academic advisor. Advisors could typically handle most of the issues themselves, but in other cases they were crucial in directing students to other support programs that could meet their needs. We didn’t have enough advisors, and we also had retention problems amongst the advisors themselves. Our one-year turnover rate for academic advisors, six years ago, was 18 percent.

I convinced the president and the provost that we did need to invest in the interventions through advisors. Otherwise, this whole thing was an empty shell. In a period of five years, we basically doubled our academic advising staff, from about 53 or 54 advisors to about 105.

IHEP: How much impact do advisors have on supporting student success?

The other thing that we did to address advisor retention was focus on their professional development. Advisor salaries essentially depended on whichever college employed them: some advisors were reasonably paid; others were very poorly paid. If you were an advisor and stuck with the job for 25 years, your title and job description never changed, and faculty perceived you as a clerical worker rather than a colleague. With the help of Temple’s Human Resources department, we developed a professional ladder, growing the single level of academic advising into multiple levels (Advisor 1, Advisor 2, Senior Advisor, and Principal Advisor). We developed university-wide guidelines for the expertise required at each of those levels, as well as salary minimums. Since then, our advising retention has improved dramatically; turnover was just four percent last year.

So we send lists of at-risk students to the advisors in each college, identifying the specific risk factors that put each student at risk. We then enabled advisors to better support those students by hiring enough of them and letting them do their jobs with some respect. We have also made policy decisions here at Temple that relieve advisors of mindless clerical tasks, such as registering students who have holds for classes they could easily register for themselves.

IHEP: Can you give an example of how the risk model has led to connecting at-risk students with supports?

Our coordinator of risk prevention programs runs monthly networking meetings where we bring together stakeholders from across the university to discuss what they are doing, what’s working, what’s not working.

For example, the Business School did not feel comfortable approaching students at risk and saying, “You have been identified as an at-risk student.” So what they did is they created a program called the Future Leaders Program. They then contacted the at-risk students, never mentioning the word “risk” at all, and said, “We have a program called Future Leaders, and we think that this would be a great fit for you if you’d like to participate.” Of course, many of the students said yes. So now they meet with faculty, engage with advisors, as well as prepare for careers and so on. It is hugely successful from a retention point of view.

IHEP: Speaking of retention, have you seen your retention rate increase as a result of using this data tool to identify at-risk students?

Temple’s first- and now second-year retention rates have been improving. When you look at national figures on freshman retention, they just do not budge, and if they do, it’s by a tiny amount. Ours has gone from 84 percent, which is by no means bad for a large public urban institution, to almost 90 percent after six years. We have also seen increases in sophomore retention, and both of those are now beginning to translate into improvements in our four-year and six-year graduation rates, all of which has moved us up in the rankings. So the provost is extremely happy. It didn’t really cost much in the way of investment, and we have seen tremendous gains.

Looking Forward

IHEP: Would you like to offer any last words of wisdom to CPA communities that are thinking about developing a similar risk-based statistical model to help identify and support at-risk students?

My main advice for any institution thinking of doing this is: do not do it ad hoc. Do not rely on an external consultant who will give you a list of 25 risk factors culled from the literature, tell you to look at your incoming population, and flag any student with more than 10 of those 25 traits as at-risk. First of all, you could well be wrong: there is no evidence that those risk factors apply at your institution. Secondly, if you do this, there is a good chance that you will miss the students who really are at risk, and you will include in your intervention students who are not.

Institutions should sit down and plan this conceptually. They should ask themselves important questions to help determine whether they can build a risk instrument: Do I have the data? What can I do with what I have? How well do the data work? Don’t worry about going out and getting other datasets. Work with what you have. That’s number one.

And then, once I know who is at risk and have a sense of how many there are, I can choose how many I want to intervene with. More important questions to ask yourself: Is it going to be centrally based? Is it going to be decentralized? Are the advisors buying in, or do they not want to do it? Asking those questions and assessing your capacity is really crucial. If we hadn’t addressed the basic structural problems we had with academic advisors, I don’t think we would have had the success we’ve had.

Project Win-Win
How to Use a Degree Audit to Improve Institutional Outcomes

Sixty-one postsecondary institutions in nine states participated in Project Win-Win.

IHEP’s Project Win-Win can help communities increase associate’s degree completion rates by helping their local institutions identify students who have left college without obtaining a degree. Community college students may “stop out” with a significant number of credits (often even more than the 60 typically required for an associate’s degree), and in good academic standing, for various personal or financial reasons. Sometimes minor bureaucratic issues stand between students and their degrees, like an unpaid parking ticket or an “opt-in” institutional policy that requires students to fill out additional paperwork and pay extra fees to receive a degree for coursework they have already completed.

“Degree audits” (i.e., full reviews of student transcripts) can help institutions turn students previously viewed as non-completers into completers, and help strengthen communities by awarding hard-working students the degrees they deserve. Audits also reveal that many stop-outs were just 15 credits or fewer shy of completion, and can prompt more rigorous outreach efforts to bring those former students back to cross the finish line to an associate’s degree.

Project Win-Win is an evidence-based strategy that helps institutions identify these students in order to award degrees and/or bring them back to finish degrees. It’s a win for institutions, which increase their graduation rates and system efficiency. It’s also a win for former students, who can see the economic and personal benefits of holding a degree, as well as for current students, who may avoid the same barriers to degrees in the future. This fact sheet provides high-level guidance on how institutions can implement the Win-Win model on their campuses. The work and results of this endeavor, involving 61 institutions awarding associate’s degrees (including community colleges and four-year institutions), can serve as a guide for other institutions and communities seeking to replicate it. For two years, institutions tracked, sorted, contacted, recruited, and supported former students to help them earn their associate’s degrees. By August 2013, 60 institutions reported retroactively awarding associate’s degrees to 4,550 former students and re-enrolling 1,668 near-completers. We present key strategies, highlights, and lessons learned below.

Institutions that do not meet the recommended criteria below can still participate in degree audits, but should seek assistance in meeting these criteria or the audits may be more difficult. Once the institution meets the stated criteria, it is primed to begin moving through the following Win-Win implementation sequence. Along each step of the way, institutions are likely to encounter various challenges. Based on the experience of our Project Win-Win partners, we have recommendations for how to navigate around these barriers and keep the work moving steadily forward.

Is Your Community Ready for Win-Win?

Institutions in your community can prepare for a degree audit initiative by first assessing whether they meet the following recommended criteria:

  • The institution is a full member of the National Student Clearinghouse
  • The software governing the institution’s student data system has not been changed since September 2008
  • The institution has identified its stated policy for awarding degrees
  • The institutional data system contains the following elements:
    • Student IDs that can be matched to both state systems and National Student Clearinghouse records
    • Term dates for student’s first attendance
    • Transfer flags indicating whether the student was a transfer-in, the number of credits transferred in, and the type of school (e.g. private 2-year) from which the student transferred
    • Dates for the most recent term in which the student was enrolled
    • The aggregate number of credits counted towards a degree that were earned by the student
    • Cumulative student GPA in courses with credits that count towards a degree
    • Student’s race/ethnicity and gender
    • Student’s date of birth, so that age at the date of first enrollment can be determined

Step 1: Identify the Students Your Institution Wants to Consider (Your Universe of Interest)

Set parameters for the criteria that must be met by former students to include them for consideration in the degree audit, such as completion status, number of credits earned, dates of attendance, and GPA.

  • Institutions may encounter missing data, variables, or links between datasets when seeking students who fall into the project’s parameters
  • Data inaccuracies and duplicates can create issues
  • The default set of parameters included in Win-Win were: students who earned 60 credits or more, had a cumulative GPA of 2.0 or higher, never earned a credential from another institution, and had not been enrolled for the most recent three semesters
  • Set aside students with major barriers to completion, like significant debt, low GPA, or an insufficient number of credits
  • Identify your universe of interest in less than one week
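Step 1 amounts to a filter over former-student records using the default parameters listed above. A minimal sketch, assuming a simple record layout (the dict keys here are invented; a real implementation would query the student information system):

```python
# Sketch of Step 1: select the "universe of interest" using the default
# Win-Win parameters (60+ credits, GPA >= 2.0, no credential elsewhere,
# not enrolled for the most recent three semesters). Field names and
# records are illustrative assumptions.

def in_universe(s):
    """True if a former student meets the default Win-Win criteria."""
    return (s["credits"] >= 60
            and s["gpa"] >= 2.0
            and not s["credential_elsewhere"]
            and s["semesters_since_enrolled"] >= 3)

records = [
    {"id": 1, "credits": 72, "gpa": 2.8, "credential_elsewhere": False,
     "semesters_since_enrolled": 4},   # meets all criteria
    {"id": 2, "credits": 45, "gpa": 3.1, "credential_elsewhere": False,
     "semesters_since_enrolled": 5},   # too few credits: set aside
    {"id": 3, "credits": 66, "gpa": 1.7, "credential_elsewhere": False,
     "semesters_since_enrolled": 6},   # low GPA: set aside
]

universe = [s["id"] for s in records if in_universe(s)]
print(universe)  # [1]
```

Students screened out here are not discarded; as the bullets note, those with major barriers are simply set aside from this round of the audit.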

Step 2: Remove Students Who Received Degrees from Other Institutions or Re-Enrolled Elsewhere from the Audit

Use National Student Clearinghouse data and state data systems to identify students who can be removed to refine the list of students under consideration

  • State data systems may offer limited usefulness or responsiveness in providing data to institutions for matching
  • Go directly to the National Student Clearinghouse for data matching
  • Institutions must comply with FERPA privacy laws while sharing data with states and the National Student Clearinghouse
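Conceptually, Step 2 is a set difference: remove from the universe anyone whom National Student Clearinghouse or state-system matches show has already earned a credential or re-enrolled elsewhere. A sketch with invented IDs:

```python
# Sketch of Step 2 as a set operation. The IDs are illustrative; in
# practice the match results come back from the National Student
# Clearinghouse and state data systems.

universe = {101, 102, 103, 104, 105}
earned_elsewhere = {102}        # degree already awarded by another institution
reenrolled_elsewhere = {105}    # currently enrolled at another institution

audit_list = universe - earned_elsewhere - reenrolled_elsewhere
print(sorted(audit_list))  # [101, 103, 104]
```

The remaining IDs are the records that proceed to the degree audit in Step 3.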

Step 3: Perform Degree Audits to Identify Students Eligible for Degrees and Students Near Degree Completion

Designate the degree types and course catalog requirements to use. Your audit team will need to be knowledgeable about course requirements and able to dedicate time and attention to see this work through.

  • Over-reliance on software during audits may result in accuracy issues
  • Missing institutional data, such as transfer flags, and inconsistent data markers may create confusion when reviewing records
  • Be prepared to manually review student records for accuracy
  • Assess and improve data systems for clearer record-keeping and smoother tracking of students’ degree completion status
  • Set more inclusive course requirements, especially for college math

Step 4: Award Degrees to Eligible Students and Re-Enroll Students Near Completion

Locate and contact students using a variety of resources and incentives, leaving time for meaningful engagement efforts.

  • Figuring out how to locate and contact potential degree recipients can be difficult; some institutions used white page websites with limited success, while other institutions asked state governments to contact students.

Lessons Learned: Tips for a Successful Degree Audit

  • Get the right team in place: You’ll need experienced registrars, research officers, academic officers, counselors, and advisors, each able to see the audit through from beginning to end
  • Determine and build data capacity from the start: Student-level data systems must include markers like transfer flags, first date of attendance, and GPAs in majors, and be tested for accuracy and consistency before being used to audit individual students’ degrees
  • Know what is needed to track students and build a tracking system: Students are highly mobile, and data-sharing agreements and National Student Clearinghouse membership help institutions find students to award degrees or invite them to re-enroll
  • Move at a deliberate pace, aiming to complete the project in 18 months: Taking longer may result in duplicative work
  • Record time and resources spent on the work: Use this information for future cost-benefit analyses so audit work can continue for other students’ records
  • Shift from opt-in to opt-out policies so that students must actively decline an offered degree rather than fill out additional paperwork (and pay a fee) to actively accept an offered degree
  • Incentivize and support students close to degree completion to re-enroll and continue education
  • Show potential re-enrollers that the institution cares by sending personalized letters and making phone calls

Additional Resources

Learning from High-Performing and Fast-Gaining Institutions: Top 10 Analyses to Provoke Discussion and Action on College Completion (2014: The Education Trust)

This practice guide describes how campus leadership can use data management systems to help underserved students complete college. It demonstrates how data are key to understanding problems, designing interventions, facilitating ongoing inquiry, and monitoring student progress. The guide presents case studies from eight colleges, and focuses on monitoring and addressing credit accumulation, remediation, gateway courses, and degree completion.

Searching for Our Lost Associate’s Degrees: Project Win-Win at the Finish Line and Project Win-Win at the Finish Line: Issue Brief (2013: Institute for Higher Education Policy)

This report and issue brief companion reveal the results of Project Win-Win’s national efforts to help colleges identify former students in order to retroactively award the associate’s degrees they had earned; colleges also reached out to former students who were close to qualifying for a degree to invite them back. These resources contain a step-by-step breakdown of the Win-Win and degree audit process, best practices, and lessons learned that institutions can use to implement Win-Win at their own schools.

Data Collection and Use at Community Colleges (2010: The National Center for Higher Education Management Systems)

This paper details the process that community colleges can follow to collect data on their students over time in order to track student outcomes during and after enrollment, as well as to better design their curricula and academic interventions. It describes how colleges can use longitudinal data systems to track cohorts of students over time as they progress academically and graduate, or after they leave or transfer out of programs. It also describes the many remaining challenges in data collection and use at community colleges.

Integrated Planning and Advising Services: A Benchmarking Study (2014: Educause)

This study provides higher education leaders with an evaluation of Integrated Planning and Advising Services (IPAS); these online tools provide holistic information for college students, faculty, and staff in an effort to promote timely degree and credential attainment. IPAS is comprised of four major components: advising, early alerts, educational progress tracking, and degree auditing. This study details how IPAS-related tools have been used in institutions and provides recommendations for higher education leaders.


© Institute for Higher Education Policy 2015.
1825 K Street, NW, Suite 720, Washington, DC 20006
Phone: (202) 861-8223