Now that college is typically a family's second-largest financial expenditure—only buying a home usually costs more—parents and students are taking a harder look at the costs and payoffs of any college they consider.
To assist them, MONEY has sifted through the research and advice of dozens of the nation’s top researchers on education quality, financing, and value to develop a new, uniquely realistic, and practical analysis of the likely costs of colleges and their contribution to how much graduates ultimately earn.
The result is our new Best Colleges rankings. This is the first list of colleges that combines the most accurate pricing estimates available with estimates of likely earnings that take into account a student’s economic background, test scores, and major. Our goal: to give students and parents a much better indication of which colleges will provide real value for their tuition dollars and enhance a student’s earning potential.
To ensure we used the best data and methodology, we partnered with one of the nation’s leading experts on higher-education data and accountability metrics. Mark Schneider is the former commissioner of the National Center for Education Statistics, currently a vice president at the American Institutes for Research, and president of College Measures, a for-profit partnership of AIR and Matrix Knowledge Group. Our data was collected and analyzed by College Measures, which collects and publishes public data comparing students’ educational records with their later earnings to help drive improvements in higher education.
The final methodology decisions were made by the Money editorial team, in consultation with Schneider and College Measures.
In building our rankings, Money focused on the three basic factors that surveys show are the most important to parents and students:
- Quality of education
- Affordability
- Outcomes
Because these three factors are so inter-related and crucial to families, we gave them equal weights in our ranking.
In each of these three major categories, we consulted with our advisors to identify the most reliable and useful data to assess a school’s performance. In each of the categories we also balanced basic data such as graduation rates, student loan default rates, and post-graduation earnings with a “value-added” measure that analyzed each school in light of the economic and academic profile of its student body and, for earnings, the mix of majors, so we could compare its performance to that of schools with similar students.
To gauge whether a college is outperforming its competitors, we gathered federal data on the average test scores of students at each college, as well as the percentage of students with incomes low enough to qualify for Pell Grants (about 90% of which go to families with incomes below $50,000). We then used a statistical technique called regression analysis to determine the impact of a student’s test scores, economic background and major on key factors, such as graduation rates and future earnings. That enables us to see how much better or worse a particular college performed than would be expected given the characteristics of its student body.
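The value-added approach described above can be illustrated with a small regression sketch. Everything below is simplified and hypothetical: a single predictor (average test score) stands in for the full set MONEY used, and the school data is invented for illustration.

```python
# Minimal sketch of the value-added idea, assuming a single predictor;
# the actual regression also included Pell share and, for earnings, majors.

def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Invented schools: average SAT score -> six-year graduation rate.
schools = {
    "Alpha College": (1300, 0.85),
    "Beta State":    (1050, 0.62),
    "Gamma Tech":    (1200, 0.66),   # graduates fewer than its students predict
    "Delta U":       (1000, 0.64),   # graduates more than its students predict
}
xs = [sat for sat, _ in schools.values()]
ys = [rate for _, rate in schools.values()]
a, b = ols_fit(xs, ys)

# Value added = actual graduation rate minus the rate predicted
# from the characteristics of the student body.
value_added = {name: rate - (a + b * sat)
               for name, (sat, rate) in schools.items()}
```

A positive residual means a school outperforms what its student mix predicts; a negative one means it underperforms.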
We used this estimate of relative performance by the college as an important part of the ranking, as you’ll see below.
MONEY assigned each indicator a weight based on our analysis of the importance of the factor to families, the reliability of the data, and our view of the value of the information the data provided.
To avoid overloading readers with too many choices or too much data, and to ensure we fairly compared apples with apples, we analyzed only colleges that, in our view, passed minimal quality and data tests. We ranked only colleges that met these four conditions:
- It was a public or not-for-profit four-year college or university.
- It had enough cost, quality and outcomes data available to provide at least moderate confidence in our assessment.
- It was not in financial trouble, as indicated by a below-investment grade rating for its bonds by Moody’s.
- It had a graduation rate at or above the median for its type of school (public or private), or if the rate was below the median, it had a graduation rate at least 25% above what would be expected given the incomes and test scores of its students.
This eliminated some colleges that may be good values, but may be facing temporary financial difficulties or that may have too few alumni reporting their incomes to PayScale, our salary data source, for us to evaluate them. But it left us with a robust universe of 665 colleges. In our view, even the lowest-ranked of these schools demonstrate that they provide at least some value for your tuition dollars.
To determine which of the 665 provided the biggest bang for the tuition buck, we used the following data and methodologies in three broad categories to create our ranking:
QUALITY OF EDUCATION: 33.3% weighting
We used six reliable indicators that relayed meaningful information about the quality of a school’s instruction, and were weighted within this category as follows:
- Graduation rates: 25%. Education experts and college officials generally agree that one of the most important reflections of a college’s quality is its graduation rate. (The American Association of State Colleges and Universities calls it “a legitimate indicator” of college quality.) Many rankings use this commonly cited federal statistic on the percentage of freshmen who graduate within six years. Because of its importance and wide acceptance, we assigned this measure a comparatively heavy weight.
- Value-added graduation rate: 25%. Many education experts and college officials point out that the basic graduation rate, while useful, is an insufficient indicator of a college’s value, because research shows that wealthier students and students who got good grades in high school are more likely to finish whatever college they attend. Elite, expensive schools such as, say, Harvard would thus be expected to have high graduation rates. So we also calculated each school’s relative performance after accounting for the economic background and academic preparation of its students. The more a school’s graduation rate exceeds the rate that would be predicted for a school with that particular mix of students, the more value that college is assumed to have added. This “value-added” graduation rate analysis is widely accepted. (A 2013 OECD paper said that such “value-added measurement provides a ‘fairer’ estimate of the contribution of educational institutions.”) Because of its reliability and acceptance, we weighed this factor heavily.
- Peer quality: 15%. Decades of research have shown that undergraduates have a major impact on their peers. Students who room with better students get better grades, for example. By contrast, students surrounded by less conscientious peers—for example, heavy drinkers—study less and get worse grades. And students who room or socialize with more successful students tend to get better jobs upon graduation.
To estimate the “quality” of undergraduates at an institution, we used the federal data on accepted students’ scores on standardized tests such as the ACT and SAT, as reported by colleges. We also analyzed the overall relationship between test scores and grade point averages across all colleges that reported both data points, and then used those averages to fill in an estimated test score for schools that don’t make their students’ scores public. This is an imperfect measure, and there is controversy over the usefulness and validity of standardized tests. But the SAT and ACT currently provide the only nationally comparable data on student abilities, and many studies have found a high correlation between test scores and academic success.
Research shows that students who get more individual attention from faculty tend to achieve more, both in college and after graduation. (See, for example, the recent Gallup-Purdue Index.) While there is no nationally comparable data that reveals the number and quality of student interactions with faculty, we found two data points that cast light on this characteristic:
- Student-faculty ratio: 10%. This is a standard federally published metric used by many ranking organizations.
- Quality of professors: 5%. Money asked the nation’s largest independent source of student ratings of professors, Ratemyprofessor.com, to calculate the average overall rating for all professors at each school for helpfulness, clarity and quality. We did not include students’ ratings of the professors’ “hotness” or “easiness,” which is also collected by the site. Although research has found that students do tend to give higher marks to easier professors, independent investigators have also found that Ratemyprofessor quality ratings are generally reliable and provide students “with useful information about quality of instruction.”
- Yield: 20%. The federally reported “yield” is the percentage of accepted students who enroll in a given college. The higher the yield, the more likely it is that applicants have such high perceptions of the college’s quality that they have made the school a top choice.
AFFORDABILITY: 33.3% weighting
We used five reliable data points that relayed meaningful information about a college’s affordability, and weighted them as follows:
- Net price of a degree: 40%. MONEY has developed a unique and, many experts tell us, more accurate estimate of college prices. We started with the “sticker” price provided by the college to the federal government. The full cost of attendance includes tuition, fees, room, board, books, travel and miscellaneous costs. For public colleges, we used the in-state tuition and fees. We then subtracted the average amount of institutional aid provided per student by the college, including need-based grants, merit aid, and athletic scholarships. (The aid data is provided by colleges on the Common Data Set, which we accessed through Peterson’s.) That gave us an estimate of the net price the college charged the average student for the most recent academic year.
We then used federal data on the percentage of students who graduate from that college in four, five, and six years to calculate an average time to degree. We multiplied the net price of a single year by the average number of years it typically takes students to finish at that school (which ranges from four to six years, depending on the school) and added in an inflation factor (since tuition prices typically rise every year) to estimate the total net price of a degree for freshmen entering that school in the fall of 2014.
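As a rough sketch, the degree-price arithmetic described above might look like the following. The 3% annual increase is our illustrative assumption, not MONEY's published inflation factor, and all figures in the example are invented.

```python
def degree_net_price(sticker, inst_aid, avg_years, inflation=0.03):
    """Estimate the total net price of a degree.

    sticker:   full annual cost of attendance
    inst_aid:  average institutional grant/scholarship aid per student
    avg_years: average time to degree (may be fractional, e.g. 4.6)
    inflation: assumed annual price increase (illustrative 3% default)
    """
    annual_net = sticker - inst_aid
    whole, frac = int(avg_years), avg_years - int(avg_years)
    # Each successive year's price is grown by the assumed inflation rate.
    total = sum(annual_net * (1 + inflation) ** yr for yr in range(whole))
    total += frac * annual_net * (1 + inflation) ** whole
    return total
```

For example, a $30,000 sticker price with $10,000 in average institutional aid and a four-year average time to degree comes to $80,000 before any inflation adjustment; a 4.5-year average pushes the total higher still.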
College counselors and financial aid experts have told us that the MONEY calculation is more realistic and helpful to parents and students than other popular price estimates, most of which rely on the cost of only a single year, not the full degree. Sandy Baum, a senior fellow at the Urban Institute and one of the nation’s leading researchers on college costs and aid, says MONEY’s estimates of the net prices of individual institutions “provide good benchmarks.” She added an important caveat: “Students at each institution face a wide range of net prices, so no individual student should assume that the schools on this list with highest net prices would end up being most expensive for them.”
By providing an up-to-date estimate for the entire cost of a degree that reflects the reality that students often take more than four years to graduate, we believe that families will find the MONEY estimate to be an improvement over the typical “average net price” calculation—for example, the one reported by the federal government. As of June 1, 2014, the Department of Education’s College Navigator site listed as each college’s “net price” the amount a typical student paid for the single academic year of 2011-12. Education Secretary Arne Duncan himself has acknowledged that parents need, and have not yet been provided with, a good cost-of-degree estimate. “The difficulty families have trying to figure out basic things like grants versus loans, or one-year costs versus four-year costs is mind-boggling,” Duncan said last year. The federal net price estimate for a year’s costs is lower than MONEY’s estimate because the U.S. Department of Education subtracts almost all grants—federal, state and institutional—while MONEY subtracts only the institutional aid provided by the school, for reasons described below.
The main criticism of Money’s estimate is that it may exaggerate the actual cost to families in two ways:
- Time-to-degree: While many students take more than four years to finish a degree, they aren’t necessarily paying full tuition for the five or six years before they graduate, since they may, for example, take a year off to work. MONEY attempted to account for this by adjusting the estimated time to degree for all schools with large and established co-op work programs, such as Northeastern University. In addition, MONEY is not adding to the cost of a degree any amount for “opportunity cost,” which is the amount in earnings a student loses by not finishing a degree on time and delaying entry into the higher-paying job market of college graduates. So while we may, in some cases, be overestimating the price of a degree, we are also underestimating the total economic expense to a student of schools that don’t speed them to graduation.
- Federal, state or private grants: It is crucial to understand that while the MONEY price estimate is the average price charged by the college, you and your family will pay less than that if your student receives any federal, state, or private scholarships. As an analogy, if you’re buying a can of soup, you have to pay what the grocery store charges, unless you have a coupon. Just as coupons can be used at competing supermarkets, most federal, state, and private scholarships can be used at many competing colleges. So we help you identify which college has the lowest net price at which you can apply any additional scholarships.
MONEY gave the net price sub-factor a very heavy weighting because surveys show that the cost of college is now one of families’ biggest worries. A 2013 Harvard Institute of Politics survey reported, for example, that 70% of young adults say finances were a major factor in their college decision.
Surveys show educational debt is one of families’ biggest worries. We weighed debt equally with net price because of those concerns and because we believe it indicates how fairly colleges distribute their grants and scholarships. Schools with comparatively low net prices but high borrowing are likely giving grant aid to wealthier students while shorting the packages of needier students, thus forcing them to borrow more. The higher the debt factor, the lower the school ranked. We divided this factor into two parts:
- Student borrowing: 20%. The federal government reports the percentage of freshmen who borrow, and their total average amount of federal and private student loans. We combined these data to estimate the average debt per student and then multiplied that number by the average number of years it takes students at that college to earn a bachelor's degree to calculate a total estimated indebtedness for graduating seniors.
- Parent borrowing: 20%. The federal government reports the total amount of parent PLUS loans awarded to parents at each college each year. We divided this number by the school’s enrollment to calculate an average parent PLUS debt per student. While other organizations generally don’t include parental debt in their rankings, MONEY believes parent educational borrowing is a financial burden, and should be an important consideration.
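The two borrowing estimates above reduce to simple arithmetic; the figures in this sketch are invented for illustration.

```python
def avg_student_debt(share_borrowing, avg_annual_loan, avg_years):
    """Average total loan debt per student (averaged over borrowers
    and non-borrowers alike) accumulated over the typical degree."""
    return share_borrowing * avg_annual_loan * avg_years

def avg_parent_plus_debt(total_plus_awarded, enrollment):
    """Average parent PLUS debt per enrolled student for one year."""
    return total_plus_awarded / enrollment

# Hypothetical school: 60% of freshmen borrow about $7,000 a year,
# and students take 4.5 years on average to finish.
student_debt = avg_student_debt(0.60, 7000, 4.5)   # roughly $18,900
parent_debt = avg_parent_plus_debt(5_000_000, 10_000)  # $500 per student
```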
The ability to repay loans taken out to finance college education is another indication of a school's affordability for its students. We evaluated this in two ways:
- Student Loan Default Risk Index: 10%. Each year, the federal government publishes the number of former students who left college three years ago and have since defaulted on their federal student loans. Using a methodology proposed by The Institute for College Access and Success, MONEY adjusts these numbers for the share of students at the college who take out federal student loans. TICAS says this is a fairer and more accurate indicator of the odds that any particular student at the college will end up defaulting on a student loan.
- Student Loan Default Risk Index–Value Added: 10%. We calculated the relationship between schools’ SLDRI and both average student test scores and the share of the student body from low-income households. Schools that had a lower default rate than would be expected given their student population were ranked more highly.
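One common formulation of the TICAS-style adjustment scales the cohort default rate by the share of students who actually borrow; we present it as a sketch of the idea, not as MONEY's exact published formula.

```python
def default_risk_index(cohort_default_rate, share_borrowing):
    """Approximate the odds that an arbitrary student at a school
    ends up in default: few students can default if few ever borrow."""
    return cohort_default_rate * share_borrowing

# A school with a higher raw default rate can still be lower-risk
# overall if only a small share of its students borrow at all:
school_a = default_risk_index(0.10, 0.30)  # 10% default rate, 30% borrow
school_b = default_risk_index(0.08, 0.90)  # 8% default rate, 90% borrow
```

Here school B, despite its lower raw default rate, poses the greater risk to a typical student because so many more of its students take out loans.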
OUTCOMES: 33.3% weighting
We used seven reliable data points that provide meaningful information about the salaries of graduates and the assistance that the schools provide to students in obtaining good jobs. One data point we could not use: the percentage of graduates who find jobs within a year of earning their degree. Although many colleges claim that a high percentage of their new grads are employed, the data is generally not considered reliable enough to compare one college against another. That's because each college has a different method of surveying graduates, and because unemployed graduates are considered less likely to answer such surveys, which would tend to make a college's employment rate look higher than it really is.
We weighted the data points we used as follows:
- Caseload of career services staffers: 10%. Surveys show that the single most important reason students now give for attending college is to increase their odds of landing a good job and launching a career. Unfortunately, most colleges don’t provide much career coaching or job assistance. At the typical four-year college, there is only one professional career services staffer for every 1,000 undergraduates, which equates to an average availability of just one hour of personal attention per undergraduate per year. While we couldn’t get reliable data on the quality of the career services that schools provide, we could at least tell parents how overworked the staff is. So MONEY included in its ranking the number of full-time-equivalent career services staffers that colleges reported to Peterson's. (We called several hundred schools that hadn't reported the data to Peterson's to fill in the missing staffing numbers.) We then adjusted those numbers for each school's enrollment to calculate a caseload per staffer. The lower the caseload, the higher the college is ranked.
We also looked at the salaries earned by alumni of each school. The earnings information came from PayScale, which provided MONEY with aggregated data on more than 1.4 million people who in the last three years have filled in an online form asking what their job is, what their salary is, where they went to college, what they majored in, and when they graduated. This is the largest national database of earnings by college attended. Matthew Chingos, a Brookings Institution economist who has studied the PayScale data, notes that it is currently “the only game in town” for anyone who wants to compare colleges’ outcomes nationwide.
The PayScale data is admittedly imperfect. For example, the data does not reflect any unemployed workers, which tends to artificially inflate the average earnings reported for each school. Offsetting that, at least in part: PayScale lists earnings data only for people who stopped their education at a bachelor’s degree, thus excluding high earners such as doctors, lawyers and MBAs.
We weighed the PayScale earnings reports as follows:
- Early career earnings: 20%. PayScale classifies as “early career” those who have graduated college within the last five years. The average salary reported in this data point is earned by someone who graduated two years ago. We weighed early earnings more heavily than mid-career earnings because Trey Miller, a RAND Corporation economist who has studied the relationship between college choice and earnings, noted that a college choice has a much stronger impact on the type and pay of a first job after graduation than it does on job type and pay for a person who has been in the workforce for, say, 10 years, when experience and skills acquired will also play an important role.
- Mid-career earnings: 10%. PayScale classifies anyone with at least 10 years’ work experience as “mid-career.” The average earnings report for this group is for someone who graduated 15 years ago.
We also evaluated the earnings data in light of the mix of majors at each school. In other analyses of which colleges seem to produce the highest earners, engineering and tech-oriented colleges dominate because computer scientists and engineers tend to earn very high salaries. But what if your child isn’t fated to be an engineer or work in Silicon Valley? Which school produces the highest earners for other majors? MONEY used a regression analysis to estimate the average impact on reported earnings of each of seven “buckets” of majors: humanities; visual and performing arts; behavioral and social sciences; science, technology, engineering and math (STEM); education; business; and "other" (everything else).
We calculated the average earnings for each group of majors. Then, using federal data on the number of graduates in each major at a college, we calculated what the expected earnings would be for that school if each student earned an average salary for their field of study. Colleges where graduates earn more than the predicted salary are ranked more highly. We weighed this factor in this manner:
- Early career earnings after adjusting for majors: 20%.
- Mid-career earnings after adjusting for majors: 10%. We weighed early career earnings more heavily because, as previously noted, where a worker went to school has a greater impact on salary earlier in one’s career.
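The majors adjustment described above amounts to an enrollment-weighted expected salary. The bucket averages and graduate counts below are invented for illustration; the real bucket averages came from MONEY's regression on the PayScale data.

```python
# Hypothetical average salaries per bucket of majors (illustrative only).
BUCKET_AVG = {"STEM": 75000, "business": 60000, "humanities": 48000}

def expected_earnings(major_counts):
    """Salary a school would report if every graduate earned exactly
    the national average for his or her bucket of majors."""
    total_grads = sum(major_counts.values())
    return sum(BUCKET_AVG[m] * n for m, n in major_counts.items()) / total_grads

# Majors-adjusted score = actual reported earnings minus expected.
expected = expected_earnings({"STEM": 200, "business": 100, "humanities": 100})
adjusted = 64000 - expected  # negative: grads earn less than their mix predicts
```

This is why a school heavy in engineering majors doesn't automatically win on this measure: it is compared against the high salaries its mix of majors already predicts.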
We also included a "value-added" analysis, as we did in the educational quality and affordability categories. Despite the hope that the U.S. is a “land of opportunity,” there is growing evidence that children tend to follow in the socioeconomic footsteps of their parents. Disadvantaged children have relatively low odds of becoming wealthy. Children of wealthy parents tend to find high-paying jobs. So colleges that take in wealthier students tend to produce graduates who earn high incomes. MONEY wanted to find colleges that did what education is supposed to do – help hardworking students from any background break into a good career. So we calculated the impact of test scores and of coming from a low-income family on graduates’ earnings. Then, using federal data on the test scores and economic background of each school’s student body, we calculated what the expected earnings would be for each school. Colleges where graduates earned more than would be predicted, given their student body, appear to be creating more opportunity and thus are ranked more highly. We weighed this factor as follows:
- Early career earnings after adjusting for student economic and academic characteristics: 20%.
- Mid-career earnings after adjusting for student economic and academic characteristics: 10%. We weighed early career earnings more heavily because, as previously noted, where a worker went to school has a greater impact on salary earlier in one’s career.
While we believe these rankings are better than any others, we know they are not perfect. Here are some caveats we are aware of and hope to address in future rankings.
- Lack of data on student learning. An important component of educational quality is what students actually learn. That’s a big unanswered question in higher education. Very few colleges try to measure learning, and very few of the ones that do release the results publicly. In addition, we were not able to find good data on basic indicators of academic rigor, such as the number of pages of writing or reading required per assignment. We will continue to explore data in the hopes of finding useful indicators of student learning.
- Lack of geographical adjustment of wages. Some colleges in low-wage areas, such as parts of the South and Midwest, get lower rankings because we have not adjusted the PayScale data for cost of living. Wages are higher in New York, for example, because rents and other living costs are much higher, but that doesn’t mean the graduate’s lifestyle is better. We will consider making geographic adjustments of earnings data in the future.
- Alumni satisfaction. The data currently available on alumni satisfaction – surveys and donation rates – is incomplete for many of the colleges on our list, so we were unable to include this measure. We will look for ways to improve the alumni data and include it in future versions.
- Out-of-state-public college tuition. Many students are interested in attending public colleges out of state. But public colleges charge higher tuition to out-of-state students. We will consider developing a cost and value comparison for out-of-state students.
For each of these data points, for each college, we calculated a “Z-score”—a statistical technique that turns many seemingly different kinds of data into a standard set of numbers that are easily comparable. Each Z-score tells you how far any particular number—such as, say, a graduation rate of 75%—is from the average for the entire group under consideration. We added up each school’s Z-scores across all of the factors we used, and normalized that total onto an easily understandable five-point scale, so that the top schools would have scores between four and five, just as top students have GPAs of four and above. We then ranked the schools according to their total score.
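In code, the scoring step described above might look like this minimal sketch. The min-max rescaling onto a 1-to-5 range is our assumption, since the article does not specify the exact normalization MONEY applied.

```python
def z_scores(values):
    """Standardize a list of raw numbers: how many standard
    deviations each value sits from the group mean."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def to_five_point(totals, lo=1.0, hi=5.0):
    """Rescale summed z-scores onto an easy-to-read scale
    (min-max rescaling is an assumption on our part)."""
    mn, mx = min(totals), max(totals)
    return [lo + (t - mn) * (hi - lo) / (mx - mn) for t in totals]

# Example: one factor's raw values for three schools.
zs = z_scores([70, 80, 90])   # roughly [-1.22, 0.0, 1.22]
scaled = to_five_point(zs)    # [1.0, 3.0, 5.0]
```

In the real ranking, the z-scores of every weighted factor would first be summed per school before the final rescaling.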
David Morton and Gareth Harper of College Measures collected and analyzed our data.
Katie Bardaro, the lead economist for Payscale.com, produced unique earnings reports and analyses for Money, and advised us on ways to account for the impact of majors on earnings.
Matthew Camp, PhD candidate at Teachers College, Columbia University, served as Money’s research assistant, analyzing data and conducting statistical tests.
Among the many experts who volunteered data advice and suggestions:
- Anthony Carnevale, director and research professor, Georgetown University Center on Education and the Workforce.
- Trey Miller, associate economist, RAND Corporation
- Jennifer Lewis Priestley, statistics professor, Kennesaw State University
- Matt Reed, program director, The Institute for College Access and Success
- Joseph Yeado, higher education research and policy analyst, The Education Trust
Among the dozens of experts whom we interviewed or consulted:
- Beth Akers and Matthew Chingos, Brookings Institution education researchers
- Robert Archibald, economics professor, William & Mary
- Jeffrey Arnett, research psychology professor, Clark University
- Sandy Baum, senior fellow, Urban Institute
- Douglas Bennett, former president, Earlham College
- Charles Blaich, director, Center of Inquiry at Wabash College. Also at the Center: assistant director Kathleen Wise.
- Brandon Busteed, executive director, Gallup Education
- Corbin Martin Campbell, associate professor, Columbia University’s Teachers College
- Jesse Cunha, assistant professor, Graduate School of Business and Public Policy, Naval Postgraduate School
- John Curtis, director of research and public policy, American Association of University Professors
- William Destler, president, Rochester Institute of Technology
- Marilyn Emerson, past president, Independent Educational Consultants Association
- Greg Fenves, provost, University of Texas
- David Figlio, director, Northwestern University Institute for Policy Research
- Douglas Harris, associate economics professor, Tulane University
- Bradon Hosch, assistant vice president for institutional research, planning & effectiveness, Stony Brook University
- Mark Kantrowitz, publisher, Edvisors.com
- Terry Hartle, associate vice president of the American Council on Education. Also at ACE: senior vice president Dan Madzelan
- David Hawkins, director of public policy and research, National Association for College Admission Counseling. Also at NACAC: president Kay Murphy and president-elect Jeffrey Fuller
- Kerry Healy, president, Babson College
- Lisa Heffernan, author, GrowandFlown.com
- Michael McPherson, President, Spencer Foundation
- Ben Miller, senior policy analyst, New America Foundation
- Josipa Roksa, associate director, Center for Advanced Study of Teaching and Learning in Higher Education, University of Virginia
- Joyce Serido, research professor, University of Arizona
- Several staffers at the National College Advising Corps
- Michael Viollt, president, Robert Morris University Illinois
- Carl Wieman, professor in the physics and in the education departments at Stanford University, Nobel Prize-winner and former Associate Director for Science at the White House Office of Science and Technology Policy