Our first blog post on the clickbait site known as Rate My Professors was an introduction to the anti-intellectual nonsense the site spreads. That post was primarily qualitative, dealing with the unethical and irresponsible behavior of the site and its leadership. In this post we dig into the numbers and show quantitatively that Rate My Professors is little more than poor students venting their disdain for the learning process. We will show that ratings on Rate My Professors are likely nothing more than a “hardness” rating. We will show that, when compared to true college evaluations, female professors are rated worse than male professors in every field. We will show that, when compared to true college evaluations, STEM professors (math, computer science, engineering, etc.) are rated worse than those in qualitative subjects (theater, art, humanities, etc.). In other words, difficult courses or courses taught by women are “awful.” If a course is easy, then the professor is “awesome.” Go figure.
Conclusions
Before we get to the details and supporting evidence, here is what we found.
- Professors who are rated easy are also rated “awesome” and vice versa. Quality ratings are really just a rating of easiness.
- Female professors are consistently rated worse (28% worse on average!) than male professors when compared to their true college evaluations.
- STEM professors are consistently rated worse (19% worse on average!) than non-STEM professors when compared to their true college evaluations.
- All professors are rated 10% worse on Rate My Professors than their true college evaluations.
- Professors rated in the lower third on Rate My Professors (all those so-called “awful” professors) are, on average, rated “above average” in true college evaluations.
- Conclusion: Rate My Professors is strongly biased against science and technology and against women, and almost all so-called “awful” professors are in fact above average in their true evaluations. Rate My Professors will claim it’s the students who are doing the rating and that it’s not their fault, but the site facilitates the ratings, does not stop them, and profits from them. Yes, it is their fault, and more importantly, their responsibility and their liability.
- You can find supporting evidence for everything stated here in an upcoming study made by the Academic Integrity Group.
The Data
The Academic Integrity Group has downloaded all the data Rate My Professors exposes on its site: all 1.72 million professors, with all their ratings, schools, and departments. They organized that data much the way Rate My Professors does, then added data that Rate My Professors either does not have or does not expose. They extracted the school names and departments and cross-referenced them with the schools’ overall academic ratings. They classified the departments into hard and soft sciences, hard and soft disciplines, and quantitative and qualitative fields. They also added gender information for all 1.72 million professors using historical birth records from 1910 to 2018. From this they built a comprehensive database which we are sure is more accurate than anything available to Rate My Professors.
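The Academic Integrity Group has not published its gender-classification method, but inferring gender from first names via historical birth records is a standard technique. Here is a minimal, hedged sketch of how it can work; the function name, the 90% threshold, and every count in the table are our own invented illustrations, not the group's actual data or code:

```python
# A minimal sketch of name-based gender inference from birth-record counts.
# The table below is invented for illustration; a real pipeline would load
# per-name counts from historical birth records (e.g. 1910-2018).
BIRTH_COUNTS = {
    # name: (female_count, male_count) -- all numbers hypothetical
    "maria": (96000, 400),
    "james": (900, 110000),
    "jordan": (21000, 47000),
}

def infer_gender(first_name, threshold=0.9):
    """Return 'F', 'M', or 'unknown' based on the share of birth records."""
    counts = BIRTH_COUNTS.get(first_name.lower())
    if counts is None:
        return "unknown"           # name absent from the records
    f, m = counts
    share_f = f / (f + m)
    if share_f >= threshold:
        return "F"
    if share_f <= 1 - threshold:
        return "M"
    return "unknown"               # ambiguous names are left unclassified

print(infer_gender("Maria"))       # strongly female-skewed name
print(infer_gender("Jordan"))      # ambiguous name
```

Leaving ambiguous names unclassified, rather than guessing, is what keeps an analysis like this from injecting its own bias into the gender comparisons.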
Academic Integrity Group servers continuously visit Rate My Professors and collect data. Their analysis is not simply gathering a sample set and inferring statistics. They continuously visit every professor profile and update its information, capture ratings that violate the terms of service (they have thousands already), constantly recalculate their statistics, and present them to the public.
The Academic Integrity Group knows more about Rate My Professors’ data than Rate My Professors does. And it’s not pretty.
“Quality” vs “Difficulty”
Analysis of the Rate My Professors data shows two very obvious correlations. First, as you would expect, the “Take Again” rating is very strongly and positively correlated with “Quality.” The correlation is .83 overall and as high as .88 for some categories. This is not surprising: if a student rates a professor high on quality, the student should also want to take the professor again. Makes sense. The second, more interesting correlation is between “Difficulty” and “Quality.” These are strongly inversely correlated, especially for professors with many ratings. This correlation averages -.63 overall but reaches -.89 for some categories (we suspect the true correlation is even higher but is being diluted by professors rating themselves to protect themselves). Since “Quality” and “Take Again” are strongly positively correlated, it follows that “Difficulty” and “Take Again” are strongly negatively correlated as well.
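The figures above are Pearson correlation coefficients: +1 means two ratings move in lockstep, -1 means they move in exact opposition, 0 means no linear relationship. A minimal sketch of the computation, using toy ratings invented purely for illustration (not real Rate My Professors data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy (quality, difficulty) pairs, one per student review -- invented data.
quality    = [4.5, 4.0, 2.0, 1.5, 3.0]
difficulty = [1.5, 2.0, 4.5, 5.0, 3.0]

r = pearson(quality, difficulty)
print(round(r, 2))  # strongly negative: easy classes get high quality scores
```

In this toy set, every review that calls the class hard also calls the professor low quality, so the coefficient lands near -1, the same pattern the -.63 to -.89 figures describe at scale.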
What does all that mean? Simply put, students don’t like hard classes. That is all Rate My Professors is: a rating of difficulty. Any professor or subject that is “difficult” is rated as “awful.” This alone delegitimizes Rate My Professors as a tool for rating professors. Students are not rating professors! They are rating “hardness.” The key to the entire system is “Difficulty.” When you see a poorly rated professor, odds are you are simply looking at a professor who is challenging or a subject that is challenging, and the ratings are made by students who don’t like being challenged.
Rate My Professors also allows you to “tag” a professor with tags such as “Funny,” “Extra Credit,” or “Tough Grader.” The most popular tags for professors rated as “awful” are “Lots of Homework” and “Skip Class, You Won’t Pass.” There is a .87 correlation between an “awful” rating and those tags. You don’t need anything else to tell you who is rating professors as “awful”: lazy and unqualified students who probably don’t belong in the class in the first place. Is that who you want advising you? Students who don’t like homework and want to skip class? Good luck with that.
We have presented correlation statistics, but as every good data scientist knows, correlation does not imply causation. We can very likely say that “Difficulty” and “Quality” are causally related (it is the same student rating both), but we cannot say which one causes the other. However, we claim that in this case it makes no difference: the relationship between “Difficulty” and “Quality” leads to the same conclusion in either direction. If students rate a professor as poor quality because the professor or the topic is difficult, then Rate My Professors is nothing more than a collection of students who couldn’t cut it, lashing out at the professor. If students rate a professor as low difficulty because they liked the professor, then Rate My Professors is nothing more than a popularity contest.
With causality in either direction the conclusion is the same: Rate My Professors is inaccurate, highly biased, and favors either the worst student opinions or the most personally likable professors. If you’re basing your academic decisions on either of those factors, you’re a fool.
“Quality” versus “Difficult” Subjects
Academic Integrity Group parsed all subjects that professors teach into “hard” and “soft” sciences, and into “quantitative” and “qualitative” subjects. They define the hard sciences as subjects such as math, computer science, biology, chemistry, and engineering, and the soft sciences as subjects such as psychology, sociology, and history. They define quantitative subjects as those built on relatively pure math, such as statistics, accounting, finance, computer science, and engineering, and qualitative subjects as those such as art, theater, dance, and languages. What they found strongly and consistently reinforces the conclusion above: the more quantitative a discipline, the more likely a professor is to be rated as poor quality and high difficulty. The data conclusively shows this relationship. Students are simply biased against hard work, and when the hard work is in areas with strong quantitative components, it’s even worse.
Disciplines with hard, quantitative answers had a 26% stronger negative correlation between “Difficulty” and “Quality”! Disciplines open to interpretation, such as art, had a 20% weaker correlation between “Difficulty” and “Quality.” This relationship holds across all the data in the study and grows even stronger as the number of ratings per professor increases.
Consider one review of a math professor that says nothing about the professor’s teaching and instead levels a charge of racism without any proof. Not only does this violate Rate My Professors’ terms of service (despite their claims, they obviously moderate nothing), it labels the professor a racist, and that label now ranks at the top of a Google search for the professor’s name. How would you like that to happen to you for just doing your job? If a student truly believes a professor is racist, the accusation should be taken to the institution itself for disciplinary action, not posted on a for-profit website such as Rate My Professors. Do you know why it’s made on Rate My Professors instead? Because it’s probably a lie.
To make this point even stronger, when the analysis is restricted to purely quantitative disciplines like math, engineering, and computer science, the negative relationship between “Difficulty” and “Quality” rose another 12% above the same relationship for purely qualitative subjects like art and theater! This reinforces our previous conclusion: when you see a poorly rated professor, odds are you are simply looking at a challenging professor or a challenging subject, and the ratings are nothing more than the work of weak students who couldn’t rise to the challenge.
“Difficulty” versus Female Professors
Here is where it gets ugly. Rate My Professors facilitates sexism, and we can prove it. Don’t forget, this is a site that for 17 years had a “hotness” rating that was regularly used to disparage and humiliate female professors. They took it down only after a female professor’s story of humiliation went viral. They didn’t take it down for ethical or moral reasons; they did it to avoid a lawsuit and a boycott. But the sexism remains, and we can prove it.
Just as “Quality” versus “Difficulty” was analyzed by discipline, the data was split by gender and cross-referenced against discipline. Female professors are rated up to 28% lower than male professors across all disciplines and all rating counts when compared to their real college evaluations. When discipline is added to the analysis, female professors in the hard sciences were rated up to 38% lower than their male counterparts! These numbers are shockingly consistent. All the data suggests students are simply rating on ease and on personal prejudice and bias. And Rate My Professors facilitates it for profit, just as it did the “hotness” ratings.
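One plausible reading of a “rated X% lower when compared to their real college evaluations” figure is the relative shortfall of the Rate My Professors score against the official evaluation, averaged per group. The study’s exact formula is not published, so this is a hedged sketch under that assumption, with wholly invented records:

```python
# Sketch of a "rated X% lower than their true evaluations" comparison.
# Each record is invented for illustration: an RMP quality score and an
# official college evaluation, both normalized to the same 0-5 scale.
profs = [
    # (gender, rmp_quality, college_eval) -- hypothetical records
    ("F", 2.8, 4.2), ("F", 3.0, 4.0), ("F", 2.5, 4.1),
    ("M", 3.6, 4.1), ("M", 3.9, 4.3), ("M", 3.5, 4.0),
]

def mean_gap(records, gender):
    """Average relative shortfall of the RMP score vs the college evaluation."""
    gaps = [(evaluation - rmp) / evaluation
            for g, rmp, evaluation in records if g == gender]
    return sum(gaps) / len(gaps)

gap_f = mean_gap(profs, "F")
gap_m = mean_gap(profs, "M")
print(f"female shortfall: {gap_f:.0%}, male shortfall: {gap_m:.0%}")
```

The key design point is the baseline: both genders fall short of their official evaluations, but the comparison isolates how much further one group falls, which is what a gap like “28% worse” measures.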
It’s hard enough achieving a PhD in any topic, let alone in a quantitative subject, and there is no denying the additional difficulty women face in every aspect of society (especially fields like math, science, and engineering). These numbers are a testimony to that fact. Women are discriminated against simply for being women, especially in the STEM fields, and Rate My Professors is right there to profit from that discrimination. If you participate on Rate My Professors then you are contributing to a culture of discrimination against women, especially women in science, technology, and math. It’s as simple as that.
Comments regularly violate the so-called terms of service of this trash site. Female professors are routinely humiliated and embarrassed on Rate My Professors, called “bitch,” “cunt,” and every other name in the book.
Rate My Professors, keeping women in their place since 1999.
“Difficulty” versus Ethnicity and Race
Academic Integrity Group is currently studying “Difficulty” and “Quality” versus race and ethnicity. Their initial findings indicate an even stronger negative correlation between “Difficulty” and “Quality” when the professor belongs to an ethnic minority than the one they found for women. Their initial data suggests (but does not yet prove; they are still investigating) that Black professors at predominantly white colleges and universities have substantially lower quality ratings than all other classifications. Interestingly, this does not appear at predominantly Black colleges and universities, which would support the conclusion that, in addition to facilitating sexism, Rate My Professors facilitates racism. Stay tuned.
Final Words
At best, Rate My Professors is a popularity contest and an “easiness” rating. At worst, it’s a tool of discrimination against women and minorities. Either way, it’s a collection of clickbait nonsense. The ratings are unreliable and unethical, and the site contributes to a culture of anti-intellectualism and sexism. It is apparent that negative comments on Rate My Professors are overwhelmingly made by students who simply can’t cut it. When you use Rate My Professors to pick your professors, all you are doing is letting some weak, poor-quality student influence your academic career. Sound smart? Probably not.
If you are a college student, then you are in school to learn. So learn. You have the data now. Rate My Professors is clickbait garbage designed to use you for clicks at the expense of your professors who are there to help you.
Reject this trash site. Boycott Rate My Professors.
Brought to you by Rating Professors.