Applicant Rating System

Introduction

As some of you may have seen, I’ve recently been pioneering a new system that helps applicants figure out where they stand with respect to medical school admissions and gives them a place to start when creating a school list. My system is a comprehensive algorithm that takes into account all of the major (and some of the minor!) factors that go into building a successful application! This post aims to elucidate the process by which this method scores an applicant and to solicit community input on the algorithm in an attempt to strengthen it even more.

When I first started building this system, I used a Google Docs spreadsheet to make notes and create the initial versions of some of the formulas that go into this program. To do this, I scored myself, other applicants I knew in real life, and many applications I found in the What Are My Chances (WAMC) forum, adjusting the rating scales to try to create a generalized model that placed applicants into appropriate discrete categories.

Once I had my initial quantitative rating system in place, I wrote a Python script that allowed me to easily score an applicant based on the factors normally included in a WAMC thread and to generate the output that I normally post in those threads. This is the point at which I also started posting in threads to see how well my formulations matched up with community suggestions.

Finally, after some more tweaking, I created a comprehensive Excel document that contains instructions, qualitative descriptions of each factor (each reduced to a numerical score), a place to input score values and receive an overall score along with a category level and school breakdown, and a page that displays which schools are in which categories. This document is available for download.

I will go through each of these factors in this post to articulate how they fit into the overall scoring paradigm, as well as to solicit input from the SDN community about how to increase the accuracy of this system.

The LizzyM System

This system was originally created as a supplement to, not a replacement for, the already widely-utilized LizzyM scoring system. As a reference, the LizzyM score is defined as (GPA*10)+MCAT and may contain a +1 or -1 modifier in certain situations. The applicant’s LizzyM score is then compared to the LizzyM score for a school to determine whether or not the applicant is statistically competitive for that school. However, the inherent simplicity of the LizzyM score, while making it quick and easy to generate and apply, also creates problems endemic to systems that reduce and generalize. The two major simplifications are the reduction of an entire application to two (already numerical) metrics and the assumption that the LizzyM score accounts for the majority of, if not all of, the variability attributed to selectivity.

While there is merit to these assumptions, which is why the LizzyM score is so widely used, there are also deficiencies that need to be addressed in order to create a more accurate system for assessing an application. One of these deficiencies is that schools with similar LizzyM scores may differ considerably in competitiveness. For example, although UVA and Duke have identical LizzyM scores, it is clear that Duke is a much more selective school than UVA. Additionally, small differences in LizzyM score become significant when using this metric to assess competitiveness for two similar schools. For example, Duke has a LizzyM score of 75, while Yale has a LizzyM score of 76; both schools are similarly selective, but someone might (very mistakenly) advise an applicant with a 3.9/36 that they are more competitive for Duke than they are for Yale. Finally, the LizzyM score is used to tell whether someone is statistically competitive for a single school, and it is significantly less useful for helping an applicant come up with a list of schools.
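
To make this concrete, here is a minimal Python sketch of the LizzyM comparison described above. The formula is the one given earlier, (GPA*10)+MCAT with an optional +1/-1 modifier, and the school scores are the ones quoted in this post (Duke 75, Yale 76). The function names are my own, and treating "competitive" as simply meeting or exceeding the school's score is a deliberate simplification for illustration, not an official rule.

```python
def lizzym_score(gpa: float, mcat: int, modifier: int = 0) -> float:
    """LizzyM score as defined above: (GPA * 10) + MCAT,
    plus an optional +1/-1 modifier used in certain situations."""
    return gpa * 10 + mcat + modifier

# School scores quoted in this post (old MCAT scale).
SCHOOL_SCORES = {"Duke": 75, "Yale": 76}

# The 3.9/36 applicant from the example above -> 75.0
applicant = lizzym_score(3.9, 36)

for school, score in SCHOOL_SCORES.items():
    # Simplifying assumption: "statistically competitive" means the
    # applicant's score meets or exceeds the school's score. A one-point
    # gap flips the verdict, which is exactly the pitfall described above.
    verdict = "competitive" if applicant >= score else "not competitive"
    print(f"{school}: school {score}, applicant {applicant:.1f} -> {verdict}")
```

Run as-is, this flags the 3.9/36 applicant as competitive for Duke but not for Yale, even though the two schools are similarly selective, which is the failure mode the ARS is meant to correct.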

The Applicant Rating System – Overview

The WedgeDawg Applicant Rating System (ARS) was created to address these deficiencies. It takes into account most of the factors that make up an application to medical school, gives the applicant a separate score for each one, and then combines those scores into a single numerical rating. This rating is then translated into a category level, and a profile of schools to apply to is created based on that category.

One of the major assumptions of the ARS is that applicants can be broadly classified in terms of competitiveness into one of 6 categories. Within these categories, distinctions between applicants are much smaller than the differences between applicants in separate groups. Much of the variability between two applicants in the same group comes from subjective parts of the application that are not taken into account here, namely the personal statement, letters of recommendation, secondary essays, and interviews. Because the purpose of the ARS is to create a starting point for a school list, these factors are not yet relevant. Indeed, the ARS does not assess where an applicant will be accepted; rather, it determines the best collection of schools for the applicant to apply to in order to maximize the chances of success at the best schools realistically possible.

The following factors are taken into account by the ARS (a sketch of how they might combine into a single rating follows the list):

  1. GPA
  2. MCAT
  3. Research
  4. Clinical Experience
  5. Shadowing
  6. Volunteering
  7. Leadership and Teaching
  8. Miscellaneous
  9. Undergraduate School
  10. Representation in Medicine
  11. GPA Trend
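
To give a sense of the mechanics, below is a minimal Python sketch of how these eleven factor scores might combine into a rating and one of the 6 categories. The weights, the 0–10 score scale, and the category cutoffs here are placeholders of my own invention, not the actual ARS values; the real values live in the Excel document and the factor-by-factor discussion that follows.

```python
# Hypothetical sketch of the ARS pipeline: per-factor scores -> weighted
# rating -> one of 6 category levels. All weights and cutoffs below are
# illustrative placeholders, NOT the actual ARS values.

FACTOR_WEIGHTS = {
    "gpa": 0.25, "mcat": 0.25, "research": 0.10,
    "clinical_experience": 0.10, "shadowing": 0.05, "volunteering": 0.05,
    "leadership_teaching": 0.05, "miscellaneous": 0.05,
    "undergraduate_school": 0.04, "representation": 0.03, "gpa_trend": 0.03,
}  # weights sum to 1.0

# Placeholder cutoffs mapping a 0-10 weighted rating to categories 1-6
# (1 = most competitive).
CATEGORY_CUTOFFS = [(8.5, 1), (7.5, 2), (6.5, 3), (5.5, 4), (4.5, 5)]

def ars_rating(scores: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each assumed to be on a 0-10 scale."""
    return sum(FACTOR_WEIGHTS[f] * scores.get(f, 0.0) for f in FACTOR_WEIGHTS)

def ars_category(rating: float) -> int:
    for cutoff, category in CATEGORY_CUTOFFS:
        if rating >= cutoff:
            return category
    return 6

scores = {f: 7.0 for f in FACTOR_WEIGHTS}  # a uniformly solid applicant
rating = ars_rating(scores)
print(f"rating {rating:.2f} -> category {ars_category(rating)}")
```

The point of the weighted-sum structure is that a strong showing in a heavily weighted factor (GPA, MCAT) moves an applicant between categories, while small differences within a category get absorbed, which matches the assumption above that within-category distinctions are dominated by the subjective parts of the application.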
