Tall and dark-haired, the third-year medical student always seemed to be the first to arrive at the hospital and the last to leave, her white coat perpetually weighed down by the books and notes she jammed into its pockets. She appeared totally absorbed by her work, even exhausted at times, and said little to anyone around her.
Except when she got frustrated.
I first noticed her when I overheard her quarreling with a nurse. A few months later I heard her accuse another student of sabotaging her work. And then one morning, I saw her storm off the wards after a senior doctor corrected a presentation she had just given. “The patient never told me that!” she cried. The nurses and I stood agape as we watched her stamp her foot and walk away.
“Why don’t you just fail her?” one of the nurses asked the doctor.
“I can’t,” she sighed, explaining that the student did extremely well on all her tests and worked harder than almost anyone in her class. “The problem,” she said, “is that we have no multiple-choice exams when it comes to things like clinical intuition, communication skills and bedside manner.”
Medical educators have long understood that good doctoring, like ducks, elephants and obscenity, is easy to recognize but difficult to quantify. And nowhere is the need to catalog those qualities more explicit, and charged, than in the third year of medical school, when students leave the lecture halls and begin to work with patients and other clinicians in specialty-based courses referred to as “clerkships.” In these clerkships, students are evaluated by senior doctors and ranked on their nascent doctoring skills, with the highest-ranking students going on to the most competitive training programs and jobs.
A student’s performance at this early stage, the traditional thinking went, would be predictive of how good a doctor she or he would eventually become.
But in the mid-1990s, a group of researchers decided to examine grading criteria and asked directors of internal medicine clerkship courses across the country how accurate and consistent they believed their grading to be. Nearly half of the course directors believed that some form of grade inflation existed, even within their own courses. Many said they had increasing difficulty identifying students who could not meet a “minimum standard,” whatever that might be. And over 40 percent admitted they had passed students who should have failed their course.
The study inspired a series of reforms aimed at improving how medical educators evaluated students at this critical juncture in their education. Some schools began instituting nifty mnemonics like RIME, or Reporter-Interpreter-Manager-Educator, for assessing progressive levels of student performance; others began to call regular meetings to discuss grades; still others compiled detailed evaluation forms that left little to the subjective imagination.
Now a new study published last month in the journal Teaching and Learning in Medicine looks at the effects of these many efforts on the grading process. And while the good news is that grade inflation in medical schools has proceeded more slowly than in colleges and universities, the not-so-good news is that little has changed. A majority of clerkship directors still believe that grade inflation is an issue, even within their own courses, and over a third believe they have passed students who probably should have failed.
“Grades don’t have a lot of meaning,” said Dr. Sara B. Fazio, lead author of the paper and an associate professor of medicine at Harvard Medical School who leads the internal medicine clerkship at the Beth Israel Deaconess Medical Center in Boston. “‘Satisfactory’ is like the kiss of death.”
About a quarter of the course directors surveyed believed that grade inflation occurred because senior doctors were loath to deal with students who could become angry, upset or even litigious over grades. Some confessed to feeling pressure to help students get into more selective internships and training programs.
But for many of these educators, the real issue was not flunking the flagrantly unprofessional student, but rather evaluating and supporting the student who needed only a little extra help in making the transition from classroom problem sets to real-world patients. Most faculty received little or no training or support in evaluating students, few came from institutions with remediation programs to which they could direct students, and all worked under grading systems that were subjective and not standardized.
Despite the disheartening findings, Dr. Fazio and her co-investigators believe that several continuing initiatives may address the evaluation issues. For example, residency training programs across the country will soon be assessing all doctors-in-training with a national standards list, a series of defined skills, or “competencies,” in areas like interpersonal communication, professional behavior and specialty-specific procedures. Over the next few years, medical schools will likely be adopting a similar system for medical students, creating a national standard for all institutions.
“There have to be unified, transparent and objective criteria,” Dr. Fazio said. “Everyone should know what it means when we talk about educating and training ‘good doctors.’”
“We will all be patients one day,” she added. “We have to think about what kind of doctors we want to have now and in the future.”
Doctor and Patient: Why Failing Med Students Don’t Get Failing Grades
http://throwingnews.blogspot.com/2013/02/doctor-and-patient-why-failing-med.html