This last week in hospital, the third week-long admission following a bowel obstruction with nausea and vomiting, has given me another perspective on what patients want from their healthcare experience, particularly in terms of timeliness and time spent by clinicians.
From my experience as a patient over the past 16 months, I've been willing to wait, patiently, to see various professionals and to have tests done, trusting that this is necessary to diagnose and better treat my condition.
I have appreciated the students, nurses, and doctors who, over the last four weeks, have taken the time to take a thorough history from me and to answer my questions and concerns about the relative benefits of colonoscopy, gastroscopy, and laparoscopy.
I have appreciated their candidness in discussing the pros and cons of various scans and procedures.
I have been willing to submit to invasive procedures that have been necessary in the diagnostic process, and I am glad we decided not to pursue a laparoscopy when none of the surgeons could see how it would be helpful, even though the cause of my symptoms was still unknown. When the symptoms recurred, I had a laparotomy to correct the twisted bowel, which had only been apparent on the first X-ray in early May.
I understand that the doctors and nurses did not want to put me through unnecessary suffering or procedures. The care and compassion shown by the clinicians makes unpleasant, invasive procedures, and the waiting times, bearable.
I was fasted from Tuesday to Friday for two consecutive weeks waiting for procedures, first a gastroscopy, then an endoscopy. The nausea and IV fluids overrode any hunger; I was just grateful that I was being cared for. I wanted answers, and was willing to deal with the waiting and uncertainty in order to get them. Pain relief also helped. Developing a topical allergic reaction to morphine was annoying, as it took away a quick and effective form of pain relief. And the pain was bad, worse than being in labour.
My recent experience has made me re-evaluate a recurrent topic of debate in Australian neuropsychology. I started my training at Melbourne Uni in 1990, and the duration of neuropsychological assessments has been a persistent area of discussion and disagreement within the field ever since, with many clinicians worried that "long" assessments may cause patients fatigue, frustration, or distress, and some arguing that long, comprehensive assessments were a waste of time for both patients and clinicians. This seemed to be based on the belief that an experienced and competent clinician should know exactly which tests to give each patient, so that the assessment is over in the least possible time. It was argued that assessments should be done as quickly as possible to avoid subjecting patients to the "adverse experience" of testing, and to improve our clinical efficiency so we can see more patients in less time. My preference for comprehensive diagnostic assessments has been called obsessional, rigid, anal-retentive, cruel, inhumane, overly anxious, excessive, or a sign of questionable competence.
As a patient of 14 months' experience, involving more than 15 admissions and over 150 days in hospital, I can tell you that having a neuropsychological assessment under the care of a compassionate and flexible neuropsychologist would only be perceived as an invasive or adverse experience by the rarest of patients. No blood is collected. No veins are punctured, often repeatedly, in various places, in attempts to take blood or insert cannulas. No disgusting contrast liquid needs to be ingested. There is no risk of life-threatening complications from the assessment. It doesn't cause physical pain. Neuropsychological assessments do not cause nausea, vomiting, or diarrhoea, or fear of dying during surgery. There is always the chance to take a break between items or subtests if fatigue, distractibility, or distress is an issue. There is always the chance to talk about a test if it triggers a strong emotional response, but I've rarely seen that happen, except in patients with a past history of learning difficulties who hated maths at school, or in an anxious patient whose anxiety was heightened by an anxious student who kept apologising for the assessment. Strangely, the most painful procedures I've experienced have been at the hands of anxious or apologetic nurses or medical students. It's better if they just get on with it.
I suspect that neuropsychologists have been over-sensitised to the risks of causing distress or harm to patients through our research ethics applications, which always ask whether the research could be distressing to the patient, and through a desire to avoid the bad old days of bilateral ECTs, frontal lobotomies, and the 'deep sleep therapy' investigated by the Chelmsford Royal Commission. We've never been involved in those kinds of procedures, and our tests are standardised on healthy controls, for whom the Wechsler Memory and Intelligence scales are given on a single day for co-norming purposes. This shows that healthy adults can complete the tests in a single day without adverse effects, apart from the fatigue that often surprises people once the testing is over. Patients often seem to find it an interesting and enjoyable experience, albeit challenging and confronting at times.
The Wechsler Intelligence and Memory scales, since the publication of the WMS-R, were meant to be given together, and this is done routinely in the US, where a full battery of neuropsychological tests can take from 8-10 hours, often administered by a trained psychometrician. I can't recall seeing any concern about this assessment time in my years lurking on North American neuropsychology lists.
Our recent CCN survey of Australian neuropsychologists found a mean assessment time of about 5 to 7 hours. The mean durations of "short", "medium", and "long" assessments ranged from 1 to 20 hours, and the categorisation of assessments into short, medium, and long varied according to the average assessment time of each responding clinician.
In the 20 years that I have been giving neuropsychological assessments, first on clinical placement, then for my PhD research, and then in mental health, neuroscience, general medicine, and rehabilitation settings, I can only recall a handful of patients who did not tolerate a comprehensive assessment with the Wechsler memory and intelligence scales and a number of additional tests of verbal fluency, confrontation naming, mood, anxiety, and premorbid abilities. The ones I remember best were the young lady with borderline personality disorder who nearly pushed me over when I asked her to accompany me to the office for an assessment, and the agitated young man who hated maths and stormed out of the office when I asked him the first arithmetic question.
Most of the 2000 patients seen directly by me, my students, and colleagues at St Vincent's from 1994 to 2009 were appreciative of the time we spent with them in interview, assessment, and feedback, and if any became fatigued or distressed, we gave them the opportunity to take a break, or to complete the assessment another time. Some accepted, but others preferred to complete the testing that day, as they'd often already waited months to see us. In fact, we probably found the assessments more tiring than the clients did, due to the need to maintain our concentration and adherence to standardised administration while recording observational data and attending to the client's needs.
With all due respect to my colleagues who are concerned that lengthy assessments might subject our clients to distressing experiences, I think we need to compare what we do to other, more common health procedures.
A dental exam, scale and clean is uncomfortable and sometimes painful. A Pap smear or rectal exam is undignified and not the kind of thing we discuss in polite company. Having bloods collected or cannulas inserted is painful, and can provoke more anxiety the more often you have them, especially if your veins are scarred from being accessed multiple times, or when they start collapsing.
Granted, all these procedures take less time than a neuropsychological assessment, but competent neuropsychological assessment is not invasive, painful, or undignified. With the exception of a few timed tests and tests of memory, it's always possible to stop midway through an individual subtest. If any of our tests caused significant distress or lasting damage, they would not be published.
Interestingly, a classmate of mine was asked to review a patient who had been assessed the year before by another student on placement. On seeing the Austin Maze being pulled from the box, the patient burst into tears and begged my friend not to give it to her. Review of the previous assessment showed that the patient had been made to do the maze dozens and dozens of times in pursuit of the learning criterion of two error-free trials. The finding by Bowden et al in the early 1990s of a highly significant correlation between the total errors on the first ten trials of the test and the number of trials to learning criterion obviated the need to push patients to the learning criterion on the Maze, but this development hadn't translated into clinical practice, where the ability to achieve error-free trials was seen as evidence of the ability to inhibit perseverative errors. The development of the WMS-III and WMS-IV made it redundant as a test of visuospatial memory.
I suspect that patients are willing to submit to invasive, undignified, or painful procedures because they trust the clinician will not subject them to unnecessary tests and, crucially, will not omit necessary ones either. Omitting necessary tests in the interests of time makes a mockery of the time the patient has spent waiting to see the clinician to obtain answers to their questions and solutions to their problems.
We seem to forget that the word patient has two meanings - one is a noun used synonymously with client. The other is an adjective which describes how a client is willing to behave on the assumption that their interests are being looked after, and that they will receive the answers they desire from the clinicians they go to see. After waiting to see a clinician, clients want the clinician to reward their patience with an assessment that gets to the bottom of the condition affecting their health, and they don't want clinicians to do an incomplete assessment which wastes the time spent waiting in hope of answers. They want us to get it right the first time, even if it takes a number of hours or sessions to do it. They trust us to get it right, and not waste our time on tests that are outdated or insensitive, and therefore more likely to get it wrong.
We've probably all had the experience of knowing that something is wrong with our car, and taking it to the mechanic who checks it over, changes the oil, and says everything is okay, only to drive it home and have it break down on the way, or within the next week. This experience seems to make many people furious. They take it to a different mechanic who runs more sophisticated or thorough diagnostic tests, and discovers that the clutch plates are worn out, the engine is dropping two cylinders, or that the battery is almost dead. Things that should have been discovered if a thorough assessment had been done in the first place. The motorist is understandably angry at the inconvenience, the wasted time and money, and vows to never return to the first mechanic again.
Clients are like car-owners - they are willing to invest time and effort into diagnostic testing when there is a problem, on the implicit understanding that the diagnostician will do everything necessary to get it right the first time. They understand that sometimes things can be missed in even a thorough assessment, and are forgiving of that. It sometimes takes a while for symptoms to develop to the point that the diagnosis becomes apparent. But motorists are less forgiving if a mechanic cuts corners and doesn't fully assess the underlying causes of the presenting problem, perhaps because of internal or organisational pressures to constrain costs, perhaps because of a lackadaisical attitude, perhaps because of a well-intentioned concern about not charging for unnecessary tests. The motorist sees this as a waste of their time and money. They paid the mechanic to get it right the first time. We clinicians owe our clients the same. They have patiently waited to see us, and they are willing to submit to our tests, trusting that we know and use the best tests available. Compared to the painful and invasive procedures they have experienced as a result of their illness or injury, spending 5 or more hours sitting with a friendly, compassionate and caring psychologist who provides cups of tea and regular breaks, and is willing to listen to their experiences as a patient, is seen as a welcome change.
Spending time with a neuropsychologist would be like going to a day spa when compared with the often abrupt and starkly clinical efficiency of other medical procedures, where patients and families can feel swept up on an impersonal conveyor belt.
Unlike motorists, patients are in a very vulnerable position, and often don't feel empowered to ask questions or give feedback to clinicians. If we don't ask for their feedback, how can we improve? One way to get honest and unbiased feedback is to provide a standard feedback form and reply-paid envelope for patients and family to complete anonymously after the assessment is over.
So how can we improve what we do for our patient clients? Firstly, we can ask them what they most want to learn or gain from their interaction with us, so that we can assess their needs and desires, and sometimes quickly give them what they want without embarking on a full assessment. When I asked to see the dietician prior to my discharge from hospital last week, all I wanted was some advice on how to reintroduce food to an irritated stomach that had been fasted for nearly two weeks. (I wasn't sure what to eat; I was hungry, yet afraid to eat anything lest it cause more pain.) She sat down on my bed, and before I could tell her what I wanted, she said that she didn't have time to see me, but would arrange an outpatient appointment for me in a couple of weeks' time. Then she was gone. My nurse rang the dietetics department and told the receptionist what I had wanted to know; the receptionist asked the dieticians, and advised the nurse to look up low-fibre diets. If the dietician had simply let me speak, she could have told me the same thing in less than a minute. While I understand the pressure on their time, with multiple referrals from around the hospital, it was clinically inefficient to come and tell me she couldn't see me without first clarifying the reason for the referral. She could have answered it in a minute, and saved wasting the receptionist's time putting me down for an outpatient appointment. It turns out that the written referral was completely wrong, and said I'd wanted advice on managing constipation!
How else can neuropsychologists improve what they do?
- Respond to new referrals as soon as possible for both inpatients and outpatients. Get a brief idea of what the patient wants to learn, don't assume the referral question is correct, and do a brief screen for untreated mood or anxiety issues that may add unnecessary caveats to interpretation of your assessment. Unless there is an urgent clinical need, do not assess a clinically depressed or anxious person who has not had treatment. Explain the reason for this to the referrer.
- Educate the patient about what to expect from the assessment by attaching a brochure about neuropsychological assessment to your appointment letter.
- Be brave and evaluate what we do at present, and see if it is necessary for each patient. If a patient and their family have a clear and stable diagnosis with no questions about ability to return to work or study, or concerns about symptom progression, perhaps they just need education on their condition, rather than automatically progressing to a full assessment.
- If they have a condition that may change over time, they need a baseline assessment that will provide a good point of comparison for improvement or deterioration in future functioning. Such an assessment would use measures with high test-retest stability, which is best obtained by using the composite index scores of the Wechsler Memory and Intelligence Scales. Individual subtests of the Wechsler Scales rarely have the same stability over time in healthy people as the index scores do, meaning that it's harder to detect true change on subtest scores alone (the sketch after this list illustrates how much this matters). So if you're seeing someone with an acquired brain injury who may want to return to study or work, or someone with a progressive neurological disorder who may lose the capacity to function independently, it is going to be more helpful for them if you assess them with the full Wechsler scales so that you can determine the degree of change over time, presuming they aren't so severely impaired at baseline that formal assessment is impossible.
- Unless you try to formally assess someone, you can't presume that they are too impaired to be assessed. I have seen patients who were severely impaired on baseline testing, who improved over time. Even people with significant sensory, motor, or expressive problems, including blindness and deafness, can get subtest and index scores ranging from extremely low to superior. If we don't assess, we will never know.
- Of course, if someone is severely ill, in considerable pain, confused, or still in the recovery phase of an acquired brain injury, the test results will not be as valid or reliable as when they are clinically stable. It's okay to delay an assessment until the patient is well enough to do it to the best of their abilities.
- We can stop being embarrassed and apologising for what we do. We have the best tests available to assess cognition, memory, mood, and behaviour, and we are the experts in understanding the impact of brain disorders on the whole person. People come to us for answers and assistance, and we need to acknowledge where we can and cannot help.
- We need to regularly evaluate the tests that we use according to our ethics code and code of conduct, and to ensure that every test we use is measuring what it is supposed to be measuring. We need to be experts in the clinical applications and importance of reliability, validity, sensitivity, and specificity. Practically speaking, it's like checking that your grandmother's kitchen scales are as accurate as a modern digital scale, and keeping them for decoration only if they are not. Or, if her scales are accurate but calibrated in outmoded imperial measures, printing out a conversion chart from imperial to metric so that you don't miscalculate the amount of sugar you use in your macaroons. In baking, accurate measurements are vitally important, as they are in science and neuropsychology.
- Give up on tests that you feel you can interpret accurately based solely on years of experience and your clinical intuition. This approach can result in widely divergent impressions both between and within clinicians. It's a little like measuring a pinch of salt with your hands instead of standard measures, or guesstimating one cup of flour. Too much salt will ruin your macaroons. Too much flour will make your sponge cake dry. How can we achieve consistent results without consistent recipes and standardised measures? Our assessments require precise measures, just like baking sponges or macaroons. They're not casseroles and curries that can be seasoned to taste and adapted to available ingredients.
- We should consign unreliable, invalid, old, and outdated tests to the bookcase rather than continue to use tests that provide results with a standard error of measurement big enough to drive a truck through. Anything with a reliability of less than 0.7 is generally considered unacceptable (see the sketch after this list for how quickly measurement error blows out as reliability drops). From memory, this means that we should dispense with Trails A & B, the RAVLT, the L'hermitte Board, the colour-form sorting test, and the Wisconsin Card Sorting Test. We have better tests for psychomotor speed, verbal and visual memory, and fluid intelligence. Surely it's worse to make a patient spend time completing inaccurate and uninterpretable tests than to make them spend a few hours on a battery of highly reliable and well-standardised ones that were designed to be used together?
- Purchase and use tests like the BRIEF or BRIEF-A, or the FrSBe, to get patient and informant reports of a range of frontal-system behaviours that can be compared to age- and gender-matched norms. These inventories reveal whether there are problems with impulsivity, disinhibition, inappropriate behaviour, planning, or organisation, and whether there has been a change from premorbid levels. They are much more revealing than relying on the WCST and Trails to measure frontal function; they save time in the clinical interview, and can help reveal lack of insight, over-reporting, protectiveness, or under-reporting in patients and informants.
- Be honest in obtaining and reporting test results. If the patient comes to you in so much fatigue, depression, anxiety or pain that you think it will invalidate the test results, do not proceed with the assessment. Reschedule it for when they feel better, or see if pain relief or a warm drink can alleviate their discomfort and start the testing a little later. If the testing is affected by the development of fatigue, reductions in concentration, or increased anxiety, discontinue the assessment so you can get the patient's best performance at another time, and note the details on the record forms and in the report.
- If a comprehensive assessment is not obtainable, detail the reasons in the report, along with any caveats this may place on the interpretation.
- Don't be embarrassed or apologetic if you're not sure what to make of the assessment results. Describe what you found, and list the possible interpretations. Human problems are highly variable and complicated, and it's not always possible to be sure of the answers. Better to acknowledge this than to put on a cloak of false confidence in your conclusions. Better to lay your decision-making cards on the table - "it could be one of these things, but I'm not sure. It seems more likely to be a, b or c, and less likely to be z. On an outside chance, it could be xyz". Don't assume primary responsibility for arriving at a diagnosis - your effort is just a part of a multidisciplinary assessment. As important as it is, it's just one cog in the wheel. Putting your differentials in the report allows others to consider your hypotheses, and may help them clarify their own.
- Remember your first duty of care is to the patient. Ask what they want to learn or achieve from seeing you, and try to give that to them if it is possible, or reformulate their questions and desires into a form that is possible for you to address.
- Don't let bureaucrats dictate how long your assessment should take based on the "time is money" premise, which is offensive and demeaning to patients, and disrespectful of clinicians who are the experts in their field. Each patient will need a different amount of time to be assessed. Some will skim through everything in minimal time; others will take longer, due to a multitude of factors including personality, impulsivity, fatigue, concentration, and motivation. Each patient deserves the best assessment for their individual circumstances, even if it takes a little more time. We wouldn't accept half a brain scan because it was taking too long, would we? If the patient were allergic to the contrast medium, we might not get an MRI with contrast, but the reason would be documented and the absence of the data taken into account when interpreting the non-contrast scans.
- Remember to give and elicit feedback on the assessment process once it's over, and keep records of the feedback you receive, so that you can continually improve on the service that you provide.
- Make time to meet with the primary carer individually. They often find it hard to express their needs and concerns in front of the patient, and often don't know what to ask for in terms of education and supports.
- Refer on to a multidisciplinary allied health team if the patient isn't already linked to one, so that the patient can benefit from the range of professional services that are available.
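For readers who like to see the numbers behind the reliability and baseline-stability points in the list above, here is a rough back-of-the-envelope sketch in Python. The reliability values (0.65 for a weak subtest-level score, 0.95 for a strong composite) are illustrative assumptions rather than figures from any particular test manual, and the formulas are the standard classical-test-theory ones for the standard error of measurement and of a difference score.

```python
# Back-of-the-envelope psychometrics: how reliability translates into
# measurement error, and how large a retest change must be before it is
# unlikely to be error alone. Reliability values are illustrative assumptions.
import math

SD = 15        # standard-score metric (mean 100, SD 15)
Z_95 = 1.96    # two-tailed 95% criterion

def sem(sd, reliability):
    """Standard error of measurement: sd * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def reliable_change_threshold(sd, retest_reliability, z=Z_95):
    """Smallest test-retest difference unlikely to be measurement error alone,
    using the classic standard error of the difference: sd * sqrt(2 * (1 - r))."""
    se_diff = sd * math.sqrt(2 * (1 - retest_reliability))
    return z * se_diff

for label, r in [("weak subtest-level score (r = 0.65)", 0.65),
                 ("strong composite/index score (r = 0.95)", 0.95)]:
    print(f"{label}:")
    print(f"  SEM = {sem(SD, r):.1f} points; "
          f"95% CI = +/-{Z_95 * sem(SD, r):.1f} points around the obtained score")
    print(f"  change needed to outrun error = "
          f"{reliable_change_threshold(SD, r):.1f} points\n")
```

On these assumed values, the weaker score carries a 95% confidence band of roughly plus or minus 17 points and needs a swing of about 25 points before a change outruns measurement error, whereas the stable composite needs fewer than 10 - which is the whole argument for anchoring baselines to index scores.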
An anonymous person commented in the feedback on the 2012 CCN conference that I was pushing an agenda of "testing until the cows come home." This was in response to my comments on a student case presentation, where I expressed concern about the apparently common practice of omitting core subtests of the WMS-IV and assuming that giving only Logical Memory or Verbal Paired Associates provides a reliable measure of verbal memory. I felt bad about getting sidetracked on that issue in that session, and after the session was over I immediately apologised to the presenting student, and to the two students who subsequently had less time to present their cases. I think we parted on good terms, and I would like to apologise to anyone present who was offended by my passion for high-quality data.
For the information of the one or two people who said on the survey that the early scheduling of the session was disrespectful to the students, the previous student case conference in Melbourne in 2009 received feedback that students did not attend to support their peers because they did not want to miss the parallel sessions, and that they did not want it to be a stand-alone session during the day because that would have made them feel too exposed. In response to that feedback, the 2012 conference committee decided to schedule the student cases as an early stand-alone session. We tried to get it right, but we couldn't satisfy everyone.
In terms of testing, despite my preference for comprehensive assessments using the best tests available, I don't believe in over-assessing anyone, just in doing the best assessment possible for the individual. I'm not completely rigid in my approach, which has developed from the neuropsychological measurement research of the past 30 years. I am concerned about arriving at a false positive or negative diagnosis, and am not motivated by a desire to complete an assessment with the fewest tests possible in the shortest available time, or by the misguided assumption that more testing is always better - we have to recognise the overlap and associated redundancy of some of our measures.
What I have written above basically summarises my approach to neuropsychological assessment, developed over 24 years of reading scholarly articles, research, practice, case-conferences, and supervision. It's an approach based on doing the best assessment possible, using the best tests, for each patient, and not taking short-cuts unless it is clinically necessary. I don't see it as "testing until the cows come home," and my patients and students haven't seen it that way either, though I can understand that it is a different approach to that described in Kevin Walsh's books of the late 1980s.
My patients and their families have appreciated the time I've taken with them, and the students I've supervised have said that taking a routine and standardised approach to assessments with each patient allows them to get a better idea of the variability of test performances between and within patients, and to spend their time concentrating on the patient and the qualitative aspects of performance, rather than worrying about missing out on crucial tests.
The student comments were reassuring for me, because I trained at a time when every supervisor had a different set of favourite adult tests, which wasn't necessarily made explicit (we were told to choose them ourselves, and that we'd learn "through experience"). On placement, in trying to assess what I thought was important (and also take a history) in less than two hours, I was often embarrassed to find I'd omitted at least one test that my supervisor thought was vital, like digit span, digit symbol, arithmetic, vocabulary, similarities, comprehension, or information. On the one hand, I was admonished for leaving "vital" tests out; on the other, I was admonished for wanting to do a thorough, standardised assessment rather than a brief screen based on a hypothesis-testing approach. The thorough approach was seen as a waste of time, even though omitting tests affected my confidence in my interpretation of the results. Then I went to the RCH, where my supervisors encouraged me to do comprehensive assessments, both for the benefit of the children and their families, and as a learning experience for me as a student. I learnt that the WISC-R and the other intelligence scales available at the time gave similar results from slightly different perspectives, and that it was probably over-servicing to do two or three different cognitive batteries with the one child (like the WISC-R, K-ABC and Stanford-Binet), no matter how sweet and compliant she was. Sometimes our tests don't give us definitive answers on subtle problems.
I decided that adults deserved the same degree of thorough assessment given to children, particularly with work, study, and independence to consider, and that it was more consistent to administer the tests of the Wechsler scales in the standardised order and procedure than to pick and choose based on clinical intuition, or to invalidate the norms by "testing the limits" or otherwise violating standardised procedures on an ad-hoc basis for each patient.
Embracing the well-normed, stable, and reliable WMS-R, WMS-III and WMS-IV indices saved me from wasting patients' time on poorly normed, less reliable and less stable tests like the WMS, the RAVLT, or the norm-free L'hermitte board and colour-form sorting test. Better still, by using the WAIS-R and WMS-R in combination, there were tables to look up to see whether memory was in the range expected from intellectual functioning, rather than the guesses and clinical intuition available to users of the original WMS. This has only improved with the co-normed third and fourth editions of the Wechsler scales, and the co-norming with the WTAR.
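The logic behind those look-up tables can be sketched in a few lines. This is only a simplified, regression-based illustration of the general idea - the assumed IQ-memory correlation of 0.6 is a made-up round figure, not a value from any Wechsler manual, and the real tables are built from the co-norming samples rather than a formula like this.

```python
# A simplified illustration of the logic behind IQ-memory discrepancy tables:
# predict the memory index expected from FSIQ (allowing for regression to the
# mean), then ask whether the obtained memory index falls unusually far below it.
# The IQ-memory correlation of 0.6 is an assumed value for illustration only.
import math

MEAN, SD = 100, 15
R_IQ_MEMORY = 0.6   # assumed correlation between FSIQ and the memory index

def predicted_memory_index(fsiq):
    """Regression-based prediction of the memory index from FSIQ."""
    return MEAN + R_IQ_MEMORY * (fsiq - MEAN)

def discrepancy_z(fsiq, obtained_memory):
    """How unusual is the obtained memory score, given the prediction?
    Uses the standard error of estimate: SD * sqrt(1 - r**2)."""
    se_est = SD * math.sqrt(1 - R_IQ_MEMORY ** 2)
    return (obtained_memory - predicted_memory_index(fsiq)) / se_est

fsiq, memory = 120, 88
print(f"FSIQ {fsiq}: predicted memory index = {predicted_memory_index(fsiq):.0f}")
print(f"Obtained memory index {memory}: discrepancy z = {discrepancy_z(fsiq, memory):.1f}")
# A z of about -2 or lower would flag memory as well below expectation.
```

With these made-up numbers, a person with an FSIQ of 120 would be expected to score around 112 on the memory index; an obtained index of 88 sits about two standard errors of estimate below expectation, which is the kind of discrepancy the tables are designed to flag.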
Using the co-normed Wechsler scales in this way, I could settle into a well-rehearsed process of assessment that is more time-efficient and standardised than wondering which test to give next. It allows the clinician to act as a well-oiled conduit for the tests on one level, while remaining simultaneously aware of the patient's mood, concentration, and other qualitative features on the other. I couldn't justify sacrificing robust tests for flimsy ones, which is why I prefer composite scores to subtest scores. The principle of aggregation shows that the reliability and stability of composite or index scores is greater than that of the individual subtests that contribute to them, as the rough sketch below illustrates.
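As a minimal sketch of that principle, assuming a handful of roughly parallel subtests each with the same modest reliability, the Spearman-Brown prophecy formula shows how quickly the composite pulls ahead of any single subtest. The values of k and r below are illustrative assumptions only.

```python
# The principle of aggregation in miniature: Spearman-Brown prophecy formula
# for a composite of k roughly parallel subtests, each with reliability r.
# The values of k and r are illustrative assumptions, not published figures.

def spearman_brown(r_subtest, k):
    """Reliability of a composite of k parallel components, each with reliability r."""
    return k * r_subtest / (1 + (k - 1) * r_subtest)

r = 0.75   # a respectable but unspectacular subtest reliability
for k in (1, 2, 4):
    print(f"composite of {k} subtest(s): reliability = {spearman_brown(r, k):.2f}")
# 1 -> 0.75, 2 -> 0.86, 4 -> 0.92: the index outperforms any single subtest.
```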
I have no idea who called my approach "testing until the cows come home." I find it personally amusing, but rather offensive to our patients. Since I've been likened to a dairy farmer, I'm proud to say that I'm willing to take the time to bring all the cows home by dusk, rather than rushing it and leaving old Daisy, or the calves, out in the river paddock on a stormy night because I'd rather be inside, dry and warm by the fire. It is the farmer's responsibility to be patient and to make sure each and every one is home safely. It is the clinician's responsibility to exercise their duty of care to their patients. We may all choose to do it in different ways, but we all need to sleep comfortably at night, knowing we are practising ethically, and within accepted guidelines for our field.
In the early days, I may have been a little over-inclusive in applying the same approach to every patient, especially when they needed education and information rather than a diagnostic assessment. But I learnt that being clear on the objective of the assessment and the patient's needs allowed me to refine and tailor a consistent but individualised and responsive approach to each person. I feel more comfortable with that than trusting that a brief assessment will tell me everything I need to know to get a clear and reliable picture of the situation.
I understand the need for brief assessments in acute hospital settings or in screening or triage assessments for cases of possible dementia - if someone fails a brief screen like the MMSE or Addenbrooke's, there's clearly a problem, but depending on the case, it still may be necessary to do a more comprehensive evaluation, and I would never interpret a normal score on a brief battery as showing no evidence of impairment. Rather, I would say that there was no evidence of impairment evident on the brief and incomplete assessment conducted, and that further testing would be recommended if a more complete or detailed cognitive profile was desired.
I'm sorry if this post has been a little repetitive. I spent over five hours writing the initial draft when I should have been sleeping, but I needed to get it written so that I could stop composing it in my sleep.
PS. I'm not trying to be brave or inspiring or anything. I just want to get healthy again, so that I can enjoy being with my family and friends, and participate in society in a meaningful way. I'm incredibly grateful to have survived for 8 months after being diagnosed with two grade IV gliomas. I intend to keep improving until I'm fully recovered, and to live another 40 years so I can see my children grow up and enjoy cuddling their children.