35 APCO Public Safety Quality Assurance Webinar Questions Answered

We recently hosted an APCO Webinar on ‘Public Safety Quality Assurance (QA) Best Practices, Tips and Tools,’ featuring experienced QA professionals Sherrill Ornberg (member of the national Quality Assurance Committee, Denise Amber Lee Foundation board member and QA Director), and Nathan Lee (founder and president of the non-profit Denise Amber Lee Foundation, and member of the Recommended Minimum Training Guidelines for Telecommunicators Working Group). They were joined by Patrick Botz, NICE Public Safety Director of Engagement.

Nearly 500 public safety professionals attended the live Webinar, and many asked fantastic questions during and after it about Quality Assurance processes, standards, adoption practices, staffing, and the NICE Inform Evaluator QA software. The one hour allotted for the live event only allowed us to answer some of these questions, so we are providing all of the answers in writing here. Many thanks to Sherrill and Nathan for their collaboration on a number of these answers.

Have another question? Just ask us.

Jump down to: Questions Relating to QA Program Adoption Challenges
Jump down to: Questions Relating to Staffing for 911 Quality Assurance Evaluations
Jump down to: Questions Relating to NICE Inform Evaluator Quality Assurance Software

Questions Relating to Quality Assurance Forms and Processes

  1. What should be our goal QA score?
     
    Ultimately, the target benchmark for a passing score for each evaluation should be 90 percent. With this said, we would very strongly recommend that PSAPs start at a much lower score, say 75 percent, when they roll out their QA program. As staff become acclimated to what’s expected from them, the acceptable score can be ramped up to 80 percent, then up to 85 percent, and so on.
     
  2. Are you recommending we evaluate 2% of calls per week?
     
    Yes, according to the ANSI-Approved APCO NENA QA/QI Standard, agencies should evaluate at least 2% of all calls. Periodic weekly QA evaluations are better than end-of-month evaluations because they allow for catching and addressing issues earlier, before knowledge gaps have a chance to cause repeat problems, or before bad habits become ingrained.
     
  3. Is there an easier way to pick calls to QA if 80% of your call load is administrative calls?
     
Certainly. If your recorder supports it, you can filter recorded calls by metadata tagged from the integrated 911 call taking system. Or, even better, use integrated Quality Assurance software to set up call selection scheduling rules that auto-select recorded calls based on associated CAD incident data, such as incident type, service type (EMS, police, fire, etc.) or priority level. This will filter out administrative calls, which have no CAD data associated with them, and serve up the desired calls for evaluation.
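
    For illustration, here is a minimal sketch of metadata-based call selection, assuming a simple list of call records and the 2% sampling rate discussed above; the field names are our own invention, not any recorder's actual schema:

```python
import random

# Illustrative call records; "cad" is None for administrative calls
# that never generated a CAD incident.
calls = [
    {"id": 101, "duration_sec": 145, "cad": {"incident_type": "EMS", "priority": 1}},
    {"id": 102, "duration_sec": 30,  "cad": None},  # administrative call
    {"id": 103, "duration_sec": 410, "cad": {"incident_type": "Fire", "priority": 2}},
]

def select_for_qa(calls, sample_rate=0.02):
    """Drop calls without CAD data, then randomly sample the rest for review."""
    operational = [c for c in calls if c["cad"] is not None]
    quota = max(1, round(len(operational) * sample_rate))
    return random.sample(operational, quota)

print(select_for_qa(calls))
```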
     
  4. Can Public Safety Quality Assurance be used for discipline?
     
No, a QA program should be a learning tool focused on improvement, not a disciplinary instrument. It is meant to acknowledge telecommunicators (TCs) who are doing an exemplary job, reaffirm those who are following agency policies and procedures, and instruct those who are not performing at an acceptable level. Especially in cases of unacceptable performance, a fast turnaround is very important to ensure the review is a valuable mechanism for timely improvement.

If a TC's QA reviews reveal continued errors of the same kind after first being counseled and then placed on a performance improvement program (PIP), then disciplinary action is appropriate. One exception would be failure to dispatch a priority call, which in our opinion should be dealt with immediately through the disciplinary process, since that error is a total miscarriage of our profession's mission.
     
  5. If a telecommunicator fails on only part of a section, should they receive 0 points for the entire section? For example, let's say they answer the phone by giving the agency's name but don't give their own name. Should I apply 0 points or partial points?

    A TC either complies completely with your agency's policies or does not. Each question is scored as "yes," "no," "refused" or "N/A." There shouldn't be a "sort of" answer or partial credit for compliance with any of the policies.
     
  6. Should I score in accordance with the order of the questions on the call? For instance, let's say the TC obtained the caller's phone number at the end of the call. What are your thoughts on that?

If your agency utilizes protocols, or the order is determined by policy or procedures, then following them is essential. If not, then the order isn't important, as long as all of the necessary information has been obtained.
     
  7. With a letter-of-the-law approach (this is what needs to be done; you either did it or you didn't) vs. an intent-of-the-law approach (this is what you were supposed to do), what should the scoring approach be? Where is human error allowed?

In our opinion, this is a slippery slope. The program becomes dangerous and loses its integrity when the QAE has the ability to decide when some variation on, or portion of, an agency's policy or procedure is acceptable. It is much cleaner when the options are black and white (e.g. the TC either followed policy/procedure/protocol or did not). With this in mind, it is very important that a QA program be rolled out incrementally, which gives TCs time to become accustomed to your management's expectations. It is also important to word the questions clearly, to prevent misinterpretation by evaluators, especially if more than one is involved in QA audits.
     
  8. Should officer self-initiated calls (e.g. traffic stops, checking subjects, building checks, etc.) be evaluated as well?

Absolutely. Reviewing the TCs' focus on radio traffic and their ability to record the information given to them into CAD correctly is very important and can directly impact officer safety.
     
  9. Should the telecommunicator verbally repeat the address to the caller as part of the verification?

This is the most important question in a QA review of a 911 call. The answer is no, because a caller who is distressed, injured or frightened will often say "yes" without clearly understanding what the TC has asked. The most effective approach is to say, "Would you please repeat the address to make sure I have it right?"

    Lack of address verification is one of the three most common reasons that PSAPs end up in expensive litigation. The other two are lack of training and lack of supervision.
     
  10. Do you consider the caller giving the telecommunicator the address one time as address verification?

No, the definition of verification is "evidence that confirms the accuracy of" the location. Verification should always include the house number, street name or intersection, city/village, apartment/unit number (if applicable) and the state if the PSAP is near a border with another state. If any one of the applicable components is missing, the question should be graded as a "no," because location verification is vital to the successful handling of a call. The point value for that question should always be weighted heavily.

In fact, it is our belief that if this question is scored as a "no," the TC has failed the QA evaluation of the incident. If the TC doesn't obtain the correct location, it won't matter how well trained or effective the sworn responders are, because they won't be able to perform their jobs. The TC is the first first responder, and it is his/her responsibility to make every effort to get this step right. (A scoring sketch illustrating this weighting follows.)
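
    To make this weighted, all-or-nothing scoring concrete, here is a minimal sketch; the questions, weights and 90 percent benchmark are assumptions for the example, not a prescribed form:

```python
# Each question is pass/fail; address verification carries a heavy
# weight plus a critical flag that fails the evaluation outright.
QUESTIONS = [
    ("Verified complete address",   25, True),   # (text, weight, critical)
    ("Obtained callback number",    10, False),
    ("Followed call-type protocol", 15, False),
]

def score(answers, passing=0.90):
    """answers maps question text to 'yes', 'no', or 'na'."""
    earned = possible = 0
    for text, weight, critical in QUESTIONS:
        answer = answers[text]
        if answer == "na":
            continue            # N/A is excluded from the denominator
        possible += weight
        if answer == "yes":
            earned += weight
        elif critical:
            return 0.0, False   # automatic failure on a critical "no"
    pct = earned / possible if possible else 1.0
    return pct, pct >= passing

print(score({"Verified complete address":   "yes",
             "Obtained callback number":    "no",
             "Followed call-type protocol": "yes"}))
# -> (0.8, False): 40 of 50 weighted points, below the 90% benchmark
```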
     
  11. What is the empirical evidence for the benefits of the annual performance evaluation that was mentioned during the webinar?

Annual performance evaluations are very useful, if they are based on objective information. Even if only two weekly QA reviews were completed for each TC, there would still be 104 reviews to use as the basis for an objective evaluation. Though it is unlikely that the annual performance evaluation will have an impact on the TC's salary, it is a significant tool that can be utilized when opportunities for promotion present themselves.
     
  12. What are your thoughts on self-review of calls?

We encourage self-evaluation. One organization had some extra PCs that they put next to the lunch room and allowed operators to listen to and score their own calls. Two things happened: 1) the signup sheet was always full, and 2) the TCs graded themselves harder than their supervisors.

The key to overcoming fear and resistance to QA and monitoring is to involve telecommunicators in the planning process from the get-go, and elicit their input. Allow them to listen to their own calls and self-evaluate, and you will see them learning to self-improve in the process.
     
  13. The dispatchers here are able to see their evaluations as our evaluator does them through the recorder. Do you recommend a one-on-one anyway?

Definitely. The purpose of one-on-one meetings is to coach the TC based on the results of their evaluations, versus just understanding how scores are assigned. In other words, the purpose is guidance towards improvement, beyond just recording compliance or non-compliance.
     
  14. My sense is that answering point/call entry is the focus of a majority of QA effort. Our throughput is the effective dispatch and management of calls. Review of radio was also mentioned. Do you advocate a call review process involving customers (dispatched agencies and units)? My opinion is that those customers will define quality and provide essential feedback at this end of the business.
     
We agree. The communications of both dispatchers and call takers, as well as entire incidents, should be periodically reviewed. Collaboration between the call taking and dispatching functions in developing QA forms is great, as these two types of operators must work as a closely-knit team. We also have a customer who intentionally directs one supervisor to evaluate calls taken and made by another supervisor's team. This helps with the objectivity of evaluations and leads to better collaboration among supervisors and their teams.
     
  15. Why is calibration so important?
     
    If you have some QAEs who are grading more generously and some who are grading more strictly, your QA program will definitely fail, because you’re going to have favoritism issues. Even if you just have one QAE, it’s important to calibrate to make sure the QAE’s approach to QA evaluation is in line with management's expectations.
     
  16. Do you have any suggestions on how to measure soft skill goals based on attainable criteria rather than relying solely on individual perception?
     
    Some agencies derive their QA criteria for evaluating soft skills from the experience with their best telecommunicators. It helps to pull up their recordings in the evaluation form design stage and again in calibration sessions, so that everyone’s understanding of Quality Assurance standards is clear and based on reproducible experiences.
     
  17. Should a quality improvement committee confer when a call taker or dispatcher has a problem with a review?
     
Many agencies handle QA evaluation disputes via a two-step review arrangement. For example, when quality analysts or trainers rate calls, supervisors have the right to finalize each evaluation and handle disputes. Larger PSAPs with multiple supervisors (or quality analysts) calibrate everyone's approach to evaluations to improve consistency and uniformity. Disputed evaluations provide valuable inputs for calibration sessions.
     

Questions Relating to QA Program Adoption Challenges

  1. How do you turn Quality Assurance into a positive?
     
    Due to limited resources, some agencies only evaluate calls in order to address problems when complaints come in or when other issues arise. This inevitably results in bouts of dissatisfaction, frustration and even anxiety among call takers and dispatchers who perceive this process as punitive.

    Agencies can successfully tackle this problem by planning for collaboration with call takers and dispatchers from the start. Engaging them in the process from the time of their initial training is the first step towards acceptance of the QA program. Self-evaluation as a part of training helps them accept the reasons and objectives for QA.

Also, as a matter of best practice, agencies should evaluate recordings of all communications that involve a critical incident type (reported chest pain, difficulty breathing, domestic violence, etc.), regardless of who handled each call, in addition to a set number of general-type recordings per operator each week or month. When this set of recordings is selected randomly and automatically, subjectivity and "friendship discounts" are out of the question.

Some agencies also allow employees to recommend their specific calls for QA review, whether it's to showcase a job well done, enter their best calls for a contest, or to help train other team members. Employee participation in call selection helps assure that the QA process is perceived as fair. It also gives TCs much-desired recognition for a job well done.

    As a matter of best practice, employees should also be given access to their own evaluation reports and QA scores, as well as associated calls. This type of “open-book” approach allows employees to see the trends of their performance over time, not just the most recent “bad call” evaluation.

    And finally, remember that results should be treated with respect and confidentiality—much like test scores. Publish and celebrate the overall agency results, but keep the individual performance results private. In other words, build up the team publicly by showing good averages. Inspire them to compete with one another. But don’t belittle individuals openly to their peers.
     
  2. We take the 911 Quality Assurance score percentages every quarter for each shift and award pizza parties. :-)
     
    Congratulations! You’ve found one way to turn the QA evaluation process into a positive experience by celebrating and acknowledging your TCs’ achievements.
     
  3. With the scoring being all or nothing, have you seen morale drop amongst the call takers and QAEs?
     
This depends on how you train employees, how you introduce the QA program, how clearly you communicate how TCs will be evaluated, and what they'll be evaluated on. To ensure each question is understood and consistently interpreted, outline the intent and objectives behind each question in writing. Specify what would qualify for a yes/no response on each item.

    Also, the program will be better accepted if you start out selecting calls or dispatches that have positive results so that TCs do not become fearful of the QA process.

Finally, use QA reviews as an opportunity to recognize excellent performance as well. Most TCs crave recognition but few receive it, as 'praise reports' tend to focus more often than not on sworn officers.
     
  4. How many calls per telecommunicator would be an acceptable minimum to review?
     
    According to the ANSI-Approved APCO NENA QA/QI Standard, agencies should evaluate:

• at least 2% of all calls;
       
• all cases involving catastrophic loss and/or high-acuity events, as soon as possible after receipt of the call and/or following the radio dispatch, or at least within five (5) days;
       
    • any other call or event types as defined by an agency.
       

With this said, we recommend a gradual ramp-up that emphasizes consistency of evaluation and employee feedback (at least one call per week per operator to start). It will be easier to achieve employee buy-in if the criteria for passing scores are less strict at the outset, increasing gradually over a period of 2-3 months until they reach the goals for minimum passing scores.

As you begin the process of defining your QA program, think about any problem areas that have already been identified, most likely through complaints. Customer service complaints are common, and typically center on response times (which can be impacted by improper call-processing techniques or a poor understanding of standards, policy and procedures). Remember to emphasize that QA/QI is not about enforcing discipline—you can't punish employees for doing things poorly if they don't have a definitive guide and a true understanding of how to do things the right way. The exception is repeated disregard of specific requirements by the same communicator even after reminders, coaching and remedial training.

Questions Relating to Staffing for 911 Quality Assurance Evaluations

  1. How many QA evaluators do you recommend for an agency with 102 police dispatchers?
     
That number can be determined once management decides what percentage of the 911 calls and dispatches they want reviewed. Another determining factor is whether the reviews are done manually or assisted by an automated QA solution. As a point of reference, the Charlotte-Mecklenburg Police Department in North Carolina has two full-time QAEs who review the calls of 121 TCs, and the Hamilton County 911 center in Chattanooga, Tennessee has three QAEs for its team of 130 telecommunicators.
     
  2. Can you explain how smaller agencies can facilitate a QA program? We are a smaller agency so we do not have shift supervisors. QA would have to be performed by the Director.
     
A PSAP Quality Assurance program can and should be adopted by any agency whose telecommunicators must comply with specific standards, whether internal, protocol-based, or both. We assume that you have written standards that can be used as a basis for quality evaluation questions and as a source for the development of QA rating forms.

As for the evaluation process itself, smaller agencies can have a director perform evaluations, or they can share a QA analyst with other agencies. You may also consider services offered by the Denise Amber Lee Foundation; you can contract experienced third-party analysts for a very reasonable service fee.
     
  3. We are a large PSAP (1.3 million 911 calls a year). We have 1 QA supervisor who struggles just to QA the major calls that are paged out to senior staff. How many QAs can be cranked out in an 8-hour shift, typically?
     
    This depends on the length of calls and evaluation forms, as well as on the method used to select calls for evaluation.

    Eliminating unnecessary manual steps in call selection and reporting will free up time for the evaluation, while also improving the value of the process. Automated call selection based on pre-defined rules speeds up the selection process while improving its objectivity. Automated reporting speeds up the tracking of QA scores.

    Shorter QA evaluation forms with tight focus on specific call types (which can be identified automatically based on metadata collected via CAD integration) will allow you to review more calls and focus on what really matters, while producing more accurate statistics which you can use to identify gaps and improve team performance.

We advise against long, all-inclusive forms encompassing every possible incident type. That would necessarily lead to many 'N/A' answers and waste the QA supervisor's time.

    When you embrace consistent, focused, proactive QA evaluation, knowledge gaps can be promptly identified and remedied and the number of complaints or escalations should decrease over time.

With these process adjustments, evaluation of an average call should not take more than 10-15 minutes. Use this estimate, together with the number of hours the QA supervisor dedicates to evaluation, to calculate evaluation throughput, as in the sketch below.
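
    For example, a quick capacity estimate, assuming (hypothetically) that six hours of an eight-hour shift are spent on evaluation:

```python
# Rough capacity estimate from the 10-15 minute per-call figure above.
minutes_per_eval = 12.5                 # midpoint of the 10-15 minute range
qa_hours_per_shift = 6                  # hypothetical: 6 of 8 hours on evaluation
evals_per_shift = qa_hours_per_shift * 60 / minutes_per_eval
print(f"~{evals_per_shift:.0f} evaluations per shift")   # ~29
```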
     
  4. We are a small center. We have 2 dispatchers, one of whom doubles as a supervisor. Our concern is that the close relationship between the two could lead to the supervisor being too much of a buddy to objectively evaluate and/or implement corrective measures.
     
In environments that are conducive to subjectivity in QA review, we recommend automating and randomizing the call selection process. This makes the inclusion of both successful and less successful calls more likely. Structure the QA evaluation forms so that the questions only allow 'yes' or 'no' (or 'N/A') answers, versus multi-point scales. Consider engaging a trainer (if you have one) or manager in evaluations as well, to gain another perspective.
     

Questions Relating to NICE Inform Evaluator Quality Assurance Software

  1. I would like to learn about the actual creation of QA forms, for example what specific questions NICE has pre-built in the system, and how much is customizable. Tell me about setting up QA questions in NICE Inform.
     
Quality evaluation forms can be set up very easily, either by modifying the sample forms that come with the system or by creating new ones. This is done within the system's intuitive interface, directly by supervisors or quality analysts, without the need for IT assistance. Your QA forms may contain any number of questions. You can group questions into sections that make sense for your communication protocols and/or call flows, and for specific skills. Your reports will then provide three types of scores: for individual questions, for sections, and for the overall evaluation.
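
    To illustrate those three reporting levels, here is a minimal sketch of a sectioned form and its scores; the structure is an assumption for the example, not NICE Inform's actual data model:

```python
# A sectioned form: each section holds (question, answer) pairs.
form = {
    "Call Answer": [("Answered within standard",  "yes"),
                    ("Gave agency name",          "yes")],
    "Location":    [("Verified complete address", "no")],
}

def section_scores(form):
    """Per-section score = fraction of questions answered 'yes'."""
    return {section: sum(a == "yes" for _, a in questions) / len(questions)
            for section, questions in form.items()}

per_section = section_scores(form)                       # per-question detail is in `form`
overall = sum(per_section.values()) / len(per_section)   # simple average of sections
print(per_section, f"overall = {overall:.0%}")
```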
     
     
  2. Once I implement Inform Evaluator, how much control can my department have over QA? Can I still choose calls to QA or is it all done by the NICE QA software? I would like to learn about the call selection process.
     
The NICE QA software is designed to make your job easier, not more cumbersome. That means YOU decide how call selection will be managed. The feature-rich toolset allows you to combine manual call selection with rules for the automated identification and assignment of recordings for QA (see the sketch at the end of this answer). For automated call selection, simply define the parameters of the desired calls (e.g. all or a subset of calls that involve a specific incident type, another group of randomly selected calls that are longer than "x" minutes, a number of calls that resulted in a dispatch, etc.) and the selection will then run automatically from that point on. Calls will be delivered to evaluators by the system, with the appropriate evaluation form already linked, assuming that you create different forms for different call types.

It is expected that your standards, conditions, and experience will evolve and inspire iterative improvements to the quality assurance process. NICE's software will adapt: you can change the rules, quality forms, evaluation frequency, or processes at any time.
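
    To give a feel for how such selection rules can be expressed, here is a minimal sketch in which rules are plain data; the rule fields and call attributes are invented for illustration and do not represent NICE Inform Evaluator's configuration format:

```python
import random

# Selection rules expressed as data: each rule has a matcher and an
# optional per-period quota (None = take every matching call).
RULES = [
    {"name": "All domestic violence calls",
     "match": lambda c: c["incident_type"] == "DOMESTIC",
     "quota": None},
    {"name": "Random calls over 5 minutes",
     "match": lambda c: c["duration_sec"] > 300,
     "quota": 5},
    {"name": "Calls that resulted in a dispatch",
     "match": lambda c: c["dispatched"],
     "quota": 10},
]

def apply_rules(calls, rules):
    """Return {rule name: list of selected calls} for one review period."""
    selected = {}
    for rule in rules:
        matches = [c for c in calls if rule["match"](c)]
        if rule["quota"] is not None:
            matches = random.sample(matches, min(rule["quota"], len(matches)))
        selected[rule["name"]] = matches
    return selected
```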
     
     
  3. What are the data points that can be included in QA call selection?
     
Rules for call selection can leverage any combination of metadata captured with recordings. The most basic is telephony or radio date, time and duration. We highly recommend that CAD data such as incident type, severity, etc. be collected as well. This will improve the precision of call selection and the matching of recordings to shorter, more focused QA evaluation forms.
     
     
  4. Are console screen recordings kept in the recording system?
     
    Screen recordings from one or more console monitors can be captured and synchronized with voice recordings for multimedia playback during your QA evaluation. This will help you assess how call takers and dispatchers are navigating through CAD, mapping and other applications (such as social media) during and after calls, and what distractions they have to deal with as they strive to focus on 911 calls.
     
     
  5. For agencies who are considering Text-to-911, does the NICE solution have QA for text as well?
     
    The NICE Inform Evaluator QA software can be used for rating any type of communications, including calls, radio and/or SMS Text-to-911 messages, either individually or as a group related to the same incident.
     
     
  6. How does the NICE Inform Evaluator software compare to Priority Dispatch Aqua Evolution?
     
Priority Dispatch AQUA Evolution, a module within the broader Priority Dispatch protocol management solution suite, automates much of the case review process for call taker communications.

Many public safety agencies today review 911 EMD calls within Priority Dispatch AQUA, with increasing adoption of police and fire protocols. Agencies using Priority Dispatch AQUA benefit from the integration between NICE Inform and Priority Dispatch AQUA, which speeds up the retrieval of NICE recordings directly in the AQUA interface.

NICE Inform Evaluator, an optional module within the NICE Inform recording suite, is often used by agencies to evaluate radio dispatch traffic, operator workstation screen recordings and text-based communications, in addition to 911 calls. It can be used for the evaluation of all communication types regardless of the protocol used, including EMD, police, and fire communications.
     
     
  7. How does NICE Inform integrate with Priority Dispatch’s AQUA case review software?
     
    The integration between NICE Inform and Priority Dispatch AQUA enables users to conveniently play back interaction recordings related to the case of interest directly from their AQUA interface – eliminating the time-consuming need to search for recordings within a separate recording system, and toggle between different applications while performing a QA evaluation.

Evaluating a case from within the AQUA interface automatically invokes a call to the NICE Inform service to retrieve the related call recording(s) stored within the NICE Inform SQL database. Matching call recording(s) are presented for playback via a standard, easy-to-use media player.
     
     
  8. How does the CAD integration work? Is there a fee or other software you need to purchase outside of NICE?
     
The NICE Inform server makes an SQL connection to the CAD database, or preferably to a back-up database or data warehouse, to automatically collect CAD data in near-real time, and then associates this data with recordings in the NICE Inform database. The CAD metadata then becomes available as search criteria for recordings, and can be used in the definition of rules for automated selection of calls for QA evaluation.
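
    Conceptually, the association step resembles the following simplified sketch, which uses SQLite in place of the production databases; the table and column names are invented for illustration and are not the actual NICE Inform schema:

```python
import sqlite3

def tag_recordings_with_cad(db: sqlite3.Connection):
    """Copy CAD metadata onto recordings that match by position and time window."""
    # Match each recording to the CAD incident handled at the same
    # call-taker position during the incident's time window.
    rows = db.execute("""
        SELECT r.recording_id, i.incident_type, i.priority
        FROM recordings r
        JOIN cad_incidents i
          ON i.position_id = r.position_id
         AND r.start_time BETWEEN i.call_received AND i.call_closed
    """).fetchall()
    for recording_id, incident_type, priority in rows:
        db.execute(
            "UPDATE recordings SET incident_type = ?, priority = ? WHERE recording_id = ?",
            (incident_type, priority, recording_id))
    db.commit()
```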

CAD integration is included in the NICE Inform Elite (version 8.0) edition, currently in early availability and scheduled for general availability in June 2017.
     
     
  9. What kind of interface requirements are needed for the CAD integration?
     
    For CAD integration with NICE Inform, the NICE Inform server must be able to make a connection via SQL to your CAD database, or preferably to the back-up database or data warehouse.

    To identify the 911 calls associated with CAD incidents, NICE Inform CAD integration assumes that each call-taker position defined within the CAD system has a fixed association with a searchable extension ID or telephony recording channel.
     
     
  10. Does this also associate CAD records with radio audio as well as phone audio?
     
    Yes, both phone and radio communications recordings can be automatically associated with CAD events via the CAD integration with NICE Inform.
     

 
