Benchmarks for Gift/Data Entry

Executive Leadership has tasked our team with establishing benchmarks for staff who enter gifts and biographical data into Raiser's Edge. I have a query that shows how many constituents, addresses, relationships, and gifts were last updated by each user. Obviously, those numbers can be skewed by imports, someone viewing a record and saving instead of just closing it, etc. Also, there are so many things that can impact processing speed, such as research required for certain data entry projects and missing information on gifts. Are any of you doing this and/or does anyone have suggestions on how to set realistic goals and accurately measure the speed and volume of these transactions?

Comments

  • JoAnn Strommen
    You are so correct that there are so many factors at play. IMO, accuracy is a higher priority than volume. I know there have been other discussions about this over the years. Don't know that there was ever a definitive answer.


    If you have multiple data entry persons handling the same type of data, I think I'd look for significant differences between their outputs to determine whether someone is underperforming. With all the variables of data entry, it's very difficult to set benchmarks outside a testing environment, from my perspective.
  • Thanks for the reply, JoAnn! I heartily agree. There's a great deal of external pressure for speedy gift entry and donor acknowledgment, so my mind immediately goes there. But perhaps measuring accuracy instead of speed is a better tactic. I appreciate your feedback.
  • I agree that benchmarks should reflect quality instead of, or in addition to, quantity. I know that makes it harder to pull, but you will get a more effective team dynamic if you do this, cultivating the staff who really care about their job and mission.


    Unfortunately, measuring data quality is easier through a negative criterion than a positive one. It's much easier to run a query on "records last changed by Dana that have no processing address but are not marked 'No Valid Address'," because it should be Dana's duty to review the record in its entirety and fix any errors while she has it open. It's harder to pull a positive criterion, "records last changed by Dana with complete processing addresses," because the records could have been intact before Dana accessed them. And while pulling negative performance criteria is useful for quality-control checks and training, it doesn't make a very positive tool for annual employee assessments and morale.
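To make the distinction concrete, here is a minimal sketch of such a negative-criteria check, assuming records have been exported to a simple list; the field names (`last_changed_by`, `processing_address`, `no_valid_address`) are hypothetical stand-ins, not actual Raiser's Edge columns:

```python
# Hypothetical record export; field names are illustrative only.
records = [
    {"id": 1, "last_changed_by": "Dana", "processing_address": "",            "no_valid_address": False},
    {"id": 2, "last_changed_by": "Dana", "processing_address": "123 Main St", "no_valid_address": False},
    {"id": 3, "last_changed_by": "Sam",  "processing_address": "",            "no_valid_address": False},
    {"id": 4, "last_changed_by": "Dana", "processing_address": "",            "no_valid_address": True},
]

def negative_criteria(records, user):
    """Records last changed by `user` that have no processing address
    but are not marked 'No Valid Address' -- likely entry errors."""
    return [
        r for r in records
        if r["last_changed_by"] == user
        and not r["processing_address"]
        and not r["no_valid_address"]
    ]

flagged = negative_criteria(records, "Dana")
print([r["id"] for r in flagged])  # → [1]
```

Record 4 is excluded because its missing address was deliberately flagged, which is exactly why this kind of negative criterion works for quality control but not as a standalone performance score.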


    If your team is willing to do more work with sit-down assessments in performance monitoring, you may be better served by identifying which entry processes occur regularly and then measuring their completion rate. For example, at our org we receive alumni updates every year from the registrar's office. The data entry staff have no control over how many alumni records they are tasked with creating/updating, but the completion rate and accuracy of these records (i.e., the number of new alumni records created with complete attribute/constituency/address info, as a ratio of total records needing to be entered) does demonstrate their performance in turnaround time. Making sure that deceased person finder services and phone appends are happening on a timely schedule is probably more important than how many records are changed in such imports. If you have a data lookup service like Whitepages, the number of replacement addresses manually searched would show how far your data entry staff are going the extra mile to recover lost addresses. And of course you can measure the efficiency of your overall data team through aggregate metrics such as Blackbaud's data scorecard, which rates the overall health of your database; ideally your data health score should go up each year, not down.
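The completion-rate idea can be sketched in a few lines; this is a hedged example with made-up numbers, not an actual Raiser's Edge report:

```python
def completion_rate(completed, received):
    """Ratio of accurately completed records to records received.
    Workload-independent: staff don't control how many records arrive."""
    if received == 0:
        return 1.0  # nothing owed counts as fully complete
    return completed / received

# Hypothetical: 480 of 500 registrar updates entered with complete
# attribute/constituency/address info this cycle.
print(f"{completion_rate(480, 500):.1%}")  # → 96.0%
```

Because the denominator is the volume received, a light year and a heavy year are scored on the same scale.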


    As far as imports and bulk changes messing up your tallies, if you must measure on # of record changes, I would view it as just part of the equation. A staff member with import abilities will just have a higher rate of record changes than one with only manual entry abilities; it's the nature of their roles to have different expectations and time investments. However, since data staff have little control over the number of new donors or deceased people that fall into their laps every year, I think just taking a five-year average and using that as a minimum benchmark would probably be a more realistic reflection of success.
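The trailing-average baseline suggested above is simple to compute; the annual counts below are invented for illustration:

```python
def baseline_benchmark(yearly_counts, years=5):
    """Trailing `years`-year average of record changes, used as a
    minimum benchmark to smooth out import-driven spikes."""
    recent = yearly_counts[-years:]
    return sum(recent) / len(recent)

# Hypothetical annual record-change counts for one staff member.
history = [10200, 9800, 11500, 10100, 10400]
print(baseline_benchmark(history))  # → 10400.0
```

Averaging over several years keeps a single unusually large import (like the 11,500 year here) from setting an unrealistic expectation.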
  • You are absolutely right, Faith, in that we have no control over the number of records in a graduate import, gifts received, alumni updating their information, etc. So a ratio of accurately processed records to received records makes more sense than records processed this week/month/year versus last or some other measure of quantity. This is very helpful info and is giving me some new ideas of how to present a clearer picture of our department's successes and challenges. Thanks for taking the time to share your input!
