Are Your Established Business Practices Wasting Money?

“Time is money.” We’ve all heard it, and we all know what it means. We’ve also heard that “time’s a-wasting.” Put the two together and wasted time is wasted money. But how do you find that wasted time, and in the end, the wasted money?

The answer is Business Process Analysis and Re-Engineering, and if you are not doing these on your existing processes, you may be losing time and wasting money. This can be as simple as mapping out a claim form process from submission to resolution, or the steps needed to generate (and audit) reports. It can also be as complex as mapping out every function or action an item can have across its lifecycle.

At one employer, I re-engineered the reporting process and saved over $15,000 per year, and that was just for a single employee. Let me explain what I found and how I fixed it.

I was assigned to a department that processed and settled various claim types. Due to regulations, claimants had lifetime caps that they could not exceed, so the system had to track claimants back to the inception of a program, sometimes as far back as 1992. The system had a new set of database tables for each year, and since a claimant could update a current claim or a claim from any previous year, the reporting process had to take that into account as well. Likewise, payments could be made on a claim dating back two or three years or more.

When the system was redesigned in 1998 (from a FoxPro database into an enterprise system) and implemented in 1999, the runtime was less than 3 hours for each of the four reports, as there was less data. Reports on this claim system were run quarterly and spanned four different reporting periods (fiscal year, calendar year, year to date, and current quarter). As time went on, new claim types and new categories were added and had to be reported on.

By 2015, there were 12 reports, each taking 8 hours to run (on a dedicated machine, since the run tied that machine up for the full 8 hours) and another 6-8 hours to audit. That was a total of 168-192 man-hours IF everything checked out. Oh, and did I mention that reports had to be completed by the 7th day after the close of the quarter? Can you see an issue here? (If you cannot: there are only 168 hours in 7 days.) If the audit showed a discrepancy (which often happened), the report parameters had to be tweaked or the data investigated to find the discrepancy, and the report rerun, adding another 14-16 hours to the process. This often led to missing the deadline.
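
For those keeping score, here is that deadline math as a quick back-of-the-envelope check, using only the figures above:

```python
# Quarterly reporting workload by 2015, using the figures quoted above.
reports = 12
run_hours = reports * 8            # 96 hours just to run the reports
audit_hours_low = reports * 6      # 72 hours to audit them, best case
audit_hours_high = reports * 8     # 96 hours to audit them, worst case

print(run_hours + audit_hours_low, "to", run_hours + audit_hours_high, "man-hours")  # 168 to 192
print(7 * 24, "hours in the 7-day reporting window")                                 # 168
```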

Each time a new claim type or category was added, the reporting process had to be modified and tested so the new reports could be run (without affecting the old reports). This added time, which added money, and it also added risk. It taxed the system, too. The time needed to run these reports slowly crept up until it was no longer manageable.

So time is money. In this case, two employees were tasked with processing and auditing the reports. Employee A cost $40 per hour and Employee B cost $25 per hour. Each quarter, Employee A ran the reports for 96 man-hours, a cost of $3,840, and Employee B ran the audits for 96 man-hours, a cost of $2,400. Even without any issues, the two also collaborated for an hour each ($65), bringing the total to $6,305 per quarter, or just over $25,000 per year. That works out to roughly $525 per report.
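
To make the arithmetic explicit, here is the old quarterly cost worked out from the hours and rates above:

```python
# Quarterly cost of the original reporting process, using the figures above.
rate_a, rate_b = 40, 25            # hourly cost of Employee A (reports) and Employee B (audit)
run_hours, audit_hours = 96, 96    # man-hours per quarter to run and to audit
collab_hours = 1                   # one hour of collaboration for each employee

quarterly = (run_hours * rate_a                   # $3,840 to run the reports
             + audit_hours * rate_b               # $2,400 to audit them
             + collab_hours * (rate_a + rate_b))  # $65 of collaboration

print(quarterly)              # 6305 per quarter
print(quarterly * 4)          # 25220 per year
print(round(quarterly / 12))  # ~525 per report
```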

Now as stated, if a report failed an audit, the reporting process had to be tweaked and the report rerun, sometimes more than once. And as more programs and more data tables (years) were added, this cost would only go up. Basically, add $550-$600 per report failure for the investigation and rerun.

So what went wrong? How did this get so out of hand? Better yet, how could it be fixed to save time (and money) and get the reports submitted on time? The answer is simple: Business Process Analysis and Re-Engineering.

The first issue I found when I looked at the process was that each report was run on a client machine. This tied up a person’s computer for eight hours per report and increased network traffic immensely, as the client sent thousands of requests to the server and compiled the data into each report as it ran.

The second issue I found when I looked at the process was that each report was run separately, across 23 database table sets. Each set held the claim data and payment data for one calendar year. On top of that, the process repeated this for each claim type (at the time there were 5 claim types; a 6th was added during the re-engineering). That meant 115 table reads per report (23 table sets × 5 claim types), and that was just for the claims. Claims could have multiple payments, to multiple payees. The reporting process as written would open each payment and then look up the related claim, scanning each yearly database to find it. This could result in a thousand reads for a single payment. If the next payment was for the same claim ID, the process did not know it, because it never stored the fact that it had already processed a payment for that claim, so it searched for the claim ID all over again (another thousand reads). In the end, I estimated the database reads to exceed 960,000 per report, or over 10 million for the set.
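
To give a feel for how those reads piled up, here is a rough back-of-the-envelope model of the original process. Only the 23 table sets, 5 claim types, and 12 reports come from the description above; the per-payment figures are illustrative assumptions, not measurements:

```python
# Rough model of the ORIGINAL process's read count. The payment counts and
# reads-per-lookup below are illustrative assumptions, not measured values.
table_sets = 23        # one set of claim/payment tables per calendar year
claim_types = 5        # claim types at the time (a 6th came later)
reports = 12           # reports per quarter

# Claims alone: every report walked every table set once per claim type.
claim_table_reads_per_report = table_sets * claim_types   # 115

# Payments: each payment triggered a fresh scan of the yearly tables to find
# its claim, even when the previous payment belonged to the same claim ID.
reads_per_payment_lookup = 1_000   # assumption: roughly a thousand reads per lookup
payments_per_report = 960          # assumption: payments touched per report

reads_per_report = (claim_table_reads_per_report
                    + payments_per_report * reads_per_payment_lookup)
print(reads_per_report)            # ~960,000 reads per report
print(reads_per_report * reports)  # well over 10 million reads for the full set
```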

By 2010 or so, the process was SO data intensive that it was slowing the production servers down to a crawl. This meant reporting had to wait until after hours. By 2013, the data was moved/copied to a staging/reporting server each quarter just so the reports could be run. These were stop-gap measures, however, and did not address the real issues.

The third issue was that the reports themselves were a black box. They were run against this massive amount of data, but all that came out were totals: counts for each claim type and sub-category, payment totals for each, average life-cycle times, and so on. There was no evidentiary data to back up the report, nor anything to aid the audit. So when the report was handed to the auditor and they ran their queries (all of which had to be created manually) against the data, and the totals did not match, where was the error? Was it in the report, or in the audit? Was something reported under the wrong claim type or sub-category? Was something counted twice? Was something missed in the audit query? No one really knew. So the analyst would have to do quick counts or spot checks and collaborate with the auditor, asking them to rerun this query or that one because his own numbers looked right. If the issue turned out to be on the reporting side, the tweak was made and the report rerun and re-audited, adding another 14-16 hours to the task. This took too much time and left too much room for human error (which costs more time and money).

So what could be done? With three issues came three solutions. First, move the processing off the client and onto the staging server itself. That server had more power and more memory, and running there cut out the network round-trips. And since it was a staging server, it was not being used by hundreds or thousands of other clients. Without changing anything else, this cut the reporting time from 8 hours per report to just 2 hours each.

While this was a good start, it could be better. The reporting process was still running only a single report at a time, and even then it was “walking” every claim or payment in the table, touching each one to see if it met the reporting criteria. When that report was complete, it started the whole process over again for the next report. So what if the new process opened ALL the reports at once? What if, while “walking the claims,” it checked whether each one fit into report #1 OR report #2 OR … OR report #12 and added the data to each report accordingly? That would save processing time, opening each of the 23 table sets just once instead of reading them 115 times per report. And what if the process only walked the claims that fit within the reporting date range and ignored the ones that did not? Those modifications were made, and the reporting side now took just 15 minutes for ALL TWELVE reports.
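
Here is a minimal sketch of that single-pass idea, written in Python purely for illustration (the actual system was not Python, and names like Report and run_all_reports are hypothetical stand-ins):

```python
# Sketch of the re-engineered approach: open each yearly table set once and
# let every report definition inspect each claim as it walks by.
from dataclasses import dataclass, field

@dataclass
class Report:
    title: str
    start: str                       # reporting period start date (ISO format)
    end: str                         # reporting period end date
    counts: dict = field(default_factory=dict)

    def matches(self, claim) -> bool:
        # Only claims whose activity falls inside this report's window count.
        return self.start <= claim["activity_date"] <= self.end

    def add(self, claim) -> None:
        key = (claim["claim_type"], claim["category"])
        self.counts[key] = self.counts.get(key, 0) + 1

def run_all_reports(table_sets, reports):
    """Walk each yearly table set exactly once, feeding every claim to every
    report: report #1 OR report #2 OR ... OR report #12."""
    for table_set in table_sets:          # each of the 23 sets opened just once
        for claim in table_set:           # one pass through the claims
            for report in reports:
                if report.matches(claim):
                    report.add(claim)
    return reports
```

Called with the quarter’s 12 report definitions, the inner loop replaces 12 separate full scans; a pre-filter on the overall reporting date range (not shown) would skip the claims that no report cares about.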

Fifteen minutes, as opposed to 96 hours? Let’s do the math. The reporting side had been costing 96 hours at $40 per hour, or $3,840 per quarter; that was $15,360 per year. With the new process taking just 15 minutes to run, plus another 15 minutes to set up (creating the process documents with the date ranges, categories, report titles, etc.), it costs just $20 per quarter, or $80 per year.
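
Spelled out:

```python
# Before-and-after cost of the reporting side alone (Employee A at $40/hour).
rate_a = 40
old_quarterly = 96 * rate_a      # 96 hours per quarter      -> $3,840
new_quarterly = 0.5 * rate_a     # 15 min run + 15 min setup -> $20

print(old_quarterly * 4)                    # 15360 per year, before
print(new_quarterly * 4)                    # 80 per year, after
print((old_quarterly - new_quarterly) * 4)  # 15280 saved per year on this side alone
```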

But there was still that pesky audit issue (#3): the black box where the report says one thing and the auditor shows something else. Enter XML data. Every time the reporting process up-ticked a count or added a payment to a given report, it would write a line of XML data. It did not need the entire record, just enough to compare against the auditor’s query file. And since the auditor’s file was in Microsoft Excel, and XML can be read/imported into Excel, comparing the two resulting datasets became much easier. Adding the XML data also added NO measurable time to the overall report processing.
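
Here is a rough sketch of that idea, again in Python for illustration only; the element and attribute names are made up, not the actual schema:

```python
# Sketch: every time a count is up-ticked or a payment is added to a report,
# append one small XML element with just enough data to match the audit file.
# Element and attribute names here are illustrative, not the actual schema.
import xml.etree.ElementTree as ET

audit_root = ET.Element("audit")

def record_audit_row(claim_id, claim_type, category, amount=None):
    row = ET.SubElement(audit_root, "row",
                        claim_id=str(claim_id),
                        claim_type=claim_type,
                        category=category)
    if amount is not None:
        row.set("amount", f"{amount:.2f}")

# Called from inside the reporting loop wherever a count is incremented:
record_audit_row(100234, "TypeA", "SubCat1", amount=1250.00)

# Written once at the end; Excel can import this file directly as XML data.
ET.ElementTree(audit_root).write("report_audit.xml", xml_declaration=True)
```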

On the auditing side, this caught errors on both sides quickly using functions like “vlookup” and “match”. It showed where claims in the report were missing from the audit (and should have been there), and where payments appeared in two different tables and were double-counted in the audit (but not in the report, thanks to some pre-emptive error checking). It also found where refunds were being added instead of subtracted. The reports were cleaner, faster, on time, and most importantly cheaper to run, saving over $15,000 per year. And if the auditor had allowed me to help write and store the queries on their side, I might have saved them time (and money) there as well.
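
For illustration, the kind of cross-check those VLOOKUP/MATCH formulas performed boils down to a simple set comparison (the claim IDs below are made up):

```python
# Cross-check between what the report counted (from its XML audit trail) and
# what the auditor's query returned. The claim IDs are made-up examples.
report_ids = {100234, 100888, 101002}   # claim IDs the report counted
audit_ids = {100234, 101002, 101777}    # claim IDs in the auditor's Excel file

print(report_ids - audit_ids)   # in the report but missing from the audit
print(audit_ids - report_ids)   # in the audit but never counted by the report
```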

So, how can I help you clean up your outdated, inflated business processes?
