Regression Testing for Your FileMaker Solution

Wait, what? Regression testing for a FileMaker solution? Impossible, you say.

Well, in a lot of ways, you’re right. It remains impossible to have true, full-coverage regression testing for a FileMaker application. But if you’ve supported a complex solution over a long period of time, you know how important it is to make sure your newest feature doesn’t torpedo something else by accident.

One day, while bemoaning our lack of robust regression-testing tools, the team had a crazy idea and decided to invest a few hours to see if anything could come of it. What if we could regression test indirectly, by comparing two sets of output?

We identified two places in our solution where we could build regression testing for specific, highly complex features. Both are script-driven, so if the data fed in was identical, we could expect identical outputs as long as our code was undamaged.

Testing these by hand is incredibly time-consuming, due to the sheer number of test cases that have to be fed in and then checked in detail just to be sure each branch of the code has been exercised. But if we could instead compare one set of output to another, IN AGGREGATE, we could simply look for differences and spend our energy figuring out what caused those discrepancies.

To be sure, we still need to test our changes directly by hand and do a deep testing dive on the features we know will be affected by our alterations. A regression test like this backs up that detailed manual testing, providing broader coverage and raising the alarm if unexpected consequences arise.

How It Works

This regression test method relies on having exactly the same data as input in two files. One file is the production file with old code, and the other is our development file with the new code. We clear the slate by deleting any previously-existing output data, then run code to create new output data in both files. We then compare the two sets of output and look for differences.

In the production file, we have our starting data. We take a copy of that database so that we don’t interfere with production data in any way. In our copy, we can delete the output records (if any exist) and run the production version of the script to get a set of output records produced by the old code.

In the development file, we need EXACTLY the same starting data as that production copy. We make a clone of the development file and fill it with data taken from the production copy we just made. Then we delete the output records (if any exist) and trigger the script being tested. That gives us a set of output records produced by the new code.

We then export each file’s output records as a text file. That leaves us with two text files which we hope are identical, and a tool that can highlight the differences between a pair of text files will tell us the real story. We use BBEdit (the free version is enough).
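Before firing up a visual diff tool, it can be worth a quick scripted check of whether the two exports are byte-for-byte identical at all. Here is a minimal Python sketch of that pre-check; the file names are placeholders for your own exports:

```python
# Quick pre-check before opening a diff tool: hash both exports to see
# whether they are byte-for-byte identical. File names are placeholders.
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

old = file_digest("production_output.mer")
new = file_digest("development_output.mer")
print("identical" if old == new else "differences found -- open in a diff tool")
```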

Pro Tips for Exporting The Files

We export only stored fields containing relevant data that appears on production screens or in reports. No serialized or timestamped fields are exported, since these would certainly show differences that are not relevant to our tests.

We recommend exporting in the Merge format rather than as a Comma-Separated file. Merge puts a quick reference of field names at the top of each file, which makes troubleshooting much easier as you review the differences.
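Because a Merge file is just comma-separated text with the field names in the first row, it is also easy to post-process with standard tools. If a volatile field ever does sneak into an export, you could strip it from both files before comparing. Here is a hedged Python sketch; the file names and the ModificationTimestamp field are hypothetical:

```python
# Strip volatile columns from a FileMaker Merge export before diffing.
# A Merge file is comma-separated with field names in the first row,
# so Python's csv module can read it directly. Adjust the encoding to
# match your export settings.
import csv

VOLATILE = {"ModificationTimestamp"}  # hypothetical field name

def strip_volatile(src_path, dst_path):
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        keep = [i for i, name in enumerate(header) if name not in VOLATILE]
        writer.writerow([header[i] for i in keep])
        for row in reader:
            writer.writerow([row[i] for i in keep])

strip_volatile("production_output.mer", "production_clean.txt")
strip_volatile("development_output.mer", "development_clean.txt")
```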

If you have large data sets, as we do, run the find and export server-side, then insert the resulting files into container fields (server-side, using the BaseElements plug-in), from which they can be saved to the Desktop client-side. This is much faster than exporting large data sets from the client.

You may want to invest in a bit of interface to facilitate the batch delete and file export operations. A control center with scripted buttons to handle these tasks can make the whole process faster and less prone to error. Testing and deployment phases can be harried, and having important actions automated can offer some peace of mind.

Spot The Difference!

To check for differences, each pair of exported text files is opened in BBEdit, and then we select Search -> Find Differences -> Compare Two Front Windows. If differences are found, BBEdit highlights them side by side in a third window where they can be investigated.
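If you would rather script the comparison (handy when checking several pairs of exports in a batch), Python’s standard difflib module can produce the same kind of unified diff. A minimal sketch, with placeholder file names:

```python
# A scriptable alternative to a visual diff: print a unified diff of the
# two exports, or confirm that they match. File names are placeholders.
import difflib

with open("production_output.mer", encoding="utf-8") as f:
    old_lines = f.readlines()
with open("development_output.mer", encoding="utf-8") as f:
    new_lines = f.readlines()

diff = list(difflib.unified_diff(
    old_lines, new_lines, fromfile="production", tofile="development"))

print("".join(diff) if diff else "No differences found.")
```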

It might be that our script code is creating some differences on purpose, supporting our new changes. We can check to see if the discrepancies are attributable to known code changes or not. This is itself helpful because we know in advance which discrepancies we hope to see and can look for them explicitly.

It might be that our script code is doing something wrong in one of the branches. Again, the regression test tells us where to start digging around for explanations.

It might even be that a problem elsewhere creates discrepancies. This testing recently helped us discover that a crucial data-backfill script had been inadvertently skipped. The affected field was not directly used in our tested code, but it was involved in calculating some results on those records. We spotted unexpected differences in our exports and traced them to the missing backfill. We could then quickly and proactively correct the situation!

What Can I Regression Test?

Look around your application. Is there any complex but free-standing process that yields a set of records that could be exported for this comparison-test approach? Check your system for scripted processes that might be candidates.

In our case, we had a few features that were suitable. One is a complex report compiled using a few dozen different rules, each for a different type of incoming data. Another is a set of invoices compiled according to five different bill-generation methods, each with a few variants of its own and detailed rules for handling specific circumstances in the data being billed.

With this taste of success, we are now looking everywhere for more opportunities to apply regression testing to our large, mission-critical FileMaker application. We’ll report back if we find more!

Thank you to Ross Thompson for helping us build and refine this strategy!
