A few years back I worked on a rewrite of an old solution. It had probably close to a thousand fields overall, and the file size was a few hundred megabytes. It was messy in all kinds of ways; one of them was the myriad of fields that appeared to have been added haphazardly over time, without any apparent planning.
The result was fields that had values in, say, five records, but no values in the other 624,321.
Well, one of the tasks in a project like this is to figure out which portions of the solution can be left behind, and fields like these seemed like good candidates.
The challenge was to figure out how heavily each field was actually used, without having to check every one manually.
With FileMaker 12’s ExecuteSQL function, this job is relatively straightforward. I’ve put together a little utility file that does this for us. It queries one of the FileMaker virtual tables – FileMaker_Fields – which I first learned about from a post by Andrew Duncan.
Here’s the query that I use:
SELECT TableName, FieldName, FieldReps
FROM FileMaker_Fields
WHERE FieldClass = 'Normal'
AND FieldType NOT LIKE 'global%'
AND TableName <> ?
ORDER BY TableName ASC, FieldName ASC
This query returns all “regular” fields; i.e. fields that are not calculations, summaries, or globals.
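In a FileMaker calculation, the call might look roughly like this; note that the value bound to the ? placeholder is my assumption here (presumably it excludes the utility file's own table from the analysis):

ExecuteSQL (
    "SELECT TableName, FieldName, FieldReps
     FROM FileMaker_Fields
     WHERE FieldClass = 'Normal'
     AND FieldType NOT LIKE 'global%'
     AND TableName <> ?
     ORDER BY TableName ASC, FieldName ASC" ;
    "" ; "" ;               /* empty strings keep the default comma and return separators */
    "FieldUtilization"      /* hypothetical name of the utility's own table, bound to the ? */
)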
The utility then iterates through each field and determines how many records have a value in that field.
The result is a data set that shows how heavily each field is used. To use this with your own solution, add a table occurrence in this utility file for each of your solution's tables.
Here’s a link to the sample file.
Below is a screenshot of an analysis of a sample file, which I created from the Contacts starter solution and populated with a thousand records of fake data from fakenamegenerator.com.
A sample file would be nice.
I forgot to include a link to the sample file. I've added it now.
Nice technique.
Just played with the sample file and was pleased to see that the SQL query does not trigger indexing of the field(s) that are being “searched”…meaning this SQL method has advantages over other approaches.
Thanks!
Hi Mislav,
Thanks for the excellent idea and the file. It will come in handy with the next customer who has too many fields and too many files altogether!
:o)
Nice technique.
Small bug to fix…
Fields that have an internal return character (for example a notes field) will throw off the utilization count, because each embedded return is treated as a value separator by the ValueCount ( $values ) line in the “Analyze Field Utilization” script, so those fields get overcounted.
A custom record delimiter that doesn't appear in the data, combined with PatternCount ( text ; searchString ), might do the trick.
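For example, something along these lines might do it (the delimiter and variable names are just placeholders):

/* pull the non-empty values using a custom row separator that should never appear in the data */
Set Variable [ $values ; Value:
    ExecuteSQL (
        "SELECT \"" & $fieldName & "\" FROM \"" & $tableName & "\" WHERE \"" & $fieldName & "\" IS NOT NULL" ;
        "" ; "|*|" )
]
/* each delimiter separates two rows, so the row count is the delimiter count plus one */
Set Variable [ $utilization ; Value:
    If ( IsEmpty ( $values ) ; 0 ; PatternCount ( $values ; "|*|" ) + 1 )
]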
Thanks!
Tony, nice catch. Thanks for letting me know.
I changed the SQL to "select count ( field ) from table". The download link in the blog post now points to the updated file.
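For anyone curious, the per-field expression is now roughly along these lines (the variable names here are illustrative); COUNT ( field ) only counts rows where the field is non-null, which in FileMaker's SQL means non-empty:

Set Variable [ $utilization ; Value:
    ExecuteSQL (
        "SELECT COUNT ( \"" & $fieldName & "\" ) FROM \"" & $tableName & "\"" ;
        "" ; "" )
]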
Nice clean fix. It works even on fields where you add content and then set it back to empty.
Thanks.
Thanks for sharing, Mislav; this came in incredibly useful today (and I picked up some tips too).