Playing the blame game – Part 1

I’ve seen quite a bit of noise on LinkedIn, with various posts decrying the security tooling industry. The gist of the posts is that the tooling industry is broken, that it has made multiple failed promises, that tools don’t provide the security they are touting, and so on.

Couple this with an evening seminar I attended toward the end of 2022, where a study was presented claiming that tools don’t provide the security they promise. The ‘call-to-arms’ was for a kite-marking scheme for these tools. Those with long memories will remember the various government-led schemes in the UK, and other international examples.

UK: CAPS – https://www.ncsc.gov.uk/information/products-cesg-assisted-products-service

International – Common Criteria – https://www.commoncriteriaportal.org

I was curious about the lack of examples of tools that had promised much but delivered little. Alas, I suspect this can be explained by the fact that ‘outing’ tool vendors in this way would likely bring on the lawyers. Based on my experience, the issues behind this lack of performance are most likely a combination of:

  • capability (the tools don’t quite do what they say they should, or don’t work in the way they were sold),
  • circumstance (in a specific set of integrations, the tools don’t operate to the best of their ability),
  • human aspects (we didn’t integrate the tools correctly, we didn’t train our people to use them, or the process for using them was not followed),
  • thoroughness (we put in a tool to meet an audit requirement or an immediate need, or we ran out of funding to finish the implementation).

I suspect a very good five-whys analysis could be done on any instance of underperformance by these tools.

The point I want to get to is the increasing trend I see in corporate environments of relying on tools to do the work of thought and rigour – which is the subject of Part 2.