The Chaos Butterfly of Security Standards


The mathematician and meteorologist Edward Lorenz made the Butterfly Effect famous with his 1972 talk, “Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”, and it has stuck in the popular consciousness ever since: the notion that small events in a chaotic system can build into large changes over time and space.

In large companies I frequently see a hope that a similar mechanism will occur when a security standard is published to the company SharePoint. As if the flapping of a PDF’s pages in London will, through a chaotic system, cause a cyclone over Bengaluru to rearrange source code to be more secure.

When we’re trying to change behaviours at scale we have to think about direct and indirect influence, and about the audiences we’re targeting with our activities. We may have a requirement to document and publish security standards, but let’s not fool ourselves that the audience for them is the hundreds or thousands of delivery teams worldwide, under constant pressure to deliver value, building and maintaining your complex business systems. Never mind getting those teams to engage with and understand your material - I’ve seen security teams get absolutely no engagement or feedback even from the expert working groups the organisation has set up to review and approve security standards. Of course the security team publishes anyway, in the absence of any review or approval from the wider business, because “otherwise we’d never publish anything!”. Not the most effective state of affairs.

If I may be permitted a small anecdote related to chaos theory, to lighten the mood and give the security standards people a chance to calm down. In 1998 I attended a lecture by Professor Benoit Mandelbrot at Cambridge University. He’d just released his first book applying the mathematics of fractals to finance, “Fractals and Scaling in Finance: Discontinuity, Concentration, Risk”, and gave a talk on his research to a lecture hall packed with the best and brightest mathematicians, physicists and assorted distinguished Dons that Cambridge could offer. As a 22-year-old doctoral student I was expecting a lot of it to go over my head, but he delivered a talk that felt like it came from another universe entirely. He was extremely enthusiastic and passionate, soaring over the world of mathematics and finance, and making references to “mild” and “wild” numbers. At the end of this wonderful but bizarre experience I sat, bewildered, as the host for the evening came out, thanked a slightly breathless Professor Mandelbrot, and asked if there were any questions from the audience. Professor Mandelbrot scanned the assembled genius of the Cambridge lecture hall with bright eyes and was met with downward gazes, shuffling feet and the occasional nervous cough. Every single person in the room was as bewildered as me! It was reminiscent of school days when a guest speaker had come in and, afterwards, a teacher desperately tried to elicit a question from the class; the host tried his best to encourage a question from someone, but to no avail. Professor Mandelbrot graciously departed the stage.

Professor Benoit Mandelbrot

Getting back on-topic, if you’re trying to change behaviours in a large organisation and want to be effective:

  1. Determine where the activities you’re trying to influence actually happen
  2. Exert your influence as close to that as possible
  3. Use direct influence over indirect (push, don’t rely on pull)
  4. Favour executable specifications and automation over documented processes

Let’s take an example. If you’re tasked with ensuring that application delivery teams work securely and meet the regulatory requirements of your organisation, you could write all the things they MUST do and MUST NOT do into a security policy, publish it to SharePoint and call it a day (no SHOULD or SHOULD NOT, please - if it’s not mandatory it’s not policy). This is unlikely to be effective: the delivery activities are happening elsewhere, it relies on people pulling the information, and it’s a document that requires human interpretation and individual decisions about what actions are required. Your outcome will vary across the organisation, from no change in behaviour to some change that differs with each team’s interpretation.

To ensure the outcome we want, across the whole organisation, we have to take a different approach. There are things that delivery teams need to know and need to be doing. If they don’t know these things and aren’t doing these things then they cannot safely implement change to your organisation’s systems. If they stop doing these things or, through changes to people on the teams, stop knowing these things then they cannot safely implement change to your organisation’s systems. What we’re defining here we could term a Licence to Operate:

A Licence to Operate embeds all the requirements your organisation has for a cross-functional delivery team to safely deliver change to production into the delivery process itself.

We definitely want to guarantee that all our developers are trained in secure coding practices and that every change is being peer reviewed. From PCI DSS requirement 6.3.2:

Code changes are reviewed by individuals other than the originating code author, and by individuals knowledgeable about code-review techniques and secure coding practices.

A very good thing to do, since 73% of all security defects are caused by programming error and peer review is so damn effective I can, and will, write an entire article on it alone. We could put this requirement into our security policy and hope, or we could check a training database when granting commit access to our version control system and enforce reviews for pull requests on all repos. If we need to annually refresh people’s secure coding knowledge we can script a check across all committer accounts in the version control system that sends notices when someone needs to update their training, and disables their account beyond a certain date if it’s not done.
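The annual-refresh check described above can be sketched in a few lines. This is a minimal illustration, assuming a training database reduced to a mapping of committer accounts to their last secure-coding training date; the thresholds and record shape are assumptions, and a real version would pull from your training system and version control APIs.

```python
from datetime import date, timedelta

TRAINING_VALID_DAYS = 365   # assumed annual refresh requirement
WARNING_WINDOW_DAYS = 30    # assumed notice period before expiry

def classify_committers(training_records, today):
    """Sort committers into ok / needs-notice / disable buckets based on
    when their secure-coding training expires."""
    ok, notify, disable = [], [], []
    for user, completed in training_records.items():
        expiry = completed + timedelta(days=TRAINING_VALID_DAYS)
        if today >= expiry:
            disable.append(user)   # training lapsed: revoke commit access
        elif today >= expiry - timedelta(days=WARNING_WINDOW_DAYS):
            notify.append(user)    # expiring soon: send a reminder
        else:
            ok.append(user)
    return ok, notify, disable

# Hypothetical training records: committer -> date training last completed.
records = {
    "alice": date(2024, 1, 10),
    "bob":   date(2023, 2, 1),
    "carol": date(2023, 7, 1),
}
ok, notify, disable = classify_committers(records, today=date(2024, 6, 15))
```

Run daily from a scheduler, the `notify` bucket feeds your reminder emails and the `disable` bucket feeds the account-suspension call - the enforcement lives in the pipeline, not in a policy document.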

Other requirements on delivery teams could be:

  • They have no known-vulnerable dependencies included in their system
  • The components being deployed have had a SAST/DAST/IAST check with no open Critical or High severity issues
  • The system being changed has an up-to-date entry in the organisation’s inventory management system with owners defined
  • There is a record of a manual penetration test having been performed within the last 12 months

And if you have people or groups within your organisation that are nervous about delivery teams being continuously responsible for the security of their own changes into production, that’s a sign they have requirements that need to be part of your Licence to Operate. Get them involved. Once their requirements are defined and implemented, everyone can have confidence that rapid delivery isn’t compromising quality.

Requirements can be checked by change control systems, deployment pipelines, or whatever part of your system delivers change into production. Yes, you will need structured data, and yes, you will need engineering effort to integrate these checks - if you’re concerned with application security at scale then both are fundamental to success.
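A pipeline gate built from the example requirements above might look like the sketch below. Every name, field and threshold here is illustrative - a stand-in for real scanner, inventory and pentest-record integrations - but the shape is the point: the Licence to Operate becomes a function the pipeline calls, returning the list of unmet requirements that block a release.

```python
from datetime import date

MAX_PENTEST_AGE_DAYS = 365  # assumed 12-month pentest window

def licence_to_operate(release, today):
    """Return the unmet requirements blocking this release.
    An empty list means the team holds its Licence to Operate."""
    failures = []
    if release["vulnerable_dependencies"]:
        failures.append("known-vulnerable dependencies present")
    if any(sev in ("Critical", "High") for sev in release["open_scan_findings"]):
        failures.append("open Critical/High SAST/DAST/IAST findings")
    if not release["inventory_entry"].get("owners"):
        failures.append("no owners recorded in inventory")
    if (today - release["last_pentest"]).days > MAX_PENTEST_AGE_DAYS:
        failures.append("manual penetration test older than 12 months")
    return failures

# Illustrative structured data a pipeline would assemble from its tooling.
release = {
    "vulnerable_dependencies": [],
    "open_scan_findings": ["Medium", "Low"],
    "inventory_entry": {"owners": ["payments-team"]},
    "last_pentest": date(2024, 1, 20),
}
failures = licence_to_operate(release, today=date(2024, 9, 1))
```

If `failures` is empty the deployment proceeds; otherwise the pipeline fails with a human-readable list of exactly which requirements the team needs to address.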