Short: Evaluate New Rules Manually

This post discusses how to handle requests for new automation rules. I’ve written it from the perspective of the development team, but it’s even more valuable if you are the person making the request, or paying for it.

At my current company, we monitor a large fleet of IoT devices. Part of keeping all those devices working well is automatic monitoring for edge cases.  Requests to monitor for new edge cases occur almost weekly.  Of course, requests to automate small rules aren’t limited to IoT companies.  They crop up in organizations and projects large and small, particularly when a workflow is involved.

Requests often start out something like this:

“Last month two users noticed [SomethingUnexpected]. We should have a rule that detects that. I think checking for X > Y would do it.”

Being a helpful person, you get to work.

Too Fast

You schedule the work for the next sprint – or just do it the next day since it’s a simple rule.  You’re happy and whoever made the request is happy.  

However, a few months later you get a similar request. Didn’t you already solve this problem?

You investigate and discover that either:

  1. The rule wasn’t selective enough – it generates too many false alarms. No one pays attention to them anymore.
    or 
  2. The rule wasn’t sensitive enough – it’s missing real cases.

Too Slow

Next time, you’re aware of that risk, so you plan to allocate time to test and refine the rule. Unfortunately, with that extra scope, many of these requests now go into the endless abyss of the nice-to-have backlog.  They are great ideas, but you just can’t allocate the time to do them properly right now.

This approach isn’t wasting time on bad rules, but it’s not solving the problem either.

Just Right

The third time, however, is a charm. Instead of scoping a full implementation, you figure out a quick way to evaluate the rule manually. Perhaps it’s a query you can run, or 5 minutes in Excel. Then you do it. Manually.  Once per day, or once per week.  When the ‘rule’ triggers, you act on it the same way you would have if it had been automated. You learn.
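If the rule really is as simple as the ‘X > Y’ check from the request, the manual version can be a handful of lines you run by hand. Here’s a minimal Python sketch, assuming a hypothetical devices.csv export with device_id, x, and y columns – the file name and columns are stand-ins for whatever data source you already have:

    # manual_rule_check.py - run by hand once per day or week; nothing is scheduled.
    import pandas as pd

    # Hypothetical export of the latest device readings. Adjust the path and
    # column names to match whatever query or report you already have.
    readings = pd.read_csv("devices.csv")  # columns: device_id, x, y

    # The candidate rule from the request: flag devices where X > Y.
    flagged = readings[readings["x"] > readings["y"]]

    # Follow up on each hit exactly as you would for an automated alert,
    # and keep a note of how many turned out to be real problems.
    print(f"{len(flagged)} device(s) triggered the rule:")
    print(flagged[["device_id", "x", "y"]].to_string(index=False))

The code isn’t the point – acting on each hit and recording whether it was a real problem is, because that’s exactly the feedback the automated version will need.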

If the rule isn’t selective or sensitive enough, you find out quickly and at little cost. You can iterate and converge on a solution that works with minimal effort.  Once you understand the problem, it’s easy to implement the production version, and you know it’s worth the effort. If the underlying issue turns out to be too infrequent to justify a good solution, you have evidence to support dropping the request.

Your co-workers looking for automation are happy because you took their request seriously. The high-value requests get turned into code in a timely manner, and the ideas that don’t pan out can be discarded quickly. You’re officially living the dream with sunshine and unicorns. (Ok, perhaps I went overboard there, but it’s pretty good.)

Limits

This is a great approach for rules that interact with humans in a workflow.  In these situations it’s important to try the rule out and get feedback.  In cases where you have historical data and can evaluate the correctness of the rule objectively, it’s generally better to evaluate your new rule that way: you will get results faster and with less effort from other teams.
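For example, if you already have incident records to compare against, a rough backtest might look like this. The history.csv file and its was_real_problem column are assumptions standing in for your own labeled data:

    # backtest_rule.py - score a candidate rule against labeled historical data.
    import pandas as pd

    # Hypothetical history: one row per device per day, with the measurements
    # plus a was_real_problem flag filled in from past incident records.
    history = pd.read_csv("history.csv")  # columns: device_id, x, y, was_real_problem

    predicted = history["x"] > history["y"]
    actual = history["was_real_problem"].astype(bool)

    true_hits = (predicted & actual).sum()
    false_alarms = (predicted & ~actual).sum()   # rule fired, but nothing was wrong
    missed_cases = (~predicted & actual).sum()   # real problem, but the rule stayed quiet

    print(f"true hits:    {true_hits}")
    print(f"false alarms: {false_alarms}  (too many means the rule isn't selective enough)")
    print(f"missed cases: {missed_cases}  (too many means the rule isn't sensitive enough)")

A few numbers like these usually settle the selectivity and sensitivity questions faster than weeks of manual checks.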

Put it into action

  1. Next time you get a request for a new rule, run it manually a few times before you commit to code. 
  2. Treat the results of manual execution the same way you would have treated the final version. 
  3. Ask your customer how well the rule is working. Did it deliver value?
  4. Implement, iterate, or abandon, based on the feedback.

Whoever made the request will be happy with the quick results, and you’ll get to validate the new rule with minimal time invested.

Discuss

Use the comments section below to share your successes and let me know what you think. Do you have some great ideas I didn’t mention above? 

If you know someone who would benefit, share this post using the buttons below.

