Creating Custom Alert Notifications

When it comes to being notified about your issues, there’s a wide range of options between notifying all of your group’s team members about every issue instance and notifying them only once, when an issue is first seen. Ideally, when an issue occurs, you want the right people to know about it in real time. To achieve that, create custom Alert Rules in your group settings that define who to notify, about which issues, when, and how, ensuring that issues get the right attention from the relevant team members.

Alert Rules

Alert Rules let you notify developers of inbound issues through any channel or tool your team uses. Alert rules are configured per group under [Group Settings] > Alerts > Rules, where you’ll see a list of all active rules and can add new rules or modify existing ones.

Alert rules consist of:

  • A set of self-explanatory Conditions connected in an ALL, ANY, or NONE relation.
  • The Environments (where your code is deployed and running) to which you’d like the alert rule applied.
  • The Actions that should be taken when the associated conditions are met — mainly, where the alert should be routed.
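As an illustration only (not Airlock’s actual implementation), the ALL/ANY/NONE relation between conditions could be sketched like this, assuming conditions are simple predicates over an event dictionary:

```python
# Sketch: combining alert-rule conditions with ALL / ANY / NONE.
# The event shape and condition predicates are illustrative assumptions.

def evaluate_conditions(conditions, event, relation="ALL"):
    """Return True if the event satisfies the conditions under the relation."""
    results = [condition(event) for condition in conditions]
    if relation == "ALL":
        return all(results)
    if relation == "ANY":
        return any(results)
    if relation == "NONE":
        return not any(results)
    raise ValueError(f"Unknown relation: {relation}")

# Example conditions expressed as plain predicates over an event dict.
is_production = lambda e: e.get("environment") == "production"
is_error_level = lambda e: e.get("level") == "error"

event = {"environment": "production", "level": "warning"}
evaluate_conditions([is_production, is_error_level], event, "ALL")  # False
evaluate_conditions([is_production, is_error_level], event, "ANY")  # True
```

The same set of conditions can thus produce different alerting behavior depending on the relation you pick.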

Creating a good set of alert rules means:

  1. Identifying the critical show-stopping issues and routing them appropriately (to PagerDuty for instance) so they can be resolved by the right people as soon as possible.
  2. Maintaining awareness and visibility of all other issues, in terms of where the alerts are routed (mail, specific Slack channels) and at what frequency.

Generally, we recommend you configure these rules and fine-tune them as you go, adapting to your team’s workflows and preferences. With that said, there are some common best practices that you should consider.

Selecting Conditions

The New Alert Rule wizard allows you to compose alert rules combining multiple conditions. These conditions are largely self-explanatory and rely on event Tag and Attribute values, event Frequency, and Issue State changes, allowing you to combine the conditions below to fit your specific use case.

Event Tags

Tags are key-value pairs that Airlock assigns to each event. Some tags are added by default, depending on the platform and type of SDK. Developers can also add Custom Tags through the SDK. You can find the list of tags available in your group under [Group Settings] > Tags. The list is an aggregation of all of the tag keys (default and custom) that have been encountered in events in this group. For more information, see Tagging Events.

Event tags are useful for several reasons. First, tags are indexed, which allows you to query your issues in the Airlock Event Stream and Issue Stream by specific tag values. In addition, adding the right tags in your code lets you tell Airlock to notify you only when events with specific details occur.
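A custom tag is ultimately just a key-value pair attached to the event payload. As a hypothetical sketch (the `set_tag` helper below is an assumption; check your platform’s Airlock SDK docs for the real tagging API):

```python
# Sketch of attaching custom tags to an event payload.
# The set_tag helper is hypothetical -- real Airlock SDKs expose their
# own tagging API for the platform you're using.

def set_tag(event, key, value):
    """Attach a key-value tag to an event, as an SDK might."""
    event.setdefault("tags", {})[key] = value

event = {"message": "Checkout failed"}
set_tag(event, "customer_type", "enterprise")
set_tag(event, "app_section", "billing")
# event["tags"] now holds both custom tags alongside any default ones.
```

Tags added this way become queryable and usable in alert-rule conditions once they appear on events in the group.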

To create a rule based on a tag’s value, select the condition:

An event's tags match {key} {match} {value}

A tag-based condition can be used as the sole condition for an alert rule or in conjunction with additional conditions to fine-tune the rule:

In this rule we’re leveraging custom tags that our developers added through the Airlock SDK, telling Airlock to notify us via our #Airlock-urgent Slack channel when Enterprise customers are experiencing issues in critical parts of our application.
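The tag-matching condition itself can be sketched as a small predicate (an illustration under assumptions: the match operators shown are a representative subset, not Airlock’s full list):

```python
# Sketch: "An event's tags match {key} {match} {value}".
# The match operators here are an illustrative subset (assumption).

def tag_condition(event, key, match, value):
    """Return True if the event's tag `key` satisfies `match` against `value`."""
    tag_value = event.get("tags", {}).get(key)
    if tag_value is None:
        return False  # untagged events never match
    if match == "equals":
        return tag_value == value
    if match == "contains":
        return value in tag_value
    if match == "starts with":
        return tag_value.startswith(value)
    raise ValueError(f"Unsupported match: {match}")

event = {"tags": {"customer_type": "enterprise",
                  "app_section": "billing-checkout"}}
tag_condition(event, "customer_type", "equals", "enterprise")  # True
tag_condition(event, "app_section", "contains", "billing")     # True
```

Combining two such tag conditions in an ALL relation is exactly how a rule like the enterprise-customers example above narrows alerts to critical parts of the application.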

Event Attributes

The alert rule system in Airlock is capable of picking out attributes from an event’s payload. There are 15 different kinds of attributes that a rule can target, including the Issue Message, Issue Type, the Platform, http.method, stacktrace.filename, and others.

To set an attribute-based condition in your rule, select the condition:

An event's {attribute} value {match} {value}

In this example, any event with type “SubscriptionError” indicating an issue in the billing flow will be routed directly to PagerDuty for our on-call engineer to handle.

You can also set up rules that account for multiple different attributes at once and chain that logic together.

In this more advanced example, a notification containing the message, platform, and type attribute values gets routed to the Android Dev team in Slack if a Runtime issue with the message Failed to Reload Index comes in from our Java platform.
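Chaining attribute conditions works the same way as tag conditions, just against top-level event fields. A minimal sketch mirroring the example above (the attribute names and event shape are illustrative assumptions):

```python
# Sketch: chaining attribute conditions in an ALL relation.
# Attribute names and the event shape are illustrative assumptions.

def attribute_condition(event, attribute, match, value):
    """Return True if the event's `attribute` satisfies `match` against `value`."""
    actual = event.get(attribute)
    if match == "equals":
        return actual == value
    if match == "contains":
        return actual is not None and value in actual
    raise ValueError(f"Unsupported match: {match}")

event = {
    "message": "Failed to Reload Index",
    "platform": "java",
    "type": "Runtime",
}

# All three attribute conditions must hold for the rule to fire.
should_alert = all([
    attribute_condition(event, "message", "equals", "Failed to Reload Index"),
    attribute_condition(event, "platform", "equals", "java"),
    attribute_condition(event, "type", "equals", "Runtime"),
])
```

If any one of the three checks fails, the combined ALL relation prevents the alert from firing.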

Event Thresholds

Often, it’s necessary to create a rule based on a frequency threshold to help determine the significance of an issue’s impact and escalation priority. Among other use-cases, threshold conditions can be set in conjunction with tag and attribute based conditions to indicate a spike in issues in a certain environment, release, or page in your app or package in your code.

Airlock provides two threshold conditions, based either on event occurrences or on the number of affected users:

  • An issue is seen more than {value} times in {frequency}
  • An issue is seen by more than {value} users in {frequency}
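Conceptually, the occurrence-based threshold behaves like a sliding-window counter over event timestamps. A minimal sketch (an illustration only, not Airlock’s actual storage or counting model):

```python
# Sketch: "An issue is seen more than {value} times in {frequency}".
# A simple sliding-window counter -- an illustration, not Airlock's model.
from collections import deque

class FrequencyThreshold:
    def __init__(self, max_events, window_seconds):
        self.max_events = max_events
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record(self, now):
        """Record an event at time `now`; return True if the threshold is exceeded."""
        self.timestamps.append(now)
        # Drop events that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_events

threshold = FrequencyThreshold(max_events=3, window_seconds=60)
fired = [threshold.record(t) for t in (0, 10, 20, 30)]
# fired == [False, False, False, True] -- the 4th event within a minute trips the rule
```

The user-based variant works the same way, except it counts distinct affected users in the window rather than raw event occurrences.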

Issue State Changes

Every Issue in Airlock has a defined state: Unresolved, Resolved, or Ignored. Read about Issue States for more information.

1. Regression Alert

When Airlock captures new event instances of an Issue that was previously marked as Resolved, it changes the issue state back to Unresolved. By default, Airlock applies the regression workflow and notifies all group team members about the regression through mail. However, when it comes to regressions, you may want to put specific notifications in place. To set up a regression rule, use the condition:

An issue changes state from resolved to unresolved

For instance, set up two regression alert rules to:

  • Notify your on-call personnel via PagerDuty when a regression is identified in your production environment.
  • Notify your engineering team via a Slack channel when a regression is identified in any environment.
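A state-change condition can be sketched as a check on the issue’s old and new states (the state names come from the section above; the transition-check helper itself is an illustrative assumption):

```python
# Sketch: firing on an issue state transition such as resolved -> unresolved.
# The transition-check helper is an illustrative assumption.

VALID_STATES = {"unresolved", "resolved", "ignored"}

def state_change_condition(old_state, new_state, from_state, to_state):
    """Return True when an issue transitions from `from_state` to `to_state`."""
    if old_state not in VALID_STATES or new_state not in VALID_STATES:
        raise ValueError("Unknown issue state")
    return old_state == from_state and new_state == to_state

# A resolved issue that receives new events becomes unresolved -- a regression.
state_change_condition("resolved", "unresolved", "resolved", "unresolved")  # True
# An ignored issue resurfacing is a different transition and does not match.
state_change_condition("ignored", "unresolved", "resolved", "unresolved")   # False
```

The same check with `from_state="ignored"` covers the unresolved-issue reminder described next, so one predicate shape serves both state-based rules.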

2. Unresolved Issue Reminder

Airlock users can choose to ignore specific Issues in their issue stream for a defined threshold of time, occurrences, or number of affected users.

To set up a reminder that a previously Ignored issue has become Unresolved again, use the condition:

An issue changes state from ignored to unresolved

In this rule we’re sending a “reminder” alert via Slack to the #Airlock-backlog channel and a direct message to the group manager @neil.


Routing Alert Notifications