All audit findings are meant to be based on reported exceptions, so why should assigning differing significance or priority to them be so important a matter that it requires the adoption of a formal mechanism? Is it because entities need to be able to choose how to allocate limited resources for taking action, or because certain actions must be taken earlier than others? Certainly not. If that were the reason, prioritizing the actions and their timelines would be the better approach, rather than prioritizing the findings!
Maybe it’s the “oomph” factor that comes with seeing findings displayed with ratings. This factor certainly entices auditors more than clients, except when the findings are largely rated Low, in which case the clients would like it more. However, clients are also likely to feel good about findings carrying no significance ratings at all. Because in that case they could rate them all as “Low”!
But that is not the case with those charged with governance: the Board. The Board is where internal audit reports, or should report (to be independent and effective), and it is not interested in either stance on the rating of internal audit findings. The Board wants findings rated on the basis of a formal, well-defined mechanism, so that they truly reflect the risks they represent and so that the appropriateness and adequacy of corrective actions can be ascertained and decisions taken.
However, the Board is only one of the two stakeholders here, the other being the internal auditors; only they can ascribe ratings to their own findings! And internal auditors (professionally qualified ones, at least) are guided (read: governed) by the Global Internal Audit Standards (GIAS) in this regard. Standard 14.3 of GIAS requires the evaluation of internal audit findings to determine the significance of the risks they represent. It goes on to recommend that internal auditors have a formal mechanism for assigning significance to, and prioritizing, audit findings.
Such a mechanism essentially means that findings need to be rated on their merits, i.e., on the differing significance of the issues they represent. Though the issues unearthed by auditors are all exceptions (recommendations for further improvement aside), they cannot all be subjected to a single, standard assessment of significance.
The reason is obvious. If, for instance, the findings pertain to control lapses, not all controls mitigate significant risks: think of a missed sign-off on a document when the sign-off of the next higher authority is present. Similarly, if a finding highlights procedural non-compliance, not all procedural protocols are equally significant: think of a missing hard-copy record in a file when its original soft copy is available.
Hence the strongest case for rating audit findings: the risks they highlight, whether already materialized or yet to materialize, are all rated within the risk inventory. So, if the risks are rated, the findings pertaining to them must also be rated.
Now the next question: since the risks are rated against a formal heat map, should findings also be rated against that heat map? In principle, this seems quite right and easy to comprehend; audit findings should be rated in proportion to the ratings of the risks they represent. In practice, too, it might hold true in many instances, but never in all instances!
Why so? Let’s find out!
| Components of Auditor’s Evaluation | How it impacts findings ratings |
| --- | --- |
| Risk management program | If, in the internal auditor’s assessment and evaluation, the risk management program is not competent, appropriate and adequate for the client’s requirements, the auditor’s identification and assessment of risks will differ from the client’s, resulting in audit findings carrying ratings different from those assigned to the risks. |
| Risk ratings, heat maps, capacity and tolerance | The internal auditor’s evaluation of specific risk ratings, of the heat maps through which those ratings are assigned, of risk capacity (the maximum risk the entity can absorb) and of risk tolerance (the risk the entity is willing to absorb) might differ from the client’s evaluation and determinations. As a result, the audit findings will carry ratings different from those assigned to the risks. |
| Control design adequacy and sufficiency | In the internal auditor’s evaluation, the controls designed to mitigate specific risks might not be fit for purpose: their design might be inadequate or ineffective for the risk they are meant to mitigate, the controls might be redundant, the process in which a control is embedded might have evolved such that the control is no longer effective, the risk itself might have evolved, and so on. As a result, the audit findings will carry ratings different from the risk ratings. |
| Controls non-compliance | Seemingly straightforward, yet the auditor’s assessment of the impact of non-compliance with a control protocol might differ from the documented risk that the control is designed to mitigate. |
| Governance issues | The internal auditor’s assessment of the governance philosophy and systems, and of the tone at the top, might yield a different assessment of the significance of audit findings than the risk ratings would suggest. |
It is through these evaluations, and not just through the specific audit findings, that internal auditors arrive at ratings. The ratings are then assigned on the basis of probabilities and impacts.
The probabilities roughly reflect the volume / frequency of the issue identified. At the level of an individual finding, this might be based on the sample size used for testing the population and the resulting projections; on an overall basis, it would reflect whether the issue is systemic, plausible but remote, or isolated!
The impacts mean the value / exposure of the issue identified. Again, at the individual finding level this might simply require projection over the population tested through a sample, but on an overall basis it would reflect the exposures from other areas linked with the issue identified, exposures carried forward to or brought forward from other periods, and even the inability to assess the exposure accurately and completely!
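As a rough numerical sketch of what such projections might look like, consider the following. The population, sample size, exception count and exposure figures are invented purely for illustration; they are not drawn from any standard or engagement:

```python
# Hypothetical projection of probability (frequency) and impact (exposure)
# from sample test results. All figures are assumptions for illustration.

population_size = 1_200        # e.g., invoices processed in the period
sample_size = 60               # items actually tested
exceptions_found = 5           # control failures observed in the sample
exposure_in_sample = 18_000    # monetary value attached to those exceptions

# Probability side: observed exception rate, projected over the population.
exception_rate = exceptions_found / sample_size
projected_exceptions = round(exception_rate * population_size)

# Impact side: average exposure per exception, projected over the population.
average_exposure = exposure_in_sample / exceptions_found
projected_exposure = average_exposure * projected_exceptions

print(f"Exception rate in sample: {exception_rate:.1%}")
print(f"Projected exceptions in population: {projected_exceptions}")
print(f"Projected exposure: {projected_exposure:,.0f}")
```

The projected frequency and exposure would then be read against the probability and impact scales of the matrix discussed below, tempered by the broader considerations just described.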
The control environment significantly influences the assessment of probabilities and impacts. If the people responsible for exercising the controls have no idea what the control objectives are, what gets mitigated by exercising them, or what the consequences of a control failing are, the rating could hardly be anything but High!
Once the probabilities and impacts are attributed values, a risk matrix can be formed against which the audit findings can be mapped and rated. Owing to the entity-wide scope of internal auditing, the risks highlighted by findings also need to be classified for ease of use and standardization. Since risk is defined as the effect of uncertainty on an objective, a classification of objectives may as well be used.
This is done either by using the same classifications as the risk universe (ideally), by adopting an available framework such as ERM, or through an indigenous development, which should at least ensure that all the potential risks the findings highlight pertain to a classification from within the risk universe. However, given internal auditors’ thirst for value addition, a new, previously unknown classification cannot be ruled out!
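To make the mapping concrete, here is a minimal sketch of a probability-by-impact matrix and of a finding tagged to a risk-universe classification. The scale labels, matrix cut-offs, categories and the example finding are all assumptions made for illustration, not taken from GIAS or any particular framework:

```python
# Illustrative 3x3 probability/impact matrix and a finding tagged to a
# risk-universe classification. Scales and thresholds are assumptions.

PROBABILITY_LEVELS = ["Remote", "Possible", "Likely"]   # index 0..2
IMPACT_LEVELS = ["Minor", "Moderate", "Major"]          # index 0..2

# Rows = probability, columns = impact; cells hold the finding rating.
RISK_MATRIX = [
    ["Low",    "Low",    "Medium"],   # Remote
    ["Low",    "Medium", "High"],     # Possible
    ["Medium", "High",   "High"],     # Likely
]

def rate_finding(probability: str, impact: str) -> str:
    """Map a finding's assessed probability and impact to a rating."""
    p = PROBABILITY_LEVELS.index(probability)
    i = IMPACT_LEVELS.index(impact)
    return RISK_MATRIX[p][i]

# A hypothetical finding classified against a risk-universe category.
finding = {
    "title": "Payments released without second sign-off",
    "risk_classification": "Financial / Operational",  # from the risk universe
    "probability": "Likely",    # assessed as systemic by the auditor
    "impact": "Moderate",       # projected exposure within tolerance bands
}

finding["rating"] = rate_finding(finding["probability"], finding["impact"])
print(f'{finding["title"]}: {finding["rating"]}')   # -> High
```

In practice, the matrix dimensions, labels and cut-offs would mirror whatever heat map the entity’s risk function already uses, subject to the auditor’s own evaluation of that heat map, as discussed in the table above.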
In summary, therefore, while the intent is to make the process of assigning ratings to audit findings as objective as possible, the numerous other factors discussed above mean that the auditor’s judgment remains an essential part of the process.
But where exactly does the client’s opinion come in, or matter, while rating the audit findings? And why do clients think they can even question the ratings?
Maybe because they believe that, if not for their work, the auditors could not make a living. Well, if not for the auditors, how would they know how well, or whether at all, they are in control of that work, or where their goalposts should be?
So, dear clients, question anything and everything you want. Our default ratings will be ‘HIGH’, since we ought to respond in kind!