Data quality is not an IT issue, but a boardroom issue

In many organisations, data quality is still seen as an IT issue. After all, that is where the systems, dashboards and reports that process data are located. So when figures do not match up or analyses raise questions, people often automatically look to IT.

Yet this is a persistent misconception. Data quality problems rarely stem from technology. They arise from inconsistent processes, unclear definitions and the way different departments handle information.

Unfortunately, data is still often seen as something technical and complex, something that specialists should deal with. But data quality touches too closely on the core of how an organisation operates to be the responsibility of a single department.

Poor data often remains invisible

A key reason why data quality remains under the radar for so long is that poor data is rarely immediately visible. There is no physical mess on the shop floor, and staff usually only notice the problem when the data is actively being used.

As long as information remains in systems without being actively managed, there appears to be little cause for concern. The risk arises when organisations start using dashboards, reports and KPIs to make decisions.

At that point, data begins to influence strategy, operations and financial results. And with that, its reliability suddenly becomes far more important.

Professional dashboards do not automatically provide reliable insights

More and more organisations are using dashboards and management reports to guide their operations. Visualisations make complex information clear and help managers make decisions more quickly.

The risk is that the quality of the underlying data is rarely questioned.

A dashboard may look professional, whilst the data on which it is based is incomplete, inconsistent or out of date. The form inspires confidence, whilst the content does not always warrant it.

Managers then make decisions based on figures that appear reliable, but whose quality is insufficiently clear.

The consequences of poor data compound over time

In the world of data, people often talk about the ‘garbage in, garbage out’ principle. If the input is incorrect, the results will be too.

But the problem doesn’t stop there.

Today’s output forms the input for new decisions, new processes and new reports. As a result, incorrect assumptions continue to affect an organisation over the long term.

Decisions based on unreliable data increase the risk of poor investments, strategic errors or reputational damage.

Data quality arises from processes

When organisations have doubts about their data, they often look for technological solutions. In practice, however, the problem lies more in processes and governance.

Data is created the moment an employee – and nowadays, customers too – records information: when an order is entered, a customer is added or a deal is recorded. If those processes are not clearly defined or definitions vary from department to department, inconsistencies inevitably arise.

When is a deal actually closed?
When is a customer considered active?
Which revenue is included in reports?
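A small sketch makes the problem concrete. The customer records, dates and definitions below are hypothetical, but they show how two departments applying their own definition of 'active customer' to the same data arrive at different numbers:

```python
from datetime import date

# Hypothetical customer records: (name, last_order_date, has_open_contract)
customers = [
    ("Acme",  date(2023, 12, 10), True),
    ("Beta",  date(2023, 6, 1),   True),
    ("Gamma", date(2024, 3, 5),   False),
]

today = date(2024, 4, 1)

# Sales' definition: a customer is active if they hold an open contract.
active_sales = [name for name, _, open_contract in customers if open_contract]

# Marketing's definition: active means an order within the last 90 days.
active_marketing = [
    name for name, last_order, _ in customers if (today - last_order).days <= 90
]

print(active_sales)      # ['Acme', 'Beta']
print(active_marketing)  # ['Gamma']
```

Both counts are 'correct' under their own definition, which is precisely why a shared, agreed definition matters more than any technical fix.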

Data quality becomes more complex when departments are held accountable for different objectives.

Sales is often rewarded based on the number of deals closed. Finance looks at margins and payment terms. Marketing focuses on growth and leads. These priorities are understandable, but can clash in practice.

For example, a deal may look excellent in a sales report, whilst the financial terms are unfavourable for the organisation. Sales hits its target, but Finance has to deal with the consequences later.

Data quality requires leadership and ownership

Improving data quality therefore does not start with new technology, but with leadership.

Leadership means that executives not only recognise that data is important, but also pay attention to its reliability. This requires clear agreements on definitions, processes and ownership within the organisation.

A practical step could be to include in board reports not only KPIs, but also insights into the quality of the underlying data. Not as a simple judgement such as ‘good’ or ‘bad’, but rather as context: how much data underpins a report, what checks have been carried out and how reliable the information is expected to be.
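What such context might look like in practice can be sketched in a few lines. The order records and field names below are invented for illustration; the point is that the same data that produces the KPI can also produce its quality context: record count, completeness and freshness, rather than a good/bad verdict:

```python
from datetime import date

# Hypothetical order records feeding a revenue KPI; None marks a missing amount.
orders = [
    {"amount": 120.0, "recorded": date(2024, 3, 28)},
    {"amount": None,  "recorded": date(2024, 3, 30)},
    {"amount": 75.5,  "recorded": date(2024, 4, 1)},
]

# The KPI itself: total revenue over records with a known amount.
revenue = sum(o["amount"] for o in orders if o["amount"] is not None)

# Quality context reported alongside the KPI, not a verdict.
complete = sum(1 for o in orders if o["amount"] is not None)
quality_context = {
    "records": len(orders),
    "completeness": f"{complete / len(orders):.0%}",
    "latest_entry": max(o["recorded"] for o in orders).isoformat(),
}

print(revenue)          # 195.5
print(quality_context)  # {'records': 3, 'completeness': '67%', 'latest_entry': '2024-04-01'}
```

A board member seeing '67% completeness' next to a revenue figure reads that figure very differently from one seeing the number alone.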

This helps directors to look not only at the outcome, but also at its reliability.

Data quality starts in the boardroom

The question of when data is 'good enough' rarely has a single universal answer. It depends on the objective, the type of decision and the risk an organisation is prepared to take.

What is clear, however, is that data quality only improves when organisations explicitly assign ownership. As long as no one is responsible for definitions, checks and agreements, the issue remains stuck between departments.

And that is precisely why data quality is ultimately not an IT issue.

It is a subject that belongs at the level where decisions are made about strategy, risk and continuity.

In the boardroom, therefore.
