The months ahead are set to be challenging: the cost of living crisis is taking its grip on firms and customers alike, incoming regulatory changes such as the UK's Consumer Duty demand widespread cultural change, guidance and expectations on firms to support their customers through this period have been updated, and there is an increased focus overall on achieving good outcomes. Having legally appropriate policies and documentation is no longer enough; it is increasingly important for firms to demonstrate that they are consistently delivering good customer outcomes in practice.
Nearly all projects now involve aspects of technology, from new products and services to remediation and complaint handling. Whether the systems are customer-facing (apps and websites) or behind the scenes (Customer Relationship Management systems and infrastructure), the timescales and project milestones are often dictated or affected by the technology build requirements. This shift is evidenced by firms now using IT project methodologies (such as Agile and sprint-based delivery) to run projects whose focus is not primarily technological.
Understandably, testing is a key part of technology projects, ensuring that the ‘build’ does what it was designed to do. However, testing tends to measure the build against the design – putting a lot of faith in the original requirements.
User acceptance testing (UAT) and customer experience testing measure general usability and how a customer moves through a process. They can even improve the flow and the customer experience. Nevertheless, this focuses on the 'journey', not the 'destination'. Another way of looking at this 'destination' is as an outcome.
Outcome testing can be overlooked or deprioritised during a project. It is natural to assume that the intent of the project is to deliver a 'good' outcome and that the design had this in mind. As a project progresses, the objective, focus, and priority tend to shift to delivering the design, albeit without any intent to undermine the outcome. Can you still assume at the end of a project that the outcome is 'good'? A lot can happen in a project that impacts the final product, and it would be a leap of faith to assume that, just because the design intended good outcomes, the finished article delivers them. Good outcomes need to be evidenced and measured. Yet investment in the ability to objectively assess the intended journey through a customer lens is often comparatively low when weighed against the internal and commercial drivers to implement on time.
This is where Quality Assurance (QA) comes in. QA is a standard part of most firms' sales, operations, service, and complaints-handling functions. Through independent assessment of customer outcomes, QA provides an understanding of how effective systems, products, processes, or policies are in delivering the outcomes intended, avoiding reliance on the assumption that existing processes are effective or infallible.
Typically, QA of business-as-usual (BAU) activities is used to evaluate the output of established processes. However, it is important to undertake QA throughout the delivery of both new products and changes to existing ones, to determine whether individual milestones are not only meeting the design requirements but also delivering the original intent of a 'good' outcome. This can be achieved through continuous QA of core BAU activities, alongside theme-based reviews, which may be driven by reporting that prompts focus on a particular process, activity, or component.
If references to a 'good outcome' are included in policies and processes without a clear interpretation of what 'good' means, it can be difficult to demonstrate how the business arrives at an accurate and meaningful measurement of outcomes. It is easy for this to become ambiguous and focus on sentiment rather than a tangible deliverable with clearly defined tolerances and measurements. It may seem easier to focus on 'bad outcomes' and determine how to avoid them; however, can you confidently list them all and be sure you have covered every possibility? And are all outcomes either good or bad – what about a neutral outcome?
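To make this concrete, the following is a minimal sketch in Python of how a firm might express 'good' as a measurable standard with explicit tolerances, including room for a neutral result. The class name, metric, target, and tolerance are all hypothetical illustrations, not anything prescribed by the Consumer Duty.

```python
# Hypothetical sketch: encoding an outcome standard with an explicit
# target and tolerance, so "good" is a measurement rather than a sentiment.
from dataclasses import dataclass

@dataclass
class OutcomeStandard:
    name: str
    target: float     # the level that counts as a "good" outcome
    tolerance: float  # acceptable shortfall before the outcome is "poor"

    def assess(self, measured: float) -> str:
        if measured >= self.target:
            return "good"
        if measured >= self.target - self.tolerance:
            return "neutral"  # below target but within tolerance
        return "poor"

# Illustrative metric: share of customers completing a journey without
# needing to re-contact the firm (invented for this example).
standard = OutcomeStandard("first_time_resolution", target=0.95, tolerance=0.05)
print(standard.assess(0.97))  # -> good
print(standard.assess(0.92))  # -> neutral
print(standard.assess(0.80))  # -> poor
```

Framing the standard this way also answers the 'neutral outcome' question directly: a result below target but within tolerance is neither good nor bad, and can be monitored rather than escalated.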
Demonstrating both the intent to deliver good customer outcomes and their delivery in practice is a regulatory focus and a central part of the UK's new Consumer Duty.
In simple terms, the definitions of 'fair' and 'good' provide some insight into how to interpret them in the context of outcomes: 'fair' generally means acceptable or reasonable, whereas 'good' suggests a higher standard or level of quality. With this in mind, firms may want to develop principles they can apply to their specific products and services to create a clearer, consistent understanding of what 'good' looks like. These principles can then inform how outcomes are measured and reported.
To understand what impacts a customer outcome, you need to understand what can impact a customer. This can be one thing or a combination of things, including systems and technology, a process, a procedure, a script, a member of staff, or an advert or promotion. How customers interpret information, or a change in their circumstances, can influence the outcome at any point in time. Firms need to understand the factors that can impact outcomes, recognise that a broad range of customer characteristics can affect how a product or service performs, and reflect this in internal Management Information (MI) and reporting.
QA should be risk-based and proportionate. To achieve this, firms need to prioritise areas with a higher risk of poor outcomes: for example, where advice is provided, complaints handling, or the launch of new products or services. It is impossible to completely eliminate the risk of poor customer outcomes. The purpose of QA is to provide early sight of actual or potential risks, allowing the business to address issues, remediate where appropriate, and mitigate the risk of poor outcomes in the future.
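By way of illustration only, a risk-based approach might weight QA sampling so that higher-risk activities receive proportionately more review. The activities, weights, and budget below are invented for the example; a real framework would derive them from the firm's own risk assessment.

```python
# Hypothetical sketch: allocating a fixed monthly QA review budget across
# activities in proportion to a risk weighting, so higher-risk areas
# (advice, complaints, new launches) receive proportionately more scrutiny.

QA_BUDGET = 500  # total cases reviewed per month (illustrative figure)

# Illustrative weights; a real framework would derive these from the
# firm's own risk assessment rather than hard-coding them.
risk_weights = {
    "advised_sales": 5,
    "complaints_handling": 4,
    "new_product_launch": 4,
    "routine_servicing": 1,
}

total_weight = sum(risk_weights.values())
allocation = {
    activity: round(QA_BUDGET * weight / total_weight)
    for activity, weight in risk_weights.items()
}

for activity, n_cases in allocation.items():
    print(f"{activity}: review {n_cases} cases this month")
```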
QA outputs should categorise the findings in a manner that allows the business to understand whether policies and processes are being consistently applied and whether, when applied, they deliver good customer outcomes. Whilst there will always be an optimum 'pass mark', the reality is that the quality and consistency of activities will fluctuate. QA is therefore unlikely to always be 'green'; the purpose is to highlight where fluctuations occur so that the business can quickly address the causes. Assurance should mean the business can be confident it will be alerted to issues quickly and will have the right information to make informed decisions.
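As a sketch of how such fluctuations might be surfaced (the finding categories and thresholds below are assumptions for illustration, not regulatory values), QA case results can be rolled up into a red/amber/green status against the firm's chosen pass mark:

```python
# Hypothetical sketch: rolling individual QA case results up into a
# red/amber/green status against an assumed pass mark, so fluctuations
# in quality surface quickly rather than being lost in the detail.
from collections import Counter

# Each reviewed case is tagged with a finding category (all illustrative).
case_findings = [
    "good_outcome", "good_outcome", "process_not_followed",
    "good_outcome", "process_followed_poor_outcome", "good_outcome",
]

PASS_MARK = 0.90    # assumed target pass rate, set by the firm
AMBER_FLOOR = 0.80  # assumed threshold below which status turns red

counts = Counter(case_findings)
pass_rate = counts["good_outcome"] / len(case_findings)

if pass_rate >= PASS_MARK:
    status = "green"
elif pass_rate >= AMBER_FLOOR:
    status = "amber"
else:
    status = "red"

print(f"Pass rate {pass_rate:.0%} -> status: {status}")
print("Finding breakdown:", dict(counts))
```

Note that the breakdown distinguishes cases where the process was not followed from cases where it was followed but still produced a poor outcome; the two point to very different causes.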
Root Cause Analysis (RCA) is a window of opportunity if acted upon. The purpose of RCA is to allow firms to identify and address the cause of failings or issues, not just the symptom. For example, if a firm receives an increased number of complaints about waiting and response times when customers try to make contact, it can easily address the waiting times and abandoned calls by increasing the number of customer service agents. However, the real issue is why so many more customers need to contact the customer services team in the first place.
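Continuing the example, a simple first step in RCA is to aggregate contacts by underlying reason rather than by symptom. The contact reasons and counts below are invented purely to illustrate the idea:

```python
# Hypothetical sketch: grouping inbound contacts by underlying reason so
# the driver of a volume spike is visible, not just the symptom of long
# waiting times. The reasons and counts are invented for illustration.
from collections import Counter

contact_reasons = [
    "statement_unclear", "statement_unclear", "payment_failed",
    "statement_unclear", "app_login_issue", "statement_unclear",
    "payment_failed", "statement_unclear",
]

by_reason = Counter(contact_reasons)
total = sum(by_reason.values())

print("Contact drivers (largest first):")
for reason, count in by_reason.most_common():
    print(f"  {reason}: {count} ({count / total:.0%})")
# If one driver dominates (here, an unclear statement), fixing it at
# source reduces contact volume; adding agents only treats the symptom.
```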
Undertaking and reporting RCA does not in itself achieve the requirements set out by the regulator. In fact, failing to act on the issues identified via RCA could undermine confidence in the firm's intent to achieve good customer outcomes, as a failure to act may result in more customers receiving poor outcomes.
The critical part of an effective QA framework is not just the outputs it generates, e.g. the data and MI, but how these are presented and acted on. MI and reporting should provide a succinct, current snapshot of both good practice and areas of focus where there are indicators of inconsistent outcomes. However, this can only be effective if the information is reviewed, challenged, and used to drive action at a senior level. Firms can often point to extensive QA MI and reporting, but cannot produce the same level of evidence that action was taken promptly and that improvements, where appropriate, were made and were effective.
In summary
Good customer outcomes cannot be assumed simply because a design intended them. Firms should define clearly what 'good' means for their products and services, test for outcomes (not just the build or the journey) throughout both change delivery and BAU activity, apply QA on a risk-based and proportionate basis, act on the findings of root cause analysis, and be able to evidence at a senior level that MI has driven prompt and effective action.
Authored by Mark Aengenheister, Caroline Walters, Ben Goodman and Nick Oxley.