Catch-22: KPMG and the Inhouse Data Challenge

So, here is the catch: everyone is telling inhouse legal teams to change (usually via the use of new technology), yet to do that effectively they first need to collect data on how they work, and they don’t have the systems in place to do that. Hence, it’s a Catch-22 challenge.

Alistair Griffin, KPMG’s Head of Legal Technology in Australia, reckons they’ve got a solution: a highly structured assessment process that allows the Big Four firm to help inhouse legal teams know the ‘lay of the land’ before attempting to fix anything by throwing a CLM, or other tech solution, at problems that are not yet fully understood, because there is insufficient information to describe them.

‘We have been working with clients to break this cycle. That’s why KPMG has developed the Legal Function Assessment Suite (LFAS). Its key aim is to deliver sufficient quantitative and qualitative data to change the way legal teams transform in the most user-friendly way we can. Our hybrid approach combines automated intelligence with human specialists to deliver results smarter, easier, and faster than traditional consulting alone,’ he explained.

This approach has grown out of the global firm’s Legal Operations and Transformation Services (LOTS) group – (KPMG understandably likes acronyms). And the fact that they wanted to tell Artificial Lawyer all about it indicates that this is not an approach they are keeping just in Australia.

All well and good, but what does this approach consist of? Here is what the Big Four firm says.

‘KPMG’s Legal Function Assessment Suite (LFAS) is a set of tools which includes:

1. Target Operating Model (TOM) Diagnostic – gathering perceptions from across legal and the organisation against the pillars of the TOM, including people, functional process, service delivery model, performance data and insights, technology and governance.

2. Legal Activity Estimator – to understand how legal spend their day (without relying on timesheets), including where demand is coming from and the most common activities.

3. Maturity and Benchmarking analysis – assess your performance against best practice and uncover findings hidden across the legal team.

4. Roadmap of recommendations – insights-backed decisions on what to focus on, how to prioritise initiatives, and the data to build the case for change.’

And this makes a lot of sense. It’s an effort to get a real-world insight into what happens inside a legal team without having to force inhouse lawyers to concoct data out of timesheets (see below for the problems with those) or from existing systems that were never made for this purpose.

Also commenting on the offering, Jason McQuillen, the Head of LOTS, said: ‘We’re seeing inhouse legal teams really embrace the challenge of changing their ways of working to meet increasing demand. But many are unable to make the business case for the tools they really need, or are otherwise going in blind. That’s because they don’t have objective data on the cost and value of their activities.’

The firm added that getting inhouse lawyers to fill in timesheets as a kind of special homework project was not a great way to go – largely because many lawyers go inhouse to escape timesheets in the first place. Another problem that this site can see is: what taxonomy to use? E.g. you have 100 inhouse lawyers and ask them to explain on an Excel sheet what they do all day. If you demand they use numbers and tags, every person will have a different way of explaining what they do, i.e. you’ll get 100 timesheets with 100 different taxonomies – in short, an impossible-to-rationalise mass of unstructured data.
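To see why free-text activity logs resist aggregation, here is a minimal sketch (in Python, with entirely hypothetical log entries and activity codes – KPMG’s actual tooling is not described at this level of detail). Three lawyers record the same type of work in their own words, producing three incompatible categories; mapping entries to a small controlled taxonomy first is what makes counting possible at all:

```python
from collections import Counter

# Hypothetical entries: three lawyers logging the SAME activity in their own words.
raw_logs = [
    "NDA review",            # lawyer 1
    "Reviewed confi agrmt",  # lawyer 2
    "Contracts - misc",      # lawyer 3
]
# Counting the raw strings yields three buckets of size 1: nothing aggregates.
print(Counter(raw_logs))

def classify(entry: str) -> str:
    """Toy keyword mapper, standing in for a real intake/triage step
    that assigns each entry to one code from a fixed taxonomy."""
    keywords = {
        "nda": "contract_review",
        "confi": "contract_review",
        "contract": "contract_review",
    }
    for kw, code in keywords.items():
        if kw in entry.lower():
            return code
    return "advice"  # fallback bucket for unmatched entries

# With a shared taxonomy applied first, the three entries collapse into one
# comparable category, and the data can finally be rationalised.
coded = Counter(classify(e) for e in raw_logs)
print(coded)  # Counter({'contract_review': 3})
```

The point is not the keyword matching (a real assessment would use structured surveys or interviews), but that the taxonomy must be imposed before collection, not reverse-engineered from 100 divergent spreadsheets afterwards.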

Qualitative analysis via interview may well provide a better picture, at least until you can install the right tech to provide the data people actually need. (And, on that theme, this chimes with AL’s current focus on moving beyond ROI data, focusing instead on the human dimension, e.g. ‘Does it work better than before?’ ‘Do you like these changes?’ ‘Do you feel that things work more efficiently?’)

KPMG also noted that some teams will have existing operational systems with some data embedded in them, but that data is hard to retrieve. It may also not be relevant: e.g. this site can imagine that most will have data from invoices they’ve paid to law firms, but that is a very incomplete picture of what’s really going on and can only ever cover part of what is important to a legal team – and of course it won’t show the work the inhouse team itself does in any way.

They added that this lack of data also meant that when a legal ops team, or other inhouse lawyers, worked on a project to improve something, it was hard to know what had actually improved – regardless of whether tech was involved – and it was also hard to give that team any credit.

(And on that one, this site would recommend the introduction of satisfaction scores over any type of ROI-related / ‘we did more contracts in 24 hours’ survey to understand if there has been an improvement.)

This site can also attest, after talking to plenty of inhouse lawyers, that although we assume the GC knows exactly what goes on inside their team, the reality is they don’t, or rather they cannot. They see everyone is busy, they see problems come and go, they see contracts move through the system, but it’s a distant helicopter view at best, without real detail. A single human brain cannot know exactly what 100 lawyers are doing – perhaps not even 10 lawyers in a far smaller team – unless you have the systems in place to show this. So you can see the need for change here.

(Although, once again, this site would add that throughput on its own is a blunt instrument for the law. You don’t just want to do more work; you want that work to add more value, reduce risk, improve work/life balance, and see the amount of quality work increase while the process work decreases. And that is hard even for the best CLM systems to show.)

One last point: what KPMG is doing here is not totally new. Many consultancy groups try to provide some kind of structured assessment before charging into the tech part of the solution. But they do get kudos for formalising it into the LFAS approach, and for helping to raise the key issues that need to be explored.