It’s September and the lawyers are back from holiday. You can tell: the trains are busier, less friendly and humming with the tappy-tap of smartphones.
Like most, I love my holidays, but the idea of booking one always fills me with horror.
It’s not the idea of the holiday I don’t like, but the knowledge that I’ll spend more time and effort searching for it than I did when I bought my last house or car. I’ve known lawyers put more hard yards into the due diligence they undertake on a holiday destination than they would ever consider doing for a client deal. We’re all driven by a fear of finding out that someone else found a better holiday than us at a better price.
As buyers of holidays we can pretty much get hold of perfect information with a few clicks of the mouse. Not just about geographic location, but potential accommodation, nearby restaurants and random stuff to do. Last year I even pre-booked a ranger-led walk in the Highlands several weeks in advance (since you’re asking, it was rained off).
A lot of us make our buying decisions by subscribing to the wisdom-of-crowds theory, which in holiday land manifests itself through the data available on Trip Advisor. On the whole, and particularly where there is strength in depth of reviews, the Trip Advisor community knows what constitutes a ‘good’ holiday.
We choose our holidays based on readily accessible crowd-sourced management information (we used to call it ‘a recommendation from a friend’). Big Holiday Data is filtered for us to use, analyse and base our decisions on. It makes it easy for us to decide what is likely to be a ‘good holiday’ before we click-to-buy.
So we lawyers either know, or know how to find out, what constitutes a good holiday. But once we’ve exchanged our swimwear for our grey suit and are back in the office deciding who to instruct, who to hire into our team or who we might recommend, do we know what makes a good lawyer?
If you ask a lawyer what makes a good lawyer, you’ll get a range of similar answers. “Excellent technical skills”, “someone who can be commercial”, “an ability to apply the law practically”, “deep client knowledge”, “sector specialist” and so on.
But none of those descriptions remotely answer the question of what makes a good lawyer. They merely state the characteristics of what lawyers think makes a good lawyer. They don’t actually describe what, for example, constitutes “good technical skills” or how well someone must know a client before they are said to have “deep client knowledge”. The descriptions are subjective by nature.
Lawyers in traditional law firms have generally had their performance measured by metrics like billable targets, billable hours, hourly rates, recovery rates, WIP and the golden goose that is PEP. But none of these metrics actually measure lawyer performance, they merely measure lawyer activity, which is quite a different thing and all too often confused with being about quality.
Nor are in-house lawyers immune from the profession’s inability to “rate” its constituent parts. The best in-house teams will tell you that they aspire to lead by and implement best practice. And you will hear GCs say they have a “first class” legal team; I’ve done it myself. But how do they really know? Concepts like “best in class” are laudable but all too often flawed by subjectivity.
We live in a world of Big Data, and the legal sector can look embarrassingly out of date when you compare it with verticals such as marketing, where ROMI (Return on Marketing Investment) is such a recognised concept that McKinsey specialises in it, or with sectors such as education and medicine, where qualitative league tables are readily available for prospective customers to review before making a buying decision.
Change is, however, on the horizon. A few of Lawyers On Demand’s (LOD) more innovative clients are starting to use a two-letter acronym in their conversations with us: MI. Management information. These clients are asking LOD to curate MI specific to their teams and business; for example, how our LODs are spending their time and on what. And not solely to see how long a particular task took and query why, but also because they are interested in how their own “end users” of LOD’s service are using our LODs.
Clients want to know if, for example, there is a significant variance between the amount of lawyer time required by their marketing team compared to the sales team. They want to find out why a particular individual may require our LODs to spend 25 per cent more time in internal meetings than the average. They want to see which end user clients are releasing a steady pipeline of planned work to our LODs and which are creating the daily fire fight by throwing ad hoc buckets of petrol onto the fire as and when they feel like it. And also, yes, because they are interested in whether Lawyer A gets through more work than Lawyer B, and in finding out why.
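The kind of MI described above is, at heart, simple aggregation over time entries. As a rough sketch (the team names, activity labels and 25 per cent threshold are illustrative, not real LOD data or a real LOD system), it might look like this:

```python
from collections import defaultdict

# Hypothetical time entries: (end-user team, activity, hours recorded).
entries = [
    ("marketing", "drafting", 6.0),
    ("marketing", "internal_meetings", 4.0),
    ("sales", "drafting", 9.0),
    ("sales", "internal_meetings", 1.0),
]

def meeting_share_by_team(entries):
    """Fraction of each team's recorded hours spent in internal meetings."""
    totals = defaultdict(float)
    meetings = defaultdict(float)
    for team, activity, hours in entries:
        totals[team] += hours
        if activity == "internal_meetings":
            meetings[team] += hours
    return {team: meetings[team] / totals[team] for team in totals}

shares = meeting_share_by_team(entries)
average = sum(shares.values()) / len(shares)

# Flag teams demanding notably more meeting time than the average —
# here, more than 25 per cent above it, echoing the figure in the text.
outliers = {team: s for team, s in shares.items() if s > 1.25 * average}
```

With the sample entries above, marketing spends 40 per cent of its lawyer time in internal meetings against an average of 25 per cent, so it would be flagged; the point is that once time is recorded consistently, questions like these become trivial to answer.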
One reason why our clients are interested in how our LODs spend their time is because it is likely in many cases to replicate how the core in-house team is being asked to spend its time too. Inefficient instructions are likely to lead to inefficient lawyering. Or to use a data analogy, c**p in, c**p out.
The use of MI and data is a step in the right direction away from subjective judgments. But even then the focus of MI is often activity dressed up as performance. For example, whilst a Sales Director might rate Lawyer A higher than Lawyer B because Lawyer A gets more contracts concluded, a Finance Director might prefer Lawyer B’s approach to assessing contractual risk. Perhaps better to have a slower pass-through of robust contracts, rather than a fast pass-through of flaky ones.
Even better to have a fast pass-through of robust ones, but how do you measure the robust bit? What is a “good contract”?
Perhaps the legal profession can learn from academia, where no paper worth its salt is published without being put through a rigorous tyre kicking by the authors’ peers or even, in a research context, their competitors in the field. How about a similar system where the lawyers at Firm A spot check a small percentage of Firm B’s work to verify that it is indeed of Magic Circle quality? And vice versa, of course. Or perhaps more realistically, where an in-house team subjects its work to oversight by one of the firms on its panel, or maybe by another in-house legal team.
I have a view on what makes a good lawyer and I bet you do too. We may even think the same things make one. But it doesn’t mean we’re right. And even then, we may both agree that a well drafted liability clause makes a good contract, but we might not agree on what that clause looks like. But if ten lawyers reviewed a contract and gave it an average mark of 7 out of 10, I think we’d feel pretty comfortable that this peer review showed the contract was at least “good enough”, which likely reflects well on the lawyer who negotiated it too.
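The peer-review idea above reduces to averaging a panel’s marks and comparing them to a threshold. A minimal sketch, assuming a panel of ten reviewers and a hypothetical “good enough” bar of 7 out of 10 (both numbers taken from the text, not from any real scoring scheme):

```python
def peer_review_score(marks, threshold=7.0):
    """Average a panel's marks out of ten and report whether the
    contract clears the 'good enough' bar. Purely illustrative."""
    if not marks:
        raise ValueError("need at least one reviewer")
    average = sum(marks) / len(marks)
    return average, average >= threshold

# Ten hypothetical reviewers' marks for one contract.
avg, good_enough = peer_review_score([7, 8, 6, 7, 7, 8, 6, 7, 7, 7])
# avg is 7.0, so this contract clears the bar.
```

A real scheme would need to worry about reviewer calibration and small panel sizes, but the arithmetic itself is this simple.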
Of course, we live in a real world. Magic Circle firms won’t be swapping drafts to ask for a mark out of ten. Nor will in-house teams. Enlightened law firms and their clients might do occasionally.
But the answer to what constitutes a good lawyer or contract is in the data. If legal service providers can invest in curating MI that helps their clients interpret the data that is created in the process of a contract negotiation, then that can only be a good thing for providers and clients alike.
Lawyers will often tell you that what keeps them awake at night are the unknown unknowns. Well perhaps MI is one way of turning some of those unknowns into knowns. It’s the colour by numbers approach to lawyering if you like. Join the numbers and paint the picture.
Risk management without subjectivity, whatever next.