Statistics and big data play a supporting role in litigation

 
As the business world becomes increasingly complex, so does commercial litigation. Often, the sheer volume of documents, data and transactions involved makes it difficult to establish causation, quantify losses or calculate damages.
 
Analytical techniques, such as statistical and big data analysis, can make these tasks far more manageable and cost-effective.
 

Using statistics

Under the right circumstances, statistical analysis makes it possible to extrapolate results from a sample to the larger population it was drawn from, establishing a claim with reasonable certainty without examining every record. Examples of the use of statistics in litigation include:
 
  • Estimating damages in False Claims Act cases by extrapolating the results of a sample of claims to the entire population of claims rather than proving each claim individually (a simplified sketch of this extrapolation follows this list),
  • Using regression analysis or other statistical methods to estimate costs in calculating lost profits,
  • Using regression analysis to establish unlawful age discrimination by showing a positive correlation between age and termination rates, and
  • Using government or industry statistics to estimate a plaintiff’s work life expectancy, projected earnings or benefits in employment litigation.
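To make the extrapolation idea concrete, here is a minimal sketch in Python of how sample results might be projected to a full population of claims, as in the False Claims Act example above. All figures (the sampled overpayments, the sample size and the 25,000-claim population) are invented for illustration; a real engagement would use a far larger, properly randomized sample:

import math
import statistics

# Hypothetical audit: overpayment found in each sampled claim (invented figures).
sample_overpayments = [120.0, 0.0, 310.0, 45.0, 0.0, 500.0, 75.0, 0.0, 220.0, 90.0]
population_size = 25_000  # total claims submitted

n = len(sample_overpayments)
mean = statistics.mean(sample_overpayments)
se = statistics.stdev(sample_overpayments) / math.sqrt(n)  # standard error of the mean

# Rough 95% confidence interval for the mean overpayment per claim
# (uses the normal z-value 1.96; a sample this small would really call
# for a t-distribution and, in practice, many more observations).
low, high = mean - 1.96 * se, mean + 1.96 * se

# Project the per-claim estimate to the entire population of claims.
print(f"Point estimate of total damages: ${mean * population_size:,.0f}")
print(f"95% CI: ${low * population_size:,.0f} to ${high * population_size:,.0f}")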
 
It’s critical in these cases to involve experts with statistical analysis experience to ensure that the statistics being used are reliable, the population is properly defined and the sample is representative of that population.
 

Using big data

In a litigation setting, big data typically refers to the use of powerful computers to analyze extremely large data sets to reveal patterns, trends and associations.
 
In fraud cases, big data can be used to reveal patterns that would be difficult or impossible to spot using conventional methods. For example, fraud perpetrators often create phony invoices for round amounts — like $1,000, $5,000 or $10,000 — or for amounts that fall just under an approval limit. Big data analysis can sift through enormous volumes of transactions to identify vendors with an unusually high percentage of such amounts.
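As a simplified illustration, the screen described above might look something like the following Python sketch; the vendor names, invoice amounts and the $5,000 approval limit are all invented:

from collections import defaultdict

APPROVAL_LIMIT = 5_000.00  # hypothetical per-invoice approval threshold

invoices = [
    ("Acme Supply", 4_980.00), ("Acme Supply", 4_995.00), ("Acme Supply", 5_000.00),
    ("Beta Parts", 1_237.14), ("Beta Parts", 862.55), ("Acme Supply", 10_000.00),
]

def is_suspicious(amount: float) -> bool:
    """Flag round amounts and amounts just under the approval limit."""
    is_round = amount % 1_000 == 0                  # e.g., $1,000, $5,000, $10,000
    just_under = 0 < APPROVAL_LIMIT - amount <= 50  # within $50 of the limit
    return is_round or just_under

totals, flagged = defaultdict(int), defaultdict(int)
for vendor, amount in invoices:
    totals[vendor] += 1
    flagged[vendor] += is_suspicious(amount)

# Vendors with an unusually high share of flagged amounts merit scrutiny.
for vendor in totals:
    print(f"{vendor}: {flagged[vendor]} of {totals[vendor]} invoices flagged "
          f"({100 * flagged[vendor] / totals[vendor]:.0f}%)")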
 
Financial analysts also sometimes use Benford’s Law to detect fraud patterns in sets of tabulated data. (Under the law, the digit 1 is the most common leading digit in naturally occurring numbers, appearing roughly 30% of the time, while 9 is the least common at under 5%; deviations from this pattern may indicate that data has been manipulated.)
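For readers who want to see the mechanics, a bare-bones Benford screen can be written in a few lines of Python. The payment amounts below are invented; a real analysis would run over thousands of records and test the deviations for statistical significance:

import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# Hypothetical payment amounts (invented for illustration).
amounts = [1_203.45, 1_870.00, 2_450.10, 961.22, 1_055.00,
           3_120.75, 1_499.99, 7_010.00, 2_300.00, 1_642.18]

# Tally the observed leading digits.
leading = Counter(str(a).lstrip("0.")[0] for a in amounts)

n = len(amounts)
for d in range(1, 10):
    observed = leading.get(str(d), 0) / n
    print(f"digit {d}: observed {observed:5.1%}, expected {benford_expected(d):5.1%}")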
 
Recently, the Securities and Exchange Commission (SEC) brought its first fraud action based on analysis of large volumes of trading data. In the case In the Matter of Welhouse & Associates Inc., the SEC charged an investment advisor with “cherry-picking,” alleging that the advisor improperly allocated options trades that had appreciated in value during the trading day to his personal or business accounts, while allocating trades that had depreciated in value to his clients’ accounts.
 
Cherry-picking and similar frauds are difficult to spot and often go undetected until a whistleblower reports them to the SEC or some other fraud indicator reveals itself. But the SEC was able to use big data analysis to prove that the advisor didn’t, as he claimed, follow his firm’s prescribed pro rata allocation procedures. Rather, he allocated a disproportionate number of profitable options trades to favored accounts (accounts belonging to the advisor or to someone with the same last name), while allocating unprofitable options trades to client accounts.
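A greatly simplified version of that kind of allocation analysis appears below. The account labels and same-day returns are invented, and the actual SEC analysis covered thousands of trades and tested the gap for statistical significance:

from statistics import mean

# (account type, same-day return on the allocated options trade) - all invented
allocations = [
    ("favored", 0.042), ("favored", 0.031), ("favored", 0.058), ("favored", -0.004),
    ("client", -0.012), ("client", 0.003), ("client", -0.021), ("client", -0.008),
]

favored = [r for acct, r in allocations if acct == "favored"]
client = [r for acct, r in allocations if acct == "client"]

# Under a true pro rata allocation policy, both groups should earn roughly
# the same average same-day return; a persistent gap is a red flag.
print(f"Average same-day return, favored accounts: {mean(favored):+.2%}")
print(f"Average same-day return, client accounts:  {mean(client):+.2%}")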
 

Making the most of technology

For years, financial experts have used statistical analysis to help make the litigation process more cost-effective. Big data analysis takes this a step further, using modern technology to sift through enormous amounts of data to uncover fraud or other wrongdoing that, until now, often went undetected for years.