This article is Part 3 of a four-part series. Part 1 was published in August, and Part 2 in September.
CTC 2019 was held last month in New Orleans. This biennial court technology conference is the largest conference of its kind and always a great opportunity to see where technology vendors are focused with their court offerings. This year was all about artificial intelligence. Vendors of every sort were touting their latest AI-enabled applications – some of them brilliant and some of them boring. All of the digital recording vendors were demonstrating some form of speech recognition. None of them claimed to be able to produce an acceptable transcript, much less a certified transcript, but applying speech recognition to closed captioning and assisted listening looked like potentially viable applications. Full disclosure: my company, TheRecordXchange, also offers a speech recognition solution called VoiceCopy. We do not claim that the technology can produce an adequate transcript yet either.
I first began working with speech recognition technology in the late 1990s as CEO of FTR (For The Record). Even 20 years ago there were serious companies with plenty of cash trying to crack this nut. The technology has improved dramatically, and it continues to advance at a rapid pace.
There are two significant factors that have changed the landscape for speech recognition. First, as expected, the technologies related to artificial intelligence, machine learning and neural networks have matured. Equally important, big tech, most notably Google, Amazon and Apple, has created services that collect unfathomable amounts of voice data. Alexa, Google Home, Siri and other applications amass valuable data by the second. For machine learning, data is gold, and big tech has cornered the market.
Big tech is great at solving big problems. But it rarely tries to meet the needs of niche markets. Addressing the specific requirements of court reporting and transcription is exactly what some of the companies at CTC and a handful of innovative startups are trying to do. Google and Amazon rely on these ventures to serve niche markets with the technology they have developed. Smaller companies with domain expertise understand that transcripts must be punctuated accurately, identify speakers accurately and be formatted to meet the specifications of different jurisdictions.
Most companies acknowledge that an acceptable legal transcript cannot be produced from current speech technology alone. So what is their answer? Some are promoting their solutions not for transcription but for closed captioning or assisted hearing. Some have given up on the court reporting market and focus resources on markets with less stringent accuracy and formatting requirements. But some are offering a transcription solution that combines AI with human input to produce an acceptable transcript.
The AI/human strategy uses automatic speech recognition to complete the first pass of transcription. Transcription is the most labor-intensive part of the process, so if that can be automated, it’s a big win. Then, a qualified proofreader, using appropriately designed tools, reviews and corrects the transcript. The review process will take longer than if the proofreader were reviewing a transcript produced by a qualified transcriber, but any additional time and money spent on the proofing process is more than made up for by the savings achieved from the automated transcription.
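As an illustration of how a review tool might triage that automated first pass, here is a minimal sketch in Python. It assumes a hypothetical ASR engine that attaches a confidence score to each segment of the draft; the function name, data shape, scores and threshold are all illustrative, not any vendor's actual API.

```python
# Hypothetical AI/human workflow: the ASR engine produces a draft with
# per-segment confidence scores, and segments below a threshold are
# routed to a human proofreader for correction.

def route_for_review(segments, threshold=0.90):
    """Split ASR output into auto-accepted and needs-review segments."""
    accepted, review = [], []
    for seg in segments:
        (accepted if seg["confidence"] >= threshold else review).append(seg)
    return accepted, review

draft = [
    {"text": "The court is now in session.", "confidence": 0.97},
    {"text": "Counsel, you may proceed.",    "confidence": 0.95},
    {"text": "[inaudible] the exhibit",      "confidence": 0.42},
]

auto, needs_human = route_for_review(draft)
print(len(auto), len(needs_human))  # 2 segments pass, 1 goes to the proofreader
```

In practice the proofreader would still review the whole transcript; confidence routing only prioritizes where the machine is most likely to have erred.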
Today, transcription providers may be benefiting from these cost savings, but the savings may not be passed on to transcript purchasers. But if transcript users are getting an accurate transcript, they probably don’t care.
The big beneficiary of this model is the technology provider. Remember my comment above about data being gold to AI developers? This is equally true for these startups chasing opportunities in the court reporting market. These companies will never be able to collect as much data as Amazon can, but they don’t need to.
Machine learning, a subset of AI, can be divided into two types: supervised learning and unsupervised learning. When you ask Alexa a question or give it a command, if you accept the response, then Alexa “infers” that its recognition was accurate. If, however, you repeat the request after a response, then the system may infer that its recognition was incorrect. This is an example of unsupervised learning; there is no established truth to be fed back into the system, only inference. Unsupervised learning can take a long time and requires a lot of data.
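The repeat-the-request signal can be sketched as follows. This is a hypothetical illustration, not how Alexa actually works: it simply treats a near-identical follow-up request as evidence that the previous recognition was wrong, which is the kind of weak, inferred label unsupervised approaches must rely on.

```python
# Illustrative unsupervised signal: if the next request closely repeats
# the current one, infer that the current recognition was likely wrong.
from difflib import SequenceMatcher

def infer_labels(requests, similarity=0.8):
    """Label each recognition by whether the following request repeats it."""
    labels = []
    for cur, nxt in zip(requests, requests[1:] + [None]):
        repeated = (nxt is not None
                    and SequenceMatcher(None, cur, nxt).ratio() >= similarity)
        labels.append("likely wrong" if repeated else "assumed right")
    return labels

session = ["play jazz", "play jazz", "next song"]
print(infer_labels(session))
```

Note how noisy this signal is: a user might repeat a request for reasons that have nothing to do with recognition accuracy, which is one reason unsupervised learning needs so much data.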
Supervised learning is based on the idea that there is a known truth. With a transcript, there is something close to a known truth. Accurate final transcripts can be fed back into the system for learning purposes. The system can compare the automated results with the “truth” of the final transcript and make adjustments for future processing. Supervised learning can achieve results much faster and requires far less data to get meaningful improvement. So an AI/human process that results in the technology provider having access to final transcripts can also result in a significant competitive advantage. Eventually, improvements will certainly benefit transcript users, but in the meantime . . .
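To make that feedback loop concrete, here is a minimal sketch that measures the gap between an ASR draft and the proofread final transcript using word error rate (WER), computed with a standard edit distance over words. In a real system the aligned corrections, not just the score, would be fed back for retraining; the example sentences are illustrative.

```python
# Supervised feedback sketch: the corrected final transcript serves as
# ground truth, and WER quantifies how far the ASR draft fell short.

def word_error_rate(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = edits to turn the first i hyp words into the first j ref words
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(hyp)][len(ref)] / len(ref)

asr_draft = "the witness state that he was their"
final     = "the witness stated that he was there"
print(round(word_error_rate(asr_draft, final), 2))  # 0.29 (2 errors in 7 words)
```

Every certified transcript a provider handles becomes another labeled training pair, which is why access to final transcripts is such a competitive advantage.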
Probably not. And here’s why.
When you receive an accurate, certified transcript today, that transcript was likely produced by a qualified transcriber and reviewed by a qualified proofreader. Think of the proofreader as the quality assurance step in the process. Good transcription firms have well-developed processes using qualified and efficient teams of transcribers and proofreaders producing quality results. Quality does not happen just because the individuals are good; it happens when qualified individuals follow a good process.
Harold F. Dodge, one of the original architects of the science of statistical quality control, stated that “You cannot inspect quality into a product.” And, to paraphrase W. Edwards Deming, the father of modern quality control science, proofreading does not improve the quality of the transcript. The quality, good or bad, is already in the transcript.
As a practical matter, what this means is that a qualified proofreader can consistently review and complete accurate transcripts when receiving quality work from transcribers. The lower the quality of the original content is, the lower the quality of the finished product will be. Automated transcripts are of far lower quality than those produced by qualified transcribers. Proofreaders cannot consistently turn them into high-quality transcripts. As of today, you will be disappointed in the results.
To quote W. Edwards Deming, this AI/human combo is a “system of make-and-inspect, which if applied to making toast would be expressed as: ‘You burn, I’ll scrape.’”
Predicting that something is going to happen is easy. Predicting when is not easy – timing is everything. It is safe to say that automated speech recognition systems will become the standard method for transcript production in many industries, including court reporting. Is it ready today? No.
Will it be ready in a year?
Will it be ready in five years?
If you are a classic early adopter and want to live on the bleeding edge, go for it. If you want to go into court with an accurate transcript from a witness deposition, hire a qualified court reporting firm and make sure your transcript is produced by a qualified transcriber and proofreader.
About the Author:
Steve Townsend is CEO of TheRecordXchange, a web-based platform for court reporting professionals. He has extensive experience in courtroom and hearing room reporting and transcription. He was CEO of FTR from 1997 to 2007 and CEO of AVTranz from 2008 to 2015. Townsend is a co-founder of the American Association of Electronic Reporters and Transcribers.