Jason Tashea. Photo by Saverio Truglia.
In the winter of 2017, Mike Lissner knew something was wrong with PACER, the federal courts’ document and e-filing portal.
Without hacking the website or using sophisticated cybersecurity testing software, he was able to see that PACER had a major and commonly known vulnerability that could let a bad actor charge documents to a victim’s account.
“As a programmer, code should look a certain way,” says Lissner, executive director of Free Law Project, a nonprofit that builds software to help with legal research. When code is wrong, “it’s like looking at someone’s face and it doesn’t have a nose.”
He wanted to disclose the vulnerability responsibly so that the Administrative Office of the U.S. Courts could fix it, but that was easier said than done.
First, federal anti-hacking laws are written so broadly that even stumbling onto an error in plain view can expose the finder to civil or even criminal action.
Second, the U.S. Courts had not posted information on their website about a bug bounty program or a vulnerability disclosure policy, both of which limit liability for good faith hackers by telling them how to disclose a vulnerability responsibly. (The U.S. Courts claim to have a bug bounty program but would not provide evidence of it, nor could the ABA Journal independently confirm that one exists before publication.)
Confronted with this set of circumstances, Lissner lawyered up and cautiously presented the vulnerability to the courts. He describes their reaction as guarded, but after some cajoling, the Administrative Office patched the entire PACER system six months later. Luckily, Lissner was never charged with a crime.
With the state of anti-hacking law in the U.S. and the insecure nature of online software, bug bounties and vulnerability disclosure policies are front-line, standard practices that should be part of almost any organization’s cybersecurity posture. Yet a recent spot check by the ABA Journal of 22 legal technology companies, nonprofits and a federal agency found scant public evidence of bug bounties or vulnerability disclosure policies. Like a police department without an emergency line, this troubling gap indicates a lack of maturity in legal tech’s cybersecurity practices.
Bug bounties go back to at least 1995, when browser company Netscape set up its “bugs bounty” program. The idea was simple: If you were the first to find a flaw, you got paid for it, says David Baker, chief security officer of Bugcrowd, a bug bounty platform.
Bounty programs are a way to offer cash incentives—as little as $50—to independent researchers who find a new vulnerability. This provides a “good guy” or white hat counterweight to the robust marketplaces where nefarious actors buy and sell software exploits.
To work, these programs must be part of a larger security effort that includes audits and continual testing of software. They are also usually coupled with a vulnerability disclosure policy that, in the words of Lissner, says, “Hey, if you find a vulnerability on our system, we’re not going to sue the pants off of you.” While a bug bounty needs something like a disclosure policy to clarify its terms, a company can have a disclosure policy without offering a financial reward through a bounty program.
Today, bug bounty programs are standard in the software and hardware industries: AT&T, the U.S. Department of Defense, Google and ride-hailing company Lyft all have one. Apple, just this month, launched a bug bounty program offering up to $1 million for vulnerabilities found in iPhones, the largest sum ever offered for such a program.
Even online mattress company Casper has a bug bounty program offering up to $3,000 for finding a critical vulnerability.
These programs get real results, often before the vulnerability is exploited by a bad actor. Publicly traded video conference company Zoom learned of its recent camera vulnerability through a third-party researcher. Carmaker Tesla found out that an attacker could pull and change information about a car. And in 2017, the U.S. Air Force discovered that an exploit in its website let attackers into its internal network.
Legal tech lags
In legal tech, however, these programs are conspicuously missing, even though these platforms handle sensitive queries and documents for lawyers who are obligated to keep client matters confidential and secure.
“Although rare in legal tech, this holistic audit approach is widely used by leading tech companies, and we consider the engagement of independent researchers an important part of our due diligence process,” says Jonathan Watson, vice president of engineering at Clio, which started a bounty program in 2014 to complement a broader security regimen.
By contrast, publicly traded digital signature company Docusign expressly prohibits customers from conducting “vulnerability scans or penetration testing” of its product. There was no public evidence of a bounty program or vulnerability disclosure policy on the company’s website. The company did not respond to a request for comment.
Clio was one of only two legal technology organizations of the 22 that had a public bug bounty program as confirmed by the Journal. The other was data visualization company Socrata, which has both a bug bounty and a vulnerability disclosure program. Its parent company, Tyler Technologies, has no such public program and declined to comment. The U.S. Courts and e-discovery firm Logikcull claim to have a program, but neither was independently confirmed.
Six of the 22 organizations affirmed they don’t have such a program, including popular legal aid software platform A2J Author, which, through a representative, said it encourages its community to report bugs, but does not have a bug bounty or disclosure policy.
The 22 organizations were picked as a cross section of the industry: companies big and small, new and old, building different types of technology, as well as noncorporate actors developing software.
Those companies were Atrium, Bloomberg BNA, Casetext, Clio, Docusign, DoNotPay, Everlaw, Fastcase, LegalZoom, Legal.io, LexisNexis, Logikcull, Neota Logic, Rocket Lawyer, Ross Intelligence, Socrata, Tyler Technologies and Westlaw. The nonprofits were A2J Author, Code for America and Free Law Project. The U.S. Courts were the only government agency. Each was given the opportunity to comment for this article. A2J Author, Bloomberg, Casetext, Code for America, Fastcase and Legal.io told the Journal that they do not have a program in place. Everlaw, LegalZoom, Tyler Technologies and Westlaw declined to comment. The rest did not respond.
To complement its outreach and research, the Journal also searched each organization’s website for “bug bounty” and “vulnerability disclosure,” confirming the results above.
If you’re thinking 22 is too small a sample size to draw conclusions from, consider that legal tech heavyweights are also notably absent from the almost 900 bug bounty and disclosure programs compiled by Bugcrowd.
‘Stuck in 1994’
It’s worth noting that 80 percent of bug bounty programs are private or invite-only, Baker at Bugcrowd estimates, which he attributes to outdated thinking.
“The people that don’t have these programs are stuck in 1994,” he says, referring to the nascent, more innocent days of Web 1.0. A part of his company’s mission is to provide that bridge from the early days of the internet to today, which requires more eyes on software to keep it secure.
To stave off some angry emails: Not having a public bug bounty program or vulnerability disclosure policy does not mean that a company lacks all security measures, like internal audits or outside testing. However, it does mean that there is room for improvement.
Nuance is required here: a bug bounty program, or services like those Bugcrowd provides, may not be for everyone. If a product is too new and riddled with bugs, such a program could be bankrupting. It would also be hard to manage a bounty if the company doesn’t have a security team to oversee it.
However, a vulnerability disclosure policy is table stakes for any organization developing software.
These policies are necessary because the 35-year-old Computer Fraud and Abuse Act, the federal anti-hacking law, is so broad and vague. Through eight iterations expanding its scope, the law has come to cover almost every digital device in the U.S.—except for calculators—and millions more abroad, exposing almost any online activity, including finding and reporting a bug, to civil or even criminal penalties.
Matthew Stubenberg, associate director of legal technology at Harvard Law’s A2J Lab and a 2017 ABA Journal Legal Rebel, says that until federal anti-hacking law carves out space for white hat hackers, disclosure policies are what make him feel confident when bringing a vulnerability to a company’s attention.
For those that just realized their company or a vendor they use doesn’t have a disclosure policy, there are templates. HackerOne, the Department of Defense and 18F, a software developer within the U.S. General Services Administration, all have versions to iterate on.
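Beyond full policy templates, there is also a lightweight convention for pointing researchers at a policy: a security.txt file (since standardized as RFC 9116) served from a well-known URL on the organization’s site. A minimal sketch, where every address and URL is a placeholder, not a real organization’s details:

```text
# Served at https://example.com/.well-known/security.txt
# Contact and Expires are the two required fields under RFC 9116;
# the domain and addresses here are illustrative placeholders.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/vulnerability-disclosure-policy
Acknowledgments: https://example.com/security-thanks
Preferred-Languages: en
```

The Policy field can point at whichever template the organization adapts; the file itself just gives a researcher a predictable place to look before reporting.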
The 18F version is what Lissner at Free Law Project used. After he lamented the absence of a public bug bounty program at the U.S. Courts, I asked him what his own program looked like. Running a scrappy nonprofit with limited funds, he said he couldn’t afford a traditional bounty program. He built one anyway.
Today, Free Law Project announced its first-ever bug bounty program and vulnerability disclosure policy. While the nonprofit still isn’t flush with cash, ethical hackers who find a new vulnerability can collect Free Law swag, complimentary services from the organization and tickets to its events. It only took a couple of weeks—and a nudge from a pesky journalist—to make it happen.
“At its lowest cost, you don’t have any vulnerabilities,” he says. “It costs you nothing except the effort to post it on your website.”
Jason Tashea is the author of the Law Scribbler column and a legal affairs writer for the ABA Journal. Follow him on Twitter @LawScribbler.