Surprise! Voting app maker roasted by computer boffins for poor security now begs US courts to limit flaw finding

Voatz, the maker of a blockchain-based mobile election voting app pilloried for poor security earlier this year, has urged the US Supreme Court not to narrow the scope of the 1986 Computer Fraud and Abuse Act (CFAA), a law that critics say inhibits security research because it’s overly broad.

The app maker filed an amicus brief [PDF] on Thursday in Van Buren v. United States in support of the US government, which seeks to uphold the 2017 conviction of former Georgia police officer Nathan Van Buren under the CFAA.

Van Buren was convicted of violating the CFAA for running a computer search on a license plate number. Although he was authorized to access the police database as part of his job, he offered to look up plates for a stripper in exchange for cash. The exotic dancer went to the Feds, who busted him in a sting operation: for a fee, he ran a plate on someone the stripper described as an undercover cop investigating her for prostitution. The plate was a fake, and Van Buren was collared.

While his actions were alleged to have violated other laws related to wire fraud, to say nothing of workplace ethics, his conviction under the CFAA is what has alarmed computer security pros and cyber liberties advocates.

“Under this expansive interpretation of the CFAA, it would be a federal crime any time a person violates a website’s terms of service,” the EFF said in its summary of the case. “If violating terms of service is a crime, private companies get to decide who goes to prison and for what, putting us all at risk for everyday online behavior.”

And it’s easy to see how problems might arise from the vagueness of the law’s language. The US Department of Justice’s own guidelines on prosecuting computer crimes [PDF] acknowledge that “The term ‘without authorization’ is not defined by the CFAA.”

And their interest is…

Voatz, as a private company, wants to be able to fill in the blanks and decide who can interact with its systems and in what capacity.

Coincidentally, its app was slammed in February by MIT computer scientists for a variety of security flaws, and it cites that uninvited scrutiny in its filing as an example of the problematic nature of unauthorized security research.

“Voatz’s own security experience provides a helpful illustration of the benefits of authorized security research, and also shows how unauthorized research and public dissemination of unvalidated or theoretical security vulnerabilities can actually cause harmful effects,” the company’s filing says, even as it insists the MIT researchers found no meaningful flaws.

Opposing the arguments advanced by the Electronic Frontier Foundation and other organizations, including security firms, that support narrowing the CFAA, Voatz contends unauthorized, independent research should not be exempted from the law.

“Rather, the necessary research and testing can be performed by authorized parties,” the firm’s brief says.

Voatz goes on to argue that allowing security researchers to violate rules and policies upends the expectations of companies setting those policies, as if their words should be law.

The company says that just as people can be prosecuted for trespassing on physical property, they should be subject to punishment for breaking terms of service rules under the CFAA, an analogy that fails to appreciate that trespassing isn’t likely to result in a sentence of several years in prison.

In an email to The Register, Daniel Weitzner, Founding Director of the MIT Internet Policy Research Initiative, and one of the three authors of the Voatz app analysis [PDF], opposed the idea of letting companies criminalize security testing in their terms of service.

“The vagueness and potential breadth of the Computer Fraud and Abuse Act made it considerably more difficult for us to conduct our security analysis,” said Weitzner.

“Allowing tech companies to threaten criminal action for violations of policies that the companies write themselves places independent research in constant jeopardy. And without independent research, there is no basis for the public to trust the safety or security of these systems.” ®
