After years of uncertainty, the Supreme Court finally shed some light in June on the meaning of a notoriously vague law, the Computer Fraud and Abuse Act (CFAA). The CFAA is an important tool for deterring and punishing cybercrime. Unfortunately, some courts had interpreted the CFAA’s language so broadly that mere violations of a computer use policy—like using a work computer for personal messages—might have landed you in jail.
In Van Buren v. United States, the Supreme Court adopted a narrower view of the CFAA that is more closely related to the law’s basic purpose: criminalizing malicious hacking into computer systems. Van Buren was a win for civil liberties organizations and some legal scholars in a long-standing debate over the sweep of federal criminal law in cyberspace.
Van Buren was also a victory—not a loss—for cybersecurity. One reason is that an overbroad interpretation of the CFAA inhibits security research; any narrowing of the CFAA encourages “white hat” hackers to find flaws they might be reluctant to tackle if they fear a lawsuit or prosecution for their efforts. Another reason is that Justice Amy Coney Barrett’s technically informed opinion offers a model for how to interpret computer crime laws. Her “gates-up-or-down” approach will prod cybersecurity professionals to step up their game when it comes to safeguarding sensitive data.
“White Hat Hackers” and Their (Reasonable) Fears of Prosecution
For decades, lawyers and judges have puzzled over what the CFAA means when it criminalizes obtaining information “without authorization” or in a manner that “exceeds authorized access.” The broad view is that these terms incorporate whatever rules data owners specify through mechanisms like click-through agreements, terms and conditions, and employer policies. The narrow view is that obtaining information is a crime under the CFAA only if it involves circumventing some barrier imposed by the computer itself.
In Van Buren, the issue was what counted as “authorized access” to a computer system. A police officer agreed to run a search of a state computer database that contained identities of undercover informants in exchange for $5,000. It was a setup—an FBI sting operation. Officer Van Buren was, of course, arrested.
The case offers a textbook example of how the CFAA has sometimes been used by prosecutors: as an add-on charge in criminal cases involving computers. Van Buren was a crooked cop, but he was certainly no hacker. His computer credentials were perfectly valid, so the computer system gave him access. Nevertheless, the government charged Van Buren not only with honest-services wire fraud, the main federal bribery theory, but also with a CFAA violation. (Van Buren was convicted on both counts, but the wire fraud conviction was overturned on appeal.)
The government’s CFAA theory was that Van Buren’s actions violated the policy imposed by his employer, so his access exceeded his authorization. Broad theories like these explain why the CFAA has long provoked justified fear in the security research community.
Many companies do not respond well to news of gaping holes in the security of their digital services or computer systems. Too often, the CFAA has been abused as a powerful weapon to muzzle the messenger. Companies have threatened lawsuits and even referred security researchers for criminal prosecution, arguing that unwelcome demonstrations of their security weaknesses violate the CFAA.
One notorious example is a CFAA lawsuit filed in 2008 by the Massachusetts Bay Transportation Authority (MBTA) against the Massachusetts Institute of Technology (MIT) and three MIT students who found vulnerabilities in Boston’s transit fare system. The MBTA got a federal district judge to order the students to cancel their presentation at DEFCON, the flagship white hat hacker conference. With the assistance of the Electronic Frontier Foundation (EFF), the students got the gag order lifted, but only after DEFCON was over. Eventually, the MBTA dropped the case and worked with the students to improve fare security—which is, of course, what should have happened in the first place.
In recent years, responsible companies and the security research community have worked hard to avoid such conflict. Companies have adopted “bug bounty” programs to encourage researchers to find vulnerabilities before the criminals do, so the bugs can be fixed. Industry and researchers have worked together to create coordinated vulnerability disclosure (CVD) practices that give companies a head start on fixing the flaws they find. Companies typically condition their bug bounty programs on following CVD practices.
Yet an overbroad understanding of the CFAA still casts a shadow: companies that choose to downplay or ignore security for commercial reasons can hide behind the statute. In 2019, a company offering a mobile voting app in West Virginia referred a student security researcher for criminal investigation by the FBI, even though the researcher followed the company’s bug bounty program. The company, Voatz, had retroactively updated the bug bounty program’s stated policies in an attempt to disallow the research it had previously welcomed.
There is a strong consensus among election security researchers that voting over the internet—and especially by mobile app—poses unacceptable risks. Companies and governments that ignore this consensus have a strong incentive to go after security researchers who may (rightly) embarrass them.
In Van Buren, the Supreme Court rejected an approach to the CFAA that gives companies the power to use broad policies to call embarrassing research activity “unauthorized.” Researchers must still be careful, of course, to avoid any actual trespass on a company system without authorization, but Van Buren makes it at least a little bit harder for companies that act in bad faith to play games with the CFAA.
Barrett’s “Gates-Up-or-Down” Approach
The second reason that Van Buren is good news for cybersecurity is that companies will actually need to improve the security of their systems, instead of hoping the threat of CFAA lawsuits or prosecutions will rescue them from their mistakes.
As explained by the Supreme Court, when Congress enacted the CFAA, it criminalized two kinds of intrusions into computers. Accessing a computer to obtain information is a crime if the activity is either “without authorization” or “exceeds authorized access.” These separate crimes are designed to cover distinct types of malicious cyber activity: outside intrusions into a computer or computer network (accessing without authorization), and insider threats (exceeding authorized access).
“Authorization” and “access” are both legal and technical terms. The CFAA defines the term “exceeds authorized access” to cover insider situations, where an authorized user accesses a computer and obtains “information in the computer that the accesser is not entitled so to obtain” (emphasis added). As Barrett explains in her majority opinion, the word “so” in this phrase refers back to the way the user has obtained the information: “by using a computer.”
The result is that the question of exceeding access becomes a “gates-up-or-down” inquiry: Do a user’s privileges give access to the information? If so, the user has not exceeded authorized access, even if the user is using those privileges improperly. As Barrett explains, in the field of computing, the term “access” means “the act of entering a computer system itself or a particular part of a computer system, such as files, folders, or databases.” Exceeding authorized access means “entering a part of the system to which a computer user lacks access privileges.”
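To make the distinction concrete, a gates-up-or-down access check can be sketched as a simple privilege lookup: either a user’s privileges open the gate to a resource or they do not, and the user’s purpose plays no role in the answer. This is only an illustrative sketch; the user names, resource names, and access-control list below are hypothetical.

```python
# Hypothetical sketch of a "gates-up-or-down" access check. The inquiry turns
# only on whether the user's privileges open the gate to a resource -- not on
# why the user wants the data.

# Hypothetical access-control list: user -> set of resources they may enter.
ACL = {
    "officer_vb": {"license_plate_db"},                 # valid credentials
    "analyst_a": {"license_plate_db", "case_files"},
}

def gate_is_up(user: str, resource: str) -> bool:
    """Return True if the user's privileges include this resource."""
    return resource in ACL.get(user, set())

# Under Van Buren, running a search with valid privileges is not "exceeding
# authorized access," even if the purpose is improper -- the gate is up:
print(gate_is_up("officer_vb", "license_plate_db"))  # True: gate up
# Entering a part of the system without privileges does exceed access:
print(gate_is_up("officer_vb", "case_files"))        # False: gate down
```

The check yields a clear yes-or-no answer, which is exactly the kind of bright line the Court’s reading of the word “so” supports.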
Prosecutors will grumble that the Supreme Court, by adopting a “gates-up-or-down” view of the CFAA, has made it more difficult for them to make cases against insiders who may misuse their access without necessarily circumventing a clear “gate.” They argue that a more flexible CFAA allows them to bring a greater range of insider cases, and that they can be trusted to bring charges only in serious cases.
This view not only downplays the serious risks to civil liberties that are inherent in relying on prosecutorial discretion as a cure for an overbroad law but also fails to recognize the responsibility that owners of sensitive computer systems have to implement proper cybersecurity controls. At a minimum, this means considering carefully which users should have access to sensitive information and how that access should be controlled.
Barrett’s approach harmonizes the CFAA with basic cybersecurity principles. “Gates-up-or-down” can be seen as shorthand for describing an entire discipline in computer security—the three A’s—authentication, authorization and access control. That is good for cybersecurity because it creates the right incentives.
The three A’s are fundamental to cybersecurity. The CFAA does not require that chief information security officers (CISOs) implement gates that represent the state of the art of the three A’s, but they do need to think about them and at least attempt to put up something that resembles a gate. (Even smaller, less-sophisticated organizations should have a basic grasp of these concepts; if not, they should not be handling their own cybersecurity.) It’s impossible to secure a system without asking what data you are trying to protect from disclosure, who should have access to that data, whether and why they should be trusted, and how you are going to ensure that only users with those access privileges are actually granted access.
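The three A’s can be illustrated with a minimal sketch, assuming a toy system in which all user names, passwords, and resources are hypothetical: authentication establishes who the user is, authorization records in advance which data that identity may obtain, and access control enforces both decisions at the gate.

```python
import hashlib

# Minimal sketch of the three A's; every name and credential is hypothetical.

# Authentication: verify the user is who they claim to be.
PASSWORD_HASHES = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def authenticate(user: str, password: str) -> bool:
    """Check the presented credential against the stored hash."""
    stored = PASSWORD_HASHES.get(user)
    return stored == hashlib.sha256(password.encode()).hexdigest()

# Authorization: decide, ahead of time, which data each identity may obtain.
PRIVILEGES = {"alice": {"hr_records"}}

def authorized(user: str, resource: str) -> bool:
    return resource in PRIVILEGES.get(user, set())

# Access control: enforce both decisions at the gate, on every request.
def access(user: str, password: str, resource: str) -> str:
    if not authenticate(user, password):
        return "denied: authentication failed"
    if not authorized(user, resource):
        return "denied: gate down for this resource"
    return f"granted: {resource}"

print(access("alice", "correct horse", "hr_records"))  # granted
print(access("alice", "correct horse", "payroll_db"))  # gate down
print(access("alice", "wrong pass", "hr_records"))     # not authenticated
```

Even a sketch this simple forces the questions the paragraph poses: which data matters, who may see it, and how that boundary is enforced.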
There remains some uncertainty about what will count as a gate in the “gates-up-or-down” inquiry that the Supreme Court has now established. Barrett confused the issue in a footnote: “For present purposes, we need not address whether this inquiry turns only on technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies.” The better view is that the gate should be a technical one, so Barrett’s hedging is regrettable. The broader point is that the limit, technical or otherwise, must provide a clear answer—“up or down”—to the question of whether the user’s activity is allowed. This is a technical definition of authorized access, grounded in the three A’s, not a broad, circumstance-dependent inquiry of the kind the government and companies have pushed for years.
Of course, a determined adversary inside a network is at an advantage in trying to circumvent technological barriers of authentication, authorization and access control. The CFAA remains a tool to prosecute them. CISOs do not need to avoid all mistakes, but if they want the criminal law on their side, they should at least configure their systems to provide clear answers to whether a user is authorized to obtain data or use computer resources. If they haven’t, they haven’t done their jobs. CISOs have many tools available to fight insider abuse. Terms of service, click-through agreements and banners just aren’t good enough.
Attacks by foreign hackers often lead the news, but insider threats cause serious harm. According to Forrester Research, insider incidents were responsible for a quarter of all data breaches even before the coronavirus pandemic, and that figure was expected to grow with the rise of remote work. A 2020 IBM study further revealed that organizations spent an annual average of $11.45 million to resolve insider incidents. Vague threats of criminal punishment for insiders who misuse data access have done little to make a dent in the problem.
Even after Van Buren, the CFAA has an important role to play in deterring malicious intrusions into computer systems, by both outsiders and insiders. When the CISO’s toolkit fails, the police and the FBI must step in. It makes sense to focus their limited resources where they belong: on criminal hackers.
In the wake of Van Buren, Congress may face calls to expand the CFAA to make it easier to prosecute insiders for misusing their data privileges, even if they do not exceed their access privileges. This would be a mistake.
Barrett’s opinion in Van Buren offers an approach to the CFAA that is grounded in sound cybersecurity principles. While the CFAA is far from perfect, Barrett’s approach will give some comfort to security researchers, while encouraging organizations to set clear boundaries for who has access to sensitive data in their computers. Van Buren is a step forward for cybersecurity.