Microsoft Security Tools Questioned For Treating Employees As Threats

Software designed to address legitimate business concerns about cyber security and compliance treats employees as threats, normalizing intrusive surveillance in the workplace, according to a report by Cracked Labs.

The report, titled “Employees as Risks” – released today by the Vienna-based non-profit – explores software from Microsoft and formerly from Forcepoint – specifically SIEM (security information and event management) and UEBA (user and entity behavior analytics) applications.

Part of an ongoing series of reports titled “Surveillance and Digital Control at Work,” the paper examines the way in which expansive information gathering in the workplace turns employees into suspects without cause.

“The boundaries between information security, the protection of corporate information, fraud and theft prevention and the enforcement of compliance with regulatory requirements and organizational policies are becoming blurred,” the report observes.

The research was conducted from 2021 through early 2024 by Cracked Labs researcher Wolfie Christl. In late 2023, Forcepoint’s public sector business, Forcepoint Federal (which offered the Behavioral Analytics and Insider Risk software at issue), was sold to private equity firm TPG and took on the new name Everfox.

Everfox declined to comment, and did not respond to a request to confirm that its current insider risk software [PDF] is based upon Forcepoint’s Insider Risk Solutions and Behavioral Analytics. However, Christl explained that the Everfox and Forcepoint documentation is essentially the same.

The purpose of the report, Christl told The Register, is to raise questions about the appropriate extent of workplace surveillance in light of the increasing amount of data collected through online activity logs and the communications data available to organizations.

“What kinds of data sources and behavioral profiling are really necessary and proportionate for which purposes?” Christl asked. “What are the risks and which safeguards do vendors and employers implement to prevent misuse?”

Software like Microsoft’s Sentinel and Purview, and Forcepoint Behavioral Analytics (now Everfox), is capable of monitoring everything that employees do or say, the report notes. It can assess their behavior and flag them for further scrutiny by their bosses.

“Similar to predictive policing technologies, [these applications] promise not only to detect incidents but to prevent them before they occur,” according to the report. “While organizations can use these software systems for legitimate purposes, this study focuses on their potential implications for employees.”

Christl outlined some of the highlights of the report.

  • Both Forcepoint and Microsoft offer to monitor and analyze file activity, clipboard activity, application activity, browser activity, email/chat/message/voice communication, badging activity, performance review data and even screen activity.
  • Both promise to identify “anomalous” behavior with AI-based profiling that “learns” over time how employees usually behave. They offer to continuously calculate risk scores for employees, assess their behavior, rank them by risk and single out those who are considered potential “insider threats” or otherwise suspicious. Similar to predictive policing technologies, they promise not only to detect incidents but to prevent them before they occur (a toy sketch of this kind of risk scoring follows the list).
  • Both suggest targeting “disgruntled employees” and those with bad performance reviews as potential insider threats – Forcepoint even mentions “internal activists” and those who had a “huge fight with the boss” as risks.
  • Forcepoint offers to assess whether employees are in financial distress, show “decreased productivity” or plan to leave the job, how they communicate with colleagues and whether they access “obscene” content or exhibit “negative sentiment” in their conversations.
  • Microsoft’s “communication compliance” system (part of Purview) offers to scan communication content for many different purposes – from detecting “profanity,” “offensive language,” and “inappropriate text” to corporate sabotage, data leaks, bribery, money laundering, insider trading, conflicts of interest and “workplace collusion.”
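Neither the report excerpts nor the vendors’ marketing spell out the scoring math, but the general UEBA pattern described above – learn a per-user baseline from activity logs, then score and rank deviations from it – can be illustrated with a deliberately simple sketch. Everything in it (the activity counts, the z-score scaling, the flagging threshold) is invented for illustration; it is not Microsoft’s, Forcepoint’s, or Everfox’s actual method.

```python
# Toy illustration of UEBA-style "anomaly" risk scoring: learn a per-user
# baseline from historical activity counts, score today's activity by how far
# it deviates from that baseline, and rank users by the result.
# All numbers, thresholds, and field names are invented for illustration;
# this is NOT how Sentinel, Purview, or Everfox actually compute risk.
from statistics import mean, stdev

# Hypothetical history: files downloaded per day over the past two weeks.
history = {
    "alice": [12, 9, 14, 11, 10, 13, 12, 8, 11, 12, 10, 9, 13, 11],
    "bob":   [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 2],
}

# Hypothetical activity observed today.
today = {"alice": 15, "bob": 240}

def risk_score(baseline: list[int], observed: int) -> float:
    """Map deviation from the user's own baseline to a 0-100 'risk' value."""
    mu, sigma = mean(baseline), stdev(baseline) or 1.0
    z = (observed - mu) / sigma                  # std deviations from "normal"
    return round(min(max(z, 0.0) * 10, 100), 1)  # arbitrary scaling and cap

scores = {user: risk_score(history[user], today[user]) for user in today}
for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    flag = "ANOMALOUS" if score >= 50 else "normal"
    print(f"{user}: risk={score} ({flag})")
# bob's one-off bulk download dominates the ranking even if it was a perfectly
# legitimate task - the kind of false positive the report warns about.
```

Even this toy version shows the dynamic the report worries about: a single unusual but legitimate spike in activity is enough to push someone to the top of the ranking.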

None of this is to suggest that employers don’t have the right to manage employees, or indeed the obligation to ensure the security of workplace systems. Yet as the report points out, employee surveillance fosters mistrust, may be disproportionate, and comes with potential problems like false positives and inaccuracies.

“Microsoft acknowledges that its cyber security and risk profiling systems may create ‘false positives,’ i.e., inaccurate alerts about employees and their behavior, which is why it provides a wide range of functionality to prioritize, review and investigate alerts,” the report notes, adding that the IT titan’s “Sentinel system may create large amounts of records on behavioral anomalies, which makes the data ‘notoriously very noisy.’”
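The report doesn’t put numbers on that noise, but the base-rate arithmetic behind it is easy to sketch with made-up figures: when genuine insider incidents are rare, even a detector with a modest false-positive rate produces alerts that are overwhelmingly about innocent behavior.

```python
# Illustrative base-rate arithmetic with invented numbers - not figures from
# the report or from Microsoft. Even a fairly accurate anomaly detector
# produces mostly false alarms when real incidents are rare.
events_per_day = 100_000      # logged user actions (assumed)
incident_rate  = 1 / 100_000  # genuine insider incidents per action (assumed)
true_positive_rate  = 0.90    # detector catches 90% of real incidents (assumed)
false_positive_rate = 0.01    # flags 1% of benign actions (assumed)

real = events_per_day * incident_rate
benign = events_per_day - real
true_alerts  = real * true_positive_rate         # about 0.9 per day
false_alerts = benign * false_positive_rate      # about 1,000 per day
precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} false alerts vs {true_alerts:.1f} real ones; "
      f"precision of roughly {precision:.1%}")   # roughly 0.1%
```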

The Register asked Microsoft to comment on the Cracked Labs report and was told that the Windows giant does not comment on third-party reports.

That wasn’t true in July, when we asked about the previous installment in the series “Surveillance and Digital Control at Work.” Microsoft took the unusual step of making a corporate VP available for an interview to allay concerns about Dynamics 365. Evidently, that policy has changed.

Microsoft, which sells worker tracking software, did offer a general comment about surveillance. “At Microsoft, we think using technology to track employees is both counterproductive and wrong,” a spokesperson claimed.

The “Employees as Risks” report suggests Microsoft is a bit more open to workplace surveillance than Redmond admits. It asserts that “Microsoft systematically incentivizes organizations to implement far-reaching employee surveillance by offering them the ability to quickly analyze massive amounts of personal data on employee behavior and communication and awarding them ‘points’ which promise to measure their ‘progress towards completing recommended actions.'”

Wilneida Negrón, director of policy and research at Coworker.org, a worker advocacy non-profit, told The Register, “[Employee surveillance technology] has been around for some time but has grown in sophistication [over] the last five years, especially as it has been able to collect and analyze large amounts of worker data to make predictions or assessments.

“Several labor, social, and technological trends have combined to make this type of monitoring more common – including advances in behavioral and predictive analytics, the proliferation of IoT devices and cloud computing, and the growth of remote and hybrid work, all of which have made it easier to gather and analyze data from multiple sources, as well as the increasing threat posed by insiders and the requirement to comply with stricter data protection and cybersecurity regulations.”

The tendency to collect as much worker data as possible has become a broad area of concern – not only for employees and advocacy groups, but for regulators, legal scholars, and privacy professionals.

For example, in a forthcoming Modern Law Review article, Jeevan Hariharan, assistant professor of law at Queen Mary University of London, and Hadassa Noorda, assistant professor of criminal law at the University of Amsterdam, argue that employee monitoring is concerning because it intrudes on physical (as opposed to informational) privacy and can in severe cases amount to a form of imprisonment.

The article argues that different areas of the law need to develop in order to regulate workplace surveillance technology.

In an email, Hariharan and Noorda wrote, “In our view, [the report] highlights some very serious concerns about how cyber security and risk profiling technologies can be deployed by organizations to monitor their employees in highly intrusive ways. The case studies in the report illustrate the far-reaching surveillance capabilities of tools which promise to assist firms dealing with cyber security issues, particularly ‘internal’ threats arising from employee conduct. The software involved essentially allows constant monitoring of everything employees do or say, facilitating extensive surveillance of employees’ communications and behavior at work, from their keystrokes to their physical movements.”

The two law professors contend that while companies may implement these systems in different ways and offer various justifications based on the assertion that the technology serves legitimate business purposes, it’s clear there’s potential for these systems to be misused to the detriment of workers’ health, privacy, and basic liberties.

From a legal perspective, they observe that these sorts of technologies would be governed by data protection rules (eg GDPR) and human rights regimes (eg Article 8 of the European Convention on Human Rights) in Europe and the UK.

“Importantly, we would add that there is a real question about whether these legal regimes are currently fit for purpose in dealing with the full range of harms that arise from the use of these technologies,” they noted. “Companies tend to view the principal legal risk of using these technologies against employees as compliance with informational privacy laws such as data protection rules. In our view, however, mere compliance with, for example, the GDPR does not necessarily mean that uses of such technology are ethically or legally permissible.”

Hariharan and Noorda argue that from the point of view of employees, the problem posed by persistent surveillance at work goes beyond the collection of data that may be used against them.

“The objection is deeper than this, representing concerns about the employees’ bodies and lives being subject to unending control at work, and their inability to move freely in this environment,” they argued. “In our view, this highlights that there may be a range of other parts of our law which have been neglected but are in fact relevant to assessing the legality of such technology – including laws more directly concerned with bodily interferences and individual liberty.”

In the US, Benjamin Wiseman, associate director of the Federal Trade Commission’s Privacy and Identity Protection Division, gave a speech [PDF] on the issue of worker surveillance in February at Harvard Law School. He pointed out that not only are workers at risk of the privacy harms that affect consumers due to the abuse of geolocation information and other data, but they also face threats to their rights as employees. “Indeed, some companies and vendors are building tools that purport to predict the risk of workers unionizing,” he asserted.

“Companies are also funneling the information they collect into AI models to make automated decisions that can have serious consequences for workers’ autonomy, their physical and mental health, and their pay,” Wiseman noted, adding that the consequences can be even worse when software algorithms provide inaccurate data or make flawed calculations.

Citing how pharmacy chain Rite Aid received a five-year ban on using facial recognition technology for deploying a system that misidentified customers – particularly women and people of color – as shoplifters, Wiseman predicted that companies misleading workers about surveillance technologies can expect similar scrutiny.

The US National Labor Relations Board (NLRB) has also signaled its interest in workplace surveillance – at least to the extent that it interferes with labor organization rights guaranteed under Section 7 of the National Labor Relations Act. In an October 2022 memo, NLRB general counsel Jennifer Abruzzo wrote, “Under settled Board law, numerous practices employers may engage in using new surveillance and management technologies are already unlawful.”

Negrón said that employees and employers in the US should familiarize themselves with the state and federal laws governing workplace monitoring.

“For example, states like California, Connecticut, Delaware, New York, Illinois, Massachusetts, and Texas have laws offering varying degrees of protection against this kind of monitoring,” she explained. “California’s CCPA extends certain data rights to employees, while its labor code mandates notification for electronic monitoring. Connecticut, Delaware, and New York require employers to notify employees about monitoring, with New York also addressing potential discrimination risks. Illinois’s BIPA offers strong protections for biometric data, and Massachusetts courts have upheld privacy rights against intrusive surveillance. Texas prohibits the unauthorized recording of oral communications.

“At the federal level, the ECPA and NLRA provide some safeguards. So, although there is a patchwork of laws, the first step for employees and employers is to understand their rights and obligations at the state and federal level.” ®
