William Barr’s Lofty Plan to Stop Huawei Overcommits, Underdelivers

By: Josh Cervantes

After employing a panoply of unsuccessful measures designed to seal off Chinese telecommunications giant Huawei from the U.S. market and those of its allies, Attorney General William Barr in early February offered a bold new approach: frustrating Huawei’s domination of the 5G market by having the U.S. State Department take a controlling ownership stake in Finland’s Nokia and Sweden’s Ericsson, two of Huawei’s archrivals. Barr’s suggestion is in line with a U.S. campaign to reduce Huawei’s footprint around the world, driven by concerns that the company could, at the direction of the Chinese Communist Party, use its technology to spy on foreign governments and other entities that China considers a threat.

Suggesting that such a move be carried out “either directly or through a consortium of private American and allied companies,” Barr envisions a not-so-distant future in which all U.S. 5G infrastructure and operations are handled by domestic firms, with the exception of Ericsson and Nokia, given their significant U.S. presence.

However, if Barr’s ambitious plan were executed, it could leave taxpayers on the hook for a massive bill while delivering only marginal national security benefits. These considerations raise the question: is Barr’s plan actually worth it?

Barr’s plan would initially require paying satellite operators to vacate their allocated spectrum frequencies to make room for 5G services. Key to the plan’s success is having the FCC rapidly auction off C-band spectrum currently used by several satellite operators, including Intelsat, SES, and Eutelsat. In a hotly contested 3-2 decision earlier this month, the FCC voted to free up 280 MHz of C-band for 5G, allocating at least $9.7 billion in accelerated relocation payments to the companies currently occupying the spectrum. These funds would compensate the companies for quickly vacating the desired C-band spectrum, after which the U.S. government would hold a public auction for it. The companies occupying the spectrum were previously unified under the banner of the C-Band Alliance, but as the prospect of vacating the spectrum materialized, they turned on one another and began requesting separate payments for opening the spectrum. The requested compensation appears likely to dwarf the FCC’s $9.7 billion allocation.

Considering that Ericsson and Nokia have a combined market capitalization of approximately $50 billion, the cost of purchasing a controlling stake in both companies could enormously inflate the bill for U.S. taxpayers. This outcome could be avoided if the “consortium of private American and allied companies” Barr referenced were to come together and purchase the controlling stakes instead. While not implausible, the possibility of such a move occurring in the immediate future, a crucial element of Barr’s plan, is questionable at best as the coronavirus continues to rout global markets and curb major investment. For now, Barr’s plan would likely require the U.S. government, and thus the American taxpayer, to foot the bill for controlling stakes in Ericsson and Nokia.

Prominent U.S. allies have already allowed Huawei to install telecom infrastructure or are courting the idea. The U.K., a member of the Five Eyes alliance, announced it would allow Huawei to “play a limited role in its next generation 5G mobile networks.” While Germany wrestles with how much access Huawei should have to its 5G market, Huawei is building a 5G equipment factory in France. These developments, coupled with wider adoption of Huawei’s technology throughout the EU, suggest that even if Barr’s plan succeeds, it will provide only marginal security enhancements, because the U.S. would likely be forced to curtail the amount of information it shares with these allies until new security protocols are implemented. Moreover, Barr’s plan appears to overlook a crucial paradigm shift in the U.S.-E.U. security apparatus: the current system is already being upended, and Chinese technology will be used in the E.U. regardless of U.S. efforts to persuade it otherwise.

The State Department, AG Barr, and the wider U.S. intelligence community must ask themselves: is the juice worth the squeeze? Barr’s plan may provide only modest security protections at the cost of upending intelligence-sharing agreements with crucial allies around the globe. If the U.S. is to effectively counter Huawei’s march toward global domination, it must find solutions that can be adopted by its partners abroad, not just at home.

 

EARN What? How lawmakers continue to miss the mark with Section 230

By: Alvaro Maranon

Last month, the Department of Justice held a workshop on Section 230 of the Communications Decency Act [“Section 230”]. Attorney General Bill Barr’s opening remarks highlighted a myriad of talking points that continue to serve as the basis for changing Section 230. In the speech, AG Barr argued that Section 230 is responsible for, among other things, shielding criminals and bad actors and enabling internet services to deny access to law enforcement officials, even those armed with a court-authorized warrant. Although the speech contained a far more exhaustive list of concerns, they all rally behind the call to reform Section 230.

This speech was not superficial rhetoric; it illustrates the reemergence of the government’s anti-encryption campaign. Recently, members of the “Five Eyes” (Australia, Canada, the United Kingdom, the United States, and New Zealand) have adopted, or signaled their intent to adopt, overly broad anti-encryption legislation. Whether it is the United Kingdom’s “Ghost Protocol” proposal or Australia’s “Assistance and Access Bill of 2018,” each measure represents a targeted effort to weaken encryption.

Now, Senators Graham and Blumenthal are seeking to codify these concerns with their bill, the “EARN IT Act.” Despite the bill’s explicit statement that its focus is combatting child sexual abuse material [“CSAM”], it is much more than that. The proposal amends Section 230 by removing a service provider’s liability shield in civil and state criminal suits over CSAM and exploitation-related material unless a newly created commission certifies the provider.

The commission would be directed by AG Barr, who would unilaterally determine the best practices each company must follow to “earn” its liability shield. Moreover, the bill includes no oversight language nor any meaningful check on this expansive discretionary power. Given the recent anti-encryption rhetoric, this power would very likely be used to weaken end-to-end encryption (E2E) and impose the frequently sought but illusory backdoor requirement. Creating a “law enforcement only” backdoor, a purposeful vulnerability in encryption that would give officials easy access to encrypted data, is not only infeasible to do safely but dangerous. Once such a weakness exists, nothing guarantees that the master key to all encrypted accounts will be used only by the good guys. Criminals will not only seek to discover the weakness but may well steal it from the government outright. The devastating WannaCry ransomware embodied this danger when it crippled devices in over 150 countries, causing an estimated $4 billion in losses, after the Shadow Brokers hacking group leaked exploits stolen from the NSA.
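
To make concrete why a single “master key” is so dangerous, the minimal sketch below illustrates the basic property of end-to-end encryption using the PyNaCl library: only the holder of the recipient’s private key can read a message, so there is no central key for a relay server, or a thief, to abuse. This is an illustration under simplified, assumed conditions, not a description of how any particular messaging service implements E2E.

```python
from nacl.public import Box, PrivateKey  # PyNaCl: pip install pynacl

# Each party generates a keypair on their own device; private keys never leave it.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sender_box = Box(alice_secret, bob_secret.public_key)
ciphertext = sender_box.encrypt(b"meet at the usual place")

# A relay server only ever sees `ciphertext`; without bob_secret it cannot decrypt.
receiver_box = Box(bob_secret, alice_secret.public_key)
assert receiver_box.decrypt(ciphertext) == b"meet at the usual place"
```

The whole point of this design is that no third key exists; a mandated “exceptional access” key would have to be bolted on and then guarded forever, which is precisely the weakness the paragraph above describes.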

In an era where the digital economy continues to grow and threat vectors continue to evolve, encryption needs to be strengthened, not weakened. These legislative proposals epitomize a short-sighted approach that fails to account for a plethora of foreseeable consequences. Section 230 and strong encryption have yielded enormous economic benefits for individuals and the broader economy; encryption strengthens domestic markets by fostering consumer trust in e-commerce. Yet beyond these successes, their true benefit to society has been giving a voice to the voiceless:

  • Encryption empowers journalists, activists, and political dissidents to speak and think freely in times of oppression and xenophobia.
  • Encryption helps root out corruption and other government malfeasance by protecting whistleblowers and activists who seek to reveal scandals and controversies.

To be clear, efforts to combat CSAM and other heinous crimes should always be welcomed and encouraged, and more support is needed in the fight against related crimes such as cyberstalking and revenge porn. But if lawmakers sincerely sought to address the issue, they would consult with experts rather than needlessly rush harmful legislation, as happened with SESTA. The Cyber Civil Rights Initiative, headed by a board of fantastic advisors and professionals, is one of many excellent groups that could contribute to the drafting of comprehensive and effective solutions.

Although critics of Section 230 may have some valid concerns, the rationales above for changing the law rest on false pretexts. The same red herrings were repeated during the Senate Judiciary Committee’s recent hearing on the EARN IT Act. Instead of dismantling Section 230, which has helped foster a rich and diverse online community and enabled ingenious start-ups like GoFundMe to prosper, lawmakers should seek out ways to incentivize internet services to take down more harmful content.

This isn’t a zero-sum game between law enforcement and tech companies. A critical look at the harms AG Barr cites reveals how feasible and practical cooperation can be, rather than the adversarial posture so often pitched as the only approach to public-private efforts in tech. Each of Barr’s claims, examined below, proves unnecessary given the ample alternatives demonstrating otherwise.

Shielding Criminals 

“No Effect on Criminal Law”: the language of Section 230 is clear.

Unlike most legislation, Section 230 is short and unambiguous; it even carries the nickname “The Twenty-Six Words That Created The Internet.” Despite this lack of ambiguity, lawmakers continue to purposefully warp its effect and purpose. The statute’s congressional findings and policy statement make clear that the liability shield was not meant to undermine the enforcement of criminal laws or the prosecution of sex trafficking crimes. Even with this clear language, AG Barr continues to claim that Section 230 enables criminals and bad actors to evade punishment. This is far from the truth, as the takedown of Backpage.com shows.

At first glance, the domain seizure of Backpage.com appeared to be a success story, given the mass proliferation of truly awful and despicable content on the site. But a subsequent investigative report revealed that Backpage had not only taken numerous steps to curb these activities but was also a powerful ally in the fight against sex trafficking. The DOJ even described Backpage as “remarkably responsive to law enforcement requests and often takes proactive steps to assist in investigations.” From developing content-moderation practices that filtered out certain search terms to retaining a former sex-crime and child abuse prosecutor to help craft a holistic safety program, Backpage did not act as if it were above the law.

While takedowns of such sites offer some solace and remedy to victims of these crimes, bad actors will simply migrate to other, less visited, and possibly less cooperative, platforms. The sad truth is that CSAM and related material will remain widespread, with technology companies reporting “a record 45 million online photos and videos of the abuse last year.” Congress should not only authorize additional funding for these efforts, especially given how consistently it fails to fully fund previously authorized state and regional investigations, but also seriously investigate what tools and research can help both the private and public sectors combat this problem more effectively.

Blocking Access to Law Enforcement

“This bill says nothing about encryption.” Senator Blumenthal actively sought to shut down this narrative at the Senate hearing last week. Numerous experts in the field have denounced this claim and explained how the bill could be used to impose ad hoc carve-outs from encryption. Please read any of the following excellent pieces on the subject: Riana Pfefferkorn, Electronic Frontier Foundation, TechFreedom, NetChoice.

Although the backdoor issue warrants an entire discussion of its own, the misleading justifications law enforcement relies upon deserve scrutiny. Many of the pretexts paint an environment in which law enforcement is helpless against technology companies in the fight against crime. From encryption creating impenetrable barriers to nullifying the power of a warrant, each fear is far from reality. In fact, it is often the failures of law enforcement that result in stand-offs and compounding problems in investigations. A key CSIS report assessed the challenges and opportunities law enforcement faces in accessing and using digital evidence. It found that although encryption does pose a challenge in digital evidence gathering, it is far from the main problem. The biggest reported difficulty was the “inability to effectively identify which service providers have access to relevant data.” Moreover, officials often reported going to the wrong ISP, having little to no training in data requests, and an overall lack of funding.

Cooperation is possible. Creating a comprehensive approach to digital evidence can lift the burden for both service providers and law enforcement. Narrowing warrant requests can incentivize more cooperation, as service providers will be more comfortable sharing specific information about their users. It can also prevent officials from being saddled with large amounts of information that may run afoul of the particularity requirement and the holding of Carpenter v. United States.

These incentives can also easily be applied to harmful-content takedowns. Following the mass shootings in Texas and Ohio last year, Cloudflare cut ties with the notorious 8chan by withdrawing its essential DDoS protection service. More recently, Facebook, Twitter, YouTube, and other social media giants announced a coordinated update to their policies to flag conspiracy theories about coronavirus and other misinformation. Section 230 encouraged companies to act proactively, and they continue to do so.

Attacking Section 230 is an easy cop-out; efforts should be directed elsewhere. At a time when some legislators continue to lecture private companies about their duty to the public, even as some officials mock serious issues, lawmakers’ focus should be on supporting Good Samaritans. In the end, Cloudflare’s response to 8chan should represent the norm, not the outlier, in the approach to combating harmful speech.

 

Airbnb’s Party Prevention Plan – A Consideration of Smart Devices that Lack Recording Capabilities

By: Margarita Gorospé

Use of surveillance technologies within Airbnb properties is not a new phenomenon. In December 2019, however, Airbnb announced an update to its company policies. The update focused on trust and included a new official ban on “party houses,” i.e., the use of rental properties as venues for house parties. It also described ways the company hopes to address safety concerns and hosts’ worries, including a “Neighborhood Support” initiative and a new discount program for smart home technology devices. The risks and concerns these initiatives raise, particularly around the use of smart home technology, must be balanced against the considerable benefits they afford to the platform’s users, hosts and guests alike.

Following the December announcement, Airbnb sent an email to hosts in February 2020 touting the platform’s discounts on three optional monitoring devices hosts could use in their rented-out properties: Minut, NoiseAware, and Roomonitor.

Minut allows homeowners to remotely monitor the condition of the property, including noise, temperature, motion, and humidity; the device lets homeowners set a threshold noise level and does not need an outlet. NoiseAware alerts homeowners to “sustained noise levels” and considers excessive noise “a leading indicator of property misuse.” Its website boasts that it is the only home device with a “microphone that does not record audio.” Finally, Roomonitor gives homeowners real-time awareness of the property’s noise levels throughout the day via analyzed noise patterns. All three devices claim not to have the ability to record.
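
For readers curious how “monitoring without recording” can work in practice, the sketch below shows one plausible approach: the device reduces the microphone signal to a single loudness number, compares it against a host-configured threshold, and only sends an alert when the noise is sustained. The function names (read_rms, send_alert), the threshold, and the timing window are illustrative assumptions, not details published by Minut, NoiseAware, or Roomonitor.

```python
import math
import time

# Illustrative values only; real devices let the host configure these.
THRESHOLD_DB = 70        # noise ceiling in decibels (assumed)
SUSTAIN_SECONDS = 60     # noise must persist this long before alerting (assumed)

def rms_to_db(rms, reference=1.0):
    """Convert a raw RMS amplitude into a decibel value."""
    return 20 * math.log10(max(rms, 1e-9) / reference)

def monitor(read_rms, send_alert):
    """Loop forever, alerting on sustained loud noise without storing audio.

    read_rms: callable returning the microphone's current RMS amplitude
              (a stand-in for the device's audio driver).
    send_alert: callable that notifies the homeowner (e.g., a push notification).
    """
    loud_since = None
    while True:
        level = rms_to_db(read_rms())   # only a single number survives; audio is discarded
        if level >= THRESHOLD_DB:
            if loud_since is None:
                loud_since = time.time()
            elif time.time() - loud_since >= SUSTAIN_SECONDS:
                send_alert(f"Sustained noise above {THRESHOLD_DB} dB detected")
                loud_since = None       # a new alert requires a fresh sustained period
        else:
            loud_since = None
        time.sleep(1)
```

Because only the derived decibel values ever leave this loop, a chart of those values is all a host, or a court, would later have to work with, which is exactly the evidentiary limitation discussed later in this piece.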

These new technologies, which allow “smart” monitoring without recording capabilities, pose many questions. One significant question at the center of smart technology use stands out, however: do these devices finally strike a balance between privacy concerns and safety concerns?

On the one hand, Airbnb guests are often shocked when they inadvertently discover a recording device in their rental, reacting with anger, confusion, or disgust. A hidden camera in a hotel-like space or other short-term rental is unsettling even though guests know the property belongs to someone else. It is a temporary home, where most people expect a level of privacy at least somewhat close to what they would expect in their own home, and guests often feel violated when intimate moments are captured for homeowners to watch.

On the other hand, the homeowner who chooses to rent out their property for visitors has true ownership. Homeowners desire to maintain the value of their properties, wish to keep good relationships with their neighbors, aim to maintain a safe environment within their home, and want to ensure that their properties avoid damage. When they choose to stay off-site after guests arrive, they are placing their assets and trust in the hands of complete strangers. Anything can happen to their property without their watchful eye, including shootings, such as the ones reported in California and Canada. Finding a way to monitor their home while they are not physically present allows homeowners to have peace of mind about leaving their property to strangers, and also allows them to act swiftly if any unusual or impermissible activity is occurring.

Enter Minut, NoiseAware, and Roomonitor, with promises of monitoring abilities while limiting privacy intrusion.

Will the use of these devices help move private surveillance technology in the right direction? First, such use eliminates facial-recognition problems entirely, an area rife with questions of bias and mistaken identity. Second, the devices claim to monitor only certain conditions, which significantly limits the scope of information being analyzed; none of them have video features, which gives guests some level of anonymity. Third, the lack of recording capabilities adds another layer of privacy by providing a lighter touch on accessible information.

Without recordings, the homeowner can only view live readings or compiled data. This may be a good thing. After all, recordings invite unnecessary inspection and require storing information about a person in ways he or she may not know about. Instead, the devices limit homeowners to glimpses and summaries, not “on-demand” watching.

While there are clear benefits to this technology, potential risks inevitably come along. Recording capabilities do serve important functions, and without them there are fewer ways to determine context. For example, will a device that alerts homeowners to sustained noise levels ignore the sound of a single gunshot? What if the homeowner chooses an unusually low noise threshold on a Minut device, triggering an alert whenever guests have an animated, lively conversation? Finally, what if information from one of these devices could provide evidence for or against a claim between homeowner and guest, but, because no full recording exists, the court is left with a limited dataset? A single spike on a noise chart can mean anything, and without a full recording there is little to nothing that can be used to prove assertions.

Other concerns center on how the data is stored and handled. These devices may not have recording capabilities, but they do collect data for the analyses and reports presented in their respective applications. How exactly is this information compiled, and where is it stored before the analysis or report is created?

Finally, laws and policies concerning surveillance devices center on notice. Airbnb states that hosts must disclose the presence of any recording device within the home and disclose any active monitoring, and it places little to no restriction thereafter, so long as devices are not placed in bathrooms or similarly intimate areas. There are no federal video or audio recording laws that would clearly apply in these instances, so the devices are likely to be regulated by state law. Even so, the legal status of these three devices is unclear. They are a tamer version of Ring or Nest yet a more watchful version of traditional home alarm systems; they act more like sensors but still have information-collecting capabilities. None of that information is recorded in the way other popular home surveillance devices record footage or audio: there are no playback features, just charted data. Since most laws concerning surveillance devices center on the “recording” aspect, questions remain about regulation when a device waters collected information down to near anonymity.

There are still many unknown factors when it comes to finding the balance between privacy and safety. Devices such as Minut, NoiseAware, and Roomonitor may be attempting to consciously find that balance, and if so, such innovation may help provide future initiatives with important lessons.

 


Disrupting Reality: The Hidden Risks Behind Virtual Reality

By: Soniya Shah

Virtual reality (VR) simulates environments and allows users to interact within them. The technology has been embraced by many corporate users: the healthcare sector uses it for training, and the aerospace industry uses it for flight simulation. Virtual reality is also popular with consumers, particularly in video games, and its close cousin augmented reality powers games like Pokémon Go, which lets a player walk around searching for virtual characters in their real-world surroundings. Most users access the environment through an interface, usually a headset worn during the interaction. The combined virtual reality and augmented reality industry was valued at $26.7 billion in 2018 and is expected to reach $814.7 billion by 2025. An industry that valuable is bound to attract privacy and security concerns.

Many devices connected to the Internet of Things, including virtual reality devices, were likely designed and developed without security protocols in mind. Of course, many emerging technologies raise questions about user privacy, especially when privacy is not a key design consideration. The main concern is that VR companies have a whole new level of access because their programs tap into the video and audio feed of a user’s surroundings, and many of their privacy policies state that information will be shared with third parties.

Many VR devices also track biometric data: movements of the body, including the head, hands, and eyes. This data can amount to medical data. What happens if that information falls into the lap of an insurance company that uses it to make determinations about medical coverage? The issue is that doing something fun can have very real, potentially damaging consequences when third parties have paid for access to that information.

As security and privacy concerns mount, some developers are now making devices with a private mode that prevents the device from recording data during use. The burden falls on users to choose devices that incorporate such a feature if they want to keep their information safe, because there are no laws regulating these devices. Users should also read privacy policies to see what data companies release and what permissions are granted when they agree to use a VR device. There are further implications because many VR companies rely on cloud providers that users never interact with, and the terms of service between a VR company and its cloud provider may differ from those shown to users.

Regulation is likely necessary now, but the law tends to move more slowly than new technology. To protect users, regulations should limit the amount of biometric data VR devices can collect, either by requiring its immediate deletion or by prohibiting its collection altogether. Further, it is critical that VR companies give users explicit information about the types of data they collect and who, including third parties, has access to it. If a data policy changes, all users should be required to opt in again. User awareness is ever critical in an era where security and privacy are afterthoughts.

How Technology Has Both Fueled and Hindered the 2019 Hong Kong Pro-Democracy Protests

As the city moves into its sixth month of civil unrest, Hong Kong remains consumed by widespread pandemonium and ruin, forcing the once-flourishing city into its first recession in a decade. No sign of abatement is in sight as protestors prepare for death in light of escalating violence. The impetus, an extradition bill that has since been withdrawn, quickly gave way to broader concerns about full democracy and police brutality, and recent elections show widespread support for the pro-democracy movement. This is not the first time Hong Kongers have risen up in protest since the ’97 handover. The 2014 “Umbrella Movement” ended in failure, but this time around, Hong Kong’s leaderless protest movement has adopted an open-source organizational model in which online communication platforms like LIHKG and Telegram have been central.

LIHKG is a multi-category forum website often likened to Reddit, with posts that users may up- or down-vote. The forum has been used for crowdfunding to bring attention to the protests and has served as a real-time poll for action. Similarly, the instant-messaging app Telegram has allowed protestors to mobilize swiftly. Its popularity comes from its “secret chat” function, which offers end-to-end encryption, an implementation of asymmetric encryption in which third parties are blocked from accessing data in transit between the true sender and recipient.

In spite of their success, LIHKG and Telegram have not been immune to attack. China has been linked to distributed denial-of-service (DDoS) attacks on both platforms, in which servers were overwhelmed with garbage requests to create connectivity issues for users, presumably an attempt to disrupt protest mobilization. Further, other online platforms, including Facebook, Twitter, YouTube, and even Pornhub, have identified Chinese information warfare and removed accounts linked to state-backed disinformation campaigns. When these campaigns proved inadequate, the Hong Kong Police Force enlisted the help of foreign cybersecurity specialists and became ever more zealous in tracking and identifying activists for arrest. In response, private companies like Yubico have stepped up in the name of social responsibility, donating hundreds of hardware security keys, long known as a more secure method of protection, though such keys are unhelpful when police compel protestors to unlock phones protected by biometrics. That danger remains real: in the case of Colin Cheung, a protestor who created a facial recognition tool to identify police and was subsequently targeted by law enforcement because of it, officers forcibly pried his eyes open and shoved his face toward his phone in an attempt to trigger its facial recognition unlock.

As social unrest endures in Hong Kong, so has the weaponization of biometric data and identity generally. It has become a race between demonstrators and pro-establishment “patriots” to outdox one another, doxxing being the malicious leaking of a person’s private information online. Hong Kong police were hit particularly hard on the “Dadfindboy” Telegram doxxing channel after they stopped wearing identification badges as violence escalated. On November 8th, the High Court of the Hong Kong Special Administrative Region extended an injunction to stop the doxxing of police officers and their family members, with a journalism exemption meant to balance government accountability and personal security. But the ban does not address the doxxing of pro-democracy figures, whose sensitive personal data, such as home addresses and phone numbers, has been categorically exposed on the China-backed website HK Leaks and, to a lesser extent, on Telegram. Such targeted leaks have led to death threats meant to unnerve protestors into submission.

It remains to be seen how pro-establishment figures and state-backed initiatives will continue to use technology against these young, tech-savvy activists, who fear nothing but a future without hope of universal suffrage and government accountability. China is quickly losing patience with its problem child, and the world watches with bated breath for the conflict to come to a head. One thing is certain: this time, pro-democracy protestors aren’t giving up so easily. As one protestor posted on LIHKG: “If not now, when?”

Illinois Biometric Privacy Law Cries Foul on the Notion of “No Harm, No Foul”

By: Josh Cervantes

In less than a year, an Illinois law has shown that the age-old saying “no harm, no foul” is dangerously misguided when it comes to the collection of biometric data. Illinois’ Biometric Information Privacy Act (BIPA) focuses on the unique challenges posed by the collection and storage of biometric identifiers and protects against their unlawful collection and storage. The Act does this by giving individuals and consumers the “right to control their biometric information by requiring notice before collection and giving them the power to say no by withholding consent.” Common biometric identifiers include retina or iris scans, voiceprints, fingerprints, and face or hand geometry scans. BIPA is unique because it is the only piece of legislation in the U.S. that allows private individuals to sue and recover damages for violations. An examination of two recent cases illustrates BIPA’s practical application and the potential fallout from the rulings, and shows how BIPA may influence the creation of similar laws in other U.S. states.

In Rosenbach v. Six Flags Entertainment Corp., the Illinois Supreme Court addressed whether a private individual is “aggrieved” and may pursue liquidated damages and injunctive relief if they have not alleged an “actual injury or adverse effect” beyond the violation of their rights under the statute. The Court ruled that the Plaintiff had indeed suffered harm because Six Flags Corp. denied the Plaintiff the right to maintain their biometric privacy by not allowing them to consent to “. . . the collection, storage, use, sale, lease, dissemination, disclosure, redisclosure, or trade of, or for [defendants] to otherwise profit from. . . associated biometric identifiers or information.” This ruling made it clear that a mere violation of BIPA alone was a harm to the Plaintiffs, and that no actual injury needed to be shown to pursue damages and injunctive relief.

The U.S. Court of Appeals for the Ninth Circuit reaffirmed BIPA’s principles in Patel v. Facebook, holding that using “facial-recognition technology without consent invades an individual’s private affairs and concrete interests.” The court noted that facial recognition technology, which Facebook uses to identify individuals in photos, manifested an unreasonable intrusion into personal privacy because it “effortlessly compiled” detailed information in a manner that would be nearly impossible without such technology. Again, no concrete injury was presented in Patel, and the Plaintiffs proceeded on a cause of action because their rights under BIPA were violated.

These decisions have several notable impacts on the way biometric privacy rights will be dealt with in the future. First, they introduce a unique interpretation of constitutional standing under Article III. To establish standing, a party must show that they suffered an “injury in fact—an invasion of a legally protected interest which is (a) concrete and particularized; and (b) actual or imminent, not conjectural or hypothetical.” Rosenbach and Patel show that individuals do suffer actual harm when their biometric information is collected without consent, and that monetary loss or damage to livelihood need not occur. Second, these rulings will affect how law enforcement agencies use biometric surveillance technology and serve to put agencies on notice that the technology poses unique risks to individual privacy. Third, the decisions emphasize the importance of including a private right of action when privacy interests are violated. Without one, the practical effect of BIPA would almost certainly be gutted, as individuals would have no redress when their biometric privacy rights are violated.

While BIPA remains the strongest biometric privacy law in the U.S., other jurisdictions have taken steps to address growing concerns about the collection and storage of such information. Shortly after BIPA was enacted in 2008, the Texas legislature passed a biometric privacy law that provides civil penalties for companies that improperly store biometric data, though only the state attorney general can sue companies for violations. In 2017, Washington enacted its own biometric privacy law, which contains a large carve-out for storing biometric data for a “security purpose” and likewise reserves the right of action solely for the state attorney general.

As technology companies and law enforcement agencies alike seek to utilize biometric data collection, it behooves states to take a closer look at the myriad risks such technology can pose to private individuals. BIPA is increasingly serving as a model for other states not only because it identifies the exact biometric data that should be protected, but also because it actually allows individuals to redress their grievances in court, which, as Patel and Rosenbach illustrate, can lead to serious changes in the biometric data collection landscape. Further delineation of when such collection is proper will only become more necessary as biometric data collection is further integrated into modern services and law enforcement.


Understanding Deepfakes

By: Soniya Shah

Deepfake technology uses artificial intelligence and machine learning models to manipulate videos. A deepfake is a doctored video that shows someone doing something they never did. While the technology can be used to make funny spoofs, it also has much darker, more dangerous implications. Anyone can use online software or applications to create doctored videos. Creating convincing videos of public figures is easier because far more footage of them exists in clips and videos than of people outside the public sphere. Donald Trump, for example, is a frequent deepfake target because of the number of available clips, which makes it easy to create videos of him saying or doing things that never happened.

Making deepfake videos requires machine learning models to generate the content. One model trains on a data set (which is why a larger data set makes the videos more believable) and then creates the doctored video, while a second model tries to detect forgeries. The two models continue to run against each other until the second model can no longer detect forgeries. While amateur deepfakes are usually easy to spot with the naked eye, professional ones are much harder to suss out. Because the models that generate the videos improve over time, especially as the training data improves, relying on digital forensics to detect deepfakes is spotty at best.
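
The forger-versus-detector setup described above is essentially a generative adversarial network (GAN). The sketch below, written in PyTorch with deliberately tiny, hypothetical networks operating on flattened image frames, shows the alternating training step: the detector (discriminator) learns to separate real frames from forgeries, and the forger (generator) learns to produce frames the detector accepts as real. Real deepfake pipelines are far more elaborate (face alignment, autoencoders, post-processing), so treat this as a conceptual illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical, deliberately tiny networks; real frames are assumed flattened to 784 values.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_frames):
    """One adversarial round; real_frames is a (batch, 784) tensor of genuine frames."""
    batch = real_frames.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the detector (discriminator) to separate real frames from forgeries.
    fakes = G(torch.randn(batch, 100)).detach()
    d_loss = loss_fn(D(real_frames), real_labels) + loss_fn(D(fakes), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the forger (generator) to produce frames the detector labels as real.
    g_loss = loss_fn(D(G(torch.randn(batch, 100))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    return d_loss.item(), g_loss.item()
```

Training stops, in principle, when the detector’s guesses are no better than chance, which is precisely why forgeries produced this way are hard for forensic tools to flag.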

Should we be worried? There are serious implications to creating a video that shows someone doing something unforgivable. What if such a video were timed right before an election and swayed the results? The other problem is that creating the videos is relatively easy, given the wide accessibility of deepfake technology. Deepfakes are not illegal, and there are First Amendment questions because there is a high chance that deepfakes are protected speech. At the same time, there are concerns about national security risks. Some states are already taking steps against such videos; Virginia, for example, recently made deepfake revenge pornography illegal.

In June 2019, the House Intelligence Committee held a hearing on the risks of deepfakes, listening to experts on AI and digital policy about the threats they pose. The committee said it aimed to “examine the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future.” The hearing was timely, given the doctored video of House Speaker Nancy Pelosi that went viral in early summer 2019. That video raised concerns that manipulated footage would become the latest technology used to disseminate misinformation. Obviously, it is concerning when viewers can no longer trust a video that appears real, especially one of high-ranking government officials. The experts who spoke at the hearing agreed that social media companies need to work together to create consistent industry standards that limit the spread of such videos.

Revisiting COPPA: How recent developments have shaped the discussions

By: Alvaro Marañon

While calls for comprehensive federal privacy legislation may continue to fall upon deaf ears, concerns about protecting children’s online privacy might already have been heard, with talk of forthcoming regulatory change. Illustrating this shift, the Federal Trade Commission (FTC) welcomed comments on the effectiveness of the 2013 amendments to the Children’s Online Privacy Protection Act (COPPA). Those amendments updated the COPPA Rule to address advances in mobile devices and social networking and to expand the definition of personal information to include geolocation and persistent identifiers like cookies. More recently, the FTC held a public workshop to explore proposals to further update the COPPA Rule. This recent push has been propelled by two major developments over the past year: (1) Senator Markey’s legislative proposal, dubbed “COPPA 2.0”; and (2) the record-breaking FTC settlements for COPPA Rule violations.

In effect since 2000, COPPA creates a set of information privacy guidelines governing the collection, use, and access of personal information by operators of commercial websites or online services directed to children. These requirements are imposed on companies that have “actual knowledge” that they are collecting personal information from a child under 13 years old. Its main provisions include having a clear and detailed privacy policy, providing parents the opportunity to review the collected information, and requiring operators to protect the confidentiality, security, and integrity of any collected personal information. While numerous enforcement actions have been brought under the statute, legislative amendments have also recently been proposed.

Last March, Senators Markey of Massachusetts and Hawley of Missouri announced a proposal to modify COPPA’s existing scope and rules. Among other things, the bill proposes to ban targeted advertising directed at children (defined as any user under 13), create a new division within the FTC to handle youth privacy and marketing, and modify parents’ ability to view collected information by creating an “Eraser Button” that would permit the parent or user to delete their information. The bill also calls for a new, flexible cybersecurity standard for internet-connected devices targeted toward children and minors (defined as any user between 13 and 15), taking into account the sensitivity of the information collected, the context in which it is collected, the device’s security capabilities, and more.

While the bill does well to weigh the interests of innovation against those of privacy, its outright ban on all targeted advertising to children, its change of the “actual knowledge” standard to a constructive knowledge standard, and its expansion of the disclosure requirement make it highly problematic.

The outright ban on all targeted advertising directed at children raises overbreadth concerns, since it would sweep in potentially beneficial ads. Changing the knowledge standard to a broader, constructive one may seem beneficial on its face but is impractical given operators’ inability to accurately determine their intended and actual audiences. Lastly, the new requirements put operators in a difficult spot with their privacy and security policies: the “Eraser Button” aims to improve privacy and security, yet it requires operators to gather more personal information and keep it readily available for viewing by outside parties. These requirements, while well intended, run afoul of the policy trend toward data minimization and anonymization. Although these are just proposals, assessing their potential impact on practices and operations can help new entrants and incumbents prepare for compliance costs and strategic planning.

But what is the current state of COPPA? FTC Commissioner Wilson’s opening remarks at the recent COPPA workshop reiterated an important characteristic of the regulatory environment: the COPPA Rule permits the FTC to keep pace with changes in technology, the development of new business models and data collection, and the manner in which children interact with online services. This was demonstrated by the 2013 COPPA amendments, which responded to the expansion of the smartphone market. The emergence of Internet of Things (IoT) devices and the surge in platforms that host third-party content merit a reassessment of the current rules, but whether to make substantive changes is less clear. Looking at recent enforcement cases can help determine whether changes to the rules are needed.

In February 2019, the FTC settled what was then the largest civil penalty for a COPPA violation when Musical.ly agreed to pay $5.7 million for failing to seek parental consent before collecting personal information from users under 13 years old. Its application, TikTok, permitted users to upload short video clips to an interconnected platform where they could interact, comment, and directly message other users.

In May 2019, three dating applications were removed from Apple’s and Google’s respective application stores after the FTC alleged they violated COPPA by permitting users under 13 to access them. Although no fine resulted, the removal demonstrated the FTC’s ongoing supervision of the field.

Lastly, in September 2019, Google and YouTube agreed to pay a record $170 million to settle allegations by the FTC and the New York Attorney General that they had collected personal information from children without their parents’ consent. Specifically, the complaint alleged that, through the use of cookies, YouTube had collected personal information from viewers of child-directed channels and then used that information to deliver targeted ads to those viewers. Although the operators classified themselves as general-audience sites, the complaint focused on their failure to take action once they had notice of the channels directed at children. Importantly, COPPA does not require operators to determine whether videos produced by third parties are directed to kids, but in light of this settlement, Google and YouTube have revised their advertising policies.

While the FTC has brought COPPA enforcement actions in the past, this proceeding was distinguished by the severity of the fine. As noted by FTC Chairman Simons, the civil penalty obtained against Google and YouTube was 10 times larger than all 31 prior COPPA cases combined. Aside from the financial implications, the settlement marked a pivotal point in COPPA enforcement by holding a platform liable for content posted by a third party.

These developments point to areas of consideration for regulators and stakeholders in this industry. First, there will be increased reliance on machine learning to catch violators and identify child-directed programs. Second, the lack of clarity about which factors are relevant, and what role each plays in determining whether programming is child-directed, will likely lead to an increase in segmented “child-only” services. Lastly, a ban on all targeted advertisements directed at children could chill investment and lead various stakeholders to abandon the market entirely. While both TikTok and YouTube have announced initiatives to further fund, promote, and expand their child-specific services and content, not all industry players may be capable of following suit.

Wearable Technology & Regulation Gaps

By Soniya Shah

With wearables like smartwatches and fitness trackers becoming more popular by the day, we have big questions to answer about data privacy and security. There is always concern about new technology and cyberattacks, especially when data travels over wireless networks.

Users of these devices usually do not want others looking at their data, especially when it comes to health data. However, many privacy policies are vague and even include disclaimers that information may be shared with third parties. Part of the issue is that HIPAA does not extend to this medical information, so makers of wearables can legally share medical data without incurring liability.

Wearables obtain information about a person, including the time and duration of activity. This information, coupled with demographic user profiles, can provide data that is crucial to businesses looking to market to individual consumers.

The security of this information matters because identifying individuals based on their data poses security and privacy risks; insurance companies, for example, could use the information to differentiate prices between customers. Despite the potential risks, wearables have gone largely unregulated by the FDA, because traditional wearables do not assist in patient treatment and the risk of wearing a device like an Apple Watch is low.

While most wearables are not subject to federal regulation, states have the power to regulate via consumer protection laws and other state laws. For example, California has stricter privacy laws around medical data than what is mandated by HIPAA through federal regulations. States should consider tightening regulations to protect consumer data and alleviate some of the risks that come with wearable technology. 

In early June, Senators Amy Klobuchar and Lisa Murkowski introduced the Protecting Personal Health Data Act, which would put into place new privacy and security rules around devices that collect personal health data, including wearables like fitness trackers. The Act would require the Department of Health and Human Services (HHS) Secretary to issue regulations related to privacy and security of health-related consumer devices, applications, services, and software. The bill would incorporate concepts from the European Union’s General Data Protection Regulation (GDPR), such as individual access to delete and amend health data tracked through wearables and other applications. To implement the Act, HHS would need to create a national task force to address cybersecurity risks and privacy concerns. 

HHS will need to take into account the different standards needed for each type of data collected, including genetic and general personal health data. Perhaps more important will be consumers’ ability to access their own data and exert more control over what companies collect and use.

The Act is part of a larger congressional push to protect consumer privacy, especially after the Facebook data scandals. While it could be a big step for privacy and security, there are no guarantees the bill will pass. While we wait for federal regulation, it may be time for states to follow in California’s footsteps and start crafting legislation that protects consumers.
