How Technology Has Both Fueled and Hindered the 2019 Hong Kong Pro-Democracy Protests

As the city moves into its sixth month of civil unrest, Hong Kong remains consumed by widespread pandemonium and ruin, forcing the once-flourishing city into its first recession in a decade. No sign of abatement is in sight as protestors prepare for death in light of escalating violence. The impetus, an extradition bill that has since been withdrawn, quickly gave way to broader demands for full democracy and concerns over police brutality. Recent elections also show widespread support for the pro-democracy movement. This is not the first time Hong Kongers have risen up in protest since the ’97 handover. The 2014 “Umbrella Movement” ended in failure, but this time around, Hong Kong’s leaderless protest movement has adopted an open-source organizational model in which online communication platforms like LIHKG and Telegram have been central. LIHKG is a multi-category forum website, often likened to Reddit, whose posts may be up- or down-voted by users. The forum has been used for crowdfunding to bring attention to the protests and has served as a real-time poll for action. Similarly, the instant-messaging app Telegram has allowed protestors to mobilize swiftly. Its popularity comes from its ‘secret chat’ function, which provides ‘end-to-end’ encryption: messages are encrypted so that only the true sender and intended recipient can read them, and no third party, including the service operator, can access the data in transit.
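The ‘secret chat’ guarantee rests on key agreement between the two endpoints. Telegram’s actual protocol (MTProto) is far more involved, but the core idea, that two parties can derive a shared secret over a public channel without any eavesdropper learning it, can be sketched with a toy Diffie–Hellman exchange. The prime and generator below are purely illustrative, not secure parameters:

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement: two parties derive the same secret
# over a public channel, while an eavesdropper who sees every transmitted
# value (the public keys) still cannot compute it.
P = 2**127 - 1  # a Mersenne prime; illustrative only, far too small for real use
G = 3

def keypair():
    """Generate a private exponent x and the public value G^x mod P."""
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

# Sender and recipient each generate a key pair; only the public halves travel.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its own private key with the other's public key,
# and both arrive at the same shared secret.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b

# Hash the shared secret down to a symmetric key for encrypting messages.
message_key = hashlib.sha256(shared_a.to_bytes(16, "big")).digest()
```

An interceptor relaying traffic between the two endpoints sees only `a_pub` and `b_pub`; recovering the secret from those requires solving the discrete logarithm problem, which is what makes the channel opaque to third parties.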

In spite of their success, LIHKG and Telegram have not been immune to attack. China has been linked to distributed denial-of-service (DDoS) attacks on both platforms, in which servers were overwhelmed with garbage requests to create connectivity issues for users, presumably an attempt to disrupt protest mobilization. Further, other online platforms like Facebook, Twitter, YouTube, and even Pornhub have identified Chinese information warfare and removed accounts linked to state-backed disinformation campaigns. When these campaigns proved inadequate, the Hong Kong Police Force enlisted the help of foreign cybersecurity specialists and has become ever more zealous in tracking and identifying activists for arrest. In response, private companies like Yubico have stepped up to the plate in the name of social responsibility, donating hundreds of hardware security keys to protect activists. Such keys have long been known as a more secure method of protection, though they are unhelpful when police compel protestors to unlock phones protected by biometrics. This too remains an issue: Colin Cheung, a protestor who created a facial recognition tool to identify police officers and was targeted by law enforcement because of it, had his eyes forcibly pried open by police, who shoved his face toward his phone in an attempt to trigger its facial-recognition unlocking function.

As social unrest endures in Hong Kong, so has the weaponization of biometric data and identity generally. Demonstrators and pro-government “patriots” have raced to outdox one another; doxxing is the malicious leaking of a person’s private information online. Hong Kong police were hit particularly hard on the ‘Dadfindboy’ Telegram doxxing channel after they stopped wearing identification badges as violence escalated. On November 8th, the High Court of the Hong Kong Special Administrative Region extended an injunction to stop the doxxing of police officers and their family members, with a journalism exemption to balance government accountability and personal security. But this ban does not address the doxxing of pro-democracy figures, whose sensitive personal data, such as home addresses and phone numbers, has been categorically exposed on the China-backed website HK Leaks and, to a lesser extent, on Telegram. Such targeted leaks have led to death threats meant to unnerve protestors into submission.

It remains to be seen how pro-establishment figures and state-backed initiatives will continue to use technology against these young, tech-savvy activists who are afraid of nothing but a future without hope of universal suffrage and government accountability. China is quickly losing patience with its problem child, and the world watches with bated breath for the conflict to come to a head. One thing is certain: this time, pro-democracy protestors aren’t giving up so easily. As one protestor posted on LIHKG: “If not now, when?”

Illinois Biometric Privacy Law Cries Foul on the Notion of “No Harm, No Foul”

By: Josh Cervantes

In less than a year, an Illinois law has shown that the age-old saying “no harm, no foul” is dangerously misguided when it comes to the collection of biometric data. Illinois’ Biometric Information Privacy Act (BIPA) focuses on the unique challenges posed by the collection and storage of biometric identifiers and protects against the unlawful collection and storage of biometric information. The Act does this by giving individuals and consumers the “right to control their biometric information by requiring notice before collection and giving them the power to say no by withholding consent.” Common biometric identifiers include retina or iris scans, voiceprints, fingerprints, and face or hand geometry scans. BIPA is unique because it is the only piece of legislation in the U.S. that allows private individuals to sue and recover damages for violations. An examination of two recent cases illustrates the practical applications of BIPA and the potential fallout from the rulings, and shows how BIPA may influence the creation of similar laws in other U.S. states.

In Rosenbach v. Six Flags Entertainment Corp., the Illinois Supreme Court addressed whether a private individual is “aggrieved” and may pursue liquidated damages and injunctive relief if they have not alleged an “actual injury or adverse effect” beyond the violation of their rights under the statute. The Court ruled that the Plaintiff had indeed suffered harm because Six Flags Corp. denied the Plaintiff the right to maintain their biometric privacy by not allowing them to consent to “. . . the collection, storage, use, sale, lease, dissemination, disclosure, redisclosure, or trade of, or for [defendants] to otherwise profit from. . . associated biometric identifiers or information.” This ruling made it clear that a mere violation of BIPA alone was a harm to the Plaintiffs, and that no actual injury needed to be shown to pursue damages and injunctive relief.

The U.S. Court of Appeals for the Ninth Circuit reaffirmed BIPA principles in Patel v. Facebook after holding that using “facial-recognition technology without consent invades an individual’s private affairs and concrete interests.” The court noted that facial recognition technology, which Facebook uses to identify individuals in photos, manifested an unreasonable intrusion into personal privacy because it “effortlessly compiled” detailed information in a manner that would be nearly impossible without such technology. Again, no concrete injury was presented in Patel, and the Plaintiffs proceeded on a cause of action because their rights under BIPA were violated.

These decisions have several notable impacts on the way biometric privacy rights will be dealt with in the future. First, they introduce a unique interpretation of constitutional standing under Article III. To establish standing, a party must show that they suffered an “injury in fact—an invasion of a legally protected interest which is (a) concrete and particularized; and (b) actual or imminent, not conjectural or hypothetical.” Rosenbach and Patel show that individuals do suffer actual harm when their biometric information is collected without consent, and that monetary loss or damage to livelihood need not occur. Second, these rulings will impact how law enforcement agencies use biometric surveillance technology and serve to put agencies on notice that this technology poses unique risks to individual privacy. Third, the decisions emphasize the importance of including a private right of action when privacy interests are violated. Without a right of action, the practical effects of BIPA would almost certainly be gutted, as individuals would have no redress if their biometric privacy rights were violated.

While BIPA remains the strongest biometric privacy law in the U.S., other jurisdictions have taken steps to address the growing concerns regarding the collection and storage of such information. Shortly after BIPA was enacted in 2008, the Texas legislature introduced a biometric privacy law that provides civil penalties for companies that improperly store biometric data. However, only the state attorney general can bring suit against companies for violations of the law. In 2017, Washington enacted its own biometric privacy law, which contained a large carve-out for storing biometric data for a “security purpose,” in addition to reserving a right of action solely for the state attorney general.

As technology companies and law enforcement agencies alike seek to utilize biometric data collection, it behooves states to take a closer look at the myriad risks such technology can pose to private individuals. BIPA is increasingly serving as a model for other states because it not only identifies exact biometric data that should be protected, but also because it actually allows individuals to redress their grievances in court, which, as Patel and Rosenbach illustrate, can lead to serious changes in the biometric data collection landscape. Further delineation as to when such collection is proper will only become more necessary as biometric data collection is further integrated into modern services and law enforcement.


Understanding Deepfakes

By: Soniya Shah

Deepfake technology uses artificial intelligence and machine learning models to manipulate videos. A deepfake is a doctored video that shows someone doing something they never did. While the technology can be used to make funny spoofs, it also has much darker, more dangerous implications. Anyone can use online software or applications to create doctored videos. Creating the videos is easier for those in the public sphere because far more footage of them is available in clips and videos than of private individuals. For example, Donald Trump is a frequent target for deepfakes because of the number of available clips, which makes it easy to create videos of him saying or doing things that never happened.

Deepfake videos are generated with machine learning models trained adversarially. One model, the generator, trains on a data set (which is why a larger data set makes the videos more believable) and creates the doctored videos, while a second model, the discriminator, tries to detect the forgeries. The models continue to train against each other until the discriminator can no longer detect the forgeries. While amateur deepfakes are usually easy to detect with the naked eye, more professional ones are much harder to suss out. Because the models that generate the videos keep improving, especially as the training data improves, relying on digital forensics to detect deepfakes is spotty at best.
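The adversarial loop described above can be illustrated with a deliberately tiny, hypothetical stand-in: the real “data” are numbers drawn around 4.0, the “generator” is a single parameter g, and the “discriminator” is a threshold halfway between the two batch means. Real deepfake systems use deep networks for both roles; this sketch only shows the feedback loop in which the generator shifts until the discriminator can no longer separate real from fake:

```python
import random

random.seed(0)

REAL_MEAN, NOISE = 4.0, 0.5  # the "real data" distribution

def real_batch(n):
    return [random.gauss(REAL_MEAN, NOISE) for _ in range(n)]

def fake_batch(g, n):
    # The "generator": samples centered on its single parameter g.
    return [random.gauss(g, NOISE) for _ in range(n)]

g = 0.0      # generator parameter, starts far from the real data
STEP = 0.05  # generator learning step

for _ in range(400):
    reals, fakes = real_batch(64), fake_batch(g, 64)
    # The "discriminator": a threshold halfway between the batch means;
    # samples on the real-mean side of the threshold are labeled "real".
    mid = (sum(reals) / 64 + sum(fakes) / 64) / 2
    side = 1 if sum(reals) / 64 >= mid else -1

    def fooled(batch):
        # Fraction of generated samples the discriminator labels "real".
        return sum(1 for x in batch if (x - mid) * side >= 0) / len(batch)

    # Generator update: nudge g in whichever direction fools the
    # discriminator on more of its samples (finite-difference feedback).
    g += STEP if fooled(fake_batch(g + STEP, 64)) >= fooled(fake_batch(g - STEP, 64)) else -STEP
```

After a few hundred rounds, g settles near the real mean and the threshold discriminator classifies at roughly chance level, which is the stopping condition the paragraph describes.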

Should we be worried? There are serious implications to creating a video that shows someone doing something unforgivable. What if such a video were timed right before an election and swayed the results? Another problem is that creating the videos is relatively easy, given the wide accessibility of deepfake technology. Deepfakes are not illegal, and there are First Amendment questions because there is a high chance that deepfakes are protected by freedom of speech. However, there are also concerns about national security risks. Some states are already taking steps against such videos. For example, Virginia recently made deepfake revenge pornography illegal.

In June 2019, the House Intelligence Committee held a hearing on the risks of deepfakes, listening to experts on AI and digital policy about the threats that they pose. The committee said it aimed to “examine the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future”. The hearing was timely, given the doctored video of House Speaker Nancy Pelosi that went viral in early summer 2019. The video raised concerns that manipulated videos would become the latest technology used to disseminate misinformation. Obviously, it is concerning when viewers can no longer trust a video that appears real, especially when it is of high-ranking government officials. The experts who spoke at the hearing agreed that social media companies need to work together to create consistent industry standards that curb the spread of such videos.

Revisiting COPPA: How Recent Developments Have Shaped the Discussion

By: Alvaro Marañon

While calls for comprehensive federal privacy legislation may continue to fall upon deaf ears, concerns about protecting children’s online privacy might have already been heard, with talk of forthcoming regulatory change. Illustrating this shift, the Federal Trade Commission (FTC) welcomed comments on the effectiveness of the 2013 amendments to the Children’s Online Privacy Protection Act (COPPA). The amendments specifically updated the COPPA Rule to address the advances in mobile devices and social networking, and to expand the definition of personal information to include geolocation and persistent identifiers like cookies. More recently, the FTC held a public workshop to explore proposals to further update the COPPA Rule. This recent push has been propelled by two major developments over the past year: (1) the legislative proposal by Senator Markey titled “COPPA 2.0”; and (2) the record-breaking FTC settlements for COPPA Rule violations.

In effect since 2000, COPPA creates a set of information privacy guidelines governing the collection, use, and access of personal information by operators of commercial websites or online services directed to children. These requirements are imposed upon companies if they have “actual knowledge” that they are collecting personal information from a child under 13 years old. Among the main provisions are having a clear and detailed privacy policy, providing parents the opportunity to review the collected information, and requiring operators to protect the confidentiality, security, and integrity of any collected personal information. While numerous enforcement actions have been brought, legislative amendments have recently been proposed.

Last March, Senators Markey of Massachusetts and Hawley of Missouri announced a proposal to modify the existing scope and rules of COPPA. Among its various provisions, the bill proposes to ban targeted advertising directed at children (defined as any user under 13), create a new division within the FTC to handle youth privacy and marketing, and modify the parental ability to view collected information by creating an “Eraser Button” that would permit the parent or user to delete their information. The bill also calls for a new and flexible cybersecurity standard for internet-connected devices targeted at children and minors (defined as any user between 13 and 15) that considers the sensitivity of information collected, the context in which it is collected, the device’s security capabilities, and more.

While the bill does well to weigh the interests of innovation against those of privacy, its outright ban on all targeted advertising to children, its change of the “actual knowledge” standard to a constructive knowledge standard, and its expansion of the disclosure requirement make it a highly problematic bill.

The outright ban on all targeted advertising directed at children raises overbreadth concerns, since it encompasses potentially beneficial ads. Secondly, broadening the knowledge standard may seem beneficial on its face, but it is impractical given the inability to accurately determine one’s intended and actual audience. Lastly, the new requirements put operators in a difficult spot with their privacy and security policies: the “Eraser Button” is intended to improve privacy and security, yet it requires operators to gather more personal information and keep it readily available for viewing by outside parties. These requirements, while well intended, run afoul of the policy trend toward more data minimization and anonymization. Although these are just proposals, assessing their potential impact on practices and operations can help new entrants and incumbents prepare for compliance costs and strategic planning.

But what is the current state of COPPA? FTC Commissioner Wilson’s opening remarks at the recent COPPA workshop reiterated an important characteristic of the regulatory environment: the COPPA Rule permits the FTC to keep pace with changes in technology, the development of new business models and data collection, and the manner in which children interact with online services. This was demonstrated by the 2013 COPPA amendments, which responded to the expansion of the smartphone market. The emergence of Internet of Things (IoT) devices and the surge in platforms that host third-party content merit a reassessment of the current rules, but whether to make any substantive changes is less clear. Looking at recent enforcement cases can help determine whether changes to the rules are needed.

In February 2019, the FTC obtained what was then the largest civil penalty for a COPPA violation when the operators of the video app TikTok agreed to pay $5.7 million for failing to seek parental consent before collecting personal information from users under 13 years old. The application permitted users to upload short video clips to an interconnected platform where they could interact, comment, and directly message other users.

In May 2019, three dating applications were removed from Apple’s and Google’s respective application stores after the FTC alleged they violated COPPA by permitting users under 13 to access them. Although no fine resulted, the removal demonstrated the FTC’s ongoing supervision of this field.

Lastly, in September 2019, Google and YouTube agreed to pay a record $170 million to settle allegations by the FTC and the New York Attorney General that they had collected personal information from children without their parents’ consent. Specifically, the complaint alleged that, through the use of cookies, YouTube had collected personal information from viewers of child-directed channels and then used that information to deliver targeted ads to those viewers. Despite the operators classifying themselves as general-audience sites, the complaint focused on the operators’ failure to take action once they had notice of the channels directed at children. Importantly, COPPA does not require operators to determine whether videos produced by third parties are directed to kids, but in light of this settlement, Google and YouTube have revised their advertising policies.

While the FTC has brought COPPA enforcement actions in the past, this proceeding was differentiated by the severity of the fine. As noted by FTC Chairman Simons, the civil penalty obtained against Google and YouTube was 10 times larger than those in all 31 prior COPPA cases combined. Aside from the financial implications, the settlement marked a pivotal point in COPPA enforcement by holding a platform liable for content posted by a third party.

These developments indicate areas of consideration for regulators and stakeholders in this industry. First, there will be increased reliance upon machine learning to catch violators and help identify child-directed programs. Second, the lack of clarity about which factors are relevant, and what role each plays in determining whether programming is child-directed, will lead to an increase in segmented “child-only” services. Lastly, a ban on all targeted advertisements directed at children could chill investment and lead various stakeholders to abandon the market entirely. While both TikTok and YouTube have announced initiatives to further fund, promote, and expand the services and content of their child-specific channels, not all industry players may be capable of following suit.
