How Technology Has Both Fueled and Hindered the 2019 Hong Kong Pro-Democracy Protests

As the city moves into its sixth month of civil unrest, Hong Kong remains consumed by widespread pandemonium and ruin, which has pushed the once-flourishing city into its first recession in a decade. No sign of abatement is in sight as protestors prepare for death in the face of escalating violence. The impetus, an extradition bill that has since been withdrawn, quickly gave way to broader concerns about full democracy and police brutality, and recent elections show widespread support for the pro-democracy movement.

This is not the first time Hong Kongers have risen up in protest since the ’97 handover. The 2014 “Umbrella Movement” ended in failure, but this time around, Hong Kong’s leaderless protest movement has adopted an open-source organizational model in which online communication platforms like LIHKG and Telegram have been central. LIHKG is a multi-category forum website, often likened to Reddit, where posts can be up- or down-voted by users. The forum has been used to crowdfund attention for the protests and has served as a real-time poll for action. Similarly, the instant-messaging app Telegram has allowed protestors to mobilize swiftly. Its popularity comes from its ‘secret chat’ function, which offers end-to-end encryption: a scheme built on asymmetric cryptography in which no third party, including the platform relaying the messages, can read data in transit between the true sender and the intended recipient.
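
For readers unfamiliar with the mechanics, the sketch below illustrates the basic idea of end-to-end encryption with public-key (asymmetric) cryptography, using Python and the PyNaCl library. It is a minimal conceptual sketch of the principle described above, not a description of Telegram’s actual MTProto secret-chat protocol, and the message text is invented for illustration.

    # Minimal illustration of end-to-end encryption: only the intended
    # recipient's private key can decrypt the message, so any server that
    # merely relays the ciphertext never sees the plaintext.
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; only the public halves are exchanged.
    sender_key = PrivateKey.generate()
    recipient_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(sender_key, recipient_key.public_key)
    ciphertext = sender_box.encrypt(b"Meet at the station exit at 8pm")

    # A relaying server only ever handles ciphertext it cannot read.
    # The recipient decrypts with their private key and the sender's public key.
    recipient_box = Box(recipient_key, sender_key.public_key)
    plaintext = recipient_box.decrypt(ciphertext)
    print(plaintext.decode())  # -> Meet at the station exit at 8pm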

In spite of their success, LIHKG and Telegram have not been immune to attack. China has been linked to distributed denial-of-service (DDoS) attacks on both platforms, in which servers were overwhelmed with garbage requests to create connectivity problems for users, presumably an attempt to disrupt protest mobilization. Other online platforms, including Facebook, Twitter, YouTube, and even Pornhub, have identified Chinese information warfare and removed accounts linked to state-backed disinformation campaigns. As those campaigns proved inadequate, the Hong Kong Police Force enlisted the help of foreign cybersecurity specialists and has become ever more zealous in tracking and identifying activists for arrest. In response, private companies like Yubico have stepped up in the name of social responsibility, donating hundreds of hardware security keys, long regarded as a more secure method of account protection. Security keys are of little help, however, when police compel protestors to unlock phones secured by biometrics. Colin Cheung, a protestor who built a facial recognition tool to identify police officers and was targeted by law enforcement because of it, had his eyes forcibly pried open by police, who then held his phone to his face in an attempt to trigger its facial recognition unlock.

As social unrest endures in Hong Kong, so has the weaponization of biometric data and identity more generally. Demonstrators and pro-Beijing patriots have raced to out-dox one another, doxxing being the malicious leaking of a person’s private information online. Hong Kong police were hit particularly hard on the ‘Dadfindboy’ Telegram doxxing channel after officers stopped wearing identification badges as violence escalated. On November 8th, the High Court of the Hong Kong Special Administrative Region extended an injunction barring the doxxing of police officers and their family members, with a journalism exemption meant to balance government accountability against personal security. But the ban does not address the doxxing of pro-democracy figures, whose sensitive personal data, such as home addresses and phone numbers, has been categorically exposed on the China-backed website HK Leaks and, to a lesser extent, on Telegram. Such targeted leaks have led to death threats meant to unnerve protestors into submission.

It remains to be seen how pro-establishment figures and state-backed initiatives will continue to use technology against these young, tech-savvy activists, who fear nothing but a future without hope of universal suffrage and government accountability. China is quickly losing patience with its problem child, and the world watches with bated breath for the conflict to come to a head. One thing is certain: this time, pro-democracy protestors aren’t giving up so easily. As one protestor posted on LIHKG: “If not now, when?”

Illinois Biometric Privacy Law Cries Foul on the Notion of “No Harm, No Foul”

By: Josh Cervantes

In less than a year, an Illinois law has shown that the age-old saying “no harm, no foul” is dangerously misguided when it comes to the collection of biometric data. Illinois’ Biometric Information Privacy Act (BIPA) addresses the unique challenges posed by the collection and storage of biometric identifiers and protects against their unlawful collection and storage. The Act does this by giving individuals and consumers the “right to control their biometric information by requiring notice before collection and giving them the power to say no by withholding consent.” Common biometric identifiers include retina or iris scans, voiceprints, fingerprints, and face or hand geometry scans. BIPA is unique because it is the only piece of legislation in the U.S. that allows private individuals to sue and recover damages for violations. An examination of two recent cases illustrates the practical application of BIPA and the potential fallout from the rulings, and shows how BIPA may influence the creation of similar laws in other U.S. states.

In Rosenbach v. Six Flags Entertainment Corp., the Illinois Supreme Court addressed whether a private individual is “aggrieved” and may pursue liquidated damages and injunctive relief if they have not alleged an “actual injury or adverse effect” beyond the violation of their rights under the statute. The Court ruled that the Plaintiff had indeed suffered harm because Six Flags Corp. denied the Plaintiff the right to maintain their biometric privacy by not allowing them to consent to “. . . the collection, storage, use, sale, lease, dissemination, disclosure, redisclosure, or trade of, or for [defendants] to otherwise profit from. . . associated biometric identifiers or information.” This ruling made clear that a mere violation of BIPA is itself a harm, and that no additional actual injury need be shown to pursue damages and injunctive relief.

The U.S. Court of Appeals for the Ninth Circuit reaffirmed BIPA’s principles in Patel v. Facebook, holding that using “facial-recognition technology without consent invades an individual’s private affairs and concrete interests.” The court noted that the facial recognition technology Facebook uses to identify individuals in photos constituted an unreasonable intrusion into personal privacy because it “effortlessly compiled” detailed information in a manner that would be nearly impossible without such technology. Again, no concrete injury was presented in Patel; the Plaintiffs proceeded on a cause of action simply because their rights under BIPA were violated.

These decisions have several notable impacts on the way biometric privacy rights will be handled in the future. First, they introduce a unique interpretation of constitutional standing under Article III. To establish standing, a party must show that they suffered an “injury in fact—an invasion of a legally protected interest which is (a) concrete and particularized; and (b) actual or imminent, not conjectural or hypothetical.” Rosenbach and Patel show that individuals do suffer actual harm when their biometric information is collected without consent, and that monetary loss or damage to livelihood need not occur. Second, these rulings will affect how law enforcement agencies use biometric surveillance technology and serve to put agencies on notice that this technology poses unique risks to individual privacy. Third, the decisions emphasize the importance of including a private right of action when privacy interests are violated. Without such a right of action, the practical effect of BIPA would almost certainly be gutted, as individuals would have no redress if their biometric privacy rights were violated.

While BIPA remains the strongest biometric privacy law in the U.S., other jurisdictions have taken steps to address the growing concerns regarding the collection and storage of such information. Shortly after BIPA was enacted in 2008, the Texas legislature passed a biometric privacy law that provides civil penalties for companies that improperly store biometric data; however, only the state attorney general can bring suit against companies for violations. In 2017, Washington enacted its own biometric privacy law, which contains a large carve-out for storing biometric data for a “security purpose” and likewise reserves the right of action solely for the state attorney general.

As technology companies and law enforcement agencies alike seek to utilize biometric data collection, it behooves states to take a closer look at the myriad risks such technology can pose to private individuals. BIPA is increasingly serving as a model for other states, not only because it identifies the exact biometric data that should be protected, but also because it actually allows individuals to redress their grievances in court, which, as Patel and Rosenbach illustrate, can lead to serious changes in the biometric data collection landscape. Further delineation of when such collection is proper will only become more necessary as biometric data collection is further integrated into modern services and law enforcement.


Understanding Deepfakes

By: Soniya Shah

Deepfake technology uses artificial intelligence and machine learning models to manipulate videos. A deepfake is a doctored video that shows someone doing something they never did. While the technology can be used to make funny spoofs, it also has much darker, more dangerous implications. Anyone can use online software or applications to create doctored videos. Creating convincing videos is easier for public figures because far more clips and footage of them are available than of private individuals. Donald Trump, for example, is a frequent target of deepfakes because of the sheer number of available clips, which makes it easy to create videos of him saying or doing things that never happened.

All methods of making deepfake videos rely on machine learning models to generate the content. One model trains on a data set (which is why a larger data set makes the videos more believable) and produces the doctored frames, while a second model tries to detect the forgeries. The two models keep training against each other until the second model can no longer tell the forgeries from real footage. While amateur deepfakes are usually easy to detect with the naked eye, more professional ones are much harder to suss out. Because the models that generate the videos keep improving, especially as the training data improves, relying on digital forensics to detect deepfakes is spotty at best.
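
For the technically curious, the adversarial feedback loop described above can be sketched in a few dozen lines of Python with PyTorch. The tiny fully connected networks, random stand-in data, and hyperparameters below are illustrative assumptions only; a real deepfake pipeline trains far larger models on video frames.

    # Minimal sketch of a generative adversarial network (GAN) training loop:
    # a generator learns to produce fakes while a discriminator learns to
    # detect them, and each model improves by competing against the other.
    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM = 16, 64             # assumed sizes, for illustration only

    generator = nn.Sequential(                # model 1: produces fake samples
        nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
    discriminator = nn.Sequential(            # model 2: tries to spot forgeries
        nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    real_data = torch.randn(256, DATA_DIM)    # stand-in for real training footage

    for step in range(1000):
        # Train the discriminator to label real samples 1 and fakes 0.
        noise = torch.randn(64, LATENT_DIM)
        fake = generator(noise).detach()
        real = real_data[torch.randint(0, 256, (64,))]
        d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
                  bce(discriminator(fake), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator to make the discriminator call its fakes real.
        noise = torch.randn(64, LATENT_DIM)
        g_loss = bce(discriminator(generator(noise)), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Once the discriminator can no longer reliably separate generated samples from real ones, the generator’s output is, for practical purposes, indistinguishable from the training data, which is exactly why forensic detection gets harder as training continues.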

Should we be worried? There are serious implications to creating a video that shows someone doing something unforgivable. What if such a video were timed right before an election and swayed the results? The other problem is that creating the videos is relatively easy, given the wide accessibility of deepfake technology. Deepfakes are not, in themselves, illegal, and there are First Amendment questions because there is a high chance that deepfakes are protected speech. However, there are also concerns about national security risks. Some states are already taking steps against such videos; for example, Virginia recently made deepfake revenge pornography illegal.

In June 2019, the House Intelligence Committee held a hearing on the risks of deepfakes, listening to experts on AI and digital policy describe the threats they pose. The committee said it aimed to “examine the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future”. The hearing was timely, given the doctored video of House Speaker Nancy Pelosi that went viral in early summer 2019. That video raised concerns that manipulated footage would become the latest vehicle for disseminating misinformation. It is obviously concerning when viewers can no longer trust a video that appears real, especially one featuring high-ranking government officials. The experts who spoke at the hearing agreed that social media companies need to work together on consistent industry standards to curb the spread of such videos.

Revisiting COPPA: How Recent Developments Have Shaped the Discussion

By: Alvaro Marañon

While calls for comprehensive federal privacy legislation may continue to fall on deaf ears, concerns about protecting children’s online privacy may already have been heard, with talk of forthcoming regulatory change. Illustrating this shift, the Federal Trade Commission (FTC) welcomed comments on the effectiveness of the 2013 amendments to the Children’s Online Privacy Protection Act (COPPA). Those amendments updated the COPPA Rule to address advances in mobile devices and social networking and expanded the definition of personal information to include geolocation data and persistent identifiers like cookies. More recently, the FTC held a public workshop to explore proposals to further update the COPPA Rule. This recent push has been propelled by two major developments over the past year: (1) the legislative proposal by Senator Markey titled “COPPA 2.0”; and (2) the record-breaking FTC settlements for COPPA Rule violations.

In effect since 2000, COPPA creates a set of information privacy guidelines governing the collection, use, and access of personal information by operators of commercial websites or online services directed to children. These requirements apply to companies that have “actual knowledge” that they are collecting personal information from a child under 13 years old. Its main provisions include having a clear and detailed privacy policy, providing parents the opportunity to review the collected information, and requiring operators to protect the confidentiality, security, and integrity of any personal information collected. While numerous enforcement actions have been brought under the statute, legislative amendments have only recently been proposed.

Last March, Senators Markey of Massachusetts and Hawley of Missouri announced a proposal to modify the existing scope and rules of COPPA. Among other things, the bill proposes to ban targeted advertising directed at children (defined as any user under 13), create a new division within the FTC to handle youth privacy and marketing, and modify parents’ ability to view collected information by creating an “Eraser Button” that would permit the parent or user to delete their information. The bill also calls for a new, flexible cybersecurity standard for internet-connected devices targeted at children and minors (defined as any user between 13 and 15) that considers the sensitivity of the information collected, the context in which it is collected, the device’s security capabilities, and more.

While the bill does attempt to balance the interests of innovation and privacy, its outright ban on all targeted advertising to children, its shift from an “actual knowledge” standard to a constructive knowledge standard, and its expansion of the disclosure requirement make it highly problematic.

The outright ban on all targeted advertising directed at children raises overbreadth concerns, as it would sweep in potentially beneficial ads. Second, broadening the knowledge standard may seem beneficial on its face, but it is impractical given how difficult it is to accurately determine one’s intended and actual audience. Lastly, the new requirements put operators in a difficult spot with their privacy and security policies: the “Eraser Button” is intended to improve privacy and security, yet it effectively requires operators to gather more personal information and keep it readily available for viewing by outside parties. These changes, while well intended, run counter to the policy trend toward greater data minimization and anonymization. Although these are only proposals, assessing their potential impact on practices and operations can help with compliance costs and strategic planning for new entrants and incumbents alike.

But what is the current state of COPPA? FTC Commissioner Wilson’s opening remarks at the recent COPPA workshop reiterated an important characteristic of the regulatory environment: the COPPA Rule permits the FTC to keep pace with changes in technology, the development of new business models and data collection practices, and the manner in which children interact with online services. The 2013 COPPA amendments, adopted in response to the expansion of the smartphone market, demonstrated exactly that. The emergence of Internet of Things (IoT) devices and the surge in platforms hosting third-party content merit a reassessment of the current rules, but whether to make any substantive changes is less clear. Looking at recent enforcement cases can help determine whether changes to the rules are needed.

In February 2019, the FTC settled what was then the largest civil penalty for a COPPA violation when Musical.ly agreed to pay $5.7 million for failing to obtain parental consent before collecting personal information from users under 13. Its application, now known as TikTok, let users upload short video clips to an interconnected platform where they could interact with, comment on, and directly message other users.

In May 2019, three dating applications were removed from Apple’s and Google’s respective app stores after the FTC alleged they violated COPPA by permitting users under 13 to access them. Although no fine resulted, the removals demonstrated the FTC’s ongoing supervision of this field.

Lastly, in September 2019, Google and YouTube agreed to pay a record $170 million to settle allegations by the FTC and the New York Attorney General that they had collected personal information from children without their parents’ consent. Specifically, the complaint alleged that, through the use of cookies, YouTube collected personal information from viewers of child-directed channels and then used that information to deliver targeted ads to those viewers. Although the operators classified themselves as general-audience sites, the complaint focused on their failure to act once they had notice that particular channels were directed at children. Importantly, COPPA does not require operators to determine whether videos produced by third parties are directed at children, but in light of this settlement, Google and YouTube have revised their advertising policies.

While the FTC has brought COPPA enforcement actions in the past, this proceeding stood out for the severity of the fine. As FTC Chairman Simons noted, the civil penalty obtained against Google and YouTube was 10 times larger than all 31 prior COPPA cases combined. Aside from the financial implications, the settlement marked a pivotal point in COPPA enforcement by holding a platform liable for content posted by a third party.

These developments point to several areas of consideration for regulators and stakeholders in this industry. First, there will be increased reliance on machine learning to catch violators and help identify child-directed programming. Second, the lack of clarity about which factors are relevant to determining whether programming is child-directed, and how much weight each carries, will likely drive an increase in segmented “child-only” services. Lastly, a ban on all targeted advertisements directed at children could chill investment and lead various stakeholders to abandon the market entirely. While both TikTok and YouTube have announced initiatives to further fund, promote, and expand their child-specific services and content, not every industry player may be capable of following suit.

Wearable Technology & Regulation Gaps

By: Soniya Shah

With wearables like smartwatches and fitness trackers becoming more popular by the day, we have some big questions to answer about data privacy and security. New technology always raises concerns about cyber attacks, especially when data travels over wireless networks.

Users of these devices usually do not want others looking at their data, especially health data. However, many privacy policies are vague and even include disclaimers that information may be shared with third parties. Part of the issue is that HIPAA does not extend to this medical information, so makers of wearables can legally share medical data without incurring liability.

Wearables collect information about a person, including the time and duration of activity. Coupled with demographic user profiles, this information can provide data that is crucial to businesses looking to market to individual consumers.

The security of this information matters because identifying individuals based on their data poses security and privacy risks. For example, insurance companies could use the information to charge different customers different prices. Despite the potential risks, wearables have gone largely unregulated by the FDA because traditional wearables do not assist in patient treatment and the risk of wearing a device like an Apple Watch is low.

While most wearables are not subject to federal regulation, states have the power to regulate via consumer protection laws and other state laws. For example, California has stricter privacy laws around medical data than what is mandated by HIPAA through federal regulations. States should consider tightening regulations to protect consumer data and alleviate some of the risks that come with wearable technology. 

In early June, Senators Amy Klobuchar and Lisa Murkowski introduced the Protecting Personal Health Data Act, which would put into place new privacy and security rules around devices that collect personal health data, including wearables like fitness trackers. The Act would require the Department of Health and Human Services (HHS) Secretary to issue regulations related to privacy and security of health-related consumer devices, applications, services, and software. The bill would incorporate concepts from the European Union’s General Data Protection Regulation (GDPR), such as individual access to delete and amend health data tracked through wearables and other applications. To implement the Act, HHS would need to create a national task force to address cybersecurity risks and privacy concerns. 

HHS will need to take into account the different standards needed for each type of data collected, including genetic data and general personal health data. Perhaps more important will be consumers’ ability to access their own data and exercise more control over what companies collect and use.

The Act is part of a larger Congressional push to protect consumer privacy, especially after the Facebook data scandals. While the Act could be a big step forward for privacy and security, there is no guarantee the bill will pass. While we wait for federal regulation, it may be time for other states to follow in California’s footsteps and start creating legislation that protects consumers.

A Quantum Leap: Washington Bets Big On “Hack-Proof” Technology to Secure Communications

By: Josh Cervantes

In the final weeks of 2018, Congress broke through its endemic gridlock and passed into law H.R. 6227, better known as the National Quantum Initiative Act (NQIA). The NQIA signals the entry of the US into the nascent but already contested field of quantum communications and pits the US against its greatest strategic competitor, China. The European Union has also launched its own billion-euro program, dubbed the Quantum Flagship, further illustrating the urgency with which global powers are entering the quantum communications arena.

Quantum communication is a field of applied quantum physics used in ultra-high-security applications that offer unparalleled levels of data security, integrity, and intrusion detection. Data is encoded on photons of light transmitted through fiber-optic cables, and each of these quantum bits, or qubits, can exist in multiple combinations of 0 and 1 simultaneously. Qubits are extremely fragile: if a hacker were to intercept the communication, they would collapse and assume a value of either 0 or 1, showing that the data had been tampered with. This contrasts with traditional data transmission, which uses fixed values of either 0 or 1 to convey data, allowing hackers to intercept and read a message far more easily.
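
To make the tamper-evidence idea concrete, here is a small, purely classical Python simulation in the spirit of the BB84 quantum key distribution protocol. It is an illustrative sketch of the principle described above, with made-up parameters; it involves no real quantum hardware and is not part of the NQIA itself.

    # Toy BB84-style simulation: an eavesdropper who measures qubits in the
    # wrong basis disturbs them, and the sender and receiver detect the
    # intrusion by comparing bits they should otherwise share exactly.
    import random

    N = 2000
    EAVESDROP = True  # flip to False to see the error rate fall to ~0%

    alice_bits = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]  # random encoding bases

    # An eavesdropper measuring in a random basis scrambles the bit half the time.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if EAVESDROP and random.choice("+x") != basis:
            bit = random.randint(0, 1)  # wrong-basis measurement destroys the state
        channel.append((bit, basis))

    # The receiver measures each photon in a randomly chosen basis.
    bob_bases = [random.choice("+x") for _ in range(N)]
    bob_bits = [bit if basis == b else random.randint(0, 1)
                for (bit, basis), b in zip(channel, bob_bases)]

    # Keep only the positions where sender and receiver used the same basis.
    kept = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    rate = errors / len(kept)
    print(f"error rate on the sifted key: {rate:.1%}")  # ~25% with an eavesdropper
    print("intrusion detected" if rate > 0.1 else "channel appears clean")

With the eavesdropper enabled, roughly a quarter of the retained bits disagree; that statistical fingerprint is what makes interception of a quantum channel detectable rather than silent.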

ISPLS Announces its Second Annual Privacy Symposium

ISPLS’s biggest event of the year is officially announced. The Symposium on Government & Corporate Responsibility in a Data Driven World will be held Wednesday, March 20th from 9:00am – 3:00pm at the Washington College of Law in room NT01.

Stop by to hear from experts in the field of digital privacy. Topics covered include supply chain risk management and software component transparency; the use of biometrics; and the role of platforms in proper data governance. Speakers include industry leaders from consumer advocacy groups, corporations, government, law firms, and think tanks.

This is a great opportunity to hear from thought leaders and network with professionals. Breakfast and lunch will be provided.

Click here for more information about the Privacy Symposium.

Cybersecurity and the Future of Medical Devices

By: Soniya Shah

It’s no secret that attacks on data security and privacy are occurring with increasing frequency, which is alarming no matter what the data is. Healthcare, however, is one of the most frequently targeted sectors and among the least equipped to respond when an attack happens, with data breaches costing the industry about $5.6 billion a year. Attacks on medical devices in particular pose a new concern in the Internet of Things world. Healthcare organizations feel a heightened sense of urgency in accessing their systems, since patient data can be time-sensitive. It’s creepy to think that a pacemaker or MRI scanner could be infected with malware. Malware can do anything from deleting data to copying it, with a wide range of possibilities in between, including corrupting the data, using it for extortion, and modifying it. It’s easy to see why criminals would want to take advantage of this kind of information: medical records contain the most intimate details about a person, and that information can be used in identity theft.
