
TRANSCRIPTS OF JAY KESAN PODCAST

As the fall semester begins, students at Saint Louis University will soon find a new amenity, the Amazon Echo Dot, in each of their dorm rooms. The Echo Dot is a voice assistant device developed by Amazon and powered by its artificial intelligence service, Alexa. The university has decided to deploy over 2,300 of these devices across all student residences on campus to give students easier access to campus-related information. For example, students can ask the voice assistant about library hours and building locations. It is the first time that a university has put these voice assistant devices in student living spaces. Not surprisingly, despite the conveniences they provide, there are privacy concerns with these devices.

So, let us look at how the Amazon Echo Dot works. The device responds to a wake word chosen by its user; by default, the wake word is "Alexa." After hearing the wake word, the device records the user's voice, sends the recording to the virtual assistant, Alexa, and performs the corresponding action. For example, a student may ask the Echo Dot in his dorm room to create a reminder for a personal event, which may be private information. The device will then send the voice recording to Alexa to create the reminder, as instructed. But voice recordings like this are not deleted once the request is completed. Instead, they are stored on Amazon's servers. Saint Louis University states that the Alexa for Business platform, which is a workplace solution, is used to manage the Echo devices provided to students, and that no personal information will be collected. According to Amazon, devices enabled by Alexa for Business are not associated with personal accounts. This means that any data sent to the server, including voice recordings, is anonymous and not attributable to individual students. Alexa for Business does not give the university any access to these audio files beyond the ability to delete them. Thus, students' voice recordings are anonymous and inaccessible to the school. Amazon has also been implementing controls in compliance with the EU's General Data Protection Regulation, the GDPR, to secure customer data.
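To make that flow concrete, here is a minimal, hypothetical sketch of the wake-word pattern described above. This is not Amazon's code; the phrases, function names, and the simple string matching are invented for illustration. The point is only that the device listens locally and ships audio to the cloud service only after it thinks it heard the wake word.

```python
# Hypothetical sketch of the wake-word flow described above.
# This is NOT Amazon's implementation; it only illustrates the pattern:
# listen locally, and only send audio to the cloud after the wake word.

WAKE_WORD = "alexa"

def send_to_cloud(utterance: str) -> str:
    """Stand-in for the cloud voice service; here it just returns a canned reply."""
    if "remind" in utterance:
        return "OK, I created your reminder."
    return "Sorry, I didn't understand that."

def handle_audio(stream):
    """Process a stream of (simulated) spoken phrases."""
    for phrase in stream:
        text = phrase.lower().strip()
        if text.startswith(WAKE_WORD):
            command = text[len(WAKE_WORD):].strip(" ,")
            # Only the command following the wake word leaves the device.
            print(send_to_cloud(command))
        # Everything else is ignored in this sketch.

if __name__ == "__main__":
    simulated_audio = [
        "I'll meet you at the library at noon",            # ignored
        "Alexa, remind me about my doctor's appointment",  # sent to cloud
    ]
    handle_audio(simulated_audio)
```

As the next paragraph notes, the privacy risk arises precisely where this idealized flow breaks down: when the wake word or the command is misheard.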

However, even though the university does not seem to pose a threat to students' privacy, and Amazon says protecting customer data is its top priority, it is difficult to guarantee that all the conversations near Echo devices will be safe, because sometimes things do not work as intended, especially when it comes to technology. A few months ago, an Oregon family discovered that a private conversation had been recorded by their Echo device and sent to a random person on their contact list because the device misheard the wake word and the command that followed. This incident tells us that voice recognition is not always reliable. Under the GDPR, data accuracy is an essential requirement, and users are given the right to correct any false information. But voice assistants like the Amazon Echo usually do not give users enough time to correct misinterpreted commands before those commands are executed and have an effect, nor do they provide enough visual confirmation to help users understand how their data will be processed.

Another source of risk is vulnerabilities in the devices themselves. Security experts have successfully exploited Echo devices and turned them into wiretaps that could continuously listen and record, either by modifying the hardware or by running malicious software. Although these particular exploits have since been fixed, new vulnerabilities may be discovered and exploited by attackers.

Setting aside these intrinsic risks of voice assistant devices, Saint Louis University has been considerate of students' privacy, and students who remain concerned can mute the microphone or simply unplug the device and put it in a drawer for the rest of the school year.

I’m Jay Kesan.

Cellular service providers store information about which cell towers transmit signals to a customer's phone, thus revealing the geographic area where the phone is located when the transmission occurs. When cell phones were first getting popular, this information was limited to when a phone call or text message was initiated or received, and it indicated only the general vicinity of the phone. Modern apps often give users the option to receive "push" notifications that allow services to send updates directly to your phone instead of waiting for you to manually check for updates. This frequent "checking in" by apps makes cell site location data increasingly detailed, and service providers may store this information for years. Because people carry their phones with them, historical cell site location data creates a new possibility of retroactive surveillance, which is a boon to law enforcement.

On June 22, the Supreme Court issued its opinion in the case of Carpenter v. United States. Carpenter concerned this type of cell site location information and involved a string of robberies in Michigan and Ohio. The defendant’s physical nearness to the robberies, as shown by historical cell site data, was used as circumstantial evidence to support convicting him. To obtain this information, the investigators had used an order under the Stored Communications Act. This order is an enhanced subpoena process to allow investigators to compel certain categories of data from cellular service providers without having to show the full “probable cause” that is required for a warrant under the Fourth Amendment. In Timothy Carpenter’s case, investigators used this enhanced subpoena to obtain 127 days of information about Carpenter’s physical movements, consisting of 12,898 individual data points. The question before the Supreme Court was whether investigators needed to show “probable cause” to obtain a court order to conduct this kind of retroactive surveillance.
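Those figures work out to a strikingly dense record of movements. A quick back-of-the-envelope calculation (my own arithmetic based on the numbers above, not a quotation from the opinion) shows roughly how many location points were captured per day:

```python
# Rough scale of the location record at issue in Carpenter:
# total data points divided by the number of days covered.
data_points = 12_898
days = 127
print(f"about {data_points / days:.0f} location points per day")  # roughly 102
```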

A 5-4 majority of the Supreme Court noted that the Carpenter case sits at a crossroads in Fourth Amendment law. On one hand, in the recent GPS-related case of United States v. Jones, the Supreme Court recognized an expectation of privacy in physical location and movements. But there is also the third-party doctrine, which says there is no expectation of privacy in information that you voluntarily give to third parties. The third-party doctrine is often discussed in the context of business records, like financial records held by a bank.

In the Carpenter decision, the Supreme Court ruled that the third-party doctrine does not apply to historical cell site location data. The Court reasoned that even in third-party doctrine cases, Fourth Amendment protection may still exist for information of a particular nature. Writing for the majority, Chief Justice Roberts noted that “[t]here is a world of difference between the limited types of personal information addressed in [third party doctrine cases] and the exhaustive chronicle of location information casually collected by wireless carriers today.”

The Carpenter decision, while potentially groundbreaking for privacy rights, is fairly narrow and is very much tied to the presence of physical objects. Under Carpenter, there is Fourth Amendment protection for this kind of automatically generated information that tracks the location of a particular communications device. Previous cases about location privacy under the Fourth Amendment have concerned beepers in barrels and GPS devices in cars. Cell site data is more far-reaching because cell phones are practically an extension of the body.

This action by the Supreme Court recognizes the societal developments that come with living in a highly connected technological age. If you want a device to be able to connect to services wirelessly, some form of location tracking will occur. Carpenter clarifies that that kind of information is to be afforded Fourth Amendment protections.

I’m Jay Kesan.

Humans and computers are alike in at least one sense: both can malfunction or be compromised. On Feb. 28th, the U.S. Marine Forces Reserve announced a data breach affecting thousands of marines, sailors and civilians, putting their identities at risk, as sensitive personal information such as truncated Social Security numbers was leaked. The investigation showed that there was no malicious intent involved and that the data breach was indeed a result of human error: an email containing the unencrypted confidential information had accidentally been sent to the wrong email distribution list. But data breaches are not always accidental. Besides making unintentional disclosures, people are often tricked by scammers into giving up valuable information, and these hustles are referred to as "social engineering" or "phishing" in the cybersecurity world.

The 2013 Target data breach was one such example. The case was finally settled last year, and the retail company ended up paying $18.5 million in fines for the breach of 43 million records of payment card information. The Target incident started with a phishing email sent to a third-party vendor of Target. Through that vendor, the attackers obtained Target login credentials and subsequently gained unrestricted access to the confidential information of Target's customers.

Activities like phishing are not entirely new to the law. We had similar schemes before the Internet era, and they fit the criteria for common law fraud. Fraud occurs when there is a false representation that is intended to deceive another and that causes that person to act, resulting in an injury.

Phishing can possibly fit into this legal definition of fraud. But phishing and fraud potentially differ when it comes to the injury. If someone deceives you into giving them your car, you have been injured by the loss of physical property. You have no car. But there is no exclusivity for information. If you receive a phishing email that links you to a fake website designed to capture your login information, and you enter your login information, have you been injured? You can still use your login information. It’s just that now they can use it too. So when does the injury occur? Is the act of deception the injury? Does the injury occur when the perpetrator actually uses the password? Or is the injury the act of using the password plus an action that brings the perpetrator some sort of illicit personal benefit, such as stolen credit card information?

The Computer Fraud and Abuse Act is the main federal cybercrime law. One of the crimes created by the CFAA requires acting with the intent to defraud and obtaining something of value from a computer that the perpetrator lacked the authority to access. The federal court of appeals for the Ninth Circuit has said that if a person's authorization to access a computer is revoked, they cannot circumvent that revocation by using someone else's password. Presumably this reasoning would also apply to phishing, where the perpetrator had no authorization to begin with. Covering phishing under the CFAA has the same problem as viewing phishing as regular fraud, though the CFAA does state that the injury occurs when the fraudulent behavior results in access to a protected computer without authorization or in excess of authorization.

Still, one of the major uncertainties about cybercrime is when the injury occurs. A data breach victim may feel uneasy upon learning that their information was stolen. Administrative staff may get nervous about keeping their jobs if they accidentally fall for a phishing scam. The nature of cybercrime is one reason why the CFAA needs to be amended to take modern concerns and injuries into account.

I’m Jay Kesan.

In April, the CEO of Facebook, Mark Zuckerberg, testified before Congress about the alleged user privacy violations that occurred at Facebook. Several weeks later, Cambridge Analytica, the political consulting firm that misused the personal information of millions of Facebook users, announced it is shutting down due to the loss of customers. The Federal Trade Commission is proceeding with its investigation of Facebook, and we will find out whether and how Facebook will be held responsible for these privacy violations.

That said, these privacy concerns are not going to end anytime soon. It is not just Facebook. Companies in multiple industries are becoming heavily dependent on customers’ personal data so that they can provide tailored products and services to their customers at lower cost. Ideally, both the companies and their customers should benefit from this data collector-data contributor relationship. In reality, all of us who are data contributors often feel uncomfortable about it. From Zuckerberg’s testimony in Congress, we may find answers to why we feel this way, and how meaningful regulation can improve this situation.

One reason why data contributors may not trust data collectors is that there is an informational asymmetry between the two. Many users have little knowledge about how companies like Facebook are collecting their personal information and what kinds of information are being collected. People get suspicious about whether their voices are being recorded secretly through the microphones on their laptops or through smart home devices like Amazon Echo.

It is often unclear to users who has access to their personal information. For example, people unfamiliar with Facebook's business model may think that it sells users' information to advertisers. During his Congressional testimony, Zuckerberg explained that Facebook acts as an intermediary that connects users with relevant ads. In order to provide greater clarity to users, regulation is needed to require that data collectors disclose, in an understandable and comprehensive manner, the sources being used to collect information, the types of information being collected, and the parties who have access to that information.

Sometimes the information being collected is of a kind we are hesitant to share. For example, many car insurance companies have been promoting telematics devices, which track driving habits and reward good drivers with lower premiums. But even for a good driver, the chance of getting into an accident varies with many other factors, for example, road conditions. Imagine if insurance companies started to gather information about the routes a driver takes and adjusted premium rates accordingly, in real time. Drivers would spend less on car insurance when safer routes are taken, and insurers would be able to monitor their risk exposure more closely. But even in such a mutually beneficial scenario, not everyone is willing to share their every movement with others.
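As a purely hypothetical illustration of the kind of real-time, route-based pricing imagined above, the adjustment might look something like the sketch below. The route names, risk scores, and pricing formula are invented for the example and do not describe any actual insurer's model.

```python
# Hypothetical sketch of route-based premium adjustment.
# Risk scores and the pricing formula are invented for illustration only.

BASE_MONTHLY_PREMIUM = 100.00  # dollars, assumed baseline

# Assumed per-route risk scores (1.0 = average risk).
ROUTE_RISK = {
    "highway_commute": 0.9,
    "downtown_rush_hour": 1.3,
    "rural_night_drive": 1.1,
}

def adjusted_premium(routes_driven: list[str]) -> float:
    """Scale the base premium by the average risk of the routes driven."""
    if not routes_driven:
        return BASE_MONTHLY_PREMIUM
    avg_risk = sum(ROUTE_RISK.get(r, 1.0) for r in routes_driven) / len(routes_driven)
    return round(BASE_MONTHLY_PREMIUM * avg_risk, 2)

print(adjusted_premium(["highway_commute", "highway_commute"]))       # 90.0
print(adjusted_premium(["downtown_rush_hour", "rural_night_drive"]))  # 120.0
```

The privacy cost of such a scheme is exactly the concern raised above: the pricing only works if the insurer knows every route you drive.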

In short, people weigh different considerations when it comes to the sensitivity of personal information. Some may be happy to give up personal information in exchange for lower prices or convenience, while others may not. Therefore, data contributors, meaning you and me, should be given the right not to share the information they consider sensitive, unless they choose to do so.

In the EU, the General Data Protection Regulation – GDPR – goes into effect on May 25, 2018, and it provides data protection and privacy for all individuals within the European Union. It also deals with the export of personal data outside the EU, and hence, it will have a global impact.

Given the increased focus on cyber privacy, it is likely that some Congressional legislation or regulation will emerge. Will that regulation be adequate, and will it be adequately enforced?

I’m Jay Kesan.

The 4th generation of the Apple Watch was released last week with exciting new features, including fall detection and advanced heart monitoring capabilities. Two of these health-related features, heart rhythm detection and a personal electrocardiogram, have received clearance from the Food and Drug Administration (FDA), making the new Apple Watch a Class II medical device, in the same category as a powered wheelchair. Hence, many more people will be wearing medical devices on a daily basis and relying on them to keep track of their health.

Despite their convenience, these networked medical devices and their users have always faced cybersecurity threats. When medical equipment, such as an infected CT scanner in a hospital, has to be taken offline in order to be patched, patients in the hospital suffer and other patients may have to travel farther to another hospital to get treatment. In August 2017, the FDA recalled almost half a million networked pacemakers because these implantable devices were found to have vulnerabilities that might allow hackers to remotely alter a patient's heartbeat. Unlike pacemakers, smart watches are less likely to cause direct physical harm to their users, but because these wearable accessories constantly collect data about your personal health, a security breach of these devices could lead to serious privacy violations. Attackers may also be able to influence users' behavior indirectly by providing false health information.

In response to the increasing concern about the cybersecurity of medical devices, the U.S. Department of Health & Human Services (HHS) recommended that the FDA take additional measures to address this issue.

Currently, before a manufacturer can market its product as a medical device, it has to go through a 3-phase procedure with the FDA to get clearance or approval. First, there is a pre-submission program that allows the manufacturer to better understand FDA requirements. Then, the manufacturer needs to submit a set of documents based on the FDA’s “refuse-to-accept” checklists, which simply means that the FDA does not accept submissions with missing documents. Lastly, the FDA uses a template, called a “SMART template,” to guide its reviews of submissions.

Corresponding to these three phases, the recommendations given by the HHS are threefold, including promoting the use of pre-submission meetings to address cybersecurity-related questions, adding cybersecurity documentation to the FDA’s refuse-to-accept checklists, and creating a dedicated section for cybersecurity in the SMART template.

These recommended measures will certainly raise awareness of cybersecurity among medical device manufacturers. Once they are in place, submissions without cybersecurity documentation will not be accepted in the first place, and manufacturers will have to prioritize addressing the cybersecurity issues in their products.

Nonetheless, these recommendations are limited in scope, leaving many important cybersecurity issues unresolved. Aside from checking cybersecurity with the SMART template, which the FDA has already started doing, the other two measures suggested by the HHS seem to be more about procedure and documentation than about actually incentivizing manufacturers to improve the cybersecurity capabilities of their products. Manufacturers can come up with perfect cyber risk mitigation plans in order to pass FDA review but never effectively implement those plans.

In addition, although the FDA has a post-market surveillance program, which monitors the performance of drugs and medical devices on the market after they receive clearance or approval, it often takes several years for cybersecurity vulnerabilities in these medical devices to come to light, and their discovery is usually due to third-party researchers who are not involved in the surveillance program. In short, the FDA could make vulnerability detection quicker and more effective and thereby improve the cybersecurity of networked medical devices.

I’m Jay Kesan.

If you have online accounts with social media sites, e-commerce companies or any other businesses that have your personal information, you have probably received several email notifications about privacy policy updates from them during the past few weeks. This is because the deadline for them to be GDPR-compliant, May 25th, has just passed.

GDPR stands for General Data Protection Regulation. It was adopted by the European Union in April 2016 to protect the data security and privacy of people in the Union, or "data subjects," which is the term used in this regulation. Companies collecting or processing the personal information of data subjects had two years to get ready and become compliant. The focus of the regulation is to restrain companies from abusing user data and to make the process of collecting and handling data more transparent to data subjects. In addition, under the GDPR, data subjects have the right to control the data collected by companies, such as correcting inaccurate personal information, requesting a copy of collected data and, under certain circumstances, erasing personal information, also known as the "right to be forgotten."

The GDPR applies not only to EU companies, but also to businesses outside the EU that handle the personal information of data subjects located in the European Union. The scope of this regulation can be interpreted expansively because the data subjects to be protected are not necessarily EU citizens. The specific wording of the regulation refers to data subjects "in the Union", so by its language, it could conceivably apply to protect the personal data of Americans on vacation in Europe.

Besides its broad scope, the GDPR imposes heavy fines on companies that are found not to be compliant. Under the GDPR, there are two tiers of administrative fines. The lower tier, up to 10 million Euros or 2% of the company's global annual revenue, whichever is higher, is for violations like failing to report data breach incidents in a timely manner. The higher tier, up to 20 million Euros or 4% of global annual revenue, whichever is higher, is for violations of data subjects' rights and unlawful data processing practices. To many companies, a fine as large as 4% of annual revenue is a significant percentage of their profit for a whole year. Some analysts believe that a fine this big is unlikely, as the regulation says fines must be "effective, proportionate and dissuasive".
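As a rough illustration of how these caps scale with company size (a simplified reading of the two tiers as described above, with invented revenue figures, and of course not legal advice), the maximum possible fine works out like this:

```python
# Simplified sketch of the GDPR maximum-fine caps described above.
# Actual fines are set case by case and must be "effective, proportionate
# and dissuasive"; this only shows how the caps scale with revenue.

def max_fine_eur(annual_revenue_eur: float, higher_tier: bool) -> float:
    """Return the cap on an administrative fine for a given global annual revenue."""
    if higher_tier:
        # Up to 20 million Euros or 4% of global annual revenue, whichever is higher.
        return max(20_000_000, 0.04 * annual_revenue_eur)
    # Up to 10 million Euros or 2% of global annual revenue, whichever is higher.
    return max(10_000_000, 0.02 * annual_revenue_eur)

# A small firm with 50 million Euros in revenue vs. a large one with 50 billion.
print(max_fine_eur(50e6, higher_tier=False))  # 10,000,000 -> far more than 2% of revenue
print(max_fine_eur(50e9, higher_tier=True))   # 2,000,000,000 -> 4% of revenue
```

The small-firm example is exactly why, as discussed next, the fine structure can weigh relatively more heavily on small businesses.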

The way the fines are structured suggests that the regulation is going to have a bigger impact on small businesses than on large ones, because for a small business, the 10-million-Euro cap for the lower tier may be far greater than 2% of its annual revenue, so it may end up paying relatively more than a large company would. In addition, large companies like Google or Facebook have more technical resources to implement the data protection measures required by the GDPR and more legal resources to stay compliant. According to a survey about the challenges of GDPR compliance conducted by a security research firm, Crowd Research Partners, 43% of the firms interviewed report that they do not have expert staff, and 40% say that they lack the budget to comply. In short, many firms may have to choose between being fined and complying at great cost.

The GDPR has been in force for only a couple of weeks. There are still a lot of uncertainties about how firms should comply with it and how it will be enforced. But it is a good starting point for improving data protection, and perhaps in the future we will see fewer incidents like the Facebook-Cambridge Analytica scandal.

I’m Jay Kesan.

In recent news, President Trump has accused Google of rigging search results. The President claims that the search engine giant has purposefully suppressed positive stories about his administration and could be open to prosecution as a result.

While the President's specific claims of political censorship are unsubstantiated, he has touched on an underlying issue: how do we know that what we search for and what we see on the internet represents the objective truth? How do websites rank and show information in a fair way?

In the case of Google, its searches make up 92 percent of all internet searches. Despite this popularity, the company has never published how its search algorithm works. What is the "it" factor that pushes a Wikipedia page or a New York Times article near the top of a search, while keeping a, quote, "untrustworthy" source near the bottom? These "secret algorithms" are prevalent throughout search engines and social media; popular sites like Facebook, Instagram, and Twitter all rely on such algorithms to determine what content to show users.

For example, in July of this year, Twitter's algorithms limited the visibility of some Republicans in profile searches. Testifying before Congress, Twitter's CEO, Jack Dorsey, said the site tried to enforce policies against "threats, hate, harassment or other forms of abusive speech", and that the tweaking of its algorithm unintentionally excluded Republican profiles. Twitter has since fixed its search algorithm.

What we know about these secret algorithms isn't much. Certainly the algorithm looks for sites that use the same kinds of words that people are searching for. But it also tries to ensure that the pages containing those words are legitimate, by looking at signals such as whether the site is trustworthy and whether it uses the latest and most secure technology. There is also an element of personalization affecting site rankings, where users see more stories from publishers they have visited frequently in the past.
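To make that idea concrete, here is a deliberately toy scoring function that combines the three kinds of signals just mentioned: keyword relevance, a trust score, and a personalization boost. It is my own invented illustration, with made-up sites, weights, and scores; it has nothing to do with Google's actual, unpublished algorithm.

```python
# Toy ranking sketch combining the three signal types described above:
# keyword relevance, site trustworthiness, and personalization.
# Invented for illustration; not any real search engine's algorithm.

def score(page: dict, query_terms: set[str], visited_before: set[str]) -> float:
    words = set(page["text"].lower().split())
    relevance = len(query_terms & words) / max(len(query_terms), 1)
    trust = page["trust"]  # assumed 0.0 - 1.0 quality signal
    personalization = 0.2 if page["site"] in visited_before else 0.0
    return 0.6 * relevance + 0.3 * trust + personalization

pages = [
    {"site": "example-news.com", "trust": 0.9, "text": "net neutrality repeal explained"},
    {"site": "random-blog.net",  "trust": 0.3, "text": "net neutrality hot takes and rumors"},
]
query = {"net", "neutrality", "repeal"}
history = {"example-news.com"}

for p in sorted(pages, key=lambda p: score(p, query, history), reverse=True):
    print(p["site"], round(score(p, query, history), 2))
```

Even in a toy like this, notice how much turns on the hidden weights: nudging them slightly reorders the results, which is exactly why the secrecy of the real weights makes people uneasy.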

For Google, the decision to keep its algorithm secret is partly an attempt to ensure that it keeps working. If the nuts and bolts behind Google's rankings were revealed, companies would try to alter their content in order to maximize their rank.

Regardless, because of the entire industry’s lack of transparency, it is easy to think that the search results we receive every day could be inherently biased. If Google or Facebook ever went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. And that is a very scary proposition.

So how do we fix such an issue?

Frank Pasquale, a professor at the University of Maryland Law School, has suggested that the Federal Trade Commission and the Federal Communications Commission should gain access to search data and investigate claims of manipulation. His hope is that a nonpartisan body could investigate accusations of bias and put the issue to rest.

Conversely, Facebook has sketched out a plan that involves giving academic researchers access to its search data and allowing these researchers to study whether bias exists. Under Facebook's proposed solution, the company would keep a tighter lid on its secret algorithm while still allowing some of the findings to reach the public.

However, recently it was revealed that the Trump administration is considering instructing federal antitrust and law enforcement agencies to open investigations into the practices of sites like Google and Facebook. While the preliminary document is still in the early stages of drafting, and could change significantly in the coming months, the threat of federal antitrust enforcement from the Trump administration could spur tech companies to introduce more transparent policies regarding their search algorithms.

It's still an open question how these tech companies will deal with these developments. It will take months for the Trump Administration's proposal to take shape, if it does at all, and the other proposals are still only preliminary. Stay tuned for more developments on this front, and let's hope that these developments aren't blocked.

I’m Jay Kesan.

Net neutrality means that your Internet service provider should not be able to treat some types of content on the Internet differently from others. In other words, all websites and their content would be treated as equal.

Net neutrality is about competition and profit in the Internet access market. The Federal Communications Commission – the FCC — is the government agency that has been most involved in net neutrality issues. Its authority to regulate high speed Internet services has been hotly debated over the years. The FCC has the most authority to regulate common carriers like telephone companies.

In 2005, the Supreme Court decided the Brand X case. Cable Internet, according to the Supreme Court, was an information service, not a telecommunications service, and so the common carrier rules did not apply.

After the Brand X case, the FCC re-classified DSL service as an information service, even though as a service provided by phone companies, DSL was originally thought to be a common carrier service. The FCC also tried a variety of tactics, including a non-binding Internet policy statement, to curb abuses using their existing authority.

Finally, in 2015, under pressure to preserve net neutrality, the FCC re-classified broadband as a common carrier. Now, in December 2017, by a partisan 3-2 vote, the FCC, under President Trump, repealed that 2015 re-classification of high speed Internet service. The FCC argues that its repeal is needed to encourage broadband innovation and investment, especially with the rapid deployment of 5G wireless technology.

This decision to overturn net neutrality has been praised by telecom companies, but it is criticized by technology companies such as Facebook, Google, and Amazon and by consumer groups.

So where do we go from here?

Without net neutrality, paid prioritization enters the picture. Internet service providers can charge based on the type of content served, for example, charging more for multimedia content, and they can also prioritize certain traffic over others. This means that, in the future, we may see tiers of Internet service causing your Internet bill to look like your cable subscription, perhaps with different payments for access to various websites. Today, there are differences in price based on your speed of Internet access or the reliability of your connection. But, in the future, they may also be based on the type of content being accessed.

In addition, smaller websites may not be able to afford the payments to ISPs to have their content delivered rapidly, unlike large, popular websites.

The FCC's decision brings another agency, the Federal Trade Commission – the FTC — into the picture. The FTC is the primary agency that handles consumer protection issues, and it can apply its competition law expertise to provide some protection against deceptive and anticompetitive practices. Unlike the FCC, however, the FTC lacks technological expertise regarding Internet communications, and it remains to be seen how successful it will be at enforcing consumer protections or making rules regarding broadband access. In its recent repeal, the FCC included certain transparency requirements to make it easier for the government to oversee broadband providers' conduct.

Before this repeal really takes hold, there are legal hurdles ahead. The attorneys general of several states, including New York, Pennsylvania, Massachusetts and Minnesota, have announced their intention to challenge the FCC’s repeal in court.

In another approach, lawmakers in states like California, New York, Washington and Massachusetts have proposed state bills to establish net neutrality protections. But does the FCC’s deregulatory federal approach preempt state and local action regarding net neutrality? In the past, courts have upheld the FCC’s ability to preempt state regulations in the telecommunications market.

Finally, it will take months for the FCC's net neutrality repeal to be approved by the Office of Management and Budget. So stay tuned for more developments regarding your Internet service.

I’m Jay Kesan.

On Friday, July 13th, Robert Mueller’s special prosecution team announced an indictment of twelve Russian agents in connection with cyberattacks against us.

The indictment discusses two computer intrusion units in the Russian military. One of these units is associated with actions to publicize private data, and the other unit is associated with attempts to disrupt our election infrastructure. Both of these sets of crimes include violations of federal law – the Computer Fraud and Abuse Act—the CFAA. The defendants in the indictment are charged with two violations of the CFAA: accessing a computer without authorization and obtaining information, and transmitting something that causes damage to a computer.

Information gathering was one focus of the Russian cyber intrusions. According to the indictment, Russian intelligence agencies have been deploying hackers against a variety of systems in the U.S. Arguably the most visible victim is the DNC, which experienced multiple intrusions resulting in the theft of emails and other digital information. Then, posting under names like DCLeaks and Guccifer 2.0, they staged releases of these documents. An unnamed organization – generally assumed to be Wikileaks – is described as working directly with Guccifer 2.0 to release the documents.

When someone wants to break into a computer system, they must find a security vulnerability. In many cases, that security vulnerability is situated right between the chair and the computer keyboard. The indictment alleges that units of the Russian military used spearphishing tactics to obtain passwords or other means of access. Spearphishing involves sending personalized emails that appear to come from a trusted source, thereby inducing the targeted individuals to reveal confidential information. Once they had access, the hackers installed malware on the computers, including keyloggers. That way, they were able to record every keystroke that the user made.

When there is evidence that someone committed a crime, you want to know everyone who interacts with that evidence in the so-called chain of custody. You want the evidence to be in the same shape in court as it was when the crime was committed. An unknown hacker is hardly a reliable link in any chain of custody. When the documents were posted to Wikileaks, journalists immediately started reading them and identifying significant things. But there is no way of ensuring that the hacker didn’t plant dozens or hundreds of little lies throughout the stolen documents.

Another part of the indictment concerns interference with election infrastructure. The indictment refers to an unnamed state board of elections that was the target of a cyberattack by the defendants resulting in the theft of the personal information of about 500,000 voters. In the months following that data breach, some of the defendants also hacked into an American company that makes software to help state and local election boards verify voter registration information. The company is referred to as “Vendor 1.” Some journalists have suggested that this company might be VR Systems.

The indictment also alleges that the defendants used email addresses that resembled Vendor 1's as part of spearphishing campaigns against election officials across the country in November 2016. They used the company's logo to make the emails seem more legitimate, as if coming from a company that election officials trust, but the attached Word documents contained malware. It is unknown how many of these spearphishing attempts were successful or what the hackers did once they got access. It's possible that some of the problems reported on election day were part of a hacking operation aimed at causing confusion and long lines and lowering voter turnout.
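Spoofed addresses like these work because they differ from the genuine domain by only a character or two. As a small, hypothetical sketch of one common defensive idea (the domain names below are invented, and this is not drawn from the indictment), a mail filter might flag lookalike sender domains by measuring their edit distance from a list of trusted domains:

```python
# Hypothetical sketch: flag sender domains that are suspiciously close to,
# but not identical to, a trusted domain. Domain names here are invented.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED_DOMAINS = ["vr-elections.example.com"]

def looks_like_spoof(sender_domain: str) -> bool:
    for trusted in TRUSTED_DOMAINS:
        distance = edit_distance(sender_domain.lower(), trusted)
        if 0 < distance <= 2:  # very close, but not an exact match
            return True
    return False

print(looks_like_spoof("vr-election.example.com"))   # True  (one letter dropped)
print(looks_like_spoof("vr-elections.example.com"))  # False (the real domain)
```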

But perhaps the strongest effect that we are experiencing is psychological, as the entire country is concerned about the legitimacy of our political system.

I’m Jay Kesan.

The Internet is being used to recruit terrorists, to spread disturbing and abusive content, and to circulate fake news, and many of us want to stop this. Facebook allegedly sold ads to Russian operatives whose goal was to sow discord within the American electorate in advance of the 2016 election. One source of this activity was the Internet Research Agency, which has been described as a Russian troll farm. The anonymity of the Internet, combined with the technology to automate large numbers of social media accounts, allows for the creation of echo chambers of bots. In December 2017, Facebook rolled out a tool that allows users to see whether they had "Liked" or "Followed" content created by the Internet Research Agency. Facebook and Google represent about 60% of the market share for online advertising revenue. Hence, Facebook and Google likely profit the most from the marketing-style outreach of Internet troll farms.

Into this picture come companies like Unilever, a large consumer goods multinational, which is concerned that Facebook and Google are not behaving responsibly with their power. Unilever brands include household names like Lipton tea, Dove, and Ben and Jerry's, so its advertising choices matter. And Unilever has threatened to boycott Facebook and Google if they do not get better at screening out extremist and illegal content.

Unilever's approach relies on the power of private market forces. And this may be the only viable approach to combating toxic online content. The law could create barriers to Internet content based on the identity of the sender, but that would probably violate the First Amendment. In this country, the right to free speech is one of our most sacred values. Several other democracies, including France and Germany, have laws against hate speech. We do not. The Supreme Court has instead said that such speech cannot be restricted unless it is aimed at "inciting imminent lawless action." Hate speech is tolerated up until the point that it involves a present threat.

Some courts have pointed out that it is patronizing to assume that people should be shielded from others’ speech. We believe that ideas should rise and fall on their merits in the so-called marketplace of ideas. If something is false or has no logical support, the law trusts the reasonable person to come to the right conclusion about that.

However, the First Amendment is also being used as a shield by people who want to divide us from each other. Some argue in favor of requiring people to reveal their real identities online, but a legal requirement like that would interfere with the right to anonymous speech under the First Amendment.

Private market forces do not compel in the same way that the government does, and a private company's actions, unlike the government's, cannot violate the First Amendment, except in very limited circumstances. Hence, Unilever's threatened boycott is a potentially powerful tool against hate and disinformation. Facebook and Google could develop their own rules and contracts with their users, filter out content on their networks, and develop industry standards and metrics for advertising.

The Russian information operations are one focus of Special Counsel Robert Mueller's criminal investigation, so additional indictments and insights into these activities are likely to emerge. For now, private companies and consumers like you and me are entrusted with the responsibility of consuming information responsibly. Organizations like the International Federation of Library Associations and Institutions (IFLA) are also helping us identify fake online content. In short, we have to hold ourselves responsible for evaluating the information in front of us, and it is up to us to hold companies responsible for profits earned from others' dishonesty or hate-filled content.

I’m Jay Kesan.
