Disinformation and Election Security
As the midterms approach, election officials and experts have expressed concern about a new surge of disinformation that could disrupt voting in ways that traditional cyberattacks wouldn’t. But are their fears warranted? What types of cyber threats should the nation be focused on as we approach election day?
The mention of “election security” amongst cybersecurity practitioners typically conjures up concerns about voting machine tampering, software and hardware vulnerabilities, and the possibility of data breaches. This concern is expressed regularly through demonstrations of voting machine compromise at professional conferences, the formation of industry groups that lobby manufacturers for equipment hardening, and the work of countless individuals who have worked with Congress and other regulatory bodies to help develop guidelines, standards, and requirements for election security.
But there’s more to election security than hardware, software, and process. Misinformation and disinformation are pressing problems that are commingled with traditional cybersecurity, and they present a very dangerous threat. This multi-layered attack technique took center stage in the 2020 elections and has only become more entrenched since.
In just the last few weeks, a Mesa County Clerk and Recorder entered a “not guilty” plea for charges relating to her alleged involvement with election equipment tampering. Her colleague previously pled guilty to similar charges. The pair are being held accountable for providing access to an unauthorized individual who copied hard drives and accessed passwords for a software security update (the passwords were later distributed online). The accused clerks publicly spread disinformation about election security prior to the incident.
In Georgia, election officials recently decided to replace voting equipment after forensics experts hired by a pro-Trump group were caught copying numerous components of the equipment, including software and data. There is no finding that the outcome of the election was affected, but the fact of compromise sows seeds of doubt and raises the question: how and where could the stolen data be used to influence future elections?
And back in February 2022, election officials in Washington state decided to remove intrusion detection software from voting machines, claiming that the devices are part of a left-wing conspiracy theory to spy on voters.
Unfortunately, the proliferation of public platforms on which anyone can voice an opinion, even one without a shred of factual support, makes it simple for that voice to be heard. The tech community isn’t helping, either: citing users’ free-speech concerns, platforms remain reluctant to shut down mis- and disinformation. The result is constant public questioning of the veracity of any information, all data. This distrust hurts society in many ways, but as it pertains to the outcomes of U.S. elections, we’re now in a quagmire: will U.S. voters ever trust the election process again? Who knows what to believe, or whom to vote for, when cleverly crafted messaging campaigns prey on targeted individuals, influencing them with lies and manipulated information?
What constitutes “misinformation” and “disinformation”?
Before we go too much further, let’s level-set on definitions. All too often, new terms are introduced into the vernacular and different people apply different meanings—sometimes to support their own causes. Ironic.
As far as “misinformation” and “disinformation” are concerned, neither is a new concept or practice. The practice of spreading mis- and disinformation (a.k.a. “fake news”) can be traced back at least to ancient Rome, when Octavian (later the Emperor Augustus) spread lies about his rival, Mark Antony, to gain public favor.
Misinformation, disinformation, malinformation, fake news, lies, propaganda—all terms for similar practices—have been used consistently throughout history. But as far as cybersecurity is concerned, clarifying what’s “mis” and what’s “dis” provides some potentially important context.
Misinformation is the unintentional spread of false information; the person sharing it believes it to be true.
Disinformation is the intentional spread of false information that is purposely meant to mislead and influence public opinion. Also known as lies. Disinformation may contain tiny snippets of factual information that have been highly manipulated; this helps the creators of disinformation cause confusion and cast doubt on what’s fact and what’s not.
Cybersecurity’s role in stopping the spread of misinformation and disinformation
The amount of dis- and misinformation that can be spread grows proportionally alongside the cyber attack surface. Reasonably, the more places people can post, share, like, and comment on information (of any ilk), the wider and farther it will spread, making identification and containment more challenging.
As noted above, lies and propaganda are just one category of election-focused threats, and they often leverage digital platforms for distribution. But some other, more traditional cybersecurity threats to elections include:
Phishing: Tricking users into giving up secrets or information that would allow threat actors to stealthily use legitimate accounts and systems for malintent.
Malware/Ransomware: After successful phishing, threat actors can drop malware or ransomware onto a user’s system and alter or delete files, scrape sensitive systems for data, install keyloggers to intercept private communications, and more.
Software and hardware vulnerabilities: The most direct cyber route; criminals can exploit weakened systems to manipulate data, systems, and users in nefarious ways.
All seasoned cybersecurity professionals should be familiar with the processes and procedures to combat these types of threats. Not every practitioner will be involved in identifying, triaging, and remediating election-specific threats the way they are with everyday phishing, malware, or software/hardware vulnerabilities; nonetheless, many of the strategies and techniques are similar.
The personas chiefly involved with election security include:
Security teams working for voting equipment manufacturers (e.g., Dominion Voting Systems, ES&S)
Security teams working for businesses that supply component parts (i.e., the supply chain)
State, local, and federal agencies responsible for ensuring the confidentiality, integrity, and availability of election equipment and data
Technology and media companies that facilitate the spread of information (e.g., Twitter, TikTok, LinkedIn, Reddit, Slack)
Industry watchdogs and regulatory bodies (e.g., CISA, DHS, CIS, The Center for American Progress)
Best practices for election security
Needless to say, it’s best to be proactive when building systems, deploying tools, and implementing cybersecurity controls. But attacks are also inevitable, and some will be successful. Therefore, it’s imperative to institute fast, reliable identification and remediation mechanisms that reduce mean time to detect and mean time to respond.
Some recommended practices for election-oriented cybersecurity include:
Monitor: Continuously monitor infrastructure. Identify all relevant systems in use, who or what is using them, and how they are used. Set baselines for usual and expected activity, then watch for anomalies, for example, unusually high levels of activity from system or user accounts. Such spikes may indicate that a malicious user has taken over an account, or that bots are being used to disrupt systems or spread disinformation.
Test: Test all systems, whether the software and hardware used in voting machines or the people who have (or need) authorized access. Probe for vulnerabilities and weaknesses, apply remediation where possible, and triage any issues identified along the way.
Verify: Verify who has access to systems, who should have access to systems, and apply multi-factor authentication to prevent account takeover. Verify human versus bot activity to help prevent the malicious use of bots in spreading disinformation.
Profile: Understand the most likely targets of election-related mis/disinformation. These are often high-profile individuals or organizations with strong political stances (and, of course, the candidates themselves). It may be necessary to place stronger security controls on those individuals’ accounts to protect against data leakage, account takeover, smear campaigns, and the like. Apply the same profiling methods to the systems, tools, and technologies threat actors use to create and disseminate false information.
Apply: Apply machine learning to study digital personas, bot activity, and AI-generated campaigns. Use baselines for “normal” behavior to contrast with anomalous behavior. Machine learning can also be used for keyword targeting: identifying certain words or phrases used by people propagating disinformation and misinformation. When problematic language is used, or language is found suggesting an attack is in the planning stages, flag the activity or automate security controls so that it is analyzed and removed or quarantined.
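To make the monitoring and keyword-targeting ideas above concrete, here is a minimal sketch of both: flagging accounts whose daily posting volume spikes far above their own historical baseline, and flagging posts that contain watch-listed phrases. The account names, the z-score threshold, and the phrase list are all illustrative assumptions, not a real detection ruleset; production systems would use trained models and far richer signals.

```python
from statistics import mean, stdev

# Hypothetical watch-list of phrases; real systems would use trained
# classifiers rather than a static keyword list.
WATCHLIST = {"ballots destroyed", "machines flipped votes"}

def anomalous_accounts(history, today, z_threshold=3.0):
    """Return accounts whose activity today exceeds their own historical
    baseline by more than z_threshold standard deviations."""
    flagged = []
    for account, daily_counts in history.items():
        if len(daily_counts) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(daily_counts), stdev(daily_counts)
        if sigma == 0:
            continue  # flat history; z-score undefined
        z = (today.get(account, 0) - mu) / sigma
        if z > z_threshold:
            flagged.append(account)
    return flagged

def flag_posts(posts):
    """Return posts containing any watch-listed phrase (case-insensitive)."""
    return [p for p in posts if any(k in p.lower() for k in WATCHLIST)]

# Illustrative data: one steady account, one with a sudden burst of activity.
history = {"@civic_news": [10, 12, 11, 9, 10], "@quiet_user": [2, 3, 2, 2, 3]}
today = {"@civic_news": 11, "@quiet_user": 250}

print(anomalous_accounts(history, today))
print(flag_posts(["Turnout is up today", "they said machines flipped votes!"]))
```

The per-account baseline matters: 250 posts is anomalous only because that account normally posts two or three times a day, which is exactly the “normal versus anomalous” contrast described above.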
Humans will, unfortunately, continue to manipulate machines for their own benefit. And in today’s society, machines are used to influence human thinking. When it comes to elections and election security, we need to focus just as heavily on how machines are used to influence the voting public. When that influence comes in the form of misinformation and disinformation, cybersecurity professionals can be a huge help in stopping the spread.