Sure, you’ve heard about companies sustaining massive data breaches. But sometimes the biggest threat to an organization’s network could be its own employees. The role of human error in security breaches is well known; human gullibility and carelessness are often called the weakest link in computer security. Here’s a look at some of the biggest user fails ever.
1. Prey for a Phishing Expedition
One of the most consequential successes in phishing occurred in March 2016. John Podesta, chair of Hillary Clinton’s 2016 presidential campaign, received an email that appeared to be from Google: “Someone just used your password to try to sign in to your Google account … Location: Ukraine …”. The email provided a helpful link so he could change his password “immediately”.
Podesta had good instincts; he found the email suspicious and forwarded it to his chief of staff, who passed it to the campaign’s IT people. IT responded with a brief message containing useful guidance – and unfortunately starting with one badly-stated sentence. “This is a legitimate email. John needs to change his password immediately, and ensure that two-factor authentication is turned on… He can go to this link: https://myaccount.google.com/security to do both…” (IT says they intended to write “illegitimate email”.)
It appears that IT’s full response may not have made it all the way back up the chain to Podesta. In any event, someone in Podesta’s office clicked on the bait in the phishing email, rather than on the link that IT sent, and gave the hackers Podesta’s password.
In the final weeks of the presidential campaign, selected internal messages and confidential documents from the hacked Gmail account were leaked (via WikiLeaks) day by day and covered heavily in the media. The phished material was likely one of several key factors in the outcome of a very close presidential race.
Back in 2000, Bruce Schneier noted that “Only amateurs attack machines; professionals target people.” Social engineering, the ancient art of the con artist, has always been part of the black-hat hacker’s arsenal. Many of the biggest attacks of the last few years have not depended on burning a zero-day but on cajoling or panicking someone into giving away their credentials, emailing back proprietary data, or wiring the hacker a few million dollars.
Phishing is a form of social engineering that uses email, text messages, or phone calls to try to convince people to give away information or to run a malicious payload on their computer. Phishing may be a generic message broadcast to thousands of people, in hopes of getting a bite or three in response. It may be spear phishing, which is targeted at specific individuals of interest, using personal details researched from the Internet and social media to craft targeted bait that is plausible and compelling.
As is often the case, multiple things went wrong in the Clinton campaign breach. Podesta was right to trust his gut, but had he (and his staff) been “savvy IT consumers,” they would have known to hover over the email link to check its destination; and they would have known never to click on the link in a suspicious email, but rather to go directly to the website of the company in question. And they would have had two-factor authentication turned on already.
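That “hover and check” habit can even be automated. The sketch below is a hypothetical illustration in Python (standard library only; all names are mine, not from any real mail client): it extracts the links from an HTML email body and flags any link whose visible text names one domain while its actual href points at another – exactly the mismatch a hovering reader is looking for.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> tag currently open, if any
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def suspicious(visible_text, href):
    """Flag a link whose visible text is itself a URL or domain,
    but whose real target points at a different host."""
    shown_text = visible_text.strip()
    if "." not in shown_text or " " in shown_text:
        return False  # visible text is not a URL; nothing to compare
    shown = urlparse(shown_text if "://" in shown_text
                     else "https://" + shown_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# A phish dressed up as the legitimate Google security page:
email_body = ('<a href="https://evil.example.net/reset">'
              'https://myaccount.google.com/security</a>')
auditor = LinkAuditor()
auditor.feed(email_body)
for text, href in auditor.links:
    print(text, "->", href,
          "SUSPICIOUS" if suspicious(text, href) else "ok")
```

A real mail filter would also normalize lookalike characters and check redirectors, but even this crude text-versus-target comparison catches the classic phishing pattern.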
2. Easily-Guessed Passwords and Reused Passwords
In 2016 hackers took control of Facebook founder Mark Zuckerberg’s Twitter and Pinterest accounts, which were vulnerable due to a password both overly simple and reused across multiple accounts.
Back in 2012, millions of users’ passwords were stolen from LinkedIn and later published. Years later, hackers discovered Zuckerberg’s password in the LinkedIn dump and used it to post taunting messages from his dormant Twitter and Pinterest accounts. (“You were in LinkedIn Database with the password ‘dadada’!” “Hacked by OurMine Team.”)
The combination of a password that is both insufficiently random and reused across multiple accounts is a gift that keeps on giving to patient black-hat hackers. Zuckerberg only suffered a few days of embarrassing publicity.
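One defense against exactly this scenario is checking whether a password has already appeared in a public breach dump. The Have I Been Pwned “Pwned Passwords” range API supports this without ever revealing the password: only the first five hex characters of its SHA-1 hash are sent, and the matching suffix is searched locally in the server’s response. A minimal Python sketch of the client-side preparation (no network call is made here; the function name is my own):

```python
import hashlib

def pwned_range_query(password):
    """Prepare a k-anonymity lookup against the Pwned Passwords range API.
    Only the 5-character hash prefix goes to the server; the 35-character
    suffix is matched locally against the SUFFIX:COUNT lines it returns."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    return url, suffix

url, suffix = pwned_range_query("dadada")
print(url)     # fetch this URL over HTTPS
print(suffix)  # if this suffix appears in the response, the password is burned
```

Because the server only ever sees a 5-character prefix shared by hundreds of hashes, it learns essentially nothing about which password you checked.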
The cost of Sony Pictures’ data breach has run to tens of millions of dollars and is still growing. Back in 2014, executives there received a wave of phishing emails appearing to be from Apple, warning them to verify their login information. The hackers correctly surmised that some people would be reusing their Apple ID passwords as their Sony corporate passwords. After researching employee profiles on LinkedIn to guess login IDs, the hackers were in. They stole data and uploaded malware that created chaos on the Sony Pictures network.
The difference in outcomes for Zuckerberg and Sony is suggestive. A 2015 study on password practices found that security experts use high-security behaviors for consequential accounts, but not necessarily for trivial accounts. For a valuable account, an expert will never reuse a password, always use passwords that are sufficiently random and sufficiently long, use a good password manager app for storing and recalling their passwords, and use multi-factor authentication where possible. But, for low-value websites, the experts often relax those rules.
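“Sufficiently random and sufficiently long” is easy to achieve when a password manager – or a few lines of code – does the generating. A sketch using Python’s standard `secrets` module (the function name and length are illustrative choices, not a standard):

```python
import secrets
import string

def random_password(length=20):
    """Generate a password from a 94-character alphabet using a
    cryptographically secure randomness source. At 20 characters this
    gives roughly 20 * log2(94), about 131 bits of entropy."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

The point of `secrets` (rather than `random`) is that its output is drawn from the operating system’s cryptographic randomness, so the result is not guessable the way “dadada” is.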
Besides knowing those rules, a key skill is distinguishing the two kinds of accounts. Zuckerberg may have figured the little-used Twitter and Pinterest accounts were low value; nevertheless, he suffered some embarrassment from his password habits. The Sony executives, presumably far from experts, suffered much worse.
3. Disabling or Delaying Patching, Antivirus, etc.
The Equifax data breach, revealed in September 2017, disclosed key personal data for nearly half of all Americans, putting them at risk of identity theft. Perpetrators might open new credit accounts in your name, borrow money and make purchases, withdraw your money, and commandeer your online accounts at various companies.
The black-hat hackers used a known vulnerability in the Apache Struts MVC framework, unpatched for two months on an Equifax customer dispute portal. In congressional testimony, the former CEO stated that the portal had not received the critical Struts patch because one person failed to notify the relevant IT team, and Equifax’s security scanner did not spot the vulnerability on the portal.
In a 2015 study comparing the reported security practices of experts and non-experts, one telling difference emerged: where non-experts said, “I use antivirus software,” the experts said, “I make sure software updates get installed.”
Computer security is an arms race; attackers discover or learn about vulnerabilities and craft exploits, and defenders patch holes and deploy defenses. Delaying or disabling software patching or antivirus software updates means that you are exposed to known vulnerabilities. On an individual’s machine, this provides a stepping stone for leap-frogging deeper into the network. On an Internet-facing server, it provides the crack in the dam that is often enough to unravel other protections.
Was the Equifax breach the fault of a single user’s error – that unnamed person who failed to notify IT? From our so-far limited knowledge of what happened, multiple faults compounded one another. The patching process apparently had a single point of failure and no human assurance process backing it up. The automated assurance from the security scanner was inadequate. Defense in depth was absent: attention to need-to-know was inadequate, judging by the amount of unencrypted data on the customer portal and apparently elsewhere in the network; and attention to least privilege was insufficient, judging by the apparent ease of siphoning a large volume of data from the Equifax network.
Human error is often the error of lowly users, but it can also be the errors of the skilled who have authorized privileges, and of those who set priorities and allocate resources.
Are You the Weakest Link?
Nearly all security failures involve human error in some way. Carelessness, haste, lack of knowledge, lack of awareness, and abused kindness all play a role.
But experts are also pointing out that blaming the end user is a cop out and an excuse for fatalism. Jessy Irwin notes that “‘People are the weakest link in security!’ is more of a comfortable excuse for many to lean on instead of a rallying cry to actually do something that changes the status quo.” Kelly Caine states “It’s actually executives, managers, system administrators, designers, and coders – rather than users – that are the weak links in information security.”
Most user failures reveal a problem in training, work environment, resources, and incentives. Changes in how we communicate about security and a focus on establishing security culture are called for.
- Training needs to help users develop a mental model of computer security that will generalize to new situations, learn and practice good security habits, and develop situational awareness.
- In industrial safety and aviation, when safety experts see a human error, they look for ways to change the environment to make doing the right thing easier. In computing, this includes human-centered design and designing security facilities so that they leverage, rather than clash with, human instincts.
- Projects and departments must have the resources – particularly the staff hours and schedule relief – to follow good security practices.
- Organizations must prioritize security and incentivize good security behaviors.
- Human error is a key source of vulnerability, but it is often a system issue and organization failure. It’s not an occasion for fatalism, but for learning and creativity.
Keith Barker’s End-User Security Awareness course is a great starting point for users and organizations to become more educated about IT security.