Phishing Your Users


This blog post is adapted from a February 18, 2015, message to the EDUCAUSE Security Discussion Group listserv after the following question was raised: "Our institution is thinking about phishing our users as an awareness activity. What are the pitfalls and payoffs of conducting a phishing simulation or awareness campaign?"

It is absolutely possible to create a campus phishing assessment or simulation program without facing a negative response. We did it at Emory across tens of thousands of users without any problems, and we're currently implementing a similar process here at the University of Colorado.

Below are 10 key points, which I've ranked in rough order of importance.

  1. Implementing a phishing assessment program (sometimes referred to as "self-phishing") is an alternative to traditional awareness training that, in my experience, is far more effective. We work in higher education, and exploring the most effective learning techniques is at the core of our mission. As a wise person once said, if your behavior hasn't changed, you haven't actually learned anything. We are engaged in learning with the goal of behavioral change, and this process both delivers learning toward that goal and measures progress toward it.
  2. The process must be nonpunitive. Falling for a phishing message has no negative impact on a person's job, and no information is provided to any department that includes specific names or is detailed enough to infer them. We only provided aggregate statistics on groups of at least 20 people and rejected all requests for specific names. Organizations wishing to attach direct consequences to the process should first look at including information security in job descriptions and performance plans.
  3. The community is fully informed of the process that will occur before it happens. Someone once asked me, "Aren't you afraid that telling people what you're doing will skew the results?" This isn't a research paper — we are trying to promote learning. If raising awareness about a self-phishing project is enough to prevent someone from responding to phishing, then you've already won.
  4. The leadership of the institution (in its various forms, from vice presidents to the chancellor to the board) must hear about the process in advance so they can ask questions, express concerns, and understand the goals of the activity before the exercise begins. You can call it "management buy-in" if you'd like.
  5. The educational landing page (for those who fell for a phish) provides contextual, actionable information. It cites the items in the specific messages sent out that would be the easiest indicators of a fake e-mail and teaches how to avoid fake e-mails in the future. Avoid being generic; point out the exact items in the message you sent that could serve as red flags.
  6. The process uses content based on real-world phishing of moderate difficulty. The goal is not to come up with something good enough to fool everyone; the goal is to educate a reasonable person to recognize a typical phish. I have said many times that I don't expect anyone to catch an advanced social engineering attack. Some of the most security-aware people can be fooled by a sophisticated, targeted attack.
  7. We worked with the help desks to inform them about the process and monitor their workloads during the message campaigns. We regulated the sending of messages to ensure the help desks were not overloaded at any one time.
  8. We continuously evaluated the process as it progressed, reviewing everything from goal achievement to process improvement to community feedback. If the process did not demonstrate measurable improvement in phishing response rates during the first year, it would be discontinued. If there were considerable complaints, we would evaluate our approach for possible improvements.
  9. We analyzed the results across demographic data to look for any hot spots that might require additional training (see the analysis sketch after this list). Was there a department, job class, student major, etc., that demonstrated a notably higher-than-average phishing response rate? Were there other trends to investigate?
  10. Each campaign used four or more different messages to both provide variety and allow us to test for response rate differences between various popular phishing message themes. We also tested for differences between phishing messages that were totally generic (did not mention the institution name at all) and those that contained basic targeting (institution mentioned in a couple places). You can view a Security Professionals Conference presentation from Emory on the effort (including charts about response rates and improvements).1
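
Since items 2, 9, and 10 all come down to how the results are aggregated and sliced, here is a minimal sketch of that analysis in Python. The CSV layout, the column names (department, message_theme, targeting, clicked), the file name campaign_results.csv, and the ten-point hot-spot margin are illustrative assumptions; only the 20-person reporting minimum and the generic-versus-targeted comparison come from the points above.

```python
"""
Illustrative sketch only: aggregates simulated-phishing results along the
lines of items 2, 9, and 10. Column names, the CSV layout, and the
hot-spot margin are hypothetical; the 20-person reporting minimum and the
generic-vs.-targeted comparison are the parts taken from the post.
"""
import csv
from collections import defaultdict

MIN_GROUP_SIZE = 20     # never report on groups smaller than this (item 2)
HOT_SPOT_MARGIN = 0.10  # flag groups this far above the overall rate (assumed)

def load_results(path):
    """Each row: department, message_theme, targeting ('generic'/'targeted'), clicked ('0'/'1')."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def rate_by(rows, key):
    """Return {group: (recipient_count, response_rate)} for the given column."""
    totals, clicks = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[key]] += 1
        clicks[row[key]] += int(row["clicked"])
    return {g: (totals[g], clicks[g] / totals[g]) for g in totals}

def report(rows):
    overall = sum(int(r["clicked"]) for r in rows) / len(rows)
    print(f"Overall response rate: {overall:.1%}")

    # Item 9: look for departmental hot spots, but only in aggregate (item 2).
    for dept, (n, rate) in sorted(rate_by(rows, "department").items()):
        if n < MIN_GROUP_SIZE:
            print(f"{dept}: suppressed (fewer than {MIN_GROUP_SIZE} recipients)")
            continue
        flag = "  <-- possible hot spot" if rate >= overall + HOT_SPOT_MARGIN else ""
        print(f"{dept}: {rate:.1%} of {n} recipients{flag}")

    # Item 10: compare message themes and generic vs. targeted content.
    for key in ("message_theme", "targeting"):
        print(f"\nBy {key}:")
        for group, (n, rate) in sorted(rate_by(rows, key).items()):
            print(f"  {group}: {rate:.1%} of {n} recipients")

if __name__ == "__main__":
    report(load_results("campaign_results.csv"))
```

The specifics will vary with whatever your phishing platform exports; the point of the sketch is simply that individual names never leave the aggregation step and that all reporting stays at the group level.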

At the end of the day, the Emory project demonstrated success at reducing response rates while receiving essentially zero negative feedback across quarterly campaigns covering 40,000 users. Our current work at the University of Colorado has completed an initial sampling to establish a baseline response rate; next comes some process improvement work, followed by phishing awareness work.

Note

  1. Derek Spransy, "Phishing Ourselves to Raise Awareness," presentation at the 2012 Security Professionals Conference, Indianapolis, Indiana, May 15–17, 2012.

Brad Judy is the information security officer at the University of Colorado System. He previously worked in the IT Security Office at Emory University and Emory Healthcare. Prior to that, he worked at the University of Colorado at Boulder in the IT Security Office and as part of the IT architecture group and Active Directory management team. He has been actively involved in the Windows in Higher Education community and REN-ISAC.

© 2016 Brad Judy. This EDUCAUSE Review blog is licensed under the Creative Commons BY-NC-SA 4.0 International license.