(Note: I had the opportunity to present my findings on Attack Timelines at BSides Las Vegas 2017. The slides can be found here: Powerpoint presentation. I also presented on user-centric design around chatbots at USENIX SOUPS here: Powerpoint presentation)
The Discovery Phase
During the discovery phase, we crafted a short narrative describing the problem based on our initial understanding. For example, the following is a short excerpt that we validated throughout the design process.
“From the Defensive Cyber Operations (DCO) side, there are streams of new exploit types and malicious attacks constantly threatening an array of different network environments. A typical defensive analyst job is to maintain their knowledge of these attacks; they need to know what patterns to look for, what to spot, essentially finding that needle in a haystack - in a very short amount of time. Of course once that needle is found, these security analysts are then tasked to find where other corresponding problematic areas exist; exposing and remediating other parts of the network the attacker could have manipulated. It's a classic ‘cat and mouse’ game that keeps analysts constantly on their toes searching or reacting to malicious events…”
The narrative proceeds in different directions depending on the use case and lists the biases we bring into the study. Our company has an invaluable source of first-hand practitioner experience, but that same experience creates immediate pockets of bias. Throughout the discovery phase, we stay aware of our existing beliefs and focus on avoiding confirmation bias, steering clear of directional questioning in interviews and testing. The goal is to create a baseline for feature creation that will be confirmed or disproved during user interviews. Below is a small snapshot of a few biases we addressed going into the “attack timeline” visualization study.
User Testing and Interviews
With our biases in hand, the next step involves capturing user data through different types of user testing and research. User input provides an opportunity to redefine our user roles by creating new personas specifically around the individual use case (e.g., alert triage, visualization enhancement).
User Group A: The traditional SOC user group represented a seasoned security team currently employed in the commercial space. The participants from this team included four highly trained SOC professionals, two moderately trained SOC analysts, and one manager overseeing the team’s operations. They were familiar with a multitude of security analysis tools, including Endgame’s platform. The participants were brought into hour-long user interview sessions and answered carefully crafted, open-ended questions tangential to the topic at hand.
User Group B: The Novice Training Team user group was a hand-picked, relatively inexperienced security team operating in the federal space. The participants from this team included ten new SOC professionals and two highly trained SOC leaders. While the two experienced analysts were familiar with many security analysis tools, the other ten participants had a baseline familiarity with two tools, one of which was the Endgame platform. This group participated in a multi-day A/B exercise that tested the knowledge and skill sets of the individual analysts. Researchers were onsite for observational purposes and performed impromptu user interviews. We directly compared data collected from this group with that collected from User Group C.
User Group C: The Red vs. Blue (internal) user group represented a mixed group of security professionals, with backgrounds mostly from the federal space. The participants were split into two teams. The blue team consisted of four highly trained security professionals, two moderately trained analysts, and two inexperienced analysts. The red team consisted of three extremely proficient security professionals. Participants had a mixed range of knowledge of the Endgame platform, as well as other security analysis tools. This group also participated in a multi-day A/B exercise that tested the knowledge and skill sets of the individual analysts. UX and product researchers were onsite for observational purposes and performed impromptu user interviews.
Data Collection / Persona Creation
With all of our data in hand, we created a new set of personas specifically catered to each use case. Along with the personas, we created a long “day in the life” narrative for each role.
The data collection phase highlighted several key themes and areas of focus, which in turn established a list of design requirements. Below are the two major categories used to define the “Attack Timeline” visualization:
Concept Phase (Prototyping)
With our redefined personas, workflow habits, and design requirements, designers and developers entered the process of ideation, discussing the range of achievable designs. The discovery phase helped facilitate actionable feedback and produced different results across the two case studies. In the chatbot use case, it opened the possibility of giving users something entirely new yet useful in their day-to-day work to facilitate alert triage. In the attack timeline use case, it challenged the definition of a historically consistent visualization and opened new ways to explore the data.