[Images: Wireframing and Prototyping Gen (1.0), Wireframing and Prototyping Gen (4.3), Visual Design Gen (2.0), Current Live Product]

Attack Timeline

(Note: I had the opportunity to present my findings on Attack Timelines at BSides Las Vegas 2017. The PowerPoint can be found here: PowerPoint presentation. I also presented at USENIX SOUPS on user-centric design around chatbots here: PowerPoint presentation)

The Discovery Phase

During the discovery phase we crafted a short narrative describing the problem based on our initial understanding. The following, for example, is a short excerpt that we validated throughout the entire design process.

“From the Defensive Cyber Operations (DCO) side, there are streams of new exploit types and malicious attacks constantly threatening an array of different network environments. A typical defensive analyst's job is to maintain their knowledge of these attacks; they need to know what patterns to look for, what to spot - essentially finding that needle in a haystack in a very short amount of time. Of course, once that needle is found, these security analysts are then tasked with finding where other corresponding problematic areas exist, exposing and remediating other parts of the network the attacker could have manipulated. It's a classic ‘cat and mouse’ game that keeps analysts constantly on their toes, searching for or reacting to malicious events…”

Our Biases

The narrative proceeds in different directions based on the use case and lists our biases coming into the study. Our company has an invaluable source of first-hand practitioner experience, but that experience also creates immediate pockets of bias. Throughout this phase of discovery, we stay aware of our existing beliefs and focus on avoiding confirmation bias, steering clear of directional questioning in interviews and testing. The goal is to create a baseline for feature creation that will be confirmed or disproved during user interviews. Below is a small snapshot of a few biases we addressed going into the “attack timeline” visualization study.

  1. There are large groups of users that lack security and platform domain experience, making many current visualizations difficult to navigate; the visualizations are overly complicated and too expansive, compounded by the fact that users do not have the proper training to make informed decisions.
  2. Users lack time: these analysts will typically have 5-10 minutes to make a decision on an alert. As illustrated before, the queue of alerted information can be never-ending and can stack up when not dealt with in a timely manner.
  3. Current visualization solutions tend to force conformity rather than offer innovative and differentiating ways to navigate your data. Products tend to lean on existing visual models familiar to users, regardless of usability or value.

User Testing and Interviews

With our biases in hand, the next step involves capturing user data through different types of user testing and research. User input provides an opportunity to redefine our user roles by creating new personas around each individual use case (e.g., alert triage, visualization enhancement).

User Group A: The traditional SOC user group represented a seasoned security team currently employed in the commercial space. The participants from this team included four highly trained SOC professionals, two moderately trained SOC analysts, and one manager overseeing the team’s operations. They were familiar with a multitude of security analysis tools, including Endgame’s platform. The participants were brought into hour-long user interview sessions and answered carefully crafted, open-ended questions tangential to the topic at hand.

User Group B: The Novice Training Team user group was a hand-picked, relatively inexperienced security team operating in the federal space. The participants from this team included ten new SOC professionals and two highly trained SOC leaders. While the two experienced analysts were familiar with many security analysis tools, the other ten participants had a baseline familiarity with two tools, one of which was the Endgame Platform. This group participated in a multi-day A/B exercise that tested the knowledge and skillset of the individual analysts. Researchers were onsite for observational purposes and performed impromptu user interviews. We directly compared data collected from this group with that collected from group C.

User Group C: The Red v. Blue (internal) user group represented a mixed group of security professionals, with backgrounds mostly from the federal space. The participants from this group were split into two teams. The blue team consisted of four highly trained security professionals, two moderately trained analysts, and two inexperienced analysts. The red team consisted of three extremely proficient security professionals. Participants had a mixed range of knowledge of the Endgame platform, as well as other security analysis tools. This group also participated in a multi-day A/B exercise that tested the knowledge and skillsets of the individual analysts. UX and Product Researchers were onsite for observational purposes and performed impromptu user interviews.
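
For illustration only, here is a minimal sketch of how the per-analyst observations from the multi-day A/B exercises could be tabulated to support the group B vs. group C comparison. Every field name and metric here is an assumption made for the example, not the study’s actual instrument.

```typescript
// Hypothetical shape of a per-analyst observation captured during the
// multi-day A/B exercises (field names are illustrative, not from the study).
interface ExerciseObservation {
  group: "B" | "C";
  variant: "A" | "B";           // which exercise variant the analyst used
  alertsTriaged: number;        // alerts resolved during the session
  meanDecisionSeconds: number;  // average time to reach a triage decision
}

// Average decision time for one group/variant pair, so groups B and C
// can be compared on the same metric.
function meanDecisionTime(
  observations: ExerciseObservation[],
  group: "B" | "C",
  variant: "A" | "B",
): number {
  const subset = observations.filter(
    (o) => o.group === group && o.variant === variant,
  );
  if (subset.length === 0) return NaN; // no data for this pairing
  const total = subset.reduce((sum, o) => sum + o.meanDecisionSeconds, 0);
  return total / subset.length;
}
```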

Data Collection / Persona Creation

With all of our data in hand, we created a new set of personas specifically catered toward each use case. Along with the personas, we created a long “day in the life” narrative for each role.
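
As a rough illustration of how research findings can be condensed into a persona, the sketch below models a persona record in TypeScript. Every field and value is hypothetical, drawn only from the themes already discussed (experience level, tool familiarity, time pressure), not from the actual persona template we used.

```typescript
// Illustrative persona record; fields are assumptions based on the
// research themes above, not the real persona format.
interface Persona {
  role: string;                     // e.g. "Tier-1 SOC analyst" (hypothetical)
  useCase: "alert triage" | "visualization enhancement";
  securityExperienceYears: number;
  toolFamiliarity: string[];        // tools the role already knows
  minutesPerAlert: number;          // time budget per triage decision
  dayInTheLife: string;             // long-form narrative for the role
}

const tier1Analyst: Persona = {
  role: "Tier-1 SOC analyst",
  useCase: "alert triage",
  securityExperienceYears: 1,
  toolFamiliarity: ["Endgame Platform", "SIEM console"],
  minutesPerAlert: 7, // mid-point of the 5-10 minute window noted earlier
  dayInTheLife: "Starts the shift by working the overnight alert queue...",
};
```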

The data collection phase highlighted several key themes and areas of focus, which in turn established a list of design requirements. Below are the two major categories used to define the “Attack Timeline” visualization:

  1. Visualizations should be used as a tool to enhance the typical analyst workflow by providing high- to low-level visibility and adding context to granular data. When created with the user’s perspective in mind, visualizations can shine when mapping large quantities of data at scale (a data-model sketch follows this list).
  2. Visualizations should be used as a tool for collaboration or reporting. Clear visual representations of data do not require domain expertise: well-designed data visualizations clearly depict understandable trends and content distribution. Humans are naturally visual and process visual data at a faster “at-a-glance” rate, which opens the content to a larger array of users.
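
To make the first requirement concrete, here is a minimal sketch of an event model that supports high- to low-level visibility: an at-a-glance summary feeds the overview, while granular detail is held back for drill-down. All names here are illustrative assumptions, not the shipped Endgame schema.

```typescript
// Illustrative event model for an attack-timeline visualization;
// names are assumptions, not the actual product schema.
interface TimelineEvent {
  id: string;
  timestamp: number;               // epoch milliseconds
  category: "process" | "network" | "file" | "registry";
  severity: "low" | "medium" | "high";
  summary: string;                 // at-a-glance label for the overview
  detail: Record<string, string>;  // granular fields revealed on drill-down
  parentId?: string;               // links a child event to its cause
}

// High-level view first: bucket events per hour so the analyst sees the
// shape of the attack before drilling into individual events.
function bucketByHour(events: TimelineEvent[]): Map<number, TimelineEvent[]> {
  const buckets = new Map<number, TimelineEvent[]>();
  for (const e of events) {
    const hour = Math.floor(e.timestamp / 3_600_000);
    const bucket = buckets.get(hour) ?? [];
    bucket.push(e);
    buckets.set(hour, bucket);
  }
  return buckets;
}
```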

Concept Phase (Prototyping)

With our redefined personas, workflow habits, and design requirements, designers and developers entered the process of ideation, discussing which designs were achievable. The discovery phase helped facilitate actionable feedback and produced different results across the two case studies. In the chatbot use case, the discovery phase opened the possibility of giving the user something entirely new but useful in their day-to-day work to facilitate alert triage. In the attack timeline use case, the discovery phase reshaped a historically consistent visualization and opened new ways to explore the data.
