Sunday, May 17, 2020

#SecurityStories: Using OWASP Juice Shop for Teaching

Have you heard of OWASP Juice Shop? It's a project that's very dear to me and has helped me massively over the last few years.

Johannes Seitz first introduced me to this intentionally vulnerable application, which is used to practice security testing hands-on. He facilitated an open space session with it at TestBash Munich 2017, and I got hooked. Dan Billing also used this great application in his tutorial at Agile Testing Days 2018. I've personally used Juice Shop for security testing workshops at my own company since the beginning of 2019.

What I like about Juice Shop is that it's a full-blown application. It works, and it's vulnerable. We can safely practice lots of techniques, whether manually or with automation supporting us. You're also not alone: it offers guidance in case you need it. What I love most of all: it's based on gamification, offering many challenges at various difficulty levels. The first challenge is to find the score board itself, which gives you an overview of which tasks exist and what your progress is! Although I know that attackers would approach a production application differently, the gamification approach is very appealing to me. It's simply fun and draws me onward from one challenge to the next.
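
If you want to follow along hands-on: here's a minimal sketch, assuming Juice Shop runs locally on its default port 3000 and Python's requests library is installed. Searching the JavaScript delivered to every visitor is one legitimate way to start hunting for hidden routes like the score board; the bundle path is an assumption based on typical builds.

    import requests

    BASE_URL = "http://localhost:3000"

    # Check that the shop is reachable at all.
    response = requests.get(BASE_URL, timeout=5)
    print("Juice Shop is up!" if response.ok else f"Got status {response.status_code}")

    # The frontend ships its routes to the browser. Reading the delivered
    # bundle (path assumed) for interesting names is a fair starting point.
    bundle = requests.get(f"{BASE_URL}/main.js", timeout=5)
    print("Promising hint found!" if "score" in bundle.text else "Keep digging...")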

This kind of gamification also worked well for the people in my workshops when introducing them to security testing. Challenges can take time and be quite frustrating - yet when you finally solve them, the moment of epiphany and eureka is invaluable and very memorable. In these workshops, I've also seen people learn to make more use of tools when testing, like the browser's developer tools or REST clients. Despite having used these tools before, Juice Shop triggered them to discover more possibilities and features they weren't aware of yet. People also shared lots of knowledge about how applications are built, which assumptions we make, and which approaches we take.

My personal challenge this year is to tell #SecurityStories, so I thought of using Juice Shop again for teaching. Parveen Khan is currently on a testing tour and asked me to join her for a session. She knew about my #SecurityStories challenge, so we thought it was a great match to pair on security testing. Once more, Juice Shop it was.
I believe that pairing on Juice Shop challenges (or the like) will result in deepening my own understanding by sharing the concepts and approaches I've learned.
I know I'll have succeeded when my pair learned 3 new things from me.
Just around that time, a shiny new Juice Shop version was released! Perfect. In our pairing session, I helped Parveen set everything up, and we also tackled the first challenges together. As I already knew the solutions, I held back my knowledge so as not to spoil the experience for her. Instead, I guided her by only nudging in certain directions, waiting for her to ask for hints. It worked! The first challenge was the hardest - it's a whole new application to get to know, after all. Once she got to grips with Juice Shop, Parveen solved the second chosen challenge a lot faster. It was really fun doing this together with her! At the end, Parveen shared with me what she learned from this experience that was completely new to her.
  • She knew how to look at information in the browser's developer tools, yet now she learned that she can also do something with it and how powerful these tools really are.
  • She always thought that security testing needs a hacker mindset and JavaScript knowledge and therefore concluded that she couldn't do it. Now she saw that she can indeed take first steps into security testing herself and solve challenges to learn more.
  • She shared that she never had much interest in learning about security, despite knowing that it's important. After having fun with Juice Shop, she's now open to learning more.
  • She learned that she could do security testing together with another person to have more eyes on the problem which makes things easier and more interesting.
  • She realized she forced herself to think in a different way, and she will always remember that. It was great to get through the experience without me giving away too much.
So I'd say my experiment worked out well! This experience taught me once more how useful Juice Shop and security testing in general are for teaching knowledge that also helps us in everyday testing life: understanding how applications work, what we need to check for under the hood of a shiny interface, which tools can help us, and more. Security testing combines so much knowledge that learning about it is super useful for anyone involved in product development. This fit very nicely with my findings from doing security testing workshops at my company.
I could have stopped there when it comes to Juice Shop. However, something bugged me. Despite knowing Juice Shop for quite a while and frequently using it for teaching purposes, I hadn't solved nearly as many challenges myself as I would like. I decided that now's the time to change this. So here's my next experiment.
I believe that working on Juice Shop challenges, alone or with a pair, will result in increased confidence in my own skills.
I know I'll have succeeded when I've solved all challenges below 5 stars.
This fits well with what I learned during the AppSec Days: I need more hands-on practice. Off to new frontiers! Want to pair with me on this one? Feel free to reach out.

Thursday, May 14, 2020

#SecurityStories: OWASP Virtual AppSec Days

When I heard that there would be a virtual conference by OWASP, hosted at a time I could easily join after work, I simply had to sign up. It fit too well with my personal challenge of telling #SecurityStories to let this opportunity pass by.
The OWASP Virtual AppSec Days April 2020 consisted of a free mini-conference with talks as well as two days of training, split into four hours each day. They also hosted a virtual capture-the-flag competition, yet I didn't feel ready to go all in yet.

The Talks

On the first day, three talks were presented in a row. In case you'd like to watch them yourself, the recordings are available on YouTube.
  • "Building and growing an application security team - lessons from a CISO perspective" by Michael Coates. I liked this talk a lot. Nothing was really new to me, yet these important messages still need to be heard.
    • I keep finding lots of parallels between security testing and any other kind of testing. In his talk, Michael made clear that it's not about eliminating all security bugs, but rather about building up risk management. There's always a healthy balance of risk in every organization. Fixing every single bug is not worth it, the effort is too high; yet we want to fix the most important ones. Sounds familiar? Here's another one: the goal is to empower the business to move fast and make informed risk decisions. It's important to have both technical and business understanding. Secure code empowers the limitless exploration of technology and innovation. And another one: put security in the hands of the teams themselves instead of a security team approving something. This moves ownership of risk to the teams. If the teams know that they are responsible, it really changes their mindset. I couldn't agree more.
    • According to Michael's experience, a successful application security program uses a "Paved Road Approach": offering the teams an easy and secure route where they get support, and empowering them to take this route. They are not forced to, yet the incentive is high and teams usually prefer the easy way. If security issues are found, however, he advocated taking the hard way and fixing the fundamental root cause instead of the symptoms: make sure the issues cannot occur again, or at least that people get alerted if they do. To operate at scale, refrain from building your own solutions and rather integrate trusted existing systems. If you have a central security team, they don't own the risk; the individual business units and product teams do - so they need incentives to care. Last but not least, a successful program needs to be focused, prioritizing the most important risks. Every time you shift focus, you're saying that this thing is more important than what you did before.
    • When it comes to building a successful application security team, Michael emphasized the importance of senior team members letting go of easier problems and instead training less experienced people to solve them. Allow your juniors to grow, and have seniors focus on senior problems. Overall, creating a great workplace where people get training as well as challenges is key. Having real one-to-one conversations, finding out people's motivation, and encouraging them to blog, speak and contribute to open source are all part of having the people create a great workplace. We are the result of an amazing team around us.
  • "Certificate Revocation: Past, Present, Future" by Mark Goodwin. This talk taught me more about certificates as well as concepts and mechanisms I wasn't aware of before. Lots of approaches that are waiting to be explored further.
    • Certificates allow you to verify the identity of some entity, for example a website. You trust certificates because some authority is satisfied enough to issue one, your browser trusts the authority enough to honor it, and you trust your browser to make good trust decisions. You can also trust, however, that things will go wrong. What if the site's private key is stolen? What if an authority mis-issues a certificate? What if an authority has its systems compromised? (See the sketch after this list for this verification in action.)
    • Let's talk about remediation. There are certificate revocation lists (CRLs) you can check, yet it's hard to keep in sync with them. There is the Online Certificate Status Protocol (OCSP), which lets you check just in time whether a certificate is still good, yet the extra connection causes latency. To work around this problem, there is OCSP stapling, where the server obtains the OCSP response itself and delivers it bundled with the certificate; the Must-Staple extension is probably the best-known way to enforce this as of now.
    • What can we do to prevent things from going wrong? HTTP Public Key Pinning (HPKP) was used for quite some time but then phased out, as it was open to abuse. Then there's Certificate Authority Authorization (CAA), a DNS resource record mechanism you can use to declare that certificates for resources of a particular domain may only be issued by a particular certificate authority.
    • Finally, what about notification? Certificate Transparency (CT) is a cryptographically assured mechanism to allow clients to find out what certificates have been issued for a particular domain. Browsers can require certificate transparency. This way, after a certain date, all trusted certificates will be known.
  • "OWASP Top 10 2020" by Andrew van der Stock. An interesting look behind the scenes for one of the most commonly known OWASP projects!
    • If you haven't heard about the OWASP Top 10 yet, they are really worth a read. Although this document is sometimes misused as a standard, it's first and foremost meant for education purposes, as Andrew emphasized. It is a lightweight, developer-centric resource to raise awareness. An update was planned for release in 2020, yet due to the current crisis we will have to wait another year.
    • This talk offered a look at how the Top 10 are compiled. Andrew shared the difficulties of collaborating with organizations to obtain data, of performing data science and analysis, and of getting the desired industry attention and mindshare upon releasing a new version.
    • The group behind this project collects evidence from as many sources as they can. For the upcoming version they are aiming to improve their data science efforts as well as the community-driven qualitative process, having the community support the included risks. In addition, the project team is thinking about ways to allow anonymous data submissions. They also want to design a better look and feel, and offer more ways to consume the information to reach even more people.
    • What we can do to help is to donate data, help with the data analysis and data science part, respond to qualitative surveys, or peer review the content. If you can help, reach out to the project or any of their leaders.
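
To make the trust chain from Mark's talk a bit more tangible, here's a minimal sketch using only Python's standard library: it connects to a host, lets the default context verify the certificate against the system's trusted authorities, and inspects what was presented. The host name is just an example; this code is my own illustration, not from the talk.

    import socket
    import ssl

    def inspect_certificate(hostname: str, port: int = 443) -> None:
        # The default context loads the system's trusted CAs and verifies
        # both the certificate chain and the host name for us.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                # getpeercert() is only populated after successful verification.
                cert = tls.getpeercert()
                print("subject:", dict(item[0] for item in cert["subject"]))
                print("issuer: ", dict(item[0] for item in cert["issuer"]))
                print("valid:  ", cert["notBefore"], "until", cert["notAfter"])

    inspect_certificate("example.com")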

The Training

Lots of different training sessions were offered, yet in the end I opted for guided hands-on practice and picked the "Web Application Security Essentials" training by Fabio Cerullo. I cannot share the training content, yet I can share what it was based on - and I definitely recommend checking it out for yourself.
  • The training focused on the first five of the current OWASP Top 10: injection, broken authentication, sensitive data exposure, XML external entities (XXE), and broken access control. We learned about the concepts behind these risks as well as good practices to mitigate them.
  • To practice exploits and techniques hands-on, we used an application that was designed to teach them in lessons: OWASP WebGoat. The easiest way to have everything set up was to run the all-in-one WebGoat Docker image.
  • To help solve the exercises included in this application, we used the developer tools of Firefox or Chrome, as well as OWASP ZAP as a proxy. I got to know the FoxyProxy browser extension, which makes it easy to switch proxy configurations. (See the sketch after this list for how a proxy fits into such a setup.)
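
If you're scripting against an application while ZAP is running, you can route your own requests through the proxy as well, so every request and response shows up in ZAP's history. Here's a minimal sketch, assuming ZAP listens on its default address 127.0.0.1:8080 and a hypothetical local training target on another port:

    import requests

    # ZAP's default proxy address; the target must listen on a different port.
    zap_proxy = {
        "http": "http://127.0.0.1:8080",
        "https": "http://127.0.0.1:8080",
    }

    # verify=False because an intercepting proxy re-signs TLS traffic with
    # its own certificate. Only do this against systems you may test.
    response = requests.get(
        "http://localhost:8081/",  # hypothetical local training target
        proxies=zap_proxy,
        verify=False,
    )
    print(response.status_code, len(response.content), "bytes")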
For me personally, the training was great for observing myself and evaluating my current skills. For some techniques I understand the concept, yet I still need more practice to figure out a successful attack quicker. For some exploits, I still struggle to get my head around them. And then there are some challenges that feel way too easy, like everyday business. Things are relative, and practicing on my own in a guided manner made me realize once more: it's just a matter of practice. What feels easy is what I've done a lot more often, even as part of usual "everyday" testing. The trickier things are not inherently harder; I've just done them less, so it takes more time to get the syntax right or think of everything to consider.

I really liked what Fabio emphasized at the end of the training. Security scanners are great tools that check for certain rules, but they cannot help you find flaws in your business logic. If you're serious about security testing, you need highly skilled pen testers who also look at the business logic of your application. Testing is where imagination can take you: you see the response right away, notice changed behavior in the application, and see if you're on the right track. Proxies are useful for learning more about the application and getting more information on potential issues, especially when developing or testing an API.

The Lessons

You may have noticed I didn't formulate a hypothesis for this experiment; I just jumped at the chance. Well, I decided to let it count as part of my #SecurityStories nonetheless. Yet if I had had a hypothesis up front, it would probably have looked like this.
I believe that attending the OWASP Virtual AppSec Days will result in new knowledge and inspiration.
I know I'll have succeeded when I learned about one new concept and had a new idea for another learning experiment in the area of information security.
Attending this conference was a great experience. It was tiring to do so three days in a row after work, yet I had the opportunity and don't regret taking it. Once more, I've learned more theory, I've got more tools in my tool belt, and I've practiced more. Inspiration for another learning experiment? Although I'm not yet sure whether I'll pick it up, going through the other lessons of WebGoat would definitely be worth it.

Well, I can only hope you also learned something new in the area of information security from this post. If that's the case, then please leave a comment or drop me a direct message on Twitter. Let's continue learning!

Sunday, May 3, 2020

#SecurityStories: Ethical Hacking Courses Revisited

My first contact with security testing was back in 2016. My company offered us a Pluralsight account so we could benefit from their vast course catalog. As I had been inspired to learn more about security, this felt like the perfect match. I watched several of the security-related courses offered on Pluralsight back then.

Four years later, Pluralsight granted everyone free access to their offering throughout April. This made me wonder: what if I revisited those courses with the security knowledge I have today? This felt like too good a chance to pass up, and it led me to the following hypothesis.
I believe that following parts of Pluralsight's ethical hacking courses will result in surprising knowledge and deepened understanding.
I know I'll have succeeded when I made a new connection of existing knowledge and realized that pieces of the puzzle were falling together.
What I remembered from 2016 was that these courses were worth it. Even though I had limited knowledge back then, they helped me gain a lot more awareness of and insight into this vast area of expertise. Rewatching these courses four years later, with a lot more security knowledge than before, was absolutely worth it as well. I found I understood things better these days, and I rediscovered aspects, techniques and tools I simply hadn't memorized back then. If you have any chance to get a Pluralsight account (or make good use of the ten-day trial) and you're up for learning more about security, these courses are top-notch in my eyes. Very informative, very well explained, easy to follow even with limited prior knowledge - and you can also follow along hands-on if you want. This time I managed to watch about a third of the available courses.
While I can't and don't want to spoil all the course content, several points frequently came up. Pieces of knowledge that I (re-)learned, that re-established or created new connections in my brain, and that are now (hopefully) etched into my memory.
  • It's hard to be an ethical hacker. 
    • To be able to review systems and infrastructure from a security standpoint, to test the current solution, create better solutions, and retest them, you need a lot of knowledge and skills. You basically have to be an expert in operating systems, programs and networks, be proficient in vulnerability research, master hacking techniques, have a lot of general software knowledge, be able to think outside the box, have great communication and management skills, lots of patience - and more. This quote from Dale Meredith fits really well: "Practice builds knowledge, knowledge builds confidence."
    • You have to follow a strict code of conduct. You need explicit permission in writing before you can do anything. This includes your own employer! For practice, there are lots of intentionally vulnerable apps whose purpose is to hone your skills. Yet whatever you find in real life, even by coincidence - report it. In addition, when it comes to penetration testing, a major part of the work consists of documentation. So document everything, report everything. Yet make sure to choose a secure medium to store findings, and a secure channel to report them. It's way too easy to do the attacker's job for them and deliver all information on a silver platter.
    • You can't stop attackers, so the job is not to stop them but to discourage them, misdirect them, and slow them down. Time is on the attacker's side, not the ethical hacker's. An attacker only needs to find one opening, while on the ethical side of things you have to find all of them and also make sure they're covered.
  • It's a lot about information gathering. Really, a lot.
    • The so-called reconnaissance phase is probably the biggest and most important one in the endeavor to penetrate a system. There's so much to find out about applications, infrastructures, organizations, individuals, and more. Much of the information is just freely and publicly shared, completely legal to retrieve, and easily accessible to everyone. Just using a search engine like Google can reveal lots of vulnerabilities, especially when you know what to look for and how to feed the advanced search options. So many places can give valuable information to attackers, among them your own website (job offers are a great source!) or what employees share on social networks. The horrifying thing: this is just the tip of the iceberg, and you can find a lot without investing much effort.
    • If attackers find interesting information, they might go further and start scanning your networks, i.e. looking for "live" systems and identifying them. Using a bunch of different scanning techniques, they can discover which ports are open or closed, whether those systems are running any services, and more. They basically probe the target network to find out as much as they can about the system. All this adds to what they already found during reconnaissance. Oh, and - we are probably being continuously scanned. Remember, time is on the attackers' side. Drawing out a network can help detect holes and remember them in the long run. (A small sketch of such a probe follows this list.)
    • Fingerprinting helps identify further information as well. Operating systems usually behave in certain ways that let you draw conclusions about the system. You can determine the host by sending well-crafted packets, or use banner grabbing to check for welcome messages that already reveal information about the target system.
    • When it comes to web applications - well, they reveal way too much information by nature already. You can see the whole frontend source code, all the JavaScript executed. If security constructs are implemented client-side (which you shouldn't rely on by any means!), like password constraints, they are very easy to discover and work around. Browsers nowadays offer protection against several attacks. Still, there's a lot they simply cannot defend against, like parameter tampering (any input from the client side is untrusted data!) or persistent cross-site scripting, as then the malicious data is already in the database. (A tampering sketch also follows this list.)
  • Ignorance, laziness and misconfiguration are way too common and make things way too easy. How many times have we just copied over a solution we found on the internet? How many times have we just made use of a new framework without a thorough security review of its source code? How many times have we even considered that this could be exactly the reason for its existence? How many times have we just kept the default configuration for applications, frameworks or servers - not to mention default passwords? Well, we all know the answer. It's hard to accept the truth - and frightening at the same time, as we can assume how many other people building products probably share these feelings.
  • There is a plethora of tools out there to help all sides. As "plethora" is one of Dale Meredith's favorite words, I simply had to include it in this post. But seriously, there's a tool for everything. Most of them are completely legal, as they also serve many other absolutely ethical and valid purposes. Yet as with any tool, they can be used for good and evil and all the shades of gray in between. Let's list some examples, yet be aware that they barely scratch the surface. There are proxies like Burp Suite, OWASP ZAP or Fiddler. There are network tools like Nmap or netcat. There are website crawlers or copying tools like HTTrack or Netsparker. There is the Google Hacking Database or MetaGoofil for reconnaissance. When it comes to web apps, the browser's developer tools might already be your best friend. To quote Dale Meredith once more: for each purpose, "pick a tool and learn it, love it, use it."
  • Social engineering is way too easy. People are usually the weakest link. Convincing them to reveal information does require social skills, yet with enough confidence these kinds of attacks are scarily often successful. From looking over someone's shoulder to following someone holding the door into the building. From searching your trash (yes, they do) to impersonating internal IT. From phishing attacks to distressed calls for support. This makes you think a lot about your own behavior. I haven't even re-watched the whole course on social engineering, yet every other course referred to this technique at least once. In the end - it's still all about the people, and our education is crucial.
  • Seemingly minor risks can be turned into full-blown exploits. It's all about the context and how things can be connected. One piece of information can lead you to another, one exploit can lead to another. Again, time is on the attacker's side. It's way too easy to discard an issue as too minor, not important, not revealing interesting information, simply not posing much risk. But - is it really? Let's not make this too easy.
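
To make the scanning and banner grabbing ideas above more concrete, here's a minimal sketch using only Python's standard library. Host and ports are placeholders - remember the code of conduct and only probe systems you have explicit written permission to test.

    import socket

    def probe(host: str, port: int, timeout: float = 2.0) -> None:
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                print(f"{port}/tcp open")
                sock.settimeout(timeout)
                try:
                    # Many services greet first (FTP, SMTP, SSH) and reveal
                    # their name and version in that banner.
                    banner = sock.recv(1024)
                    if banner:
                        print("  banner:", banner.decode(errors="replace").strip())
                except socket.timeout:
                    pass  # open, but silent until we speak its protocol
        except OSError:
            print(f"{port}/tcp closed or filtered")

    for port in (21, 22, 25, 80, 443):
        probe("localhost", port)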
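And to illustrate why client-side validation protects nothing: a directly crafted request never executes the JavaScript checks. Everything here - URL, fields, values - is hypothetical; the point is only that the server must re-validate every input it receives.

    import requests

    # What the browser form would submit after passing client-side checks:
    legitimate = {"product": "42", "quantity": "1", "price": "19.99"}

    # What an attacker actually sends. No JavaScript constraint applies here,
    # so the server has to treat every field as untrusted data.
    tampered = {"product": "42", "quantity": "1", "price": "0.01"}

    for payload in (legitimate, tampered):
        response = requests.post("http://localhost:8081/api/basket", data=payload)
        print(payload["price"], "->", response.status_code)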
(By the way, when reading all of the above - do you also see the similarities to testing in general?)

There's so much more I learned watching these courses. If you have the chance to check them out, I can only highly recommend them. So far I've watched 26 of the currently 79 hours of course material on the Ethical Hacking (CEH Prep 2018) path. I am eager to watch them all at some point. Some day I will.

All this really made me think even more about security in all areas. Not only when developing our application or interacting within an organization, but also as an individual. In my eyes it's not about getting paranoid, but about no longer being careless. I wouldn't leave the door to my apartment wide open, either. That being said - I just revealed I live in an apartment. You never know what piece of information can help attackers. For example, I've become a lot more cautious about sharing photos of my living space on the internet; I wouldn't want to reveal my address there as well, and it's probably way too easy to deduce it anyway. Well, doing a thorough check of my own behavior as well as the applications and infrastructure I'm using - that's definitely on my list as another experiment.

As always in this series of #SecurityStories: if you learned something new in the area of information security from this post, please let me know by leaving a comment or sending me a direct message on Twitter. Your feedback is much appreciated.

Wednesday, April 22, 2020

#SecurityStories: Threat Modeling

It's time to start writing about my personal challenge this year: telling #SecurityStories. My goal is to help people gain new insights when it comes to all things security; a very essential topic that's unfortunately often dismissed in favor of our own convenience. I have to admit, I dismissed security concerns way too often myself, and unfortunately I still catch myself doing so. I want to change this. Behavior change starts with awareness, and that's a big part of this story as well.

Let me share my experiences creating a threat model for the very first time on my own. It was Dan Billing who introduced me to threat modeling in his "Web Application Security" tutorial at Agile Testing Days 2018. The next time I heard about this approach was only last year at TestBash Manchester. Saskia Coplans gave a great talk about it (check out my sketchnote, or even better the full recording on the Dojo if you happen to have a pro license), and we also worked through an example together during open space.
The idea to start my challenge with a story about threat modeling came from one of the security testing sessions I had with Peter Kofler at the beginning of January. He asked me if I knew anything about threat modeling, and I shared with him what I had learned at the conferences. To paraphrase him: "I already learned something from you: threat modeling and why it's important, why testers would like to learn it." Remember, the desired outcome of my personal challenge is that ten people confirm they learned something new from me in the area of information security. You can imagine how happy I was to hear that feedback from Peter even before really starting out on this journey.

All this gave me the idea for my first #SecurityStories experiment. It took me some time until I could finally start it in March, yet it was a perfect match with an opportunity I had at work.
I believe that creating a threat model for our own product will result in applied knowledge and surprising findings.
I know I'll have succeeded when I discovered an unknown attack vector for our own product.
For a long time while developing my team's product, security was not our biggest concern. After all, we are building an internal application, and the little security testing we did was mainly focused on access control and permissions. Now I took the chance to do a more structured risk analysis when it comes to security by creating a threat model. Better late than never!

Building on the knowledge I gathered at conferences, I started with research, reading up on several resources to understand the main steps of creating a threat model.
According to these resources, there are two important things to consider when starting out. First, it's strongly suggested to create the threat model with a group of diverse people to end up with a holistic picture: the whole development team, a business analyst, an architect, whoever adds a new perspective. My team, however, suddenly found itself in crisis mode, fully focused on other topics. Therefore, I decided to start this learning journey on my own: creating a first model version, then involving a few people to refine it, presenting our findings to the whole team, and continuing to refine as we go. Better to start imperfectly than not at all.

Second, the resources agreed that no special tool is required to create a threat model; a whiteboard might be the best medium for this group discussion. Over the last weeks, however, my team also had to learn how to work fully remote, full time. Therefore, I considered creating a digital version right from the start. This would also help me adapt the model as I learned more, without having to redraw it each time. I was curious whether there were tools specifically designed for threat modeling. I found several, yet most of them are no longer maintained - except for OWASP's Threat Dragon, which is still under development and already offers a native client for both macOS and Windows. I decided to give it a try and see if it would fulfill my needs. I wasn't disappointed; using Threat Dragon did prove worth it. This application is great as an idea and already provides lots of features. Granted, it's still under development, and I could feel it. For example, usability is not great yet. I'd love to have the option to add comments or other descriptive text fields to the model. I'd love to be able to select multiple diagram items and move them all together. Still, I haven't regretted this choice, and I especially appreciate the nice report this tool generates.

Let me be clear about this: I created a threat model for the very first time, and I might well have gotten things wrong, or done them in a way that's not recommended. As it still proved to be an exercise very well worth its time, I'm just going to tell you what I did and how things went. Due to the nature of the thing, I cannot show you my results - yet OWASP provides several examples in their Threat Model Cookbook so you can get an idea of what I'm talking about.

Why threat modeling and how does it work?

Threat modeling is a structured way to brainstorm about threats. It's important to consider as many factors as we can think of, from diverse perspectives, to get a holistic view. For example, we also need to consider malicious acts from within the company, or simple human error. Keep in mind that a model is never fully correct, as it does not represent the full reality; yet models help us think - in this case, about security.

Security is a quality attribute that needs to be built into the system from the start. We want to address security concerns early in how we develop, test and run our services. On the one hand it's about making our systems less prone to human error; on the other hand it's about not leaving the door wide open, making things hard enough for attackers that we become a less attractive target. When it comes to the latter, some might think that our internal product does not hold valuable information, that there are more attractive targets in the company; yet we simply cannot tell for sure what is valuable and what is not. For example, we could unknowingly grant people access to other systems in the company through our services, our server resources could be misused for other activities, or we could lose all our data by mistake.

From what I learned, the threat modeling process is basically about the following questions and steps.
  1. What are we working on?
  2. What can go wrong?
  3. What are we going to do about it?
  4. Did we do a good job?

What are we working on?

As we cannot easily tell how attackers think or what's valuable to protect, the most common approach is to look at what we are building and identify the data flow between the different parts. To create such a data flow diagram, we used the following common notation (a small machine-readable sketch follows the list).
  • Process (circle): Our services and code.
  • Store (two parallel horizontal lines): Data stores (e.g. files, databases, shared memory, message queues).
  • Actor (rectangle): External entities, everything but our code and data. This includes people and cloud software.
  • Data flow (pointed arrow): Connects processes to other elements.
  • Trust boundary (dotted line): Indicates where the trust level changes (e.g. from internet to intranet through a firewall, from web server to database, from our application to an external third party service). There is always a trust boundary when your code goes to someone else’s, or their data comes to your code. We perform security checks only inside our trust boundary.
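
For illustration, here's how that notation could be captured as plain data. This is just an illustrative sketch - not Threat Dragon's actual file format - and all element names are made up.

    # Illustrative only: made-up element names, not a real tool's schema.
    model = {
        "processes": ["web_app", "reporting_service"],
        "stores": ["app_db", "message_queue"],
        "actors": ["employee_browser", "third_party_api"],
        "flows": [
            # (source, target, crosses_trust_boundary)
            ("employee_browser", "web_app", True),   # intranet -> our services
            ("web_app", "app_db", True),             # web server -> database
            ("web_app", "third_party_api", True),    # our code -> external service
            ("reporting_service", "app_db", False),  # stays inside one boundary
        ],
    }

    # Flows that cross a trust boundary deserve the closest look.
    for source, target, crosses in model["flows"]:
        marker = "!" if crosses else " "
        print(f"{marker} {source} -> {target}")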
Before starting my first version of the diagram, I looked up earlier architecture drawings we had created, to get inspired. I then started listing assets like stores, actors and processes. Doing so, I had to get clear about the standard notation above. What is used for what again, and which category does that one fall into? Looking at the threat model examples helped. I quickly noticed that this is very iterative work. I jotted down everything that came to my mind: the services we own, the integrations we have with other systems, the trust boundaries we operate in.

As with all kinds of visualization techniques, I realized once more the power of modeling. It really helps you think! Just doing this was a great exercise for testing and quality in general, not only with a security focus. It really helped me get the overall picture again, especially as our product landscape and its complexity have grown heavily over the last five years. Just a little internal application? Far from it, with all the infrastructure included and so many things to consider. I realized a thorough model would really take time, yet each step of this endeavor was well worth it. I got to know our system better again, saw more clearly how things are connected and where they could break, and gained a better understanding of our architecture components and how traffic actually flows through them. It was great to have that model not only in my head but visually "on paper", so I could align it with my teammates and discover any discrepancies or unknown risks.

With more and more components added, the diagram grew and grew. It slowly felt like an overwhelming task. How much detail should be added to the data flow? Where was it okay to simplify the model, and where would exactness help? It's a model after all, and its main purpose is not to depict absolute reality but to help us think. Still, it's not easy to decide whether to add all kinds of requests and the protocols used, or to abstract them away. Should I add the kind of data flowing, down to single entities? That would make the diagram explode. Maybe rather keep the flow generic? In the end, I just decided on one way as a proposal and left the rest for future iterations.

What can go wrong?

Based on the diagram, I started to identify threats. For this, STRIDE is the most common approach in threat modeling. I've found a good overview of STRIDE that I'll copy here to get us on the same page (a small sketch of applying it per element follows the list).
  • Spoofing
    • Property Violated: Authentication
    • Definition: Impersonating something or someone else.
    • Example: Pretending to be any of Bill Gates, Paypal.com or ntdll.dll
  • Tampering
    • Property Violated: Integrity
    • Definition: Modifying data or code
    • Example: Modifying a DLL on disk or DVD, or a packet as it traverses the network
  • Repudiation
    • Property Violated: Non-repudiation
    • Definition: Claiming to have not performed an action.
    • Example: “I didn’t send that email,” “I didn’t modify that file,” “I certainly didn’t visit that web site, dear!”
  • Information disclosure
    • Property Violated: Confidentiality
    • Definition: Exposing information to someone not authorized to see it
    • Example: Allowing someone to read the Windows source code; publishing a list of customers to a web site.
  • Denial of service
    • Property Violated: Availability
    • Definition: Deny or degrade service to users
    • Example: Crashing Windows or a web site, sending a packet and absorbing seconds of CPU time, or routing packets into a black hole.
  • Elevation of privilege
    • Property Violated: Authorization
    • Definition: Gain capabilities without proper authorization
    • Example: Allowing a remote internet user to run commands is the classic example, but going from a limited user to admin is also EoP.
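
A common way to apply this systematically is "STRIDE-per-element", as described by Adam Shostack: each element type in the data flow diagram is typically susceptible to a characteristic subset of the six categories. Here's a small sketch of that idea, reusing the made-up element names from the diagram sketch above.

    # Which STRIDE categories typically apply to which diagram element type,
    # following the STRIDE-per-element convention.
    STRIDE_PER_ELEMENT = {
        "actor": ["Spoofing", "Repudiation"],
        "process": ["Spoofing", "Tampering", "Repudiation",
                    "Information disclosure", "Denial of service",
                    "Elevation of privilege"],
        "store": ["Tampering", "Repudiation", "Information disclosure",
                  "Denial of service"],
        "flow": ["Tampering", "Information disclosure", "Denial of service"],
    }

    elements = [("web_app", "process"), ("app_db", "store"),
                ("employee_browser", "actor")]

    # Generate a brainstorming checklist: one question per element and category.
    for name, kind in elements:
        for category in STRIDE_PER_ELEMENT[kind]:
            print(f"Could someone achieve {category} against {kind} '{name}'?")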
While I was still extending the data flow diagram, I was already taking note of any threats that came to mind. All this was quite an iterative process, with continuous learning along the way. I've heard several people describe threat modeling as dull - yet I found it quite interesting; there's a lot to discover using this structured way of thinking about these kinds of risks! Pondering the diverse potential threats and ways to mitigate them taught me a lot about them. My biggest question in this step was how many threats to list. Only those I know are critical, or all I can think of? In the end I went for the latter in favor of a more holistic picture, while being well aware there would be many more I wasn't even aware of yet.

Now was the time to involve my team and get initial feedback on the model. I invited the colleague who knows our infrastructure best to a call and walked him through what I had done so far. I asked him to double-check the diagram and the initially brainstormed threats. Admittedly, I was quite anxious to hear his feedback. You can imagine my relief when he confirmed that I had created a model that fit both our mental models, with only a few minor adjustments that I gladly worked into the diagram. He shared with me that he had never seen a threat model before and found it very useful for thinking about risks. What a great entry point for risk discussions indeed!

Step by step I went through all of the resources again and continuously extended the model, adding anything I had missed. With each iteration I discovered something new: something else to consider, another potential threat. I now have a lot more ideas of what to check for, and I am far from finished refining the model. There's so much more to look into, like checking the tech stack we use for known vulnerabilities, cross-checking with the OWASP Top 10 security risks, considering social engineering attacks, and more.

What are we going to do about it?

Identifying threats is not enough. We also need to decide what to do about them. For each individual threat, we have the following options at hand. 
  • Remove the threat (e.g. removing the respective functionality)
  • Mitigate the threat (e.g. through standard practices like encryption)
  • Accept the threat (be careful about “accepting” risk for your customers)
  • Transfer the threat (e.g. via license agreements or terms of service)
For each threat I was aware of, I made a first assessment, or rather an educated guess. I was happy to involve our product owner in this step, presenting the whole picture to him. He was intrigued to see our whole service landscape visualized, and he recognized that it had grown a lot more over the years than he had realized. He asked further valuable questions and also helped assess the threats from his point of view.

At this point I decided it was time to document the current state in our wiki and invite the team to a presentation. I wanted to introduce everyone to threat modeling and our current model version, including all the assumptions it was built on. We had a short session, and promptly I got further invaluable feedback! More pairs of eyes instantly caught what we had missed before, and also detected a flaw in the visualized data flow. Perfect input to refine our model further.

The more I learn, the more I know what I don't know yet. There are so many more things to think about, yet having this model is a great discussion base. We're far from done - yet a big step further.

Did we do a good job?

Finally, we need to validate that the identified threats have actually been handled.

We haven't done this yet for all identified threats; there's still a lot more to do indeed. Yet again, the effort has already proved worth it. I know a lot better what to look for, also with each new change we're implementing. My hope is that my team knows that better now as well, and that we all use this increased awareness to find good solutions together.

A Living Model

In the end, our threat model is supposed to be a living document. As our socio-technical system changes, this model will change as well. There are several triggers for a revision, like the following examples.
  • We develop a new service or remove a previous one.
  • The architecture of one of our services changes.
  • We introduce a new technology.
  • The infrastructure conditions change.
  • The knowledge and skills of our development team grow.
  • External actors and their interactions with our product change.
I'm curious how threat modeling will help us in the future, as it has already helped us in the present. Our awareness has increased, and we can make more informed decisions together when it comes to security. That's a big step for us indeed. As for anything else, only time will tell.

All this was based on an experiment. Could I confirm my hypothesis and identify a previously unknown attack vector for our product through threat modeling? The answer is clear: yes, and more than one. We have work to do.

One more question remains. Having read this story, have you learned something new around information security that you weren't aware of before? If so, please leave a comment or write me a direct message on Twitter. Have fun with threat modeling!

Thursday, April 16, 2020

Speaking at Conferences - My Personal Advice

This is a very short blog post, referring you to a very long piece of writing. I started my public speaking journey back in 2017, thanks to my learning partner Toyer Mamoojee. Now I've finally found the time and energy to document what helped me (and still helps me) with speaking at conferences.

The result is a compilation of my personal advice for speaking at conferences. Although it's targeted at conferences, there's advice for speaking at any kind of event as well, like giving a talk at a meetup or a workshop at a company. Whether you've never spoken before or you are an experienced speaker, I hope you find inspiration in it for your own journey.

Sunday, February 16, 2020

DDD Europe 2020 - About Close Collaboration, Shared Language and Visual Models

Last week I attended DDD Europe for the first time, and it was great. Although it was a pity to miss the fifth and final edition of the European Testing Conference, which took place on the exact same dates and even in the same city, I was still very happy about this opportunity to explore a new community and gain a foundation in domain-driven design. I had heard lots of great things about DDD Europe from different people, one of them being my colleague Thomas Ploch, who also got accepted as a speaker this year.

In the beginning I didn't know what to expect of this conference and the DDD community that was new to me. I saw the program with lots of big names, I saw the sessions offered on topics I had merely heard about, and I wasn't sure whether my sessions on the mob approach would be a fit for this new audience. Quite intimidating, and yet also very exciting - a great learning opportunity.

Brace yourself - it was a long conference week, this is going to be a long post. It was worth it for me, though!

Arriving in Amsterdam

On the Sunday evening before the conference, I arrived at the hotel where most speakers and attendees alike were accommodated. I had some time to prepare myself for the week and practice my upcoming talk.

No one else from the conference was to be seen yet - or at least I couldn't make them out, as once more I was entering a new community here. My colleague Thomas arrived the same evening as well, so we had a good time over dinner and then made it to bed early. We knew it was going to be a long week; and it was indeed. A week where I learned once more about the importance of close collaboration with all parties, of evolving a shared language with everyone, of visualizing mental models to help us think, of experimentation and continuous learning. Lots of familiar topics, looked at from an angle that was new to me.

Training Days

Both Thomas and I decided to participate in Nick Tune's and Kacper Gunia's two-day training "Strategic DDD using bounded context canvas". I knew Nick from SwanseaCon, and I hoped this workshop would provide me with a quick hands-on introduction to all things DDD. I was not disappointed. We discovered our example domain using the business model canvas, and did event storming and rule storming. We learned about bounded contexts, message flows, strategic classifications, and more. We used Nick's bounded context canvas, and discussed ubiquitous language and policies, model traits, context interfaces, and sociotechnical architecture. We discovered lots of valuable heuristics along the way, too!

By joining this workshop I gained lots of new insights and pieces of knowledge, which triggered lots of new thoughts. What a great entry into all things DDD, learning about concepts while applying them. I loved that we did lots of hands-on interactive exercises and mixed formats that we can take home to improve collaboration and architecture. I felt this training helped me right away to have better conversations about architectural topics. The long-term impact? It made me curious to learn more, especially to see how we can bring people from different areas of expertise together and discuss a holistic view of everything.
The first day ended with very nice dinner conversations with Thomas Ploch, Maxime Sanglan-Charlier, Jennifer Carlston, and Thomas Bøgh Fangel. What a great group! The second day? Well, it also ended with a nice dinner - one of my favorite parts of every conference. This time I had the pleasure of getting to know Zsofia Herendi, Roman Sachse and Marcello Duarte. What a nice crowd already; thank you all for the warm welcome into the DDD community!

DDD Foundations and Speakers Dinner

The main conference was preceded by a day of two smaller conferences taking place at the same time: DDD Foundations (curated by Nick Tune) and EventSourcing. People signed up for one conference could join sessions of the other as well. In the end, however, I decided to stay with the Foundations conference, as one of my goals was to gain a fundamental understanding of DDD - so this sounded like the perfect opportunity. Check out my sketchnotes for the talks to learn more yourself.
A word of warning regarding my sketchnote of Alberto Brandolini's keynote: it really does not do it justice. This keynote was amazing, and I could only put down a fraction of all the ideas shared. I related to this keynote so much; Alberto shared lots of wisdom on all things collaboration, experimentation and learning together. I'm very much looking forward to watching the recording when it's published, and I can only recommend it to you as well.

In the evening the speakers dinner took place. Once more I got to know more people, once again had fantastic food and even better conversations. Thank you all so much!

DDD Main Conference and Further Networking

The main conference days arrived, and with them also my own sessions. First of all, here are the sessions I joined including my sketchnotes of them.
It was a real pleasure to finally listen to Kent Beck and see him perform on stage. What mastery. He derived the keynote content from conversations he'd had with conference participants. He drew his slides live in front of us. He lectured us in a uniquely entertaining way while still conveying important messages. What really resonated with me was to "insist on feedback before we make another decision based on risky assumptions", and that "waterfall is back, it stopped apologizing and it needs to be killed with fire."

During these main conference days, it was also up to me to perform. When first seeing the venue, I couldn't resist peeking into the hall where I was supposed to speak - and I was in awe. My respect rose immediately. This theater hall accommodates up to 800 people! In the end it wasn't filled with 800, but it was still the largest stage I've ever set foot on so far, and also the largest audience besides TestBash Brighton.

So here I gave my talk "A Story of Mob Programming, Testing and Everything". It happened to be in the last slot of the first day, with only lightning talks taking place at the same time. Therefore people thought it was a keynote! It wasn't scheduled as such, and yet people told me it didn't matter to them; they still considered it a keynote. I decided to stop correcting them and take it as a compliment! :) I'm really happy it got recorded, too.

When hearing great feedback about my talk, I was very relieved. It seems the topic resonated very well with the DDD community. One of my highlights here was that Kent Beck also listened to my talk, and afterwards I finally got to speak with him for the first time. Now, there's a story to it. In the beginning of 2017, when my public speaking challenge started, Kent suddenly followed me back on Twitter (I assume Lisa Crispin retweeting my stuff made this happen, so thanks to Lisa!). I couldn't believe it, so I wrote Kent a direct message telling him how honored I felt - and he said he liked my blog posts. This resulted in a written conversation over the next weeks that left quite an impact on me, encouraging me to go further on my journey. That was it back then; we never met. Now was the first time I had the chance to speak with him in person, so I grabbed it and contacted him again. And then, right after my talk, it happened. He found me and said he had been listening to me. He asked whether I would like to get feedback (absolutely!) and he shared very valuable thoughts with me on how I could further level up as a speaker. Seeing him keynote the next day and having a longer conversation with him afterwards was truly inspiring! Lots of food for thought for me.
Following up on my talk closing the first day, I ran a hands-on lab session on the second day on the topic of "Mob Exploratory Testing". I had given this workshop a few times already, and always revised the concept to improve it further - just like this time, and it worked out very well. The audience was great! All of them wanted to be part of a mob, so we split into several mobs, mostly small ones around laptop screens, and two larger mobs working on bigger screens. Huge shout-out to Tobias Göschel, who volunteered to facilitate one of the big mobs! He was a great help, and he said he learned a lot in this role, too. Overall, the two hours went by very fast; people had fun and learned lots of things in a short time. That's exactly the environment I like to set up! Mission accomplished.

The main conference was great, and there were even a few people I already knew. I had the pleasure of meeting Kostas Stroggylos again, whom I knew from Agile Greece Summit, Gojko Adzic, whom I'd met at several conferences already, and Romeu Moura, whom I first met at European Testing Conference.

The conference evenings were great as well. On day one we had a huge dinner group where I finally met Tobias Göschel for the first time; on day two some of us joined lots of European Testing Conference speakers for dinner, bringing two great communities together and enjoying lots of insightful conversations. So good to meet many wonderful people from the testing community there! Among them my power learning group mate João Proença - although time was short, I thoroughly enjoyed speaking with him in person again.

Sightseeing and a Long Way Home

Saturday arrived, the day I planned for sightseeing; so that's what I did. In case you'd like to see these parts of my conference speaking journey as well, feel free to follow me on Instagram. In the evening I joined Romeu Moura and met Felienne Hermans for the first time - what a pleasure! We had a great time together.

That should have been it. I was supposed to leave Amsterdam the day after, yet life had different plans. Due to the heavy storm going across Europe, I got stranded. Instead of returning home and resting the next day, I had to wait at the hotel, the Amsterdam airport, Frankfurt, again a hotel, the Frankfurt airport, until I finally arrived home on Tuesday noon. Nothing but tired.

Still, I was happy about my time at DDD Europe and getting to know many great people. Thanks to everyone for welcoming me and sharing experiences with each other. The conference was very inclusive, and made an active effort to be so by offering gender-inclusive toilets, food for everyone, making the Pacman rule really work, and more. I loved the variety of super interesting topics. So many great speakers, no matter whether they were renowned already or not. All that combined with very smooth organization - everything worked perfectly. Thanks so much to the fantastic organizers; you did an amazing job here and treated people very well. Looking forward to another DDD Europe!