Son Of Sun Tzu


Tuesday 22 January 2019

Presenting tips for new and nearly new speakers - updated 22 Jan 2019

I keep a note of presenting guidance that I make a point of reading whenever I'm putting a talk together; I figure it would be useful to others too:

( whether I remember to read the advice, and follow the advice, is another matter entirely - feedback always welcome )

Recommended reading

Before you start reading, an excellent summary of most of the advice you'll receive is Zach Holman's "The Talk on Talks", which is here https://www.youtube.com/watch?v=YVb2GsJHejo ; watch that, it's about fifty minutes long. Then visit Zach's site at https://speaking.io/

Now you've got an overview, concentrate on the overall format using How to give an Effective Presentation: https://qz.com/work/1110377/how-to-give-an-effective-presentation/

This presents the idea of how to structure the talk overall... and how to get your ideas together. I would emphasise this: I still struggle with trying to put too much in, spending days on research, and then realising I have a ninety minute presentation for a fifty minute slot. The sooner you can determine what is and isn't in the talk, the more time you'll have left to spend on making the most of what's in it.

If you want to be above average then avoid all of the "anti-patterns" Troy Hunt mentions in this article... a bit of preparation and avoiding very basic pitfalls can take you a long way: https://www.troyhunt.com/speaker-style-bingo-10-presentation/

For a step beyond that, Thom Langford has written a good three-part guide to presenting; it's worth working through:

Part 1: https://thomlangford.com/2018/05/18/the-art-of-the-presentation-part-1-of-3/

Part 2: https://thomlangford.com/2018/05/30/the-art-of-the-presentation-part-2-of-3/

Part 3: https://thomlangford.com/2018/07/20/the-art-of-the-presentation-part-3-of-3/

If you want the level beyond that, make the most of transitions: moving on to the next slide on the right word can make all the difference, as described in this piece. However, this does require considerable practice: https://medium.com/@saronyitbarek/transitions-the-easiest-way-to-improve-your-tech-talk-ebe4d40a3257

Confidence tips

Practice. Preferably in front of family or colleagues or friends, but if it comes down to it just practice in a room on your own with the laptop and a timer. That way you know the timing works and you'll at least have an inkling of what slide is coming next as you present - and if you can get any kind of audience they'll give you feedback on what does or doesn't make sense.

If you need confidence just before a performance... power pose with fists clenched... and by "power pose" I mean that rather silly looking stance Tory MPs adopted where they stood with their legs slightly too far apart. In that pose your body will automatically think you're about to enter some kind of conflict and boost you with the right chemicals for a fight.

Try that with this idea, mentioned to me by my first mentee for BSides London: just imagine that the audience are penguins... flapping and squawking away, or waddling down to the water's edge for fish. It's silly, and nonsensical, but if it works it puts a smile on your face before you walk out there to face everyone.

And one last point, if you're an introvert, as per this tweet: https://twitter.com/KevinGoldsmith/status/963850794440187904, presenting gives you something to talk about with people, rather than having to navigate the morass of small talk.

The Presentation Itself

Just one piece of advice for this so far: look at how often you move around the stage, and the impression that can give to an audience. Essentially, according to this article by Simon Raybould, moving around more makes you more approachable, but less authoritative: http://presentationgenius.info/presentations-and-moving/ .

And from personal experience, make the most of people's immediate enthusiasm right after the presentation to hand out business cards and make connections.

Friday 11 January 2019

A response to "Who are we at Cyberwar with?"

In Michael Brunton-Spall's weekly cyber security newsletter, which can be found here https://tinyletter.com/CyberWeekly/letters/cyberweekly-33-who-are-we-at-cyberwar-with, he highlighted a recent Twitter thread by Perry Metzger that garnered a great deal of attention. The thread can be found here https://twitter.com/perrymetzger/status/1075928695058120705 or there's a nice formatted display of the entire text here: https://threader.app/thread/1075928695058120705 .

I recommend you read the thread, but the quotes from Metzger that "Computer security is warfare. No, really, it's war." and "Security _is_ adversarial. In warfare, you don't survive if you're second rate, you die" sum up his central point: that cyber security is a technical discipline, focused on defending against dedicated and capable foes, where the main determinant of victory, if not the sole determinant, is whether you are more skilful than your opponent. There's also a strong theme against the value of certifications, but without offering any kind of viable alternative or better methodology for rating ability.

In reply to Metzger's thoughts Brunton-Spall used his newsletter to highlight how many security issues are caused by "accident, misconfiguration or error", emphasising that "we aren’t at war with advanced 'cyber-adversaries' whatever that means, but that we are at war with ourselves, with the software that we build", and looking to safety engineering for solutions.

I think they're both wrong.

And right.

Metzger is right in that some situations within cyber security require an adversarial mindset, and he makes an excellent point that this aspect of cyber security is often missed. However this is lost in all of his comments about technical excellence being the dominant, if not the sole, reason for success; similarly Metzger imagines a world where the enemy are only "High quality paid professionals" - there's no concept of levels of risk here, no nuance regarding the different types and ranges of ability required for the many different positions within the industry or within criminal organisations. There's similarly no appreciation for all those roles where security is just part of the overall job description, system administrators for example.

And as I've argued in my recent set of presentations, the current approach to incident response, based on containment and recovery, misses many of the opportunities present when you're up against an active and sentient adversary, and the effect you can have on them - so the adversarial point appeals to me, but it's lost in everything else.

My "military" experience has all been gained through Xbox Live, and I don't get to wargame as often as I like - but I've been fortunate enough to have access to experienced war-fighters and those working in Operational Research, and been through simulations of conflict at various levels of abstraction. The ability to win conflicts is not based on a linear scale comparing your technically adeptness against your opponent, this is an idea of warfare based on DragonBall Z, not any kind of real world conflict. The relevance of communication, of logistics, of morale, and of an overall strategy based on forethought and feedback, are crucial to highlight here if you're going to bring in any kind of military thinking into the industry at all. In my own presentations I argue we are over-simplifying the requirements of the situation by emphasising a golf-like approach to individual ability, which suits a game like golf, but certainly doesn't suit our complex area of operation, where the ability to work within and against teams is much more relevant.

Similarly, Brunton-Spall makes a great point: so many cyber security breaches are the result of failure on the part of the defenders, due to incorrect processes or processes not being followed correctly, in such a way that the nature and skill level of the attackers is almost irrelevant. As Brunton-Spall states, he'll be focusing this year on "how we can build systems that are resistant to misconfiguration or accident". While it's not the complete solution, I agree that there is so much more the industry can do here - for example a Checklist Manifesto style approach to executing administrative tasks, inspired by improvements in the world of medicine.

I think examining these analogies and references to other areas is useful, both in what they illuminate and in where the analogy breaks because it has been stretched too far. Some of cyber security is conflict, and some of the methods cyber security needs are well established within safety engineering, but simply advocating one approach or the other is too simplistic for our inordinately complex field; while the appeal to a safety engineering mindset is far more useful than the "OORAH!" of a particularly distorted view of armed conflict, I think some combination of the two is the way forward.

As for the best analogy for computer security as a whole, I think Wendy Nather summarises the situation well, as she often does, in that crime is a "fine analogy": https://twitter.com/wendynather/status/1076847905208700928. Crime is a vague concept covering a wide scale of events, from petty fraud to mass murder - we should appreciate that cyber security is similarly broad, and learn useful lessons from others where we can, but aggressively toss them aside when they don't apply.

Tuesday 11 December 2018

Lessons From The Legion - ISSA UK Christmas Meeting 2018

Overview

Thank you to ISSA UK for the chance to present at their recent meeting. This is a summary of my presentation "Lessons From The Legion" from Thursday December 6th. I've been offered some more opportunities to present this, from which I'm hoping to generate more interesting and useful conversations, so this will undoubtedly evolve; your feedback is welcome.

And thank you to Grant Thornton for their hospitality, and great facilities... and Francesca's help setting things up too.

If you want to point someone at a very short summary, this tweet covers it:

"people trying to excel at self-taught technical skills are sub-optimal at strategic decisions required for a nebulous conflict, their emphasis should be on team work, and on the strategies of, and constraints on, their adversaries; they should seek inspiration elsewhere"

As a less brief summary, but still trying to keep things snappy, my reasoning is as below. A slide by slide summary would be too dense, and makes me realise how many ideas I've pushed into the audience's heads in about thirty-five minutes.

This isn't really designed to be read, but if you're here I expect you're after a reference or two of something that really caught your eye.

Logical Progression of the talk

Introduction

I have a question - in Cyber Security - if we're all so smart, which we are, and we all work so hard, which we do, why is everything so awful?

To try and figure this out my presentation is an "investigation wall", a set of interconnected ideas and theories where I try and figure out what the solution is to this mystery.

I start with John Kindervag's presentation "Winning the Cyberwar With Zero Trust", which explains the difference between a strategy, what he calls "The Big Idea", and the tactical and operational level solutions you use to achieve your big idea. John Kindervag's "Win the War With Zero Trust" can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

So what "big idea" has emerged from the tactics we've chosen?

The main three areas I'm familiar with, as a cyber security practitioner, are:

  • System Administrators / Developers
  • Penetration Testing
  • Incident Response

The strategy in all three areas is based on being the most technically skilful practitioner you can - sysadmins patch as quickly and as thoroughly as they can through knowledge of their operating systems, developers code as securely as they can through knowledge of their languages; penetration testing aptitude and success is based entirely on how technically adept the pentester is and how well they apply the "flaw hypothesis methodology"; and incident responders work on their forensics skills to learn how to spot different attacks, and prepare to investigate an incident as rapidly as they can.

Arguably the way this strategy has come about is because of how we train and practice for each area - all of which is based on self-motivated learning, and a passion for the job that is often described as "eat, sleep, breathe security". Therefore the emphasis is on individual skill and knowledge rather than on wider context.

Where has this choice got us? I cite various references that illustrate the poor state of cybersecurity, and the danger that poor cybersecurity poses to organisations in general and civilisation as a whole.

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

I specifically cited several recent breaches, namely those of the Marriott hotel chain, Quora, Dell, Dunkin' Donuts, and 1-800-Flowers, to show how frequent breaches are; all of those stories broke after I last gave this presentation, less than a week ago.

The Global Risks Report 2018 from the World Economic Forum, which lists cyber attacks among the most concerning risks to humanity as a whole, can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

This method of practising reminds me of golf. Excelling at golf is based on individual skill, which is reflected in how a player performs in the game - because success in the game is based almost solely on individual performances. Even in a team game of golf, as part of a team and against an opposing team, there is very little your team-mates or opponents can do to directly affect your standard of play. And the actual course will be static also, apart from the vagaries of weather.

There is nothing wrong with practising like a golfer if you're going to play golf; however, the practice of cyber security is nothing like the game of golf, so I think we need to look at a different game.

Using these kinds of analogies and metaphors in cyber security is supported by this paper from Sandia National Laboratories: http://www.evolutionofcomputing.org/Multicellular/Cyberfest%20Report.pdf - but I mainly refer to TRIZ. In this version of the presentation I just used the core idea of TRIZ: abstracting problems and solutions in order to rapidly determine what kind of solution is required.

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and for two of the main players in the UK check out Oxford Creativity https://www.triz.co.uk/ and Systematic Innovation http://www.systematic-innovation.com/ ; from limited experience both are definitely worth contacting.

So, if we're practising for golf, but not playing golf, what game are we playing?

I argue that our industry feels a lot more like American Football. It is a ridiculously complex and violent sport, with many specialisms, and very much a team game where your success or failure depends heavily on the quality of your team, your ability to work with them, and how you act against and react to your opponent. In addition it's the sport that is closest to actual conflict - I drop in a quick quote showing that Condoleezza Rice agrees: https://www.nytimes.com/2002/04/17/sports/on-pro-football-dream-job-for-rice-nfl-commissioner.html - and I think cyber security has a lot to learn from wargaming ( the simulation of war ) and from War Studies in general.

( as a side note, a French General, on watching the game around 1916, said something along the lines of "that isn't a sport, that is war" - if you know a good source for that quote please do get in touch )

Therefore we should look to learn lessons from a successful American Football team. American Football is the only sport where each team has essentially two squads on it - an Offense for when your team has possession of the ball, and a Defense for when your team does not.

I think that as defenders in cyber security - even red teamers are ultimately looking to improve the performance of the blue team and the survivability of defenders - we should look to the best Defense. Possibly influenced by personal biases, but backed up by many sports facts I'll quote in the novella-length version of this description, I have chosen the Legion of Boom, the Seattle Seahawks defense from 2011 to 2015, as an example to follow.

Looking at the central tenets of the team, and the defensive philosophy of the Seattle Seahawks head coach, Pete Carroll ( who has approximately 40 years of experience and an exemplary record ), I pick some of the main lessons from the Seahawks successful Defense:

First lesson - train how you fight

Because American Football is such a complex game it is necessary to practice complex play calls and formations in advance, and to ensure that each individual knows their responsibility, and everybody else's responsibility, on each play so that they can function as a team.

Because the teams are so large there are enough players for the second and third string players in each squad to form "scout teams". These teams imitate the playing style and formations of upcoming opponents so that both they, and the first string players, understand what is coming in next week's game, and are less surprised by any of their opponents' individual styles during the game.

This links into the concept, taken from the study of wargaming, of the Caffrey Triangle, showing how a red team - in a red team exercise specifically designed to assist the blue team - should act depending on the objectives of the engagement. The Caffrey Triangle is mentioned here: https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ - I've had it explained to me in person, and we all need to be talking about it a lot more, in both cyber security and wargaming. ( I believe Matt Caffrey's forthcoming book on this and his other concepts is still stuck at the United States Government Printing Office. ) I argue that in military simulations or exercises the red team or red force is often in the bottom left hand corner of that triangle, just there to give the blue team something to shoot at. Penetration testers work almost solely at the top of the triangle, being the most effective attackers they can be regardless of genuine threats or limitations. I think that commonly in cyber security the red force, whatever it is, should operate in the right hand corner, emulating the TTPs of genuine adversaries in order to prepare the blue team for their real world opponents.

Rory McCune is a good person to watch on the limitations of pentesting; the presentation of his that I refer to, "Penetration Testing Must Die" from BSides London 2011, is here: https://www.youtube.com/watch?v=MyifS9cQ4X0

The Seahawks are known for their "full speed" practices, to ensure players aren't surprised by the intensity of a game. This approach is reflected in the recent findings and recommendations of the Close Combat Lethality Task Force, a description of their approach can be found here: https://breakingdefense.com/2018/11/mattiss-infantry-task-force-righting-a-generational-wrong/

As an example of the difference between trying to fix everything, and trying to fix only what our adversaries will exploit, I cite Jeremiah Grossman on the Kenna Security report, highlighting that only 2% of vulnerabilities are exploited; the specific tweet is here: https://twitter.com/jeremiahg/status/996469856970027008 . I've got into interesting discussions on how true or untrue that figure may be - watch this space.

This relates to ensuring your organisation practises at the right time, which is as early as possible. This point is worth a presentation of its own, but instead I briefly note that Adam Shostack's keynote from BruCON covers this nicely; it can be found here: https://www.youtube.com/watch?v=-2zvfevLnp4

This issue also reminds me of "The Base of Sand Problem", the RAND report that highlights problems in military modelling/simulations/wargaming which, for me, resonate with issues we face in cyber security. This paper can be found here: https://www.rand.org/pubs/notes/N3148.html. The report essentially says that the military modelling and analysis industry has made some crucial mistakes about what it focuses on, which leads to the ineffective use of its resources. In this context I cite a footnote stating that military victories are based on the ratio of effective forces, not simply on who had the largest force.

One last point on this before moving on... analysis of players in team sports, described here: https://phys.org/news/2018-12-joint-successes-chances.html , demonstrates that players' ability to play together has a greater positive effect on the teams they move to than their raw skill level. And the ability to play and work together is, of course, enhanced by realistic and dedicated practice.

Second lesson - eliminate the big play.

There is not time to explain the Seahawks' use of "Cover-3 with a single high Free Safety", and their general approach of keeping the ball in front of the defenders to ensure the Defense always has another chance to prevent their opponents scoring, so I look at personnel choices.

Most NFL defenses, when choosing personnel, have emphasised their Defensive Line, the first line of defense against an opponent, who line up closest to the "enemy". Carroll has always specifically looked to the Defensive Backs, the last line of defense, most notably the Free Safety position, which is what he played in college.

This is reflected in the NIST Cyber Security Framework, and the five Core Functions. I am old enough to remember when Identify and Protect were the only aspects seen as useful, but slowly we are learning that Detect, Respond, and Recover are at least as important in surviving an attack, rather than believing in the "Defender's Dilemma", that if an attacker breaches us we have immediately lost.

I cite Adrian Sanabria from his presentation "It's Time to Kill the Pentest" at the RSA conference earlier this year ( https://www.youtube.com/watch?v=bMkVjDx3cqQ ), just because he has a great slide on how a hack is a series of steps, not a single event. This is like a "drive" in an NFL game ( https://www.sportingcharts.com/dictionary/nfl/drive.aspx ), where an opponent can gain yards, but your aim is to stop them scoring points.

I would argue that the emphasis should be on Detect, Respond, Recover - the last line of defense, not the first.

Here I crowbar in Sounil Yu's "Cyber Defense Matrix", using it just to show how the majority of our products focus on the "Protect" function and come into effect before a breach. I use slides from his presentation at the RSA Conference 2017 ( https://www.rsaconference.com/videos/solving-cybersecurity-in-the-next-five-years-systematizing-progress-for-the-short-term ), showing how you can take the functions of the NIST Cyber Security Framework and the assets that form the infrastructure, and map which products fit where. There's much more to this idea, and it's really worth your time watching that video. Of note here, although Yu's work is at least a year old, is the gap within Detect, Respond, and Recover on the "Applications" row of that matrix. However a couple of vendors who were at DevSecCon ( where I presented a different remix of this ), Sysdig and Contrast Security, would appear to have products within that space.
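
To make that mapping concrete, here's a minimal sketch of the matrix as a simple lookup table - the five functions and five asset classes follow Yu's presentation, but the coverage entries and the gap-finding helper are purely illustrative assumptions of mine:

    FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]
    ASSETS = ["Devices", "Applications", "Networks", "Data", "Users"]

    # Map (asset, function) -> the product/control categories covering that cell.
    # These entries are placeholders, not a real product inventory.
    coverage = {
        ("Devices", "Identify"): ["asset inventory"],
        ("Devices", "Protect"): ["endpoint hardening"],
        ("Applications", "Protect"): ["WAF", "static/dynamic analysis"],
        ("Applications", "Detect"): [],  # the gap discussed above
    }

    def gaps(coverage):
        """Return every cell of the matrix with no coverage at all."""
        return [(asset, function)
                for asset in ASSETS
                for function in FUNCTIONS
                if not coverage.get((asset, function))]

    if __name__ == "__main__":
        for asset, function in gaps(coverage):
            print("No coverage for", function, "on", asset)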

This ability to recover is important because we all work in "Cyber Resilience" now, where the emphasis is on recovering from a breach, not just on preventing it. It's worth reading Phil Huggins' "Cyber Resilience" series on his Black Swan Security blog, here: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/. I emphasise the "Pace of Decision Making" aspect.

This links to John Boyd's OODA loop, which is described well on Wikipedia: https://en.wikipedia.org/wiki/OODA_loop. ( Please pay me to research these concepts further; I think this one is particularly key. )

Through a description of the OODA loop process - Observe your current situation and identify all the relevant factors, Orient yourself and your adversaries within that space, Decide on the next course of action, and then Act to execute that decision - Boyd argued that by going through this process faster than your opponent, by "getting inside their OODA loop", you could defeat your opponent through speed rather than sheer power. The problem is, as we're defenders, the opponent always starts their OODA loop before we start ours, so how do we catch up? The answer lies in the final lesson...
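
( Before that final lesson, an aside: here's a toy sketch of the speed argument, assuming both sides cycle at a fixed rate - all of the numbers are made up, and it only shows that a defender closes the gap if their loop is strictly faster. )

    # A toy sketch, not a model of anything real: both sides run fixed-rate OODA
    # loops, the attacker gets a head start, and the defender only ever draws
    # level if their loop is strictly faster.

    def cycles_completed(cycle_time, elapsed):
        return int(elapsed // cycle_time)

    def first_catch_up(attacker_cycle, defender_cycle, head_start, horizon=1000.0):
        """Earliest time (from the defender starting) at which the defender has
        completed at least as many loops as the attacker, or None within horizon."""
        t = 0.0
        while t <= horizon:
            if cycles_completed(defender_cycle, t) >= cycles_completed(attacker_cycle, t + head_start):
                return t
            t += defender_cycle
        return None

    if __name__ == "__main__":
        # Attacker loops every 4 "units", defender every 3, attacker starts 8 units earlier.
        print(first_catch_up(attacker_cycle=4.0, defender_cycle=3.0, head_start=8.0))
        # A defender with a slower loop never catches up:
        print(first_catch_up(attacker_cycle=4.0, defender_cycle=5.0, head_start=8.0))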

Final lesson - Out hit your opponent

I reference "The Base of Sand Problem" again ( https://www.rand.org/pubs/notes/N3148.html ), because it states that the first order determinants of victory in conflict are processes, tactics, and strategy - these are harder to define and measure; but I argue we should focus on our own way of thinking, our opponents' way of thinking, and crucially how we can affect our opponents' processes, tactics, and strategy.

It is a physical game, it is a collision sport, and there are psychological as well as other gains to be made by simply hitting your opponent as hard as you can.

Also this tallies with the previous aim, to eliminate the big play, as it physically puts the defenders in an excellent position to tackle or otherwise collide with their opponents - but I never have time to tie together that aspect of the sport. For this I use clips from Richard Sherman, Earl Thomas, but mainly Kam "Bam Bam" Chancellor executing the "Shoulder Punch", a Seahawks tackling technique which is as it sounds.

The Seahawks tackling video summarising their techniques is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA; for Chancellor himself, I think this video sums up what he provided in the narrow focus I use, and if you've seen the presentation you'll recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8

The article from the Bleacher Report, that gives a quick summary of the Legion of Boom, can be read here: https://bleacherreport.com/articles/2806038-i-dont-fear-it-the-seahawks-are-russell-wilsons-team-but-is-he-enough

The aim here is to inflict pain on your opponent, and to reduce the speed of their OODA loop. In this blog format I should specifically state that I'm not advocating any kind of "strikeback" methodology; rather I'm showing that on the blue team we've forgotten that we're facing an opponent, and that we can affect that opponent. The pyramid of pain I refer to is David Bianco's, taken from http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html; it illustrates that the more complex aspects of an adversary's tradecraft are of more value to them, so when you understand those aspects and can act against them, you cause the greatest amount of pain.
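
As a rough sketch of how you might use the pyramid day to day - the six indicator levels are Bianco's, but the example detections and the prioritisation helper are illustrative assumptions of mine:

    # Bianco's Pyramid of Pain as an ordering: higher numbers are more painful
    # for the adversary to work around, so detections built on them cost more
    # to evade. The example indicators below are made up for illustration.

    PAIN = {
        "hash value": 1,
        "ip address": 2,
        "domain name": 3,
        "network/host artifact": 4,
        "tool": 5,
        "ttp": 6,
    }

    def prioritise(detections):
        """Sort candidate detections so those highest up the pyramid come first."""
        return sorted(detections, key=lambda d: PAIN[d["type"]], reverse=True)

    if __name__ == "__main__":
        candidates = [
            {"type": "hash value", "value": "e3b0c442..."},
            {"type": "ttp", "value": "credential dumping via LSASS memory access"},
            {"type": "domain name", "value": "example-c2.invalid"},
        ]
        for d in prioritise(candidates):
            print(d["type"], "->", d["value"])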

To me, the explanation of why we haven't taken that approach in cyber security - why we treat an adversary's attacks the same way we'd respond to a natural disaster - comes from Bartle's taxonomy of player types; there's a good summary here: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types. The "killers" - people who like outwitting, defeating, and demoralising a human opponent - all join the Red Team when they enter cyber security, which means that aggressive and effective approach is lost from blue team strategies.

For this specific "performance", to highlight that we forget that our opponent has the same human weaknesses that we do, I use a card from Reciprocal Strategies, who can be found here: https://www.reciprocalstrategies.com/resources/brt_cards/.

Haroon Meer has been arguing for more hackers to join the blue team for several years, I show a clip from his Null Con keynote, which can be watched here: https://www.youtube.com/watch?v=2F3wWWeaNaM. Do persevere with the flickering screen.

To inflict that required pain I think deception is key; I'm reminded of Clifford Stoll's book "The Cuckoo's Egg", and how incident response started with deception. Paul Midian's presentation can be found here: https://www.youtube.com/watch?v=KvksyvF6MN4.

There then followed a rather rapid set of references to how others, in some form, support this approach, beginning with Saumil Shah's keynote from Black Hat Asia in 2017 ( https://www.youtube.com/watch?v=834S-rqEmFA ), where he states, as one of his Seven Axioms of Security, that we need a creative defense - don't give the adversary what they expect.

Similarly, in his presentation from London DevSecCon this year, Petko Petkov covered Honey Tokens and Dark Nets: https://www.youtube.com/embed/GoS2MXbH23Y?rel=0 - why not use your control of your network to set traps for attackers and improve your position? Even the suspicion that such traps are in place could, and should, slow your adversaries down.
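
To give a feel for how simple such a trap can be, here's a minimal honey token sketch: a decoy credential that nothing legitimate should ever present, so any use of it is a high-fidelity alert. The account name and password are made up for illustration.

    # A minimal honey token sketch, assuming a web app with a login handler:
    # plant a decoy credential somewhere tempting (a config file, a wiki page,
    # a database row) and alert the moment anyone replays it.

    import logging

    logging.basicConfig(level=logging.WARNING)

    DECOY_CREDENTIALS = {("svc_backup_admin", "Spring2019!")}  # never used legitimately

    def check_login(username, password):
        if (username, password) in DECOY_CREDENTIALS:
            # High-confidence alert: someone found the decoy and tried to use it.
            logging.warning("HONEYTOKEN USED: %s - treat as an active intrusion", username)
            return False
        # ... normal authentication would happen here ...
        return False

    if __name__ == "__main__":
        check_login("svc_backup_admin", "Spring2019!")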

In another talk from the day before, Matt Pendlebury highlighted the surprising demise of attack-aware applications: https://www.youtube.com/embed/HQxs3xn7tLA?rel=0 - again, your application has high-fidelity information on whether it's being attacked, and is in the best position to respond appropriately.
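
A minimal sketch of what "attack aware" can mean inside the application, in the spirit of OWASP AppSensor - the patterns and threshold here are illustrative assumptions, not a recommended rule set:

    # The application itself flags input no legitimate user of this form could
    # produce, counts strikes per user, and responds in-app rather than waiting
    # for a perimeter device to notice.

    import re
    from collections import Counter

    SUSPICIOUS = [re.compile(p, re.I) for p in (r"union\s+select", r"<script", r"\.\./\.\./")]
    strikes = Counter()          # per-user count of clearly malicious requests
    LOCKOUT_THRESHOLD = 3

    def handle_input(user_id, field_value):
        if any(p.search(field_value) for p in SUSPICIOUS):
            strikes[user_id] += 1
            if strikes[user_id] >= LOCKOUT_THRESHOLD:
                return "locked"   # e.g. invalidate the session, alert the SOC
            return "flagged"
        return "ok"

    if __name__ == "__main__":
        print(handle_input("user42", "alice"))
        print(handle_input("user42", "' UNION SELECT password FROM users--"))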

I'm reminded of a presentation by Alex Davies at BSides London earlier this year, showing how we should work together, and how being able to share information efficiently and quickly benefits us all. It can be found here: https://www.youtube.com/watch?v=yfEiuJFMisY. This increases the pain imposed on your adversary, as any other campaigns they are running against other targets will be similarly affected.

The aim is to turn the Defender's Dilemma into the Intruder's Dilemma, which is nicely summarised in a presentation from BSides Munich https://www.youtube.com/watch?v=PQgsEtRcfAA .

The language to use to describe these attacks is MITRE's ATT&CK framework; I refer to this presentation by Katie Nickels and John Wunder from BSides Las Vegas earlier this year: https://www.youtube.com/watch?v=p7Hyd7d9k-c

There are many more ideas in the presentation "Gaslighting with Honeypits and Mirages" from Kate Pearce, but only the slides are available online http://www.secvalve.com/images/Kate_Pearce_honeypits_ACSC2017.pdf ; I hope she has a chance to present it in future where we can all see the recording.

The whole point is to slow the adversary down: make them so unsure of their environment, and of whether they're being monitored, that they have to go through more and more checks to make sure they're not burning valuable resources unnecessarily - and that puts their current campaign and all similar campaigns at risk. The emphasis of Change Control was always to ensure that an infrastructure change would not damage the company; force your opponents to move that slowly because they are so unsure of the environment they're in.

This solution is not a product you can buy, but it is a thing you can do. Otherwise we are doomed to keep being golfers trying to play a very different game. The emphasis here, as an overarching strategy, is not only to think about how you can improve your own abilities and skills, and the strengths of your organisation, but also about how you can degrade the effectiveness of your adversary.

END

Questions on supporting evidence are welcome by email or in the comments or even on Twitter, and overall if you've any questions please do get in touch.

And while I realise it's not the most useful of documents, a PDF of the presentation should be with ISSA UK to publish where they see fit.

Monday 26 November 2018

Lessons From The Legion - Q&A with Daniel Tilley

After my presentation of Lessons From The Legion at "Cyber London hosted by Capital One", way back in July of this year, Dan Tilley dropped me an email with some interesting questions. As he asked such good questions, and I was happy with my answers, with his permission I've put this together as a blog post.

What do you think security vendors can do to improve and adopt the approach you describe in your presentation?

Partly what yours seems to be doing, which is improving visibility of both your assets and the attacks against them. Those basics are hard ( Wendy Nather has been tweeting some useful stuff about this recently ); just making something like asset management easier makes everything else easier for security practitioners.

Then mainly it's inter-operability, shared standards for classification/taxonomies and for how they're communicated, that makes it much easier to add a product into an environment, and integrate it into existing systems, with minimal extra effort by security teams. My presentation is the beginning of an approach, I *think*, that emphasizes detection and response over prevention/protection.

Oh, and ease of use in general, not just for inter-operability. Thinkst's Canary is a huge advance in this area, and has sales to match that advance - also I gather that's why Duo Security are doing so well too. Technically a product only has to be good enough, as long as it's actually usable by existing resources; it doesn't matter how well engineered it is if security staff just don't have the time or skills to implement it - and, as with Canary, a very aggressive approach to false positives is essential to avoid "alert fatigue".

I would like to hear more about your analogy of cyber security/resilience to military operations and whether you think it is more akin to asymmetric warfare?

I think in general there's a lot to learn from the right parts of kinetic warfare ( i.e. some parts of that RAND document are hugely relevant: the importance of human factors, the importance of effective forces over sheer size, and so on ) but also there's much to ignore - it's much easier to be innovative in the cyber domain than in the physical world, and the nature of "cyber weapons" is so different to that of conventional weapons that I sometimes wonder whether the same term should be used.

As for asymmetric warfare - kind of. Some parts are very relevant, for example the attackers tend not to have a home base, they "live off the land", and can operate using guerrilla tactics - which makes them ephemeral and requires much greater resource from the defenders than the attackers. But on the other hand the success of insurgents tends to be based on the populace they operate within, whereas that "populace"... the computers and networks under attack... should be completely loyal to, and under the control of, the defenders - so a lot of the "hearts and minds" type factors simply don't apply.

Without going into too many half-researched analogies: success in military operations also seems to depend, to a great degree, on preparation and logistics, and I'd argue cyber security/resilience is the same - by the time you're in the conflict you should already have won it... and there are inevitably relevant Sun Tzu quotes ;)

Do bear in mind my understanding of asymmetric warfare comes from a few wargames and a couple of books ( The War of the Flea, and The Defense of Jisr Al Doreaa spring immediately to mind ), so education welcome. I think it's a fascinating subject, and certainly has enough lessons within it that the cyber security industry could save itself decades by using the right ones, and will learn something by figuring out when analogies don't work or are taken too far.

You mentioned playbooks in your talk - how can we go about setting up essentially a database/store of these playbooks, and more importantly how could we go about sharing this information across the community in a relevant way?

First of all you need a structure to put the adversary's TTPs into - Fusion-X's "enhanced cyber kill chain" looks interesting, and also I've glanced very briefly at MITRE's ATT&CK, but they're both on the "to do" list. From there I know STIX and TAXII are relevant, but I have an elementary "wikipedia level" understanding of both.

From there it's the building of trust networks through things like CISP I think... but this is where I think I'd advocate a very different approach. As with everything else above, this is a high level feeling I'd love a chance to start exploring through actual facts and research... but I wonder whether the Intelligence background of the industry, and particularly the number of ex-Intelligence people in the industry, has actively harmed our efforts. While it's understandable in standard warfare, or espionage, not to let the enemy find out what you know about them - in cyber conflict it might be different - which comes down to what "weapons" are, as I mentioned above. All weapons are based on knowledge, a lot of which is given up in their use, so the gain in telling everyone about the new enemy weapon you've uncovered through being attacked by it is much greater than what you lose by revealing to the enemy that their weapon has been uncovered - also the revelation forces the enemy to upgrade. If you share information with as many other blue teams as possible, and they reciprocate, that makes the size of your "team" larger, and the size of the pool of information on adversaries much larger, forcing the enemy to upgrade more and more frequently, which should become harder.

Also the advantage to the defenders is that they gain operational effectiveness by working together in this way, whereas it's something attackers can't replicate. If attackers share their methods with others, meaning they receive less individual benefit from them, they'll be less effective - so the blue teams have a strategy the attackers can't match. This assumes every other "blue team" is in a position to make use of that knowledge, and many other things... these are half-thought through ideas. I don't know if they're of value or not, which is partly why I'm being much more enthusiastic about sharing them... get my ideas out into the world and see if they survive.

Tuesday 23 October 2018

Lessons From The Legion - The London DevSecCon Remix

Overview

A summary of my presentation "Lessons From The Legion", from Friday October 19th 2018. I've been offered a couple more of opportunities to present this, from which I'm hoping to generate more interesting and useful conversations, so this will undoubtedly evolve, your feedback is welcome. Information on DevSecCon can be found here: https://www.devseccon.com/

If you want to point someone at a very short summary, this tweet covers it:

"people trying to excel at self-taught technical skills are sub-optimal at strategic decisions required for a nebulous conflict, their emphasis should be on team work, and on the strategies of, and constraints on, their adversaries; they should seek inspiration elsewhere"

A less brief summary is below. A slide by slide summary would be too dense, and makes me realise how many ideas I've pushed into the audience's heads in forty minutes.

The presentation was recorded and will be published by DevSecCon in future, I'm hoping they keep the fire alarm in.

Logical Progression of the talk

Introduction

I have a question - In Cyber Security - if we're all so smart, which we are, and we all work so hard, which we do, why is everything so awful?

To try and figure this out my presentation is an "investigation wall", a set of interconnected ideas and theories where I try and figure out what the solution is to this mystery.

I start with John Kindervag's presentation "Winning the Cyberwar With Zero Trust", which explains the difference between a strategy, "The Big Idea", and the tactical and operational level solutions you use to achieve your big idea. John Kindervag's "Win the War With Zero Trust" can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

So what "big idea" has emerged from the tactics we've chosen?

The main three areas I see DevSecOps covering are:

  • System Administrators
  • Developers
  • Security Operations

( yes, this is a different three to previous versions... but I think the issue is endemic )

The strategy in all three areas is based on being the most technically skilful practitioner you can - make your systems as hard as possible, your code as secure as possible, configure what you have to the best of your abilities. Arguably the way this strategy has come about is because of how we train and practice for each area - all of which is based on self-motivated learning, and a passion for the job that is often described as "eat, sleep, breathe security". Therefore the emphasis is on individual skill and knowledge rather than on wider context, on putting in dedicated time focused on a narrow range of knowledge.

I would also argue that the same is true of our processes and systems, it's all about making what we create as secure as possible, and then releasing it out into the world to see how it fares. If it doesn't survive, then we try to make a new version and make that better, the original version survives or dies depending on how well it was made.

Where has this choice got us? I cite various references that illustrate the poor state of cybersecurity, and the danger that poor cybersecurity poses to organisations in general and civilisation as a whole.

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

The Global Risks Report 2018 from the World Economic Forum can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

( note, this is not the nuanced point of view it should be, I hope to spend more time looking at this - and I'm reminded of Michael Santarcangelo's thoughts on this from a couple of years ago, likening the impact of cybercrime to the impact of fraud )

This method of practising reminds me of golf. Excelling at golf is based on individual skill, which is reflected in how a player performs in the game - because success in the game is based almost solely on individual performances. Even in a team game of golf, with a team and against an opposing team, there is very little your team-mates or opponents can do to directly affect your standard of play. And the actual course will be static also, apart from the vagaries of weather.

There is nothing wrong with practising like a golfer if you're going to play golf; however, the practice of cyber security is nothing like the game of golf, so I think we need to look at a different game for a solution to our current predicament.

Using this kind of analogy, and cross-pollinating ideas between areas, is generally derided, but if you look hard enough there are examples where it works. In this version of the presentation I just used the idea of TRIZ: abstracting problems and solutions in order to rapidly determine what kind of solution is required.

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and the main British consultancy, as far as I can tell, is here: https://www.triz.co.uk/

From a very shallow reading of some management consultancy concepts, I think we're at a "Strategic Inflection Point" as an industry, where we've got the most out of our current way of thinking, and more and more effort results in smaller and smaller incremental gains. We need to jump to a different strategy to make the gains that we should from the resources we're putting in.

So, if we're practising for golf, but not playing golf, and that explains what we're doing wrong... what game are we playing?

I argue that our industry feels a lot more like American Football. It is a ridiculously complex and violent sport, with many specialisms, and very much a team game where your success or failure is very dependent on the quality of your team and your ability to work with them, and how you act against and react to the opposing teams. In addition it's the sport that is closest to actual conflict - and I think cyber security has a lot to learn from wargaming - the simulation of war, and War Studies in general.

( as a side note, a French General, on watching the game around 1916, said something along the lines of "that isn't a sport, that is war" - if you know a good source for that quote do get in touch, I've been looking for it for ages )

Therefore we should look to learn lessons from a successful American Football team. American Football is the only sport where each team has essentially two squads on it - an Offense for when your team has possession of the ball, and a Defense for when your team does not.

I think that as defenders in cyber security - even the red teamers are ultimately looking to improve the performance of the blue team and the survivability of defenders - we should look to the best Defense. Possibly influenced by personal biases, but backed up by many sports facts I'll quote in the novella-length version of this description, I have chosen the Legion of Boom, the Seattle Seahawks defense from 2011 to 2017, as an example to follow.

Looking at the central tenets of the team, and the defensive philosophy of the Seattle Seahawks head coach, Pete Carroll ( who has approximately 40 years of experience and an exemplary record ), I pick some of the main lessons from the Seahawks' successful Defense:

First lesson - "shift left" your conflict

Because American Football is such a complex game it is necessary to practice complex play calls and formations in advance, and to ensure that each individual knows their responsibility, and everybody else's responsibility, on each play so that they can function as a team.

Because the teams are so large there are enough players for the second and third string players in each squad to form "scout teams". These teams imitate the playing style and formations of upcoming opponents so that both they, and the first string players, understand what is coming in next week's game, and are less surprised by any of their opponents' individual styles during the game. So when they come to actually play the game... they've already been playing that game for the preceding week, and so are better prepared, especially as Carroll and the Seahawks advocate particularly aggressive practices.

This links into the concept from wargaming of the Caffrey Triangle, showing how a red team - in a red team exercise specifically designed to assist the blue team - should act depending on the objectives of the engagement. The Caffrey Triangle is mentioned here: https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ - I've had it explained to me in person, and we all need to be talking about it a lot more, in both cyber security and wargaming. I argue that in military simulations or exercises the red team or red force is often in the bottom left hand corner of that triangle, just there to give the blue team something to shoot at. Penetration testers and similar threat simulations work almost solely at the top of the triangle, being the most effective attackers they can be regardless of genuine threats or limitations. I think that commonly in cyber security the red force, whatever it is, should operate in the right hand corner, emulating the TTPs of genuine adversaries in order to prepare the blue team for their real world opponents.

And this should happen as early as possible; Adam Shostack highlights leaving threat modelling too late as one of the many traps of threat modelling in his presentation at BruCON: https://www.youtube.com/watch?v=-2zvfevLnp4 . Similarly, earlier on at DevSecCon, Stuart Winter-tear highlighted how we can, and need to, automate threat modelling: https://www.devseccon.com/london-2018/session/threat-modeling-speed-scale/.

How do we describe this threat model, so we can automate it and discuss it? Look at MITRE's ATT&CK framework, which is described well in this presentation from BSides Las Vegas 2018: https://www.youtube.com/watch?v=p7Hyd7d9k-c .
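
As a rough sketch of what that shared vocabulary buys you - the technique IDs and the adversary "playbook" below are illustrative assumptions on my part, not a real coverage map:

    # Tag detections with ATT&CK technique IDs so coverage can be compared and
    # exchanged between teams. The mapping here is illustrative only.

    DETECTIONS = [
        {"name": "lsass-memory-read alert", "attack_id": "T1003"},   # credential dumping
        {"name": "office-spawns-shell rule", "attack_id": "T1059"},  # command-line / scripting
    ]

    ADVERSARY_PLAYBOOK = ["T1003", "T1059", "T1071"]  # techniques reported for a threat group

    def coverage_gaps(detections, playbook):
        """ATT&CK techniques the adversary is known to use that we have no detection for."""
        covered = {d["attack_id"] for d in detections}
        return [t for t in playbook if t not in covered]

    if __name__ == "__main__":
        print("Uncovered techniques:", coverage_gaps(DETECTIONS, ADVERSARY_PLAYBOOK))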

This issue also reminds me of "The Base of Sand Problem", the RAND report that highlights problems in military modelling/simulations/wargaming which, for me, resonate with issues we face; it can be found here: https://www.rand.org/pubs/notes/N3148.html. The report essentially says that the military modelling and analysis industry has made some crucial mistakes about what it focuses on, which leads to the ineffective use of its resources. In this context I cite a footnote stating that military victories are based on the ratio of effective forces, not simply on who had the largest force.

It's all about understanding what your opponent will do in what situation and countering those options specifically, rather than trying to think of all attacks and prevent all of them.

As an example of the difference between trying to fix everything, and trying to fix only what our adversaries will exploit, I cite Jeremiah Grossman on the Kenna Security report, highlighting that only 2% of vulnerabilities are exploited; the specific tweet is here: https://twitter.com/jeremiahg/status/996469856970027008 . I've got into interesting discussions on how true or untrue that figure may be - watch this space. See also his blog post at https://blog.jeremiahgrossman.com/2018/05/all-these-vulnerabilities-rarely-matter.html?m=1.

Second lesson - eliminate the big play.

There is not time to explain the Seahawks' use of "Cover-3 with a single high Free Safety", and their general approach of keeping the ball in front of the defenders to ensure the Defense always has another chance to prevent their opponents scoring, so I look at personnel choices.

Most NFL defenses, when choosing personnel, have emphasised their Defensive Line, the first line of defense against an opponent, who line up closest to the "enemy". Carroll has always specifically looked to the Defensive Backs, the last line of defense, most notably the Free Safety position, which is what he played in college.

This is reflected in the NIST Cyber Security Framework, and the five Core Functions. I am old enough to remember when Identify and Protect were the only aspects seen as useful, but slowly we are learning that Detect, Respond, and Recover are at least as important in surviving an attack, rather than believing in the "Defender's Dilemma", that if an attacker breaches us we have immediately lost.

I cite Adrian Sanabria from his presentation "It's Time to Kill the Pentest" at the RSA conference earlier this year ( https://www.youtube.com/watch?v=bMkVjDx3cqQ ), just because he has a great slide on how a hack is a series of steps, not a single event. This is like a "drive" in an NFL game ( https://www.sportingcharts.com/dictionary/nfl/drive.aspx ), where an opponent can gain yards, but your aim is to stop them scoring points.

Also, all too quickly, I run through Sounil Yu's Cyber Defense Matrix, grabbing slides from his presentation at the RSA Conference 2017 ( https://www.rsaconference.com/videos/solving-cybersecurity-in-the-next-five-years-systematizing-progress-for-the-short-term ), showing how you can take the functions of the NIST Cyber Security Framework and the assets that form the infrastructure, and map which products fit where. There's much more to this idea, and it's really worth your time watching that video.

Of note here, although Yu's work is at least a year old, is the gap within Detect, Respond, Recover on the "Applications" row of that matrix. However a couple of vendors who were at DevSecCon, SysDig, and Contrast Security, would appear to have products within that space.

This ability to lose ground but not lose, this ability to recover, is important because we all work in "Cyber Resilience" now, where the emphasis is on recovering from a breach, not just on preventing it. NCSC has a good blog on this concept here: https://www.ncsc.gov.uk/blog-post/cyber-resilience-nothing-sneeze. It's also worth reading Phil Huggins' "Cyber Resilience" series on his Black Swan Security blog, here: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/. I emphasise the "Pace of Decision Making" aspect.

This links to John Boyd's OODA loop, which is described well on Wikipedia: https://en.wikipedia.org/wiki/OODA_loop. ( Please pay me to research these concepts. )

Through a description of the OODA loop process - Observe your current situation and identify all the relevant factors, Orient yourself and your adversaries within that space, Decide on the next course of action, and then Act to execute that decision - Boyd argued that by going through this process faster than your opponent, by "getting inside their OODA loop", you could defeat your opponent through speed rather than sheer power. The problem is, as we're defenders, the opponent always starts their OODA loop before we start ours, so how do we catch up?

One approach, which I advocate but I need to spend more time on, was put forward by Paul Schwarzenberger during his presentation the previous day: https://www.devseccon.com/london-2018/session/journey-continuous-cloud-compliance/.

Final lesson - Out hit your opponent

I reference "The Base of Sand Problem" again ( https://www.rand.org/pubs/notes/N3148.html ), because it states that the first order determinants of victory in conflict are processes, tactics, and strategy - these are harder to define and measure; but I argue we should focus on our own way of thinking, our opponents' way of thinking, and crucially how we can affect our opponents' processes, tactics, and strategy. As I say... the problem is, as we're defenders, the opponent always starts their OODA loop before we start ours, so how do we catch up?

It is a physical game, it is a collision sport, and there are psychological as well as other gains to be made by simply hitting your opponent as hard as you can.

Also this tallies with the previous aim, to eliminate the big play, as it physically puts the defenders in an excellent position to tackle or otherwise collide with their opponents - but I don't have time to go into this level of detail on the game. For this I use clips from Richard Sherman, Earl Thomas, but mainly Kam "Bam Bam" Chancellor executing the "Shoulder Punch", a Seahawks tackling technique which is as it sounds.

The Seahawks tackling video summarising their techniques is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA; for Chancellor himself, I think this video sums up what he provided in the narrow focus I use, you may recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8

The aim here is to inflict pain on your opponent, and to reduce the speed of their OODA loop. I've learnt here to specifically state that I'm not advocating any kind of "strikeback" methodology, but to show that on the blue team we've forgotten that we're facing an opponent, and that we can affect that opponent. This links to the pyramid of pain I refer to, which is David Bianco's, taken from http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html; it illustrates that the more complex aspects of an adversary's tradecraft are of more value to them, so when you understand those aspects and can act against them, you cause the greatest amount of pain.

To me, the explanation of this situation - why we're not doing this - comes from Bartle's taxonomy of player types; there's a good summary on this page: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types. The "killers" - people who like outwitting, defeating, and demoralising a human opponent - all join the Red Team when they enter cyber security, which means that aggressive and effective approach is lost from blue team strategies.

The previous day at the conference Yan Cui highlighted that people are often the weakest link in the security chain, yet we ignore the humans who are our adversaries and focus on technical defenses and techniques.

Haroon Meer has been arguing for more hackers to join the blue team for several years, I show a clip from his Null Con keynote, which can be watched here: https://www.youtube.com/watch?v=2F3wWWeaNaM. Do persevere with the flickering screen.

To inflict that required pain I think deception is key; I'm reminded of Clifford Stoll's book "The Cuckoo's Egg", and how incident response started with deception. Paul Midian's presentation can be found here: https://www.youtube.com/watch?v=KvksyvF6MN4.

From here there's a "brain dump" of references... in his keynote from Black Hat Asia in 2017 ( https://www.youtube.com/watch?v=834S-rqEmFA ), Saumil Shah states, as one of his Seven Axioms of Security, that we need a creative defense - don't give the adversary what they expect.

Similarly, in his presentation from the day before, Petko Petkov covered Honey Tokens and Dark Nets: https://www.devseccon.com/london-2018/session/open-dev-sec-ops/ - why not use your control of your network to set traps for attackers and improve your position? Even the suspicion that such traps are in place could, and should, slow your adversaries down.

Also a talk from the day before, Matt Pendlebury highlighted the surprising demise of attack aware applications: https://www.devseccon.com/london-2018/session/whatever-happened-attack-aware-applications/ ; again, your application has high fidelity information on whether it's being attacked, and is in the best position to respond appropriately.

I'm reminded of a presentation by Alex Davies at BSides London earlier this year, showing how we should work together, and being able to share information efficiently and quickly will be to the benefit of all. It can be found here: https://www.youtube.com/watch?v=yfEiuJFMisY. This increases the pain imposed on your adversary, as any other campaigns they are running against any other targets will be similarly affected.

The aim is to turn the Defender's Dilemma into the Intruder's Dilemma, which is nicely summarised in a presentation from BSides Munich https://www.youtube.com/watch?v=PQgsEtRcfAA .

There are many more ideas in the presentation "Gaslighting with Honeypits and Mirages" from Kate Pearce, but only the slides are available online http://www.secvalve.com/images/Kate_Pearce_honeypits_ACSC2017.pdf ; I hope she has a chance to present it again somewhere a recording will be made available.

Because the whole point is to slow the adversary down: make them so unsure of their environment, and of whether they're being monitored, that they have to go through more and more checks to make sure they're not burning valuable resources unnecessarily - and that puts their current campaign and all similar campaigns at risk. The emphasis of Change Control was always to ensure that an infrastructure change would not damage the company; force your opponents to move that slowly because they are so unsure of the environment they're in.

A couple of other quick ideas... taken from Kelly Shortridge's presentation at Countermeasure 2017 ( which is impressively a blog and a slidedeck and a video https://medium.com/@kshortridge/the-red-pill-of-resilience-in-infosec-65f2c5d5e863 ), if your environment is resilient, and you can let something like Netflix's Chaos Monkey loose, then an adversary looking to maintain persistence on your network has to be as resilient as you with their C2 infrastructure.

And lastly - and sometimes I'm tempted just to forego my presentation and play this one instead - Sounil Yu takes us through the last five decades of cyber security, matches them up with the NIST Cyber Security Framework, and shows how advances such as DevSecOps are the solution. Resilience fixes the problems of cyber security.

This solution is not a product you can buy, but it's a thing you can do. Otherwise we are doomed to keep being golfers trying to play a much different game.

END

Questions on supporting evidence are welcome by email or in the comments or even on Twitter, and overall if you've any questions please do get in touch.

And while I realise it's not the most useful of documents, a PDF of the presentation is here - especially as I find libreoffice's conversion process "underwhelming"... but that might be my lack of knowledge. Lessons From The Legion - DevSecCon 2018

Lessons From The Legion - October CyberTech at the Huckletree

Overview

A summary of my presentation "Lessons From The Legion", from Thursday October 18th 2018. I've been offered a couple more of opportunities to present this, from which I'm hoping to generate more interesting and useful conversations, so this will undoubtedly evolve, your feedback is welcome.

If you want to point someone at a very short summary, it would be this tweet:

"people trying to excel at self-taught technical skills are sub-optimal at strategic decisions required for a nebulous conflict, their emphasis should be on team work, and on the strategies of, and constraints on, their adversaries; they should seek inspiration elsewhere"

As a less brief summary, but trying to keep things snappy, my reasoning is as below. A slide by slide summary is too dense, and makes me realise how many ideas I've pushed into the audience's heads in half an hour.

Logical Progression of the talk

Introduction

I have a question - In Cyber Security - if we're all so smart, which we are, and we all work so hard, which we do, why is everything so awful?

To try and figure this out my presentation is an "investigation wall", a set of interconnected ideas and theories where I try and figure out what the solution is to this mystery.

I start with John Kindervag's presentation "Winning the Cyberwar With Zero Trust", which explains the difference between a strategy ( "The Big Idea" ) and the tactical and operational level solutions you use to achieve your big idea. John Kindervag's "Win the War With Zero Trust" can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

So what "big idea" has emerged from the tactics we've chosen?

The main three areas I'm familiar with, as a cyber security practitioner, are:

  • System Administrators / Developers
  • Penetration Testing
  • Incident Response

The strategy in all three areas is based on being the most technically skilful practitioner you can - sysadmins patch as quickly and as thoroughly as they can through knowledge of their operating systems, developers code as securely as they can through knowledge of their languages; penetration testing aptitude and success is based entirely on how technically adept the pentester is and how well they apply the "flaw hypothesis methodology"; and incident responders work on their forensic skills to learn how to spot different attacks, and prepare playbooks to run through once an attack has been detected.

Arguably the way this strategy has come about is because of how we train and practice for each area - all of which is based on self-motivated learning, and a passion for the job that is often described as "eat, sleep, breathe security". Therefore the emphasis is on individual skill and knowledge rather than on wider context.

Where has this choice got us? I cite various references that illustrate the poor state of cybersecurity, and the danger that poor cybersecurity poses to organisations in general and civilisation as a whole.

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

The Global Risks Report 2018 from the World Economic Forum can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

This method of practising reminds me of golf. Excelling at golf is based on individual skill, which is reflected in how a player performs in the game - because success in the game is based almost solely on individual performances. Even in a team game of golf, with a team and against an opposing team, there is very little your team-mates or opponents can do to directly affect your standard of play. And the actual course will be static also, apart from the vagaries of weather.

There is nothing wrong with practising like a golfer if you're going to play golf; however, the practice of cyber security is nothing like the game of golf, so I think we need to look at a different game.

Using this kind of analogy, and cross-pollinating ideas, between areas is generally derided, but if you look hard enough there are examples where this works. In this version of the presentation I just used the idea of TRIZ, of abstracting problems and solutions in order to determine, rapidly, what kind of solution is required.

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and the main British consultancy, as far as I can tell, is here: https://www.triz.co.uk/

So, if we're practising for golf, but not playing golf, what game are we playing?

I argue that our industry feels a lot more like American Football. It is a ridiculously complex and violent sport, with many specialisms, and very much a team game where your success or failure is very dependent on the quality of your team and your ability to work with them, and how you act against and react to your opponent. In addition it's the sport that is closest to actual conflict - and I think cyber security has a lot to learn from wargaming - the simulation of war, and War Studies in general.

( as a side note, a French General, on watching the game around 1916, said something along the lines of "that isn't a sport, that is war" - if you know a good source for that quote do get in touch )

Therefore we should look to learn lessons from a successful American Football team. American Football is the only sport where each team has essentially two squads on it - an Offense for when your team has possession of the ball, and a Defense for when your team does not.

I think that as defenders in cyber security - and even red teamers are looking to improve the performance of the blue team and the survivability of defenders - we should look to the best Defense. Possibly influenced by personal biases, but backed up by many sports facts I'll quote in the novella length version of this description, I have chosen the Legion of Boom, the Seattle Seahawks defense from 2011 to 2017, as an example to follow.

Looking at the central tenets of the team, and the defensive philosophy of the Seattle Seahawks head coach, Pete Carroll ( who has approximately 40 years of experience and an exemplary record ), I pick some of the main lessons from the Seahawks' successful Defense:

First lesson - train how you fight

Because American Football is such a complex game it is necessary to practice complex play calls and formations in advance, and to ensure that each individual knows their responsibility, and everybody else's responsibility, on each play so that they can function as a team.

Because the teams are so large there are enough players for the second and third string players in each squad to form "scout teams". These teams imitate the playing style and formations of upcoming opponents so that both they, and the first string players, understand what is coming up in next week's game, and also are less surprised by any of their opponents' individual styles in the game.

This links into the concept from wargaming of the Caffrey Triangle, showing how a red team - in a red team exercise specifically designed to assist the blue team - should act depending on the objectives of the engagement. The Caffrey Triangle is mentioned here https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ ; I've had it explained to me in person, and we all need to be talking about this a lot more, in both cyber security and wargaming. I argue that in military simulations or exercises the red team or red force is often in the bottom left hand corner of that triangle, just there to give the blue team something to shoot at. Penetration testers work almost solely at the top of the triangle, being the most effective attackers they can be regardless of genuine threats or limitations. I think commonly in cyber security the red force, whatever it is, should operate in the right hand corner, emulating the TTPs of genuine adversaries in order to prepare the blue team for their real world opponents.

Rory McCune is a good person to watch about the limitations of pentesting, the presentation of his that I refer to, Penetration Testing Must Die - Rory McCune at BSides London 2011, is here: https://www.youtube.com/watch?v=MyifS9cQ4X0

As an example of the difference between trying to fix everything, and trying to fix only what our adversaries will exploit, I cite Jeremiah Grossman on the Kenna Security report highlighting that only 2% of vulnerabilities are exploited: https://twitter.com/jeremiahg/status/996469856970027008 . I've got into interesting discussions on how true or untrue that figure may be - watch this space.

I also use the presentation "Playbooks - Common Traps & Pitfalls in Red-teaming" by Andrew Davies and Jon Medvenics from CRESTCon, which is here: https://www.youtube.com/watch?v=bYTrwzFUSSE; they state that a great deal of what they've seen from red teamers is from the "Advanced Penetration Testing" book, so you can key on those specific methods in order to detect attacks.

This issue also reminds me of "The Base of Sand Problem", the RAND report that highlights problems in military modelling/simulations/wargaming which, for me, resonate with issues we face; it can be found here: https://www.rand.org/pubs/notes/N3148.html. This report essentially says that the military modelling and analysis industry has made some crucial mistakes about what it focuses on, which leads to the ineffective use of its resources. In this context I use a footnote that states military victories are based on the ratio of effective forces, not simply on who had the largest force compared to their opponent.

Second lesson - eliminate the big play.

There is not time to explain the Seahawks' use of "Cover-3 with a single high Free Safety", and their general approach of keeping the ball in front of the defenders to ensure the Defense always has another chance to prevent their opponents scoring, so I look at personnel choices.

Most NFL defenses, when choosing personnel, have emphasised their Defensive Line, the first line of defense against an opponent, who line up closest to the "enemy". Carroll has always specifically looked to the Defensive Backs, the last line of defense, most notably the Free Safety position, which is what he played in college.

This is reflected in the NIST Cyber Security Framework, and the five Core Functions. I am old enough to remember when Identify and Protect were the only aspects seen as useful, but slowly we are learning that Detect, Respond, and Recover are at least as important in surviving an attack, rather than believing in the "Defender's Dilemma", that if an attacker breaches us we have immediately lost.

I cite Adrian Sanabria and his presentation "It's Time to Kill the Pentest" from the RSA conference earlier this year: https://www.youtube.com/watch?v=bMkVjDx3cqQ, mainly because he has a great slide on how a hack is a series of steps, not a single event.

I would argue ( I forget if I actually said this during the presentation ) that the emphasis should be on Detect, Respond, Recover - the last line of defense, not the first. I had a very interesting but quick conversation about that with Panaseer afterwards. Unfortunately I was interrupted while on their BrightTalk webinar earlier in the month, and couldn't make time to read the 451 Research report, but I'm intrigued by how their services fit in, or don't fit in, with my current way of thinking.

This ability to recover is important because we all work in "Cyber Resilience" now, where the emphasis is on recovering from a breach, not just on preventing it. It's worth reading the "Cyber Resilience" series on Phil Huggins' Black Swan Security blog, starting here: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/. I emphasise the "Pace of Decision Making" aspect.

This links to John Boyd's OODA loop; OODA loops are described well on Wikipedia here: https://en.wikipedia.org/wiki/OODA_loop ( please pay me to research these concepts ).

Through a description of the OODA loop process - Observe your current situation and identify all the relevant factors, Orient yourself and your adversaries within that space, Decide on the next course of action, and then Act to execute that decision - Boyd argued that by going through this process faster than your opponent, by "getting inside their OODA loop", you could defeat your opponent through speed rather than sheer power. The problem is, as we're defenders, the opponent always starts their OODA loop before we start ours, so how do we catch up?

Final lesson - Out hit your opponent

I reference the "The Base of Sand Problem" again https://www.rand.org/pubs/notes/N3148.html, because it states that first order determinantes of victory in conflict is based on processes, tactics, and strategy - these are harder to define and measure; but I argue we should focus on our own way of thinking, our opponents way of thinking, and crucially how we can affect our opponents' processes, tactics, and strategy.

It is a physical game, it is a collision sport, and there are psychological as well as other gains to be made by simply hitting your opponent as hard as you can.

Also this tallies with the previous aim, to eliminate the big play, as it physically puts the defenders in an excellent position to tackle or otherwise collide with their opponents - but I don't have time to go into this level of detail on the game. For this I use clips from Richard Sherman, Earl Thomas, but mainly Kam "Bam Bam" Chancellor executing the "Shoulder Punch", a Seahawks tackling technique which is as it sounds.

The Seahawks tackling video summarising their techniques is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA; for Chancellor himself, I think this video sums up what he provided in the narrow focus I use, you may recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8

The aim here is to inflict pain on your opponent, and to reduce the speed of their OODA loop. I've learnt here to specifically state that I'm not advocating any kind of "strikeback" methodology, but rather showing that on the blue team we've forgotten that we're facing an opponent, and that we can affect that opponent. The pyramid of pain I refer to is David Bianco's, taken from http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html; it illustrates that the more complex aspects of their tradecraft are of more value to your adversaries, so when you understand them and can act against them, you cause them the greatest amount of pain.

To me the explanation of this comes from Bartle's Taxonomy of player types, there's a good summary here: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types; the "killers" - people who like outwitting, defeating, and demoralising a human opponent - all join the Red Team when they enter cyber security, which means that aggressive and effective approach is lost from blue team strategies.

For this specific "performance" we had just watched Sarka give a great presentation on human weaknesses, yet we ignore the humans who are our adversaries and focus on technical defenses and techniques. The specific slide used a card from Reciprocal Strategies, who can be found here: https://www.reciprocalstrategies.com/resources/brt_cards/.

Haroon Meer has been arguing for more hackers to join the blue team for several years, I show a clip from his Null Con keynote, which can be watched here: https://www.youtube.com/watch?v=2F3wWWeaNaM. Do persevere with the flickering screen.

To inflict that required pain I think deception is key, I'm reminded of Clifford Stoll's "The Cuckoo's Egg" book, and how incident response started with deception. Paul Midian's presentation can be found here: https://www.youtube.com/watch?v=KvksyvF6MN4. For more on deception please refer to my post about my DevSecCon "remix" of these ideas.

But the emphasis here, as an overarching strategy, is not only to think about how you can improve your own abilities and skills, and the strengths of your organisation, but also how you can degrade the effectiveness of your adversary.

END

Questions on supporting evidence are welcome by email or in the comments or even on Twitter, and overall if you've any questions please do get in touch.

And while I realise it's not the most useful of documents, a PDF of the presentation is here: Lessons-CyberTech-4point7-presented.pdf

Friday 28 September 2018

Media Review - 28th September 2018

A useful article on how to use wargames is here https://thestrategybridge.org/the-bridge/2017/3/30/communicating-uncertainty-in-wargaming-outcomes; it explains how to get the most out of running wargames, and how to use their results. I'm a massive advocate of wargaming as a tool, as an effective way to practice certain situations so you operate more effectively when you're in the real version of the same thing. This article reflects the thinking of a friend and colleague of mine, John Curry, on how wargames can be used to understand a situation, and familiarise yourself with it, and can be used to estimate the likelihood of different outcomes, but shouldn't be used to predict outcomes.

"What 5G means for gaming" is here https://www.bcs.org/content/conWebDoc/59796, while the article is worth reading a summary would be that the consitency of connectivity, and the much improved latency, could open up possibilities for gaming on 5G networks which 4G networks weren't capable of. I'm not a mobile gamer myself at all, hell, I'm still working my way through my Xbox 360 "pile of shame", but still interested in where this will go in the next few years - especially combined with how much extra time people will have while on automated transport.

The "The Future of Penetration Testing....", which is here https://www.brighttalk.com/webcast/8325/328949; is worth putting on in the background. The point that really stood out for me was Holly Graceful's - saying that essentially it doesn't matter what you call a penetration test, as long as you and the customer agree.

It goes well with Rory McCune's "Night Of The Living Dead Pentest", from BSides Leeds earlier this year; you can watch the 45 minutes or so yourself on YouTube here: https://www.youtube.com/watch?v=Ndd8irMjUB8. This is a really good summary of the issues with penetration testing and the benefits of testing, and gives a good outline of the way forward. It's also good to see some of the unsayable things being said - i.e. that PTES hasn't been updated in so long. I keep meaning to get more of my thoughts into written form; I might use this as a springboard for some thoughts on penetration testing.

As with many podcasts, the WB40 podcast is one I'll dip into if the subject matter looks particularly relevant or useful. I enjoyed Episode 78, an interview with psychologist Nancy Doyle about neurodiversity and the tech industry - which is here https://wb40podcast.com/2018/09/17/episode-78-neurodiversity/. The interview technique is great, reminding me of Ian Farrar from the Industry Angel in that Matt Ballantine asks a good question and then gets out of the way of the interviewee. Do listen to the podcast if you're interested in neurodiversity in the workplace; from a quick skim, Doyle's report mentioned in the show notes is also well worth your time - either to learn, or to support any points you want to make.

Wednesday 19 September 2018

Presenting tips for new and nearly new speakers

I've a note on presenting guidance I make a point of reading whenever I'm putting something together, I figure it would be useful to others too:

( updated 19th September 2018 )

Recommended reading

Before you start reading, an excellent summary of most advice you'll receive is in Zach Holman's "The Talk on Talks", which is here https://www.youtube.com/watch?v=YVb2GsJHejo ; watch that, it's about fifty minutes long. Then visit Zach's site at https://speaking.io/

Now you've got an overview, concentrate on the overall format using How to give an Effective Presentation: https://qz.com/work/1110377/how-to-give-an-effective-presentation/

This presents the idea of how to structure the talk overall... and how to get your ideas together. I would emphasise this, I still struggle with trying to put too much in, spending days on research, and then realising I have a 90 minute presentation for a fifty minute slot. The sooner you can determine what is and isn't in the talk, the more time you'll save.

If you want to be above average then avoid all of the "anti-patterns" Troy Hunt mentions in this article... a bit of preparation and avoiding very basic pitfalls can take you a long way: https://www.troyhunt.com/speaker-style-bingo-10-presentation/

For a step beyond that, Thom Langford has written a good three part guide to presenting, this is worth working through:

Part 1: https://thomlangford.com/2018/05/18/the-art-of-the-presentation-part-1-of-3/

Part 2: https://thomlangford.com/2018/05/30/the-art-of-the-presentation-part-2-of-3/

Part 3: https://thomlangford.com/2018/07/20/the-art-of-the-presentation-part-3-of-3/

If you want the level beyond that then make the most of the transitions, moving on to the next slide on the right word can make all the difference, as described in this piece. However this does require practice, I've only become this good after repeatedly giving the same presentation four times a day for a week: https://medium.com/@saronyitbarek/transitions-the-easiest-way-to-improve-your-tech-talk-ebe4d40a3257

Confidence tips

Practice. Preferably in front of family or colleagues or friends, but if it comes down to it just practice in a room on your own with the laptop and a timer. That way you know the timing works, you'll at least have an inkling of what slide is coming next as you present - and if you can get any kind of audience they'll give you feedback on what does or doesn't make sense.

If you need confidence just before a performance... power pose with fists clenched... and by "power pose" I mean that rather silly looking stance Tory MPs have adopted recently where they stand with their legs slightly too far apart. In that pose your body will automatically think you're about to enter some kind of conflict and boost you with the right chemicals for a fight.

This idea was mentioned to me by my first mentee for BSides London, just imagine that the audience are penguins... flapping and squawking away, or waddling down to the water's edge for fish. It's silly, and nonsensical, but if it works it puts a smile on your face before you walk out there to face everyone.

And one last point, if you're an introvert, as per this tweet: https://twitter.com/KevinGoldsmith/status/963850794440187904, presenting gives you something to talk about with people, rather than having to navigate the morass of small talk.

Thursday 30 August 2018

Dual Booting Linux and Windows on an Acer Travelmate B117-M

I'm pretty sure no-one subscribes to this blog, which means I'm free to post relatively obscure, relatively technical, articles that will probably save someone a day or two.

I've an Acer Travelmate B117-M, the version with a 64gig eMMC drive with Windows S, with a free option to upgrade to Windows 10 Professional.

I intended to, somehow, fit Windows 10 and Debian Linux into that 64 gigabyte drive ( using a vendor's method of calculating disk space in gigabytes, so actually nearer 58G of space ) and dual-boot between the two. But then I happened across this YouTube video about how to change the battery on the laptop:

Acer TravelMate B1 (B117-M-C4XR) notebook - How to remove battery

and from here I can see there's a 2.5" SSD sized space for the SSD version of this laptop, even though a thin "m2" format drive is shown on that video. As I had a spare 2.5 inch SSD drive, I took off the back of the laptop and tried it; as it was 7.5mm thick it fitted in nicely, and it was seen by the BIOS.

( note 9mm thick 2.5" drives are too thick, which I think eliminates all standard hard disks and thicker SSD drives )

From here I could install Debian, but bear in mind with the B117-M, and the stable distribution of Debian, you will need to download the "iwlwifi" package separately to use the internal WiFi card.

Also bear in mind "unzip" isn't within Debian stable by default, so download the tar.gz file of the extra firmware for Debian.

At this point you might have to go into the BIOS, and disable the "Secure" option under UEFI, to boot the Debian install USB key you made and get the process started.

Once you've made an install key for Debian and installed the operating system, and rebooted... you'll find you can't boot Debian, and if you hit F12 your SSD won't come up in the options. The way to solve this is wonderfully counter-intuitive; basically follow the advice here: https://forum.siduction.org/index.php?topic=6272.0 , just the BIOS related advice from that article, point 5 onwards. So select Secure boot under UEFI, then go back to the Security options and find the efi file in your Debian install, select it, reboot back into the BIOS, disable the secure boot, reboot again, and then you should have Debian as an option. Make sure to save the BIOS settings every time you exit.

Bonus Material

From very limited testing, the device can be turned into a ChromeBook too, check out Arnold The Bat's website here. I recommend using one of the "special" builds rather than the "vanilla" build.

Friday 24 August 2018

Making Clonezilla work on an Acer Travelmate B117-M

Another entry that will help that one person on the Internet who's facing the same problem I have.

You have an Acer Travelmate B117-M, and wish to back it up over its wireless interface to a network drive; using Clonezilla booted from a USB stick, not a DVD.

What to do...

Firstly, boot the laptop, press F2 to take you into the BIOS.

Go to the "boot" option, and change the option of UEFI from Secure to... whatever the other option is, I'm backing up my own device as I type this.

Go to the Clonezilla website, and download the "bionic" version. The different downloads are listed here: https://clonezilla.org/downloads.php

Install it on a USB key by, very carefully, following the instructions on: https://clonezilla.org/liveusb.php

Note - the stable version of Clonezilla loads the right iwlwifi driver for the Intel Wireless 7265 network card, but it doesn't work. Check the output of "dmesg" for further information, but I couldn't see any way of fixing that easily.

Note - the size of USB stick needed is slightly larger than those old 256MB USB sticks you've got lying around... so don't bother.

Boot the laptop, press F12 to take you to the boot options.

Choose your USB key.

Boot into Clonezilla.

Use the alt key, and either F2 or the right arrow, to move to a terminal window. Type "sudo bash" to become the root user.

The instructions for setting up wireless networking are here: https://www.linuxbabe.com/command-line/ubuntu-server-16-04-wifi-wpa-supplicant, but the important lines are:

wpa_passphrase your-ESSID your-passphrase > /etc/wpa_supplicant.conf
wpa_supplicant -c /etc/wpa_supplicant.conf -i wlan0

That will associate you with your local wireless access point.
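Depending on your network you'll probably also want an IP address before going any further; a minimal sketch, assuming dhclient is on the live image and your network hands out addresses over DHCP:

# request an address on the wireless interface
dhclient wlan0
# confirm an address was assigned
ip addr show wlan0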

From there just run clonezilla as normal.

( a couple of extra notes: while trying to solve this I upgraded the laptop's BIOS to version 1.25, which I downloaded and ran from here: https://www.acer.com/ac/en/GB/content/support-product/6660?b=1 ; the Clonezilla USB key only booted when I set it up with an MBR partition using the "o" option in fdisk... or that might have been me forgetting to run "makeboot.sh" )
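For reference, a rough sketch of that MBR-plus-makeboot.sh route - this is a reconstruction rather than a tested recipe, and it assumes the USB stick appears as /dev/sdb, so double-check with lsblk before running anything destructive:

# create a fresh MBR partition table with one FAT32 partition
fdisk /dev/sdb          # "o" for a new MBR table, "n" for one partition, "t" then "c" for FAT32, "w" to write
mkfs.vfat -F 32 /dev/sdb1
# unzip the Clonezilla live zip onto the mounted partition
mount /dev/sdb1 /mnt
unzip clonezilla-live-*.zip -d /mnt
# make it bootable with the script shipped inside the zip
cd /mnt/utils/linux
bash makeboot.sh /dev/sdb1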

Monday 20 August 2018

Media Review - 20th August 2018

Your Whole Life Is Borrowed Time

The latest entry from the useful Raptitude blog, which can be found here: https://www.raptitude.com/2018/08/your-whole-life-is-borrowed-time/. I'd sum it up with this quote:

...it gave me a vital bit of perspective: I happen to be alive, and there’s no cosmic law entitling me to that status. Being alive is just happenstance, and not one more day of it is guaranteed.

This thought instantly relieved me of any angst over that particular day’s troubles: technical issues on my website, an unexpected major expense, an acute sense that I’m getting old.

I realise such entries can be trite, but that doesn't stop them being useful when you need a bit of a boost to get on with things.

Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? A: Because Keynote Speakers Make Bad Life Decisions and Are Poor Role Models

James Mickens' keynote from the recent USENIX security symposium, which can be found here: https://www.usenix.org/conference/usenixsecurity18/presentation/mickens ; using humour and a small number of points, it makes a really well put case for more skepticism in cyber security. There are many excellent points made about how technology, especially machine learning, is amplifying mistakes and biases.

I'm not sure about some of the humour, but I really like that the emphasis is on an engaging presentation, that Mickens gets attention to a very important message by being different, and I'm incredibly envious of the presenting format used - featuring animation and other techniques beyond simple slides.

The concept of "Technological Manifest Destiny" is particularly useful to have available in future conversations...

Systemic Innovation e-zine Issue 196, July 2018

This can be found at http://www.systematic-innovation.com/assets/iss-196-jul-18.pdf. I'm hoping to meet up with Darrell Mann from here at some point; I think TRIZ is a very interesting way of approaching the world, and has the potential to solve a lot of problems in innovative and useful ways. In particular this issue of his company's e-zine was intriguing, most notably the article about changes to how the NHS deals with "frequent callers", the few people who appear most often at A&E. Dealing with their underlying problems has led to huge gains for a particular hospital, and now looks like it'll be rolled out nationwide...

Sunday 5 August 2018

Media Review - 5th August 2018

InfoSec Recruiting – Is the Industry Creating its own Drought?

This can be found at: https://www.liquidmatrix.org/blog/2018/07/16/infosec-recruiting-industry-creating-drought/ . An interesting point of view from someone on both sides of the recruitment process. I don't think the problem Fischer highlights is responsible for the drought, but poor recruiting and evaluation processes certainly don't help.

Why you should have your own black box | Matthew Syed | TEDxLondonBusinessSchool

This can be found on YouTube at https://www.youtube.com/watch?v=MmVCYqs3mko, presented by Matthew Syed. This is a really well put contrast between the growth mindset of the aeronautical industry, and the fixed mindset of the healthcare industry - and how the difference in culture between the two makes such a difference in outcomes. He then expands on that, and makes some excellent points. This is worth fifteen minutes of your time.

A Complete Guide to Getting What You Want

I probably read too many self-help articles as I try and figure out what I actually want to do and where I fit. But contrary to the comment at the start of it, this is a relatively short read: https://www.raptitude.com/2018/06/getting-what-you-want/ , and it steps through the stages of figuring out what you want and how to get it.

Favourite quote, which I must remember to use elsewhere, is

"We’re fearful creatures after all, with an evolutionary impulse to cling to virtually any tolerable status quo, no matter how dull or crappy it is."

The Only Thing You Need to Get Good At

I recently discovered raptitude.com, another useful resource highlighted to me by the wonderful Career Shifters.

This blog post in particular is at https://www.raptitude.com/2017/03/only-thing-get-good-at/ ; it's a very high level explanation of stoicism, which has always intrigued me, and it explains how you should only be concerned with that which you can control. Easier said than done, but worth saying all the same.

As a side note, it highlighted to me my attitudes towards Left wing and Right wing politics: I think the policies of the left wing are far more effective, but their tactics are awful; and while I generally dislike the policies of the right wing for various reasons - most significantly because they don't work - I think their tactics are far more effective. I also think that's a massive over-generalisation, but for now I'll stick with it in practice and see if it survives. That's something I'm tempted to expand on... but of course I've no political influence, and nothing personally to gain from doing so, so that would be a waste of time from a stoic point of view.

From chaotic ripples to complicated waves

Due to my interest in TRIZ I came across Ron Donaldson's blog, and this entry, at https://rondon.wordpress.com/2018/07/23/from-chaotic-ripples-to-complicated-waves/ , was particularly interesting. I like the aim of just enough rules to enable other teams in different areas to follow a successful example, rather than taking something that's worked and trying to make different areas, with different cultures, identical. I also liked that staff happiness, rather than just patient happiness, is seen as a gain - there can be too much emphasis on the "customer", with staff just seen as another resource to be manipulated appropriately.

Sunday 29 July 2018

Media Review - 29th July 2018

Cyber resilience - nothing to sneeze at

NCSC explains the concept of "cyber resilience" using an analogy to the human body's defences: https://www.ncsc.gov.uk/blog-post/cyber-resilience-nothing-sneeze

I'm not sure how far this analogy stretches - i.e. how do the aims of a cold virus compare to the aims of a typical attacker - but the "Prepare - Absorb - Recover - Adapt" feels much more likely to succeed than "Protect Protect Protect... ".

The bendy circus performers who help keep watch at a disused Birmingham school

This BBC report on four property guardians learning circus skills, found at https://www.bbc.co.uk/news/resources/idt-sh/balancing_act , was particularly interesting, and really impressive at how they've approached a difficult economic situation. I'm disappointed that this isn't seen as "adult"-ing, whereas the traits they exhibit: "self-reliance, building networks, learning skills, having fun, financial planning in a difficult environment, thinking unconventionally" as I put it in a tweet, are exactly what adults should aim for IMHO.

Why Diversity Wins

There is a lot of "politics" around diversity, what it means and who it applies to, and everything else that goes with that. I'm mainly interested in the arguments put forward in this sub-four minute video from Everything Is A Remix: https://www.youtube.com/watch?v=4Dn8NuiMADY ; that diversity gives you a better chance at solving complex problems than if you operate in its absence.

Don’t Leave Hungry! Plan a Full Red Teaming Meal

Another well put article from Reciprocal Strategies here: https://www.reciprocalstrategies.com/the-full-red-teaming-meal/ . The main take away is the distinction between:

  • Gegenspiel, or thinking like an opponent; and
  • Kontraspiel, or thinking like a contrarian or devil’s advocate.

I think Kontraspiel is a really useful approach to adopt when looking at a project, or any significant corporate decision... or decision in life for that matter.

Of course neither matches the current definition of Red Teaming used in penetration testing, which is essentially "goal orientated pentesting, mainly technical, with some social engineering sprinkled on top". And a pentester's Gegenspiel will be thinking like a generic opponent, rather than thinking like any particular opponent, but that's a discussion for a different time.

You should read the article, I like the use here of "all-role red teaming" to describe what Reciprocal Strategies offer.

I think I'm naturally inclined towards this kind of analysis, focusing on concepts, looking at overall issues, adopting different points of view and exploring them to see where they take me, and where they take whoever I'm working with and working for... and I think this kind of analysis is incredibly useful in all sorts of situations. However I gather Reciprocal Strategies is having to search for customers, and I'm disappointed to note the Twitter account has 85 followers at the time of writing - considering how smart and how knowledgeable Mark Mateski is that makes me incredibly wary of trying to turn this idea into a business myself.

Monday 23 July 2018

Lessons From The Legion - a summary

Overview

A summary of my presentation "Lessons From The Legion". I'm hoping to give the presentation more often, in order to generate more interesting and useful conversations, so this will undoubtedly evolve, your feedback is welcome.

Alternatively, if you've requested a copy of my slides... I've directed you to this summary instead. For various reasons I'm conscious of how complex copyright law is, and I think I'm just on the right side of it, but I'm not aiming to test that more than necessary. Also the aim is always that the slides help you understand what I'm saying, and help me to remember what to say next; if they work standalone then I'm not presenting my ideas effectively.

If you want to point someone at a very short summary, it would be this tweet:

"people trying to excel at self-taught technical skills are sub-optimal at strategic decisions required for a nebulous conflict, their emphasis should be on team work, and on the strategies of, and constraints on, their adversaries; they should seek inspiration elsewhere"

As a less brief summary, but trying to keep things snappy, my reasoning is as below. A slide by slide summary is too dense, and makes me realise how many ideas I've pushed into the audience's heads in less than an hour, so I've tried to be more logical below.

Logical Progression of the talk

Introduction

I have a question - In Cyber Security - if we're all so smart, which we are, and we all work so hard, which we do, why is everything so awful?

Most presentations will start with an explanation of who the speaker is, their history, and why you should listen to them, and then give you an answer to the technical question they posed. This presentation is more of an "investigation wall", where the investigator links diverse ideas and newspaper clippings and surveillance photos and post-it notes with string to try and reveal an idea.

Also those technical presentations tend to be tactical, and I think the problem is that we have unintentionally decided on a vague strategy based on tactical choices, rather than letting an informed strategic choice decide which tactics we should use.

John Kindervag's presentation "Winning the Cyberwar With Zero Trust" is a good example of thinking at a strategic level and making informed tactical choices accordingly, I specifically mention it here because I borrow some of his slides.

The main three areas I'm familiar with, as a cyber security practitioner, are:

  • System Administrators / Developers
  • Penetration Testing
  • Incident Response

The strategy in all three areas is based on being the most technically skilful practitioner you can - sysadmins patch as quickly and as thoroughly as they can through knowledge of their operating systems, developers code as securely as they can through knowledge of their languages; penetration testing aptitude and success is based entirely on how technically adept the pentester is and how well they apply the "flaw hypothesis methodology"; and incident responders work on their forensic skills to learn how to spot different attacks, and prepare playbooks to run through once an attack has been detected.

Arguably the way this strategy has come about is because of how we train and practice for each area - all of which is based on self-motivated learning, and a passion for the job that is often described as "eat, sleep, breathe security". Therefore the emphasis is on individual skill and knowledge rather than on wider context.

Where has this choice got us? I cite various references that illustrate the poor state of cybersecurity, and the danger that poor cybersecurity poses to organisations in general and civilisation as a whole.

This method of practising reminds me of golf. Excelling at golf is based on individual skill, which is reflected in how a player performs in the game - because success in the game is based almost solely on individual performances. Even in a team game of golf, with a team and against an opposing team, there is very little your team-mates or opponents can do to directly affect your standard of play. And the actual course will be static also, apart from the vagaries of weather.

There is nothing wrong with practising like a golfer if you're going to play golf; however, the practice of cyber security is nothing like the game of golf, so I think we need to look at a different game.

Using this kind of analogy, and cross-pollinating ideas, between areas is generally derided, but if you look hard enough there are examples where this works. In particular the idea of TRIZ, of abstracting problems and solutions in order to determine, rapidly, what kind of solution is required.

So, if we're practising for golf, but not playing golf, what game are we playing?

I argue that our industry feels a lot more like American Football. It is a ridiculously complex and violent sport, with many specialisms, and very much a team game where your success or failure is very dependent on the quality of your team and your ability to work with them, and how you act against and react to your opponent.

Therefore we should look to learn lessons from a successful American Football team. American Football is the only sport where each team has essentially two squads on it - an Offense for when your team has possession of the ball, and a Defense for when your team does not.

I think that as defenders in cyber security - and even red teamers are looking to improve the performance of the blue team and the survivability of defenders - we should look to the best Defense. I classify Defense in American Football, and cyber security, as a "weak link game", where the overall ability of the team is decided by the ability of the worst players on your squad, not the best.

Possibly influenced by personal biases, but backed up by many sports facts I'll quote in the novella length version of this description, I have chosen the Legion of Boom, the Seattle Seahawks defense from 2011 to 2017, as an example to follow.

Looking at the central tenets of the team, and the defensive philosophy of the Seattle Seahawks head coach, Pete Carroll ( who has approximately 40 years of experience and an exemplary record ), I pick some of the main lessons from the Seahawks' successful Defense:

First lesson - eliminate the big play.

There is not time to explain the Seahawks' use of "Cover-3 with a single high Free Safety", and their general approach of keeping the ball in front of the defenders to ensure the Defense always has another chance to prevent their opponents scoring, so I look at personnel choices.

Most NFL defenses, when choosing personnel, have emphasised their Defensive Line, the first line of defense against an opponent. Carroll has always specifically looked to the Defensive Backs, the last line of defense, most notably the Free Safety position, which is what he played in college.

This is reflected in the NIST Cyber Security Framework, and the five Core Functions. I am old enough to remember when Identify and Protect were the only aspects seen as useful, but slowly we are learning that Detect, Respond, and Recover are at least as important in surviving an attack, rather than believing in the "Defender's Dilemma", that if an attacker breaches us we have immediately lost.

I would argue ( and if I remember during the presentation I'll actually say it ) that the emphasis should be on Detect, Respond, Recover - the last line of defense, not the first.

Second lesson - train how you fight

Because American Football is such a complex game it is necessary to practice complex play calls and formations in advance, and to ensure that each individual knows their responsibility, and everybody else's responsibility, on each play so that they can function as a team.

Because the teams are so large there are enough players for the second and third string players in each squad to form "scout teams". These teams imitate the playing style and formations of upcoming opponents so that both they, and the first string players, understand what is coming up in next week's game, and also are less surprised by any of their opponents' individual styles in the game.

This links into the concept from wargaming of the Caffrey Triangle, showing how a red team - in a red team exercise specifically designed to assist the blue team - should act depending on the objectives of the engagement. I argue that penetration testers work almost solely at the top of the triangle, being the most effective attackers they can, when they should operate in the right hand corner, emulating the TTPs of genuine adversaries in order to prepare the blue team for their real world opponents.

Also I stress here that any kind of practice is required, as the discipline of Incident Response is notorious for organisations referring to their plans for the first time, or even writing them the first time, during an actual incident. Constant practice is crucial, especially when we are moving from being in Cyber Security to proposing Cyber Resilience.

Third lesson - know your enemy

I briefly introduce John Boyd's concept of the OODA loop - Observe Orient Decide Act - as a way of understanding how you process information in a conflict, and how "getting inside your opponents OODA loop" by progressing through the steps faster than them, leads to victory.

This takes me to showing plays by Bobby Wagner, a Linebacker with the Seattle Seahawks, and arguably the best player at this position, who has stated that the game is "90% mental" because you can only get as fast or as strong as everyone else. I show a play whereby, even though on defense, even though on the side of the ball that's meant to react to the play, Wagner knows what the offense is going to do and so is able to take advantage of that to disrupt his opponent.

As a side note - this emphasis on researching your opponent, and therefore being so much more confident during a game, has many more references from the entire Legion of Boom, especially their three most well known Defensive Backs. I have some references, and some video clips... when I say there's a three hour version of this presentation waiting to be written I'm not joking.

I link this to the RAND paper from 1991, the Base of Sand Problem, mainly because of the excellent footnote that explains that the effectiveness of forces - their training, logistics, and positions - is much more crucial in deciding who is the victor in a particular conflict than the sheer size of any force.

Also I use references showing that your opponent has a limited number of playbooks, and that learning them is therefore an achievable aim, rather than attempting to defend all assets against all attacks from all possible adversaries.

Fourth lesson - out hit your opponent

The second of Pete Carroll's tenets: it is a physical game, it is a collision sport, and there are psychological as well as other gains to be made by simply hitting your opponent as hard as you can.

Also this tallies with the first aim, to eliminate the big play, as it physically puts the defenders in an excellent position to tackle or otherwise collide with their opponents - but I don't often have time to go into this level of detail on the game.

For this I use clips from Richard Sherman, Earl Thomas, but mainly Kam "Bam Bam" Chancellor executing the "Shoulder Punch", a Seahawks tackling technique which is as it sounds.

The aim here is to inflict pain on your opponent, and to reduce the speed of their OODA loop. I've learnt here to specifically state that I'm not advocating any kind of "strikeback" methodology, but in showing that on the blue team we've forgotten that we're facing an opponent.

I link this to the standard Incident Response methodology, which is based on Gold, Silver, and Bronze Commanders, and is designed for what I'd describe as "non-sentient opponents". Maybe we need to add a "Francium Commander", based on the most dangerous of elements, where an incident would be handed over to someone who would specifically attempt to deceive and disrupt the enemy. This could be achieved through deception, making your opponent so unsure of their context that they are reduced to ITIL type processes to ensure they aren't detected. Also I emphasise that this is a team game, and that sharing adversary TTPs with other defenders assists everyone and builds the size of your team.

Summary

Usually at this point I stress that I'm not sure of my ideas, but that F-Secure's purchase of MWR shows that someone else agrees, at least in general, with some of what I'm proposing.

Also that what I'm advocating, which is only half-thought through at best, is a change in strategy and/or doctrine and/or ideology. These are the most difficult changes to make, and the least liked; organisations prefer simple solutions that state they'll eradicate the problem, regardless of their actual effectiveness. However as many have said, for example Anton Chuvakin of Gartner, the industry does not have enough staff, and already has more than enough products, yet we are still facing more of the same problems.

At this point I summarise all of the above, and kind of finish indecisively to encourage questions rather than proposing I have definitive answers.

END

The latest set of references should be elsewhere on this blog, please scroll down, or up, to find the version related to whichever "performance" you watched.

Combining the logic above with those references is something I should do, but at the moment I'm happy to do that "live" during the performances of this talk. Questions on supporting evidence are welcome by email or in the comments, and overall if you've any questions please do get in touch.

Friday 20 July 2018

Media Review - 21st July 2018

The happy secret to better work

This TEDx presentation on how happiness leads to success - https://www.ted.com/talks/shawn_achor_the_happy_secret_to_better_work - was 12 minutes well spent. In particular how it advocates being happy in order to achieve success, rather than aiming for success, a goal that always moves once you reach it.

I think this kind of concept has massive ramifications for cyber security, a notoriously pessimistic industry. I mean the industry is understandably "glass half-empty" given the challenges it faces, but that doesn't mean those in the industry can't indulge in a little wilful self-deception, or confidence, to improve their abilities and chances for success.

Of course maybe the solution is to have a process as a goal: "always strive for a better job", with the knowledge that you are kind of always succeeding at that goal if you're always striving.

And, of course, that completely goes against everything you'll read about goal setting, which advocates "Specific, Measurable, Attainable, Realistic, Timely" goals; increasingly I think that's suitable for projects, not so much for your personal objectives.

The Utility of War Gaming

This can be found at https://wavellroom.com/2017/11/21/the-utility-of-war-gaming ; I'm "cheating" here slightly because I actually discovered this in November last year, but it came up in the Wavell Room's twitter feed, and as I started reading it, and enthusiastically nodding along, I realised I'd read it before.

Of particular interest is the emphasis on command, and how useful dice are in providing factors you weren't aware of or don't understand, as long as the umpires can explain the effect of the dice then they're just a device, rather than some kind of destiny or fate that decides if you win or lose.

DtSR Episode 302 - InfoSec Superhero Syndrome

This was an episode of the Down The Security Rabbit Hole, which you can download or listen to here: http://podcast.wh1t3rabbit.net/dtsr-episode-302-infosec-superhero-syndrome

I was driving at the time so I didn't take any notes, so all I can say is that this is worth your time. Excellent points on how cyber security people don't scale, and how security practitioners trying to do everything is not only inefficient but leads to burnout. It was just really refreshing to hear something I've been thinking but not really said: that it's OK to admit that you don't know something, and that actually it's better to do so than try to wing it.

A New Approach to Command Post Training

An interesting article from the Wavell Room, a thoughtful website I discovered thanks to a Peter Apps ( https://twitter.com/pete_apps ) tweet; you can find this specific article here: https://wavellroom.com/2018/07/10/a-new-approach-to-command-post-training/.

The article highlights how unrealistic current Command Post training is for the British Army, and the following points really stuck out for me, in relation to my own interest in wargaming, and my investigations into Incident Response training:

  • The unrealistic environment: it appears that these command posts are much more comfortable than those in the field, whereas you want people to be aware how those kind of situations affect their decision making ability.
  • Lack of friction: a common problem with wargames is modelling all the little things, the mis-communications, the misunderstandings, that just make life harder.
  • Steady injects: as a training exercise designer... as you would as an RPG DM, or as a video game designer... you want the exercise to adapt to the skill level of the players, and push them to become better - a predictable stream of injects at a regular pace won't do this.
  • Train tracks: the term used when a predictable set of injects is used. Understandable, as it's easy to create and play, but terrible when you're training people to deal with the unexpected, especially an adversarial force.
  • Failure: to me one of the main points of a wargame is to have a "safe space" where players can fail, that way they learn what does and doesn't work, and they learn their limits, in a space with no consequences.
  • Playing divisions against each other: I love this idea from the article, because it reminds me of TRIZ's "use the problem as the solution" concept ( which I've probably over-simplified ). Training exercises suffer because there is no red team to play against, and they're expensive because you have to run one for every division. So why not have the divisions fight each other, thereby running two exercises for the cost of one?

Wednesday 18 July 2018

Lessons From The Legion - references from my presentation at Cyber London

If all's gone to plan this blog post should appear just as questions are finishing at my presentation at Cyber London ( which was detailed here https://www.meetup.com/London-Cyber-Capital-One/events/252353488/ ).

As before, I've categorised references by type, kind of; I figure that's the easiest way for people to navigate this. They're in alphabetical order, but indexed by "the thing I think I put to the fore when mentioning this", which isn't the most objective criterion. Constructive feedback always welcome - I'm sure there's a better way to list these, but I'm not sure how.

Books:

"Bullshit Jobs" is by David Graeber, there's a description here https://www.penguin.co.uk/books/295446/bullshit-jobs/

The Numbers Game by Chris Anderson covers Strong Link and Weak Link games, and well, actually, I should buy this and read it, but this article covers all you need to know for the level I use it at: http://www.asalesguy.com/soccer-and-messi-basketball-and-lebron-how-one-is-like-sales-and-the-other-isnt/

Blog posts:

First Mover Advantage - Tenable's blog post on how quickly attackers and defenders evaluate vulnerabilities: https://www.tenable.com/blog/quantifying-the-attacker-s-first-mover-advantage ; I've literally grabbed the headline thanks to the pointer to this from Dark Reading: https://www.darkreading.com/prnewswire2.asp?rkey=20180524PH05742&filter=3849

Peak security product - Anton Chuvakin's point on not having enough people is here https://blogs.gartner.com/anton-chuvakin/2018/06/21/is-security-just-too-damn-hard-is-productservice-the-future/ ( at the time of writing I don't expect to be able to mention it, but this Leviathan Security paper is a useful resource for highlighting that there will never be enough people: https://static1.squarespace.com/static/556340ece4b0869396f21099/t/559dada7e4b069728afca39b/1436396967533/Value+of+Cloud+Security+-+Scarcity.pdf ; with a hat tip to Haroon Meer on this podcast: https://securityboulevard.com/2018/07/we-have-the-silver-bullet-for-bs-detection-ciso-security-vendor-relationship-podcast/ )

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

SR-71 Blackbird: this might have made it in as a very quick reference; I thought of that context while listening to this podcast: https://www.lockheedmartin.com/en-us/who-we-are/business-areas/aeronautics/skunkworks/insideskunkworks.html#Episode-1

Presentations:

Blinky Boxes - Frasier Scott's presentation on threat modelling has slides here: https://speakerdeck.com/zeroxten/threat-modeling-the-ultimate-devsecops ; I think this is part of his current repertoire, so best caught live of course - at the time of preparing this I'm not sure how much he'll have used in his talk following mine.

CTFs - The Last CTF Talk You'll Ever Need from DEFCON 25, is here: https://www.youtube.com/watch?v=MbIDrs-mB20

CTRL+Break The OODA Loop by Abel Toro of Forcepoint from BSides London 2018 isn't up yet on their channel https://www.youtube.com/channel/UCXXNOelGiY_N96a2nfhcaDA ; it was on Track 3, I'm hoping that was recorded... or that Abel will be giving the presentation again.

The Cuckoo's Egg reference was inspired by Paul Midian's BSides Glasgow 2018 keynote "Everything You Know Is Wrong" - https://www.youtube.com/watch?v=KvksyvF6MN4

Cyber Defense Threshold - this is from Sean T Malone's presentation "Using An Expanded Cyber Kill Chain Model to Increase Attack Resiliency", the video is here: https://www.youtube.com/watch?v=1Dz12M7u-S8 and the slides are here: https://www.blackhat.com/docs/us-16/materials/us-16-Malone-Using-An-Expanded-Cyber-Kill-Chain-Model-To-Increase-Attack-Resiliency.pdf

Hackers being needed on the Blue Team comes from Haroon Meer's Nullcon Goa 2018 keynote: https://www.youtube.com/watch?v=2F3wWWeaNaM

Ian Fish - Crisis Management - from CrestCON 2018 - is here: https://www.youtube.com/watch?v=R1UOW3xGpZE

Intruder's Dilemma - is mentioned in this from BSides Munich 2018: https://www.youtube.com/watch?v=PQgsEtRcfAA

Penetration Testing Must Die - Rory McCune at BSides London 2011 - is here: https://www.youtube.com/watch?v=MyifS9cQ4X0

Playbooks - Common Traps & Pitfalls in Red-teaming by Andrew Davies and Jon Medvenics from CRESTCon is here: https://www.youtube.com/watch?v=bYTrwzFUSSE

Strategy - John Kindervag's "Win the War With Zero Trust" can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

Reports:

The Base of Sand Problem, the RAND report that highlights the problems in military modelling/simulations/wargaming that, for me, resonate with issues we face, can be found here: https://www.rand.org/pubs/notes/N3148.html

The Cyber Resilience Report from KPMG, that really makes the point that preparation is key, can be found here: https://assets.kpmg.com/content/dam/kpmg/ie/pdf/2017/09/ie-cyber-resilience-2.pdf

The Global Risks Report 2018 from the World Economic Forum can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

Outpost24's report on their survey of RSA attendees can be found here: https://outpost24.com/sites/default/files/2018-05/RSA-2018-Survey-Outpost24.pdf

State of CyberSecurity Report - InfoSecurity Magazine - where they highlight regulation can drive security - can be found here: https://www.infosecurity-magazine.com/white-papers/state-of-cyber-security-report/

Seattle Seahawks and other less cybery references:

The article "Bobby Wagner Can See into the Future" is here: https://www.si.com/mmqb/2017/06/29/nfl-bobby-wagner-seattle-seahawks

Bobby Wagner's PFF rating is from this tweet: https://twitter.com/PFF/status/1005914257563836416 ( as a side note, check out this video on Luke Kuechly, who's also on that list, that's basically team-mates and rivals saying how smart he is https://www.youtube.com/watch?v=umfgvffJ6M0 )

Fewest points allowed - this ESPN article summarises it nicely http://www.espn.co.uk/nfl/story/_/id/17886833/how-seattle-defense-dominated-nfl-five-years-running-nfl-2016

Introduction - the quick cartoon shoulderpunch is taken from this introduction to the game: https://www.youtube.com/watch?v=3t6hM5tRlfA

Kam Chancellor - I think this video sums up what he provided in the narrow focus I use, you may recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8 ( as a side note, while I don't think it's relevant to the analogy, Kam Chancellor appears to have retired http://www.espn.com/nfl/story/_/id/23967587/kam-chancellor-seattle-seahawks-safety-appears-announce-retirement-via-twitter - this is a good video summary of what he provided to the team https://www.youtube.com/watch?v=8SltNCS4Jg0 )

Legion of Boom - there's a nice retrospective that's just a five minute video: https://www.youtube.com/watch?v=N73r5HemB0M

If you want an emphasis on the boom, watch this: https://www.youtube.com/watch?v=xF6OLtz280Y

My main source for Pete Carroll's philosophy, in many senses of the word, is here: https://www.fieldgulls.com/football-breakdowns/2014/2/3/5374724/super-bowl-48-seahawks-pete-carrolls-richard-sherman-marshawn-lynch ; I have a lot of reading to look forward to.

If you want to see just how many players there are on a team then you'll see the Seahawks roster here: https://www.seahawks.com/team/players-roster/

Tackling video - the Seahawks 2015 video summarising their technique is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA

Olivia Jeter, Defensive End for Bladensburg High School, is covered in these videos: https://www.youtube.com/watch?v=yAYS2VnfFi8 and https://www.youtube.com/watch?v=5xD8qjihHf8 . Yes, I do realise that those videos are from 2014, but she provides such great soundbites; I must find out what happened to her.

YouGov's survey on British interest in various sports is here: https://yougov.co.uk/news/2018/01/10/what-most-boring-sport/

Tweets:

Jeremiah Grossman on the Kenna Security report, highlighting that 2% of vulnerabilities are exploited, is here: https://twitter.com/jeremiahg/status/996469856970027008 ; I've got into interesting discussions on how true or untrue that figure may be - watch this space.

Websites:

AIS - The article from Cyberscoop on the DHS's Automated Information Sharing portal being underused is here: https://www.cyberscoop.com/dhs-ais-cisa-isnt-used-jim-langevin/

Bananas - Chiquita using pharmaceutical packaging is detailed here: http://archive.boston.com/business/globe/articles/2007/03/07/yes_we_have_one_banana/

Banks using mobile phone companies ... I had a single reference for this and lost it, so as per my real world presentation, I think I said something generic like "there's many examples of banks talking to many different industries", do get in touch if you find any particularly good ones.

Bartle's Taxonomy of player types is taken from this: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

The Caffrey Triangle is mentioned here https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ ; I've had it explained to me in person, and we all need to be talking about it a lot more, in both cyber security and wargaming.

Cyber Resilience - Phil Huggins' Black Swan Security blog is here: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/

Cyberscape - the number of tools we have is taken from Momentum's CYBERscape: https://momentumcyber.com/docs/CYBERscape.pdf

Dentistry using space technology is here: https://phys.org/news/2010-10-benefits-space-technology-dentists.html

DevSecOps WAF concept is described here: https://www.acrosec.jp/what-is-a-devsecops-waf/?lang=en

Emergency response - the three element model can be seen in some detail here on the College of Policing website: https://www.app.college.police.uk/app-content/operations/command-and-control/command-structures/

Francium - my main inspiration for choosing the element Francium is here: http://www.sparknotes.com/mindhut/2013/09/06/the-worlds-most-dangerous-elements

Full Spectrum Response - yes, it is something I've read about only briefly, and it is for the battlefield rather than as a viable response to your regional office receiving a phishing email. There's a short PDF here: https://www.northropgrumman.com/Capabilities/Cybersecurity/Documents/Events/Datasheets_IA15/IA15_FSO_Datasheet.pdf

HorseSenseUK - Equine Assisted Education - can be found here: http://horsesenseuk.com/

Incident Response, the four stages - I detailed that in this blog post: http://blog.sonofsuntzu.org.uk/post/2017/03/26/Notes-on-Incident-Response-from-the-SC-Congress

Incident Response Timelines - this is taken from the Logically Secure website, and can be found here: https://www.logicallysecure.com/blog/ir-metrics-part-1/

Intruder's Dilemma - I think the first reference to it from Richard Bejtlich is here: https://taosecurity.blogspot.com/2009/05/defenders-dilemma-and-intruders-dilemma.html

MWR - the TechCrunch article I refer to, where F-Secure note the need for offensive capability, is here: https://techcrunch.com/2018/06/18/f-secure-to-buy-mwr-infosecurity-for-106m-to-offer-better-threat-hunting/

Naval - Paul Raisbeck, who uses his naval experience in what is loosely described as management consultancy, can be found here: https://www.linkedin.com/in/paulraisbeck/ , and a relevant piece by him here: https://www.linkedin.com/pulse/what-could-your-business-learn-from-royal-navy-people-paul-raisbeck/

OODA loops are basically described here, but again, please pay me to research these concepts: https://en.wikipedia.org/wiki/OODA_loop

Playbooks - Rick Howard, the CSO of Palo Alto Networks, on the small number of opponents' playbooks: https://www.csoonline.com/article/3207692/leadership-management/exploit-attacker-playbooks-to-improve-security.html

Practice - "any incident response plan is only as strong as the practice that goes into it" is from Mike Peters, Vice President of RIMS, the industry body for Risk Management. Best to search online for that specific quote and use whichever source will look best in your board level presentation.

RASP, being Runtime Application Self-Protection, is described here: https://www.softwaresecured.com/what-do-sast-dast-iast-and-rasp-mean-to-developers/

Strikeback - yes, I have thought it's a bad idea for just over twenty years now: http://seclists.org/firewall-wizards/1998/May/69

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and the main British consultancy, as far as I can tell, is here: https://www.triz.co.uk/

Overall

Sometimes I get the gist of something and use that. If you know any of these ideas better than I do, meaning that I've missed a nuance, or not read an important reference, please do get in touch. I always appreciate constructive corrections.

Tuesday 17 July 2018

Media review, just making notes on things I watched or read

Security Lessons from Dictators - Jerry Gamblin - 44CON2013

As I'm currently all aboard the analogy train I found this particularly interesting: Jerry looks at errors that dictators have made and compares them to errors that cyber security practitioners make. It can be watched on YouTube https://www.youtube.com/watch?v=1Rya1GWOG2w and is worth 30 minutes of your time.

Jerry Gamblin is worth following on Twitter, his account can be found here: https://twitter.com/JGamblin ; and I quote this tweet of his every so often: https://twitter.com/jgamblin/status/845773296410910721?lang=en

Forget About Setting Goals. Focus on This Instead.

This can be read at https://jamesclear.com/goals-systems - basically emphasise the processes in your life and you will reach your goals, rather than choosing long-term goals and then striving to reach them.

I really like this idea, and think it's a good way to approach, well, basically everything. This ties in with the Japanese idea of Kaizen, and the general ideas of Stoicism, as far as I can tell. Concentrate on small, gradual, continual improvements - so it fits in with Agile and DevOps too, but at a really high level. I'm intrigued by where this comparison works or falls apart.

Interestingly I think this would contrast with something like Angela Duckworth's Grit, a book I was rather impressed by earlier this year. Now there's a book I should have written up on here, I might have to read it again.

Why your brain never runs out of problems to find

This makes interesting reading at https://theconversation.com/why-your-brain-never-runs-out-of-problems-to-find-98990 ; "It turns out that a quirk in the way human brains process information means that when something becomes rare, we sometimes see it in more places than ever." A few experiments were run where participants were asked to classify items as threatening, or as blue; over the course of the experiment the number of items matching the original criteria was reduced, but the participants' judgements didn't reflect that. I've read the article but not the paper; the article should give you the gist.

Massive ramifications from this - regardless of changes in absolute terms, does this mean humans will always find a percentage of things offensive, or expensive, or disturbing, or threatening, or....
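
To make the mechanism concrete, here's a minimal, hypothetical simulation - my own sketch, not from the article or the paper: a judge who flags whatever looks "bluest" relative to their recent experience keeps flagging roughly the same proportion of items, even as genuinely blue items become rare.

```python
import random

def run_block(n_items, true_blue_rate, judge_quantile=0.5):
    """One block of trials: each item has a 'blueness' score in [0, 1].
    The judge uses a relative threshold (bluer than their recent median)
    rather than an absolute one - the hypothesised source of the effect."""
    items = []
    for _ in range(n_items):
        if random.random() < true_blue_rate:
            items.append(random.uniform(0.6, 1.0))   # genuinely blue
        else:
            items.append(random.uniform(0.0, 0.6))   # ambiguous / purple
    threshold = sorted(items)[int(len(items) * judge_quantile)]
    flagged = sum(1 for score in items if score >= threshold)
    return flagged / n_items

random.seed(1)
for rate in (0.5, 0.25, 0.1, 0.02):
    print(f"true blue rate {rate:4.2f} -> proportion judged blue {run_block(2000, rate):.2f}")
```

However far the true rate drops, the judged rate stays around a half - which is the "always finding a fixed percentage of things threatening" worry in miniature.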

How to become a Super-Forecaster

This article by Daniel Miessler https://danielmiessler.com/blog/how-to-be-a-super-forecaster/ was an interesting read, about the kind of people who are most proficient at predicting the future, and the qualities they have. I was particularly interested in this because I've always been intrigued by futurism, and in this case I like to think I possess all of the qualities listed. Those qualities are these by the way:

  • They are in the top 20% of intelligence, but don’t have to be at the very top
  • Comfortable thinking in guestimates
  • They have the personality trait of Openness (which is associated with IQ, btw)
  • They take pleasure in intellectual activity
  • They appreciate uncertainty and like seeing things from multiple angles
  • They distrust their gut feelings
  • Neither left or right wing
  • They’re not necessarily humble, but they’re humble about their specific beliefs
  • They treat their opinions as “hypotheses to be tested, not treasures to be guarded”
  • They constantly attack their own reasoning
  • They are aware of biases and actively work to oppose them
  • They are Bayesian, meaning they update their current opinions with new information
  • Believe in the wisdom of crowds to improve upon or discover ideas
  • They strongly believe in the role of chance as opposed to fate

I disagree on a couple of points, but only a couple; it'd be interesting to try this out.
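
The Bayesian point in that list is the easiest to make concrete. A minimal sketch, with entirely made-up numbers of my own rather than anything from Daniel's article: start with a prior, then fold in each new piece of evidence.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability that a claim is true, given the prior and how
    likely the observed evidence would be if the claim were true vs false."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical forecast: "the vendor ships the promised patch this quarter".
belief = 0.30                            # starting guesstimate
belief = bayes_update(belief, 0.8, 0.3)  # evidence: a public beta appears
belief = bayes_update(belief, 0.4, 0.6)  # evidence: the release date slips
print(f"updated belief: {belief:.2f}")
```

The forecaster quality in the list is just the habit of doing this, however informally, every time new information arrives, rather than anchoring on the original number.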

Evolving The Creativity Scan

Taken from the TRIZ Journal, this article is here: https://triz-journal.com/evolving-the-creativity-scan/ ; I found it a cracking read and really intriguing, especially its descriptions of two types of intelligence, and a lot of the criteria for creativity seemed to resonate with me. Further investigation required, as always; I'm very interested in rating myself against the criteria listed.

Challenging local realism with human choices

At https://arxiv.org/abs/1805.04431 - it's been in my list of tabs for ages; it looks incredibly important but complex, and would take several visits to "get my head around".

Project outcomes include closing of the freedom-of-choice loophole, gamification of statistical and quantum non-locality concepts, new methods for quantum-secured communications, a very large dataset of human-generated randomness, and networking techniques for global participation in experimental science.

I'm still trying to figure out a job where someone will pay me to read things like this. Advice welcome.

Sunday 15 July 2018

Lessons from the Legion - references from my presentation at DC151

Further to my presentation at DC151 please find a list of the most relevant references. It's almost all the same as those from earlier meetings, but I did want to highlight what a pleasure it was to present there, thanks to everyone who came, and to those who took part in the discussion afterwards - I've still got a couple of pages of notebook notes to work through.

As before, I've categorised references by type, kind of; I figure that's the easiest way for people to navigate this. Constructive feedback always welcome - I'm sure there's a better way to list these, but I'm not sure how.

Books:

"Bullshit Jobs" is by David Graeber, there's a description here https://www.penguin.co.uk/books/295446/bullshit-jobs/

The Numbers Game by Chris Anderson covers Strong Link and Weak Link games, and well, actually, I should buy this and read it, but this article covers all you need to know: http://www.asalesguy.com/soccer-and-messi-basketball-and-lebron-how-one-is-like-sales-and-the-other-isnt/

Blog posts:

Peak security product - Anton Chuvakin's point on not having enough people is here https://blogs.gartner.com/anton-chuvakin/2018/06/21/is-security-just-too-damn-hard-is-productservice-the-future/

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

Presentations:

Blinky Boxes - Frasier Scott's presentation on threat modelling has slides here: https://speakerdeck.com/zeroxten/threat-modeling-the-ultimate-devsecops ; I think this is part of his current repertoire, so best caught live of course; or seeing as he's in DevOps, it'll have iterated several times already.

CTFs - The Last CTF Talk You'll Ever Need from DEFCON 25, is here: https://www.youtube.com/watch?v=MbIDrs-mB20

CTRL+Break The OODA Loop by Abel Toro of Forcepoint from BSides London 2018 isn't up yet on their channel https://www.youtube.com/channel/UCXXNOelGiY_N96a2nfhcaDA ; it was on Track 3, I'm hoping that was recorded... or that Abel will be giving the presentation again.

The Cuckoo's Egg reference was inspired by Paul Midian's BSides Glasgow 2018 keynote "Everything You Know Is Wrong" - https://www.youtube.com/watch?v=KvksyvF6MN4

Hackers being needed on the Blue Team comes from Haroon Meer's Nullcon Goa 2018 keynote: https://www.youtube.com/watch?v=2F3wWWeaNaM

Ian Fish - Crisis Management - from CrestCON 2018 - is here: https://www.youtube.com/watch?v=R1UOW3xGpZE

Intruder's Dilemma - is mentioned in this from BSides Munich 2018: https://www.youtube.com/watch?v=PQgsEtRcfAA

Penetration Testing Must Die - Rory McCune at BSides London 2011 - is here: https://www.youtube.com/watch?v=MyifS9cQ4X0

Playbooks - Common Traps & Pitfalls in Red-teaming by Andrew Davies and Jon Medvenics from CRESTCon is here: https://www.youtube.com/watch?v=bYTrwzFUSSE

Strategy - John Kindervag's "Win the War With Zero Trust" can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

Reports:

The Base of Sand Problem, the RAND report that highlights the problems in military modelling/simulations/wargaming that, for me, resonate with issues we face, can be found here: https://www.rand.org/pubs/notes/N3148.html

The Cyber Resilience Report from KPMG, that really makes the point that preparation is key, can be found here: https://assets.kpmg.com/content/dam/kpmg/ie/pdf/2017/09/ie-cyber-resilience-2.pdf

The Global Risks Report 2018 from the World Economic Forum can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

Outpost24's report on their survey of RSA attendees can be found here: https://outpost24.com/sites/default/files/2018-05/RSA-2018-Survey-Outpost24.pdf

State of CyberSecurity Report - InfoSecurity Magazine - where they highlight regulation can drive security - can be found here: https://www.infosecurity-magazine.com/white-papers/state-of-cyber-security-report/

Seattle Seahawks and other less cybery references:

The article "Bobby Wagner Can See into the Future" is here: https://www.si.com/mmqb/2017/06/29/nfl-bobby-wagner-seattle-seahawks

Bobby Wagner's PFF rating is from this tweet: https://twitter.com/PFF/status/1005914257563836416 ( as a side note, check out this video on Luke Kuechly, who's also on that list, that's basically team-mates and rivals saying how smart he is https://www.youtube.com/watch?v=umfgvffJ6M0 )

Fewest points allowed - this ESPN article summarises it nicely http://www.espn.co.uk/nfl/story/_/id/17886833/how-seattle-defense-dominated-nfl-five-years-running-nfl-2016

Introduction - the quick cartoon shoulderpunch is taken from this introduction to the game: https://www.youtube.com/watch?v=3t6hM5tRlfA

Kam Chancellor - I think this video sums up what he provided in the narrow focus I use, you may recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8 ( as a side note, while I don't think it's relevant to the analogy, Kam Chancellor appears to have retired http://www.espn.com/nfl/story/_/id/23967587/kam-chancellor-seattle-seahawks-safety-appears-announce-retirement-via-twitter - this is a good video summary of what he provided to the team https://www.youtube.com/watch?v=8SltNCS4Jg0 )

Legion of Boom - there's a nice retrospective that's just a five minute video: https://www.youtube.com/watch?v=N73r5HemB0M

If you want an emphasis on the boom, watch this: https://www.youtube.com/watch?v=xF6OLtz280Y

My main source for Pete Carroll's philosophy, in many senses of the word, is here: https://www.fieldgulls.com/football-breakdowns/2014/2/3/5374724/super-bowl-48-seahawks-pete-carrolls-richard-sherman-marshawn-lynch ; I have a lot of reading to look forward to.

If you want to see just how many players there are on a team then you'll see the Seahawks roster here: https://www.seahawks.com/team/players-roster/

Tackling video - the Seahawks 2015 video summarising their technique is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA

Olivia Jeter, Defensive End for Bladensburg High School, is covered in these videos: https://www.youtube.com/watch?v=yAYS2VnfFi8 and https://www.youtube.com/watch?v=5xD8qjihHf8 . Yes, I do realise that those videos are from 2014, but she provides such great soundbites; I must find out what happened to her.

YouGov's survey on British interest in various sports is here: https://yougov.co.uk/news/2018/01/10/what-most-boring-sport/

Tweets:

Jeremiah Grossman on the Kenna Security report, highlighting that 2% of vulnerabilities are exploited, is here: https://twitter.com/jeremiahg/status/996469856970027008 ; I've got into interesting discussions on how true or untrue that figure may be - watch this space.

Websites:

Bananas - Chiquita using pharmaceutical packaging is detailed here: http://archive.boston.com/business/globe/articles/2007/03/07/yes_we_have_one_banana/

Banks using mobile phone companies ... I had a single reference for this and lost it, so as per my real world presentation, I think I said something generic like "there's many examples of banks talking to many different industries", do get in touch if you find any particularly good ones.

Bartle's Taxonomy of player types is taken from this: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

The Caffrey Triangle is mentioned here https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ ; I've had it explained to me in person, and we all need to be talking about it a lot more, in both cyber security and wargaming.

Cyber Resilience - Phil Huggins' Black Swan Security blog is here: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/

Cyberscape - the number of tools we have is taken from Momentum's CYBERscape: https://momentumcyber.com/docs/CYBERscape.pdf

Dentistry using space technology is here: https://phys.org/news/2010-10-benefits-space-technology-dentists.html

Emergency response - the three element model can be seen in some detail here on the College of Policing website: https://www.app.college.police.uk/app-content/operations/command-and-control/command-structures/

Francium - my main inspiration for choosing the element Francium is here: http://www.sparknotes.com/mindhut/2013/09/06/the-worlds-most-dangerous-elements

HorseSenseUK - Equine Assisted Education - can be found here: http://horsesenseuk.com/

Incident Response, the four stages - I detailed that in this blog post: http://blog.sonofsuntzu.org.uk/post/2017/03/26/Notes-on-Incident-Response-from-the-SC-Congress

Incident Response Timelines - this is taken from the Logically Secure website, and can be found here: https://www.logicallysecure.com/blog/ir-metrics-part-1/

Intruder's Dilemma - I think the first reference to it from Richard Bejtlich is here: https://taosecurity.blogspot.com/2009/05/defenders-dilemma-and-intruders-dilemma.html

MWR - the TechCrunch article I refer to, where F-Secure note the need for offensive capability, is here: https://techcrunch.com/2018/06/18/f-secure-to-buy-mwr-infosecurity-for-106m-to-offer-better-threat-hunting/

Naval - Paul Raisbeck, who uses his naval experience in what is loosely described as management consultancy, can be found here: https://www.linkedin.com/in/paulraisbeck/ , and a relevant piece by him here: https://www.linkedin.com/pulse/what-could-your-business-learn-from-royal-navy-people-paul-raisbeck/

OODA loops are basically described here, but again, please pay me to research these concepts: https://en.wikipedia.org/wiki/OODA_loop

Playbooks - Rick Howard, the CSO of Palo Alto Networks, on the small number of opponents' playbooks: https://www.csoonline.com/article/3207692/leadership-management/exploit-attacker-playbooks-to-improve-security.html

Practice - "any incident response plan is only as strong as the practice that goes into it" is from Mike Peters, Vice President of RIMS, the industry body for Risk Management. Best to search online for that specific quote and use whichever source will look best in your board level presentation.

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and the main British consultancy, as far as I can tell, is here: https://www.triz.co.uk/

Overall

Sometimes I get the gist of something and use that. If you know any of these ideas better than I do, meaning that I've missed a nuance, or not read an important reference, please do get in touch. I always appreciate constructive corrections.

Adam Shostack discussing threat modelling on BrakeSec podcast 2017-36

This is a summary of what Adam Shostack said on an episode of the BrakeSec security podcast that I've only just made time to listen to. As the BrakeSec ( Brakeing Down Security Podcast ) page says "Adam Shostack has been a fixture of threat modeling for nearly 2 decades. He wrote the 'threat modeling' bible that many people consult when they need to do threat modeling properly."

This isn't a transcript, just me making some typed notes, corrections or comments welcome.

The link to the appropriate page is here: http://brakeingsecurity.com/2017-036-adam-shostack-talks-about-threat-modeling-and-how-to-do-it-properly

The link to the podcast is here: http://traffic.libsyn.com/brakeingsecurity/2017-036-Adam_Shostack-threat_modeling.mp3

Different threat modelling methods are:

STRIDE: It's a bad taxonomy, but it's useful as a mnemonic. It stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service, Elevation of privilege. It helps you think of how each endpoint or data flow or connection could be attacked.
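
To show what using it as a mnemonic looks like, here's a small hypothetical walk-through of my own - the prompts are illustrative, not an official checklist, and not Adam's wording: take one element, say a login data flow, and ask one question per letter.

```python
# Hypothetical STRIDE prompts for a single data flow ("browser -> login API").
STRIDE_PROMPTS = {
    "Spoofing":               "Can a client pretend to be another user or service?",
    "Tampering":              "Can the request, or the stored credential, be modified?",
    "Repudiation":            "Could a user plausibly deny having performed a login or change?",
    "Information disclosure": "Can credentials or session tokens leak to the wrong party?",
    "Denial of Service":      "Can someone exhaust the endpoint, e.g. via lockouts or resource use?",
    "Elevation of privilege": "Can a normal user end up with admin rights through this flow?",
}

def threat_prompts(element):
    """Yield one prompting question per STRIDE category for a given element."""
    for category, question in STRIDE_PROMPTS.items():
        yield f"{element} / {category}: {question}"

for line in threat_prompts("browser -> login API"):
    print(line)
```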

Trike: Asset-centric, has a spreadsheet, it's its own methodology.

PASTA: Has seven steps, it's promoted as a "risk centric system", Adam describes it as useful for a consultant because it describes interview steps at the start and comes to risk at the end.

DREAD: Don't use it; it "is a lovely acronym and a bad risk-management approach". You assign each category a 1-10 rating and average them out, with no guidance on how the ratings are given.
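
As a minimal sketch of why the averaging is weak - hypothetical numbers of my own, not from the episode - two very different threats can land on exactly the same score, because nothing anchors what a "2" or a "10" means:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Classic DREAD: average five 1-10 ratings into a single number."""
    return (damage + reproducibility + exploitability + affected_users + discoverability) / 5

# Hypothetical ratings for two very different threats:
catastrophic_but_obscure = dread_score(10, 2, 2, 10, 1)   # hard to find, devastating if found
annoying_but_everywhere  = dread_score(2, 8, 8, 2, 5)     # trivial impact, trivially exploited
print(catastrophic_but_obscure, annoying_but_everywhere)  # both 5.0
```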

Overall, the aim of this is to find threats, not to rate them.

Tuesday 3 July 2018

Gareth Southgate looking to other sports and areas for tactics and ideas

Just a brief summary of the articles I've found showing that Gareth Southgate has sought knowledge outside of his specific area:

BBC 26th June 2018 - https://www.bbc.co.uk/sport/football/44616567 - interesting that the Seahawks and the use of set-pieces are specifically mentioned.

Telegraph 26th June 2018 - https://www.telegraph.co.uk/world-cup/2018/06/26/gareth-southgate-fuelled-englands-world-cup-bid-inspiration/ - a useful summary of just how many other sports Southgate has referred to, notably the way NFL stars are presented to the media.

See also this from the Guardian https://www.theguardian.com/football/2018/jun/25/england-set-pieces-world-cup ; this from MyNorthwest in the USA http://sports.mynorthwest.com/477017/keen-to-embrace-us-sporting-ideas-southgate-revives-england/? ;

Hopefully the England team does well enough that I can use this quote from Southgate: "One of the reasons some of our guys have travelled is to see how the NFL operate because we don't have to do things the way they've always been done, we can try different things that work" ( my emphasis ) - from http://www.espn.co.uk/football/england/story/3371770/england-boss-gareth-southgate-looks-to-super-bowl-for-inspiration

And a note to myself, if I rewrite my current presentation with more soccer references, Sir Bobby Robson is very quotable: https://www.bbc.co.uk/sport/football/44605562 ; this from Alan Shearer:

As a player, I always knew there would be opportunities at set-pieces, if not for me then for one of my team-mates.

At Newcastle, Sir Bobby Robson would tell us "there is always one dope who falls asleep" and we would try to pick out the defender who would let his side down.
