
- Q & A Follow-Up- Control System Cyber Security 2021 Annual Report: The Future of Industrial Security
By Andrew Ginter, VP Industrial Security, Waterfall Security Solutions January 19, 2022 We hosted a (CS)²AI Online™ seminar on January 12, 2022 that focused on the (CS)²AI - KPMG Control System Cyber Security Annual Report 2021. Here is a bit about the event: This session presents key findings from the (CS)²AI-KPMG 2021 Annual Control System Cyber Security Report. Each Report is the culmination of a year-long research project led by the Control System Cyber Security Association International, and draws on input from our 21,000+ global membership and thousands of others in our extended (CS)² community. It is based on decades of Control System (CS) security survey development, research, and analysis led by (CS)²AI Founder and Chairman Derek Harp and Co-Founder and President Bengt Gregory-Brown, and backed with the support and resources of our Global Advisory Board members, the (CS)²AI Fellows, our Strategic Alliance Partners (SAPs), and many other SMEs. We asked key questions about personal experiences on the front lines of operating, protecting, and defending Operational Technology (OT) systems and assets costing millions to billions in capital outlay, impacting as much or more in ongoing revenues, and affecting the daily lives and business operations of enterprises worldwide. Over five hundred and fifty respondents answered our primary survey, and many others participated in the numerous secondary data-gathering tools we run periodically. This pool of data, submitted anonymously to ensure the exclusion of organizational politics and vendor influences, has offered insights into the realities faced by individuals and organizations responsible for CS/OT operations and assets beyond what could fit into this Report. We hope the details we have selected to include serve the decision-support need we set out to answer. 
Speakers: Derek Harp: (CS)²AI Founder and Chairman; William Noto: Director OT Product Marketing, Fortinet; Andrew Ginter: VP of Industrial Security, Waterfall Security Solutions; Brad Raiford: Director, Cyber Security, KPMG in US. As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. **************** Many questions came in that there was no time to address in the recent (CS)²AI webinar giving a preview of the annual report / survey results. Here are some questions I would really like to have addressed - all of them on the theme of "looking forward." That is: what does today's survey (and other context) tell us about the future of industrial security? Cloud / Internet / Remote Connectivity First, let's look at three closely related questions: (1) What makes the Internet of Things so susceptible to being compromised? (2) How do you see the future of OT security with the emergence of cloud services in ICS and the increase in remote connections due to Covid? (3) Where does the trade-off between convenience (remote access) and security (protecting the ICS) balance out? Which technologies or products can prevent remote exploits or additional cyber attack vectors? To start with, all data flows from outside control-critical networks into those networks represent attack vectors. This is because all cyber-sabotage attacks are information, and every information flow can encode attacks. 
So, when we look at online communications - out to Internet-based remote access laptops, cloud services, or vendor support websites - too many people assume that encryption makes us secure. They think encryption gives us "secure communications." This is a mistake. In fact, encryption & authentication give us a degree of protection from Man-in-the-Middle (MitM) attacks. Encryption does nothing to protect us from compromised endpoints. If malware has taken over our remote access laptop, or the cloud service that monitors and controls our ICS "edge" devices, or the trusted vendor's website, then encryption buys us nothing. The attack information comes at the ICS targets INSIDE the encrypted connections we have open to the compromised endpoint on the Internet. To question (1) - attacks inside cloud/Internet connections are the single biggest new cyber-sabotage risk with the IIoT. This is a huge impediment to adoption of cloud services or the IIoT in many enterprises, and indeed in entire industries. To questions (2) and (3), we deal with this new risk by asking the right questions. If we ask the wrong questions, we get meaningless answers. The right questions include: a) What is the benefit of cloud/Internet/remote connectivity? Almost always, the benefit is increased efficiencies that reduce the cost-per-output of the industrial process, or that reduce the time-to-output and so indirectly reduce the cost-to-output. "Convenience" is never the driver. When was the last time you heard a large industrial enterprise say "our top priority this year is increasing convenience for our employees and contractors"? This doesn't happen. Almost always, the benefit and motive for cloud/Internet/remote connectivity is increased efficiencies / cost reductions. b) Next question - with the benefits of connectivity clearly established, what are the costs? Cost is tricky. Think about it. 
Nowadays most industrial sites already have a security program that these new cloud/Internet/remote connections have to fit into. That security program already mitigates certain risks and accepts other risks. The level of residual risk the organization is willing to accept is something the organization has already decided on, and acted on, and deployed security solutions and procedures for. So when we connect to the Internet for cloud / IIoT or remote access services, we have to understand what new risks we add to that residual risk mix. And then we have to ask how much we will have to spend to change our security program to once again reduce all of our risks back to the point we've decided is acceptable. Cloud/Internet/remote connections increase risks materially. If we are not careful, the cost of reducing total residual risk back to the level we've decided is acceptable can exceed the efficiency benefits we hoped to gain from the new automation and connectivity. c) The last question we should ask is what alternatives there are to these very expensive new security measures. An alternative that many sites are deploying is unidirectional gateway technology - between edge/ICS systems and cloud/Internet systems. Most of the time, all or nearly all efficiency benefits of cloud/Internet/remote connections come from data that flows OUT of the control-critical network. A unidirectional gateway supports that flow, and physically prevents any cyber attack from pivoting from the cloud/Internet/remote laptop back into the ICS target. The bottom line - neither cloud/Internet connectivity, nor remote access, nor convenience are ends unto themselves. Nobody says "the top priority for my large industrial enterprise this year is increased connectivity." Connectivity is a means to an end. The end is efficiency. 
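The outbound-only data flow described above can be illustrated with a minimal software sketch. To be clear: a real unidirectional gateway enforces one-way flow in hardware (typically with optics), not in software; this example only illustrates the direction of the data flow, and the tag names and addresses are hypothetical.

```python
import json
import socket
import time

# Hypothetical enterprise-side historian replica address (illustrative only).
REPLICA_ADDR = ("127.0.0.1", 9009)

def encode_reading(tag, value, ts=None):
    """Serialize one process-data point for outbound replication."""
    return json.dumps({"tag": tag, "value": value, "ts": ts or time.time()}).encode()

def send_reading(sock, tag, value):
    """Push one reading OUT of the control-critical network.

    The sender never reads from the socket, so this channel offers no
    inbound path an attacker could use to pivot back into the ICS.
    """
    payload = encode_reading(tag, value)
    sock.sendto(payload, REPLICA_ADDR)
    return payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = send_reading(sock, "pump_01.flow_rate", 42.7)
sock.close()
```

The design point is the asymmetry: monitoring data leaves the control network, and nothing - not even acknowledgments - comes back through this channel.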
Modern approaches such as unidirectional gateways give us the benefits of cloud connectivity, without the security risks and the associated costs of material changes to security programs. Compliance Another long question had to do with compliance, arguing that compliance limits innovation, and that "by tightening the hands of developers, scientists and engineers, the company is limiting how far they can go and invent or discover new things." For starters, one point I made in the webinar comes from an upcoming Industrial Security podcast with Suzanne Black of Network Security Technologies. She points out, rightly, that security programs are worthless without compliance. Compliance programs measure whether the business is doing what that business decided had to be done security-wise. If nobody complies with that security program the business so carefully created, then the program is not reducing risks, is it? When it comes to security vs. innovation, I recommend episode #25 of the Industrial Security Podcast. The guest was Kenneth Crowther, a product security leader at GE Global Research. Kenneth fights the security vs. innovation fight every day - he works with engineers to embed security capabilities into GE products. His conclusion: you have to watch what's coming out of these innovators very closely to figure out the right time to intervene and start inserting security into their designs. Start too early and, yes, you slow innovation. That means your competition comes out with product before you do, and you lose the first-mover advantage. But intervene too late, and it can take enormous time and effort to insert security into a design after the fact, again losing first-mover advantage. So yes, security slows innovation, but lack of security renders great innovations unmarketable. So compliance experts in innovative companies have to walk a fine line. 
Security Program Cost One last question: "Can control system cyber security programs be applied to companies in developing countries that have critical SCADA systems, but do not have the means to invest a lot of money the way companies in the United States or Europe do? Many cybersecurity brands do not see it as profitable to work in several countries, due to the size of their companies. What are your recommendations?" There are a couple of answers to this question. One is that cybersecurity concerns only arise when industrial operations have been automated with computers - usually to reduce costs. My earlier point applies here - when organizations anywhere in the world deploy automation, no matter how much money they have, they need to look at benefits vs. costs. They need to compare the efficiency gains of the automation to the cost of deploying security programs strong enough to keep cybersecurity risks at an acceptable level. If an organization can't afford the security, well, they should reconsider deploying the automation. That said, though, there is a real lack of advice out there as to how "poor" organizations can secure their systems. To help make progress in this arena, Waterfall Security has volunteered me to work with a government agency right now to put some advice together for small water utilities - ETA for the report is Q3/22. Even in wealthy countries, the smallest water utilities might have fewer than 5,000 customers, no IT people on staff, and certainly no industrial security people on staff. But these same utilities constitute critical infrastructure. I mean, if a hacktivist decides to "take revenge" or something on an unsuspecting population, which is a better target for making lots of people sick - a large, well-defended water system, or a couple of tiny, poorly defended ones? I won’t go into detail, but let me say that the principle of the report and its advice is the same as above. 
If we want a security program that maintains residual risks at a level appropriate to critical infrastructures without spending a lot of money, then we must be prepared to give up at least some of the least-valuable benefits of indiscriminate automation and its associated connectivity. Further Reading That’s probably enough for now. Anyone who would like to follow up with me one-on-one is welcome to connect with me on LinkedIn, or submit a “contact me” request at the Waterfall website. And for more information about Waterfall Security Solutions or Unidirectional Security Gateways, please do visit us at https://waterfall-security.com
- Q&A Follow Up with Jules Vos: Deciphering the Value of Zero Trust & CARTA in Operational Technology
By Jules Vos, Head of OT Cyber Security Services - NL at Applied Risk - Critical Infrastructure Made Secure August 2021 We hosted a (CS)²AI Online™ seminar on August 26, 2021 that focused on Deciphering the Value of Zero Trust & CARTA in Operational Technology. Here is a bit about the event: IT and OT are increasingly becoming one and the same entity, and are approaching a common set of business goals and objectives for the future of many industries. Driven by the growth of the Industrial Internet of Things (IIoT), Industry 4.0, and new business opportunities presented by digital transformation, many organizations in the energy sector are already on the IT/OT integration journey and embracing the benefits, as well as the risks, associated with such business models. This integration introduces new dynamics, especially for IT and OT cybersecurity teams, and a consolidation of responsibility for strategy. In light of emerging cyber threats, a proven security approach beyond traditional defense in depth is becoming a necessity for many organizations. Modern concepts that have been gaining traction over the last few years are Forrester’s Zero Trust model and Gartner’s Continuous Adaptive Risk and Trust Assessment (CARTA).
• What are Zero Trust and Continuous Adaptive Risk and Trust Assessment (CARTA) in OT?
• Why are these models game changers for OT?
• What are the key benefits, and how do you embrace this journey for your OT?
• Case study: applying zero trust to include IIoT and OT at a major energy company
Speaker: Jules Vos is a forward-thinking industrial cyber security expert with over 30 years of outstanding experience in engineering, consulting, and industrial automation. With a hybrid skill set spanning detailed control system engineering (DCS/SIS) and consultancy, Jules has been involved in a number of complex oil and gas production and power generation environments, in addition to cyber security and standardisation processes. 
Jules is an ICSJWG panel member and has collaborated closely with EUROSCSI and the Dutch NICC cyber security initiatives. As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. ******************* To view the full recording of his talk, please visit https://www.cs2ai.org/sponsored-seminars. Question: You state OT monitoring is important. Do you have experience with these tools, and how would you connect them to the SOC? Answer by Applied Risk: Basically, any feasible kind of monitoring should be considered - the more monitoring the better. Options are:
1) Use vulnerability-management tools that catch anomalies, like Nozomi, Claroty, CyberX, Forescout, Tenable, etc. These tools can integrate (at the API level) with SIEM solutions (LogRhythm, Splunk, ArcSight, etc.).
2) Collect syslog files from devices such as network devices (firewalls, routers) or non-Windows machines like Linux.
3) Collect Windows log files using WMI/WEF/WEC types of solutions.
4) Use smart attack-path vulnerability scanners to find blind spots or network configuration weaknesses (Skybox, Tufin). These tools are not OT-specific but are commonly used in IT. Because the path from the internet to OT often uses IT networks, they may be useful. These tools often utilize firewall monitoring consoles as well, like Palo Alto Panorama.
5) Virus scanner (McAfee, Symantec) orchestrators (e.g., ePO) integrated with SIEM.
Question: Thanks for hosting the event. If we are interested in learning more, how would you recommend we do so? Books, videos, etc.? 
Answer by Applied Risk: There is a lot to be found on the internet about Zero Trust: Gartner CARTA, Forrester ZT, and Microsoft ZT articles. Many solution providers also have insightful articles about the subject. Question: A key balance in cyber security is enabling the business to continue to do business. Does ZT reduce the ability of a business to do business, and if it does, how do you limit that impact? Answer by Applied Risk: This is a very good and fundamental question. Cyber security must enable the business and help business continuity, so the measures taken shall not be a blocker. Of course, measures like strict identity management - meaning giving up the commonly used ‘OT group accounts’ - may be perceived as annoying or a blocker. But they shouldn’t be. Identity and access management shall be architected in such a way that it makes the use of named users easy for the business. Next to that, awareness sessions shall be held for the business to explain the rationale behind zero trust and behind identity management. I haven’t come across a street where everybody is using the same front door key, so why would that approach be acceptable for critical production processes? Question: In what ways can a Zero Trust Architecture simultaneously improve security through more authentication/authorization/observations/interventions, yet at the same time potentially reduce security through the acceptance of riskier endpoints whose risk we underestimate? Answer by Applied Risk: One of the key elements of zero trust is strong segmentation. If riskier endpoints means, for example, obsolete ones (e.g., Windows XP), they will be put in a separate segment to manage access best. A fundamental part of any architecture is to understand the risks and design against business objectives, business criticality, and the technical capability of components in that architecture. Question: How do we ensure engineers agree to zero trust tactics, i.e., 
provide them enough comfort that it will not interfere with essential or critical functions (safety, loss of control, etc.)? Answer by Applied Risk: First and foremost: ZT doesn’t introduce new solutions in the OT per se. It improves the use of existing features like identity management. Monitoring solutions are all proven in use, so they are not new either. Engineers in general lack deep cyber security knowledge and cyber security risk understanding, so education is the first step to take. Next, solution designs have to be developed collaboratively. OT engineers are very disciplined in how they design hardware (e.g., cabinets, auxiliary rooms), safety controls (although the safety system key switch is often not well managed), and sometimes also physical access. But in general, they are far too relaxed when it comes to digital access and controls, because the risks are not well understood. This needs to change. Question: Do you see Zero Trust as something you achieve and are done with, or is it a constant journey that will continue to change? Answer by Applied Risk: ZT will definitely be a journey. The principle will remain the same; however, the solutions will develop rapidly. Maybe in the future, based on these principles, we will move to self-controlling connections between devices, so network controllers (like firewalls) become redundant. That would be the ideal world. Question: What are the unique challenges of the industrial segment regarding the adoption of zero trust and CARTA? Answer by Applied Risk: The introduction of stricter identity management in the OT, and integration with the corporate identity management system (based on zero trust connections), will be the main challenge. Question: Can zero trust be applied to existing legacy systems? If yes, can you share your best practices? Answer by Applied Risk: Yes, it definitely can. Applying network segmentation is one thing. Identity management can be applied by implementing named users in each and every system. 
So get rid of group accounts as much as possible. Next, extract user IDs from these systems (manually or in an automated way) into a central management system and manage identities from there. Preferably incorporate this data into the corporate identity system, but if this is not yet feasible, manage identities in the OT, so that if people move or leave the company, the identity can be removed from OT systems. Next to that, network vulnerability monitoring can already be applied (see the earlier reply, including potential solutions). Question: Do you think all the cloud providers in the market today think about ZT and CARTA? Any examples you can share of those initiatives? I know of Microsoft working on Defender XDR. How about AWS and GCP? Answer by Applied Risk: Yes, they do; however, I don’t have specific examples. Please keep in mind that ZT is very much about how we as end users design our IT and OT. Basically, all solutions are available; however, as end users, we have to architect ZT to our needs. Question: Do you see a prioritized list of areas to start? Network segmentation vs. user identity (ACLs, least privileges), etc.? Answer by Applied Risk: It may have become clear from my previous answers that my focus is very much on architecting strong identity management and network segmentation. Identity management is complex and requires a lot of time to design the right solution. So don’t underestimate this, but it is fundamental and really needs to be protected against current cyber threats. Question: How often are manufacturers’ devices checked for security and spying? Answer by Applied Risk: Manufacturers continuously update their vulnerability databases and check devices for new vulnerabilities. However, it is up to end users to ensure that continuous monitoring and remediation of vulnerabilities, and measures against new threats, are being executed. The governance and evergreening are the end user’s accountability, and contracts with suppliers need to be managed. 
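The identity-consolidation steps from the legacy-systems answer above - extracting user IDs from each system, flagging shared group accounts, and building a central inventory - can be sketched in a few lines. The system names, account names, and group-account naming convention below are purely illustrative assumptions, not data from any real deployment.

```python
# Hypothetical per-system account exports from OT systems (illustrative).
SYSTEM_ACCOUNTS = {
    "dcs_node_a": ["j.smith", "operator", "a.jones"],
    "scada_hmi_1": ["operator", "engineer", "j.smith"],
}

# Assumed naming convention for shared/group accounts to phase out.
GROUP_ACCOUNT_NAMES = {"operator", "engineer", "admin"}

def consolidate(system_accounts):
    """Build a central identity inventory and flag group accounts.

    Returns (inventory, flagged): inventory maps each named user to the
    systems where the account exists, so the identity can be removed
    everywhere when a person moves or leaves; flagged lists the
    (system, group account) pairs to replace with named users.
    """
    inventory = {}
    flagged = set()
    for system, users in system_accounts.items():
        for user in users:
            if user in GROUP_ACCOUNT_NAMES:
                flagged.add((system, user))
            else:
                inventory.setdefault(user, []).append(system)
    return inventory, flagged

inventory, flagged = consolidate(SYSTEM_ACCOUNTS)
```

In practice this central list would feed (or ideally live in) the corporate identity system, as the answer suggests; the sketch only shows the extraction and flagging step.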
Question: Does Zero Trust also apply to IoT, and more specifically to the Edge Computing model? Answer by Applied Risk: Absolutely. If IoT is really used as IoT, meaning a direct internet connection into the cloud, ZT and strict identity management are essential. Remember that an IoT device is also an identity that needs strict management. Question: Could Zero Trust be applied to brownfield OT (given its rigid and conservative nature), or is it only applicable to new deployments? Answer by Applied Risk: Definitely. Segmentation, strict identity management, vulnerability monitoring, logging, and integration with a SOC/SIEM, as examples, have been done in many brownfield cases. These activities are not easy and need comprehensive design and preparation, but they are critical and necessary. Question: Are you familiar with the Open Process Automation alliance, and how would you consider it to be suitable for Zero Trust implementation? Answer by Applied Risk: OPAF has adopted IEC 62443, so I don’t see an issue with OPAF and Zero Trust. Remember, Zero Trust is very much about architecting a ZT OT using existing technology. ZT is not a technical ‘solution’ but an architecture and a way of working and managing your OT. Question: Operationally, if the business does not have a SOC operation (as you mentioned already), how would operations staff react to cybersec issues even if monitoring tools generate an alert? Who on the shop floor can understand the alert and decide how to react? As security tool developers, we need to understand our audience to develop appropriate tools. Answer by Applied Risk: Companies and OT management need to understand that investments in OT cyber security are inevitable. Incident and event management procedures and processes need to be developed, amongst many other things. 
It could help to develop smart tools that help shop floor users or OT cyber security focal points, if no SOC is implemented, to run effectively through the incident/event management process, also providing guidance on what to do to reduce the impact as much as possible - so that you don’t need to be a specialist to contain an incident as quickly as possible. Question: Is it possible to do threat evaluation by correlating OT environment risk with user risk? If yes, any recommendations? Answer by Applied Risk: This is something for the future, I think; I haven’t seen this implemented yet. But yes, CARTA, for example, is based on the adaptive-risk principle, meaning that users in a certain environment or circumstances could be trusted more or less, so they get more or fewer privileges. This approach will definitely be developed further in the future and may replace or enhance role-based access control (RBAC) or attribute-based access control (ABAC), approaches which are difficult to implement and maintain. Question: Can the process adopted during a Zero Trust strategy for IT be used or referred to while defining it for OT? Answer by Applied Risk: I don’t fully understand this question. IT and OT must collaborate and further integrate. OT continuing to stay independent and ‘isolated’ will degrade security levels instead of protecting the OT. Digitalization is moving fast, and so is the need for OT data and optimization. There is much in common between IT and OT despite the substantial differences. So combining effort and architecting a zero-trust-based integrated IT-OT is the way forward, in our opinion. Question: Is it true that OT is changing fast, with greater IoT online and in the cloud? I think Colonial Pipeline (CP) was a perfect example of why OT should attempt to maintain some isolation! By isolation, I don't mean complete disconnection, but isolation and zero trust operation seem to be key. 
We want access to monitor and do supervisory control, but not a lot of IoT operation run in clouds. And IMO the idea of digital twins hosted in the cloud is :-p. Answer by Applied Risk: Colonial Pipeline clearly was not an OT issue; failing billing (office IT) systems forced the company to stop production. Zero Trust indeed is the way to go, in my opinion, when it comes to integrating IT and OT. It means that nothing is allowed unless explicitly approved. This applies to all OT elements, including the inevitable cloud integration. IoT, but also specific OT functions like advanced process control, as well as services like condition monitoring, will more and more be cloud based. Question: Preventing lateral movement is important, but we have to develop technology to prevent exfiltration in general. There are too many ways for botnets to do command and control from the internet once a quorum of IoT devices has been co-opted. We've seen this especially with router devices (some of which have been in operation way beyond their expected lifetimes). Old HW/SW solutions are a concern when they've been networked but are not being updated against attacks. Answer by Applied Risk: Fully agree. This is why end users must develop a comprehensive governance framework and operating model to manage compliance and devices and keep the OT evergreen. Cyber security must be a part of daily operations.
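The "nothing is allowed unless explicitly approved" principle from the answer above can be sketched as a default-deny allowlist check. The identities, destinations, and ports below are made-up examples; a real deployment would enforce this in network and identity infrastructure, not in application code.

```python
# Explicitly approved (identity, destination, port) triples (illustrative).
ALLOW_RULES = {
    ("historian_svc", "cloud_ingest", 443),
    ("eng_workstation_07", "plc_subnet_gw", 502),
}

def is_allowed(identity, destination, port):
    """Zero-trust stance: deny by default, permit only explicit approvals."""
    return (identity, destination, port) in ALLOW_RULES

checks = [
    is_allowed("historian_svc", "cloud_ingest", 443),   # explicitly approved
    is_allowed("historian_svc", "plc_subnet_gw", 502),  # never approved, so denied
]
```

The design choice worth noting is the absence of any "deny rules": everything not named in the allowlist is refused, which is what distinguishes this stance from perimeter models that enumerate what to block.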
- Q&A Follow-Up with Rick Kaun: Navigating the New TSA Directive for Pipelines
By Rick Kaun, VP Solutions at Verve Industrial Protection October 2021 We hosted a (CS)²AI Online™ seminar on September 22, 2021 that focused on Navigating the New TSA Directive for Pipelines. Here is a bit about the event: Navigating the new TSA directive for pipelines (and other future industry targets) - lessons learned from a regulated industry. The recent increase in ransomware events, coupled with one of the targets being a large pipeline company, has compelled the TSA to issue a new cyber security directive. This means many OT organizations are now scrambling (some more than others) to stand up a multi-disciplined security program for a very diverse, distributed OT environment. This looks and feels a lot like what the power industry was confronted with when NERC CIP was first introduced, so we, as security practitioners, can learn a great deal from an industry that has already run down this path. Challenges in understanding scope, standing up multiple security initiatives, organizational changes for responsibility, maintenance and response activities, and most notably day-to-day maintenance and compliance can be significant obstacles for operating companies to overcome. Join us to review a number of security learnings around setting up and maintaining an OT security compliance program, such as:
• A multi-disciplined approach is key - treating individual security tasks as silos will create gaps, increase effort, and decrease efficiency
• Remediation is a key consideration - simply mapping vulnerabilities or enabling perimeter/network monitoring is just a drop in the bucket; we need to be able to reduce risk and attack surface as well as react to emerging situations
• Monitoring - as risk is reduced and new threats emerge, the current risk status is always in flux. 
Being able to monitor and report on current status, changes to the threat landscape, or progress/compliance are key components of a sustainable program
• Automation - the more of these tasks and insights that can be automated, the better. OT staff is spread too thin, and traditional OT risk-reduction approaches are far too manual to provide meaningful and consistent risk management
As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. ******************* QUESTION: What are the cyber risks facing Phasor Measurement Unit (PMU) devices in the power sector, and how can these cyber attacks be mitigated? ANSWER: This is a very specific question that I do not have a very specific answer for, other than to suggest that ALL OT/IIoT devices are gaining much more scrutiny from both ‘good’ and ‘bad’ guys. Any class of ‘embedded’ asset (or asset in general) can be a target for anyone looking to do harm. The explosion of IIoT devices and connectivity in general has supersized our technical footprint, and therefore our security debt as well. Most IIoT devices rush to market with ‘problem solving’ and ‘ease of access/connectivity’ as primary promises to the consumer. What is a distant consideration (if present at all) is security. So we see many organizations wanting to stitch data together, but doing so in a less-than-secure way. So, to answer your specific question - what can be done? Work with your vendor to provide secure-by-design solutions. 
Add data (if possible) from these systems, and from the networking gear connecting them, to a SIEM or alerting tool set. Use this feedback to tune your endpoint deployment and incident response programs and, above all else, build security language into your procurement language and projects, as well as back to the original vendors. QUESTION: Awareness of Zero Trust and Secure Access Service Edge concepts is growing across industries. What parts of the TSA directives can be addressed through a Zero Trust/SASE implementation (likely in the 2nd-generation approach)? ANSWER: Interestingly enough, the second TSA directive relative to cyber security explicitly calls out the need for Zero Trust. The specific ask is generic - they say ‘Implement a Zero Trust policy that provides layers of defense to prevent unauthorized execution’ and then point to specific details in the directive document. And Zero Trust, according to Palo Alto, “is not about making a system trusted, but instead about eliminating trust.” What the end user needs to know is that this is a concept that can and should be applied to as many policies, procedures, and technologies as possible. Specific examples include network segmentation and firewall rules, and remote access technologies and policies (i.e., implement two-factor authentication; minimize applications/permissions on the ‘jump server’; include logging and monitoring of that system, the subnet it is on, and the logged-in users; etc.). This concept needs to be incorporated into every layer of security throughout an entire program, from inventory to protection, detection, and response/recovery. QUESTION: This is a great model and case study. What happens when the OT team and the IT team do not get along? I have seen client environments where the two teams do NOT collaborate - demarcation at the air gap. Tips on getting past that situation? ANSWER: This is not an ideal situation, but it is unfortunately all too common. 
Worst case, you can have a ‘negotiation’ whereby you do a RACI chart or an accountability matrix that shows who gets to (has to) do or decide on what topic. For example, IT can monitor the landscape for vulnerabilities (i.e., what is in the wild), but just for identification. OT then needs to take the baton to map known risks to their assets and come up with a plan. IT can be expected to help understand/navigate the actual risk and possible compensating controls (i.e., what can we do if we can’t actually patch?), but then OT needs to roll out the remediation. Ideally this type of forced collaboration *should* lead to a better working relationship over time as both sides get to know one another better. Maybe starting with an exercise to understand the other perspective would work too? We hosted a whole webinar about it here if you want to dig in more. The reality, though, is that you need organizational support. Meaning the senior management needs to identify, facilitate and, if necessary, support a collaborative environment. We wrote about that recently too, since so many organizations are currently examining their manual patchwork efforts and wondering how to plan a better solution.

QUESTION: Any specific thoughts for medium to smaller companies who may not have nearly the budget of enterprise-sized companies? It can be a bit daunting on smaller budgets. Why do you state that moving IT to an OT security role will not work?

ANSWER: There are two questions in here! First – small and medium companies can absolutely participate without enterprise-wide budgets. Though I must put in the obligatory plug for educating the finance department that our mission is to ensure the safe, reliable, expected operation of the facility. The very facility that makes the company money. We can get money for spare equipment (sometimes very expensive equipment) because it relates to uptime. Why can’t we see the same correlation to cyber? Anyway.
Small/medium companies can be smart and build automated tools into a multi-phased approach over multiple budget cycles. They can also find a growing list of professional (managed) services that cater specifically to OT. These options help to move security along in increments without breaking the budget. Really, what you need is a view of what you are trying to achieve (i.e., what does ‘done’ look like) and then to break that journey into multiple, achievable phases. And where you can invest up front in tools that provide returns on multiple fronts, or that introduce OT-safe automated remediation and management, you really get the benefit. Remember, a technology without an owner is a waste of money!

Second question: I did not necessarily say moving IT to an OT security role would not work; what I wanted to point out was that both disciplines (IT and OT) are very specialized and can be very technical. To expect a single person to be an expert at both is asking too much. When I built my team at Matrikon I only had traditionally trained IT types to draw from and had to teach them the basic nuances of OT. Essentially – do no harm, and do not touch unless OT says you can. My point was simply that if you want the best possible solution you are better off pairing an IT expert with an OT expert to work together towards an OT-safe program.

QUESTION: How have OT organizations reacted to the TSA Cybersecurity Directive? Acceptance? Reactive?

ANSWER: The answer to this is yes. All of the above, and some others in between. Like most regulatory initiatives, there are leaders and laggards. A select few are looking at this with acceptance and building appropriate responses.
(Read: not just a knee-jerk reaction to tasks like “Within 120 days, implement and complete a mandatory password reset of all passwords within OT systems, including PLCs”.) While this does need to be done, it is not a one-time task (at least not in the context of building a repeatable, sustainable program); it is part of what should be a new policy/procedure. And to incorporate this, along with many other existing or pending controls, the opportunity is now to build a proper maintenance program. Unfortunately, some do take the approach of ‘I don’t want to be first, I don’t want to be last – I just want to put this behind me and move on’.

QUESTION: What are your thoughts on the deployment of NERC CIP versus the Pipeline Guidelines? Do you see any possible lessons learned? Specifically the prescription of security functions.

ANSWER: I do indeed see many parallels. And the biggest challenge NERC CIP had (and Pipelines likely will have) is in maintenance. (There are a host of other challenges NERC CIP had around enforcement, language, in-or-out-of-scope questions, etc. We saw large power companies physically and logically split their generation units into smaller, independent units to avoid megawatt thresholds!) But the single biggest lesson Pipelines needs to learn is that whatever path they choose to meet these requirements, they really need to build in a maintenance plan. Expecting already overburdened OT staff to add inventory, user, software verification and patch/remediation tasks to their list is not sustainable. Automated tools, management support and budget are all fundamentally required. Just like a safety program!

QUESTION: Is it advisable to have an in-house central team or a third party?

ANSWER: In house is good if you can afford it and can find the team. Third-party expertise and offerings continue to grow, so they are a good option if that is a better fit for your organization.
However, be sure to articulate in your procurement and contract language exactly what is or is not being provided, and know that in the NERC CIP world, in house or contracted, mistakes or oversights are the fault of the owner/operator.

QUESTION: What is Rick's take on alternative measures in Section 5 of the directive, and how can we use them if we can't meet the intent of a directive?

ANSWER: Alternative measures at first glance appear to me to be what NERC CIP built in as ‘Technical Feasibility Exceptions’. Essentially this is a ‘we can’t technically do this on this specific asset’ exemption. Usually you need to prove what you can’t do and why; for example, legacy or low-functioning networking gear not being able to provide logging data. The second thing these types of exemptions require is a plan/path to remediate the exception at the next possible opportunity. You also need to submit what you are doing as compensating controls in lieu of the intended directive. So if you can’t patch a system for a specific risk, but you can block the port/service it targets upstream at a firewall or layer 3 switch, then that needs to be submitted as an alternative.

QUESTION: What is the consequence of non-compliance with the TSA directives for owner/operators?

ANSWER: At the moment there is not a formal fine/violation/audit function for pipelines, but given that the TSA is already a regulatory body and has other powers, it can’t/won’t be long before some form of enforcement is added to the mix.

QUESTION: What help, if any, can due care and due diligence on the TSA directives provide an owner/operator where we are not able to fulfill all directives but have a program in place to fix that?

ANSWER: I think this is an extension of the question about alternative measures listed above. In general, more regulatory programs are looking for you to take the intent and do your best to achieve it.
Specific, granular tasks/technologies are not typically prescribed, but doing nothing due to a lack of budget or support is not an option. Those who show due care and due diligence, and tie existing measures as well as alternate measures back to directive specifics, should be in a pretty good spot to pass a ‘compliance’ check.

QUESTION: My question is on the interpretation of TSA fines. If we show due diligence in following the TSA directives though we are not able to fulfill all of a directive, what is Rick's take on that?

ANSWER: Again, this is about alternate measures. What I can say I have seen from NERC CIP is that if your organization is doing all that is reasonable (meaning you do free up someone’s time to change all passwords, or to research, purchase and implement multi-factor authentication technologies) and you have a timeline for completion, you should be OK. What is not likely to be well received is if you were to try to defer multi-factor (as an example) to Q1 or Q2 of next year because there are immediate operational projects underway. They have given direction and timelines, and only technical limitations are likely acceptable excuses for non-compliance.

QUESTION: Does Rick envision that the TSA will issue more directives, taking a cue from NERC CIP, and that this will become a 2nd-generation compliance program from the regulator?

ANSWER: Crystal-ball type of question here. My bet is yes. My sense is the TSA is merely starting with this specific set of directives. You can see within them (like the documentation of privileged accounts) there are requirements for regular review. This is indicative of the expectation that pipeline companies need to be building a repeatable program. Not just change passwords and add multi-factor authentication once and never look back.

QUESTION: Patching timelines are very aggressive: 35 days for testing and 35 for implementation of security updates on OT assets. How does Rick see that being achieved, keeping operational disruption/outages in mind?
ANSWER: For NERC CIP it is a bit more nuanced than that. And not everyone even patches. If you dig into the details of the NERC CIP language it can get complex, but the general process is as follows: within 35 days of a patch being released, you need to first review it for applicability and assess its potential impact on your specific organization. You then have another 35 days to either deploy it or deploy compensating controls in lieu of the patch. All of this needs to be documented. Now, some NERC CIP entities deferred all patches and pointed to strong network controls (i.e., data diodes, physical security, etc.) to justify the lack of patching, but that is not a long-term view (in my honest opinion).

QUESTION: What is the importance or the place of OPC UA for systems and security convergence across the industry?

ANSWER: Any security tool or standard is welcome and would be an improvement. And historically, OPC broke a lot of security by virtue of its function of tying together otherwise unconnected systems. However, as technology continues to be expanded upon and proliferates within operating environments, technologies like OPC or IIoT appliances and apps absolutely need to be selling their virtues alongside their security capabilities. For all of you owner/operator types reading this: PLEASE put minimum compliance security language into your procurement language and your project specifications going forward. Otherwise the minimum-compliant functional bid will be cost-effective but not likely secure.

QUESTION: There is no one-size-fits-all "standard" solution to security, so what does regulation do except add change to an already difficult job?

ANSWER: Great question. And all security practitioners always say ‘compliance is not security’. I am not in favor of ‘the stick’ and would prefer ‘the carrot’ myself, but the challenge is that many organizations just do not take security seriously. So does regulation help get more organizations moving towards improved security? I hope so.
But what it can do is at least provide a guideline as to what organizations *should* be doing within security. Again, it is not ideal, but I have been doing this for 20 years now, and for every one proactive security organization I see, I see three that are half as good, three that have not started but are dipping their toes in the water, and another dozen who have not and will not do anything. (Until they are hit by ransomware, perhaps?)
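The two 35-day windows described in the NERC CIP patching answer above can be captured in a small deadline tracker. This is an illustrative sketch only: the function and field names are hypothetical, and real CIP compliance involves far more (applicability review, documentation, evidence). It simply shows the review-then-act timeline arithmetic.

```python
from datetime import date, timedelta

# Per the process described above: 35 days to review a released patch for
# applicability and assess its impact, then another 35 days to either deploy
# it or deploy (and document) compensating controls in lieu of the patch.
ASSESS_WINDOW = timedelta(days=35)
ACTION_WINDOW = timedelta(days=35)

def patch_deadlines(release_date: date) -> dict:
    """Return the assessment and action deadlines for a released patch."""
    assess_by = release_date + ASSESS_WINDOW
    act_by = assess_by + ACTION_WINDOW
    return {
        "released": release_date,
        "assess_applicability_by": assess_by,  # review + impact assessment
        "deploy_or_mitigate_by": act_by,       # patch OR compensating controls
    }

deadlines = patch_deadlines(date(2022, 1, 12))
print(deadlines["assess_applicability_by"])  # 2022-02-16
print(deadlines["deploy_or_mitigate_by"])    # 2022-03-23
```

A tracker like this only helps if every patch decision is also documented; the dates are the easy part.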
- Q&A Follow-Up with Mark Bristow: Developing & Leading a Top ICS Incident Response Team
By Mark Bristow, Branch Chief, Cyber Defense Coordination (CDC) at the Cybersecurity and Infrastructure Security Agency (CISA), (CS)²AI Fellow August, 2021 We hosted a (CS)²AI Online™ seminar on August 11, 2021 that focused on Stop Tomorrow's Crisis Today - Developing and Leading a Top ICS Incident Response Team. Here is a bit about the event: Incident response can be one of the most challenging times a process may face. The key to success is pre-coordination, preparation and training. (CS)²AI founding fellow Mark Bristow will take you through strategies for setting up and training your ICS incident response capability to make sure you are ready for this challenging day. With the right staffing model, incident response plan, pre-arranged internal and external partnerships, pre-built mitigation strategies and the right frame of mind, responding to an OT cyber incident can be effectively managed. Mark has worked on hundreds of incident response efforts impacting or threatening process control environments in his long career with CISA's Threat Hunting teams (formerly ICS-CERT). Speaker: Mark Bristow is Branch Chief, Cyber Defense Coordination (CDC) at the Cybersecurity and Infrastructure Security Agency (CISA). He previously served as Director of the US Department of Homeland Security's (DHS) National Cybersecurity and Communications Integration Center (NCCIC), responsible for the incident response efforts of the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) and the United States Computer Emergency Readiness Team (US-CERT).

*******************

On August 11th, 2021, (CS)²AI hosted me in a conversation about developing and leading incident response teams. In the webinar we covered how incident response can be one of the most challenging times an organization may face, along with some keys to success and strategies for setting up and training your ICS incident response capability to make sure you are ready for this challenging day. You can see the presentation in full in the member library (https://www.cs2ai.org/member-resource-library), but we had far more questions than we had time to answer. We decided to take some of the best questions we were not able to get to and post them in blog format to continue this important conversation.

QUESTION: Can Mark comment on his guidance on incident disclosure: what MUST be disclosed, what might need to be disclosed, and what might be able to be kept internal? There is an intersection of ethics and business reputation that may need to be dealt with, and also specific compliance requirements depending on the client, i.e., a federal client vs. a commercial client.

ANSWER: This is a really complicated topic and there are no "one size fits all" solutions. Each organization needs to internally decide what, if anything, it is required to disclose, what it plans to disclose, and to whom. Business and reputational risk must be carefully weighed and agreed on by management. Not having a pre-built communications plan for common incident scenarios is one of the biggest mistakes I see organizations make, and trying to build such a plan during an incident leads to rushed and not fully contemplated decisions. In my view, more disclosures are needed and details of incidents need to be shared with the broader community.
Too often, organizations are hit by similar activity because others have not shared their experiences. This is where disclosure to external organizations like the Cybersecurity and Infrastructure Security Agency (CISA) or an Information Sharing and Analysis Center (ISAC) can be really helpful in communicating the relevant tactics and observables in a way that keeps the identity of the organization confidential. Ultimately, many incidents go undisclosed. I recently completed a survey of OT/ICS practitioners for SANS where this topic was explored more in depth. Most respondents indicated that they had an incident in the last 12 months, with many organizations having multiple incidents with operational impacts. Outside of this and (CS)²AI's own annual surveys and reports, I'm not aware of many of these incidents being publicly acknowledged or disclosed. Until we get over the stigma of reporting, and view public incidents with empathy and understanding instead of snide remarks and scorn, incidents will continue to be under-reported and future victims will continue to miss out on the hard-earned lessons of those already impacted.

QUESTION: Do you have suggestions for practice lab environments?

ANSWER: Building out an ICS lab is an important step in training and maturing your incident response team. The best plan for building out a lab is to use equipment representative of your current, as-built process so you can test how your response tactics will perform on the equipment you have in the field. I like to recommend taking virtual copies of the Windows-based systems in your ICS and standing them up in a virtualized environment to aid in your lab setup. While the system will not have all of the expected I/O for the process, this will provide a mostly realistic environment for how an adversary may view your process control network, and will allow you to test detections without the investment of setting up an entire mirror process.
If your environment isn’t yet ready to build out a fully replicated lab, you can work with external organizations such as the ICS Village to leverage labs already built in the community to accelerate your lab buildout. QUESTION: How can we build relationships with other security professionals in order to share intel? ANSWER: This is definitely a key challenge faced by many and can be solved in a few key ways. The first is to always ask questions. This may be tough for many but you’ll find that even when approaching ICS cyber “legends” almost all are willing to support you and help so don’t hesitate to reach out. Anyone not willing to help out or answer a reasonable question isn’t really worth YOUR time anyway. Joining communities like (CS)²AI is also a great way to meet others of like mind in the industry. Conferences and events can really help build your network of trusted partners as well even while most are currently virtual. The best advice is to not be afraid to ask questions of others and be receptive to others when you are asked. QUESTION: What changes would you recommend specifically to Corporate Cyber Incident Response plan to ensure it has correct inclusions for OT? ANSWER: If you have a corporate IR plan, you are off to a great start! The next step is to ensure that OT considerations and operations are reflected in the plan. I like to run a table top exercise (TTX) with the IT IR team and a few people from operations. It’s best to use whatever is in the news recently so ransomware is a perfect scenario. Try running the IT plan on an incident impacting OT and identify the gaps that emerge (there should be a few). Then take a look at any safety plans and emergency plans that the OT team already have on the shelf, and evaluate how the new plan can incorporate and compliment the existing safety plans. Finally add some OT specific scenarios to your overall IR plan to ensure that when you have an incident you are not starting from a blank page. 
QUESTION: What recommendations do you have for performing red team exercises?

ANSWER: Red teaming is an important step in the maturity of an ICS IR team; however, to be effective it requires a level of maturity from the IR team and a red team that is proficient in emulating adversary behavior in an OT network. Make sure you have clearly defined exercise goals and rules of engagement, and that you've accomplished some prerequisites: a comprehensive, continuous asset identification program, a developed collection framework to support your ICS security monitoring, correlation of security monitoring with process data, integration with IT monitoring, and a robust risk management program. Without these, emulating a high-order adversary will not be particularly effective, and resources would be better invested in ensuring the fundamentals and security posture are prepared for such an exercise. Finally, for an effective exercise it's best to conduct it in an environment that replicates your production control system network; some tips are covered above (Question 2, on building a lab environment).

QUESTION: Do you have (or can you point me to) a standard/document that can be used to design the turtle mode you described? Also, do you have an estimate of what percentage of the industry is already using this technique?

ANSWER: I'm not aware of anything out there that describes "turtle mode", but I know other ICS security experts have a similar concept in their repertoire. Developing a defensible cyber position is really something that needs to be tailored to the specific process, threat model, and management decisions on acceptable impacts from a defensible position. In the end, "turtle mode" is about having a plan to temporarily reduce your cyber risk surface area while minimizing process impacts. Some key things to consider are: 1) What connectivity can you temporarily disable safely?
2) Can you disconnect all pathways to the internet from the ICS? 3) Are there functional elements that might be able to be temporarily disabled (perhaps with an impact to efficiency)? 4) Can you temporarily limit the accounts that access the ICS from on or off site? 5) Can you temporarily move to a secure out-of-band communications framework? 6) Can you temporarily limit process changes to ensure your baselines are validated?

QUESTION: Where do you see plans fall apart the most? Is it right at the start, or is there a common point along the plan execution where you see them fall apart the most? A bit of an abstract question, but any insight would be great.

ANSWER: There are two areas where I see organizations stumble: communications and experience. Most IR engagements begin to fall apart when it becomes necessary to communicate with partners, customers or the public, and few IR plans include a communications plan as a key component. Every organization believes that all aspects of an incident can be handled either "in-house" or with existing resources; this is almost never the case for an effective response. Finally, often motivated by the above, organizations attempt to leverage only internal resources without the needed expertise and experience to handle an incident. This often leads not only to the response not being completed correctly, but to the destruction of evidence or unnecessary impact to the process/business. It's OK to admit you are in over your head; organizations who do this well know what they can handle in-house, know who they can call when they can't, and have the self-awareness to know the difference.

QUESTION: When looking for partners, what are the key skill sets that you feel are hardest to find?

ANSWER: Find partners who are strong where you are weak. If you have a really great host analysis team, make sure your partners complement you with strong network analysis capabilities.
Perhaps you have a really mature team that doesn't have the time to build and maintain an ICS testing lab. The community has really grown over the last few years and there are some really great organizations out there who can fill a lot of gaps. The hardest part is making an honest assessment of your capabilities and where you need to reach out for help.
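As a thought experiment, the "turtle mode" questions discussed above lend themselves to a pre-built playbook, so decisions are made before an incident rather than during one. Everything below is hypothetical: the action items, field names, and approval flags are invented for illustration, not drawn from any standard.

```python
from dataclasses import dataclass

@dataclass
class TurtleAction:
    """One pre-planned step for temporarily shrinking the cyber risk surface."""
    description: str
    reduces: str          # which risk surface this shrinks
    process_impact: str   # expected operational cost of doing it
    approved: bool        # pre-approved by operations/management?

# Hypothetical playbook entries, mirroring the checklist questions above.
PLAYBOOK = [
    TurtleAction("Disable vendor remote-access VPN", "external connectivity",
                 "vendors must phone in changes", approved=True),
    TurtleAction("Block all ICS-to-internet pathways", "internet exposure",
                 "no cloud telemetry while active", approved=True),
    TurtleAction("Freeze non-essential process changes", "baseline drift",
                 "deferred efficiency tuning", approved=False),
]

# During an event, execute only the actions management has already approved.
for action in (a for a in PLAYBOOK if a.approved):
    print("EXECUTE:", action.description)
```

The point of the structure is the `approved` flag: negotiating acceptable process impacts with management ahead of time is what makes the position defensible.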
- BOOK Synopsis: Security PHA Review for Consequence Based Cybersecurity- Jim McGlone & Edward Marszal
By Jim McGlone, Chief Marketing Officer at Kenexis Consulting Corporation, Co-author of Security PHA Review. We are looking forward to hearing from Jim McGlone on September 15, 2021 at the OT Cyber Risk - Taking it Down symposium, 1:00PM - 5:00PM EDT. During a cybersecurity project at an oil refinery in Europe, the ideas used in this book became a reality. The project was ultimately being led by IT-focused people, and the industrial control systems work was subcontracted to our company. Everyone on the project wanted to do what they knew how to do, and I recognized that the safety functions and the industrial control system were at great risk. Trying to explain to them that they were focused on the wrong thing was frustrating. Local engineering didn't want us there, and the control systems were all cross-connected and essentially a flat network for an entire refinery and all the auxiliary processes. This facility couldn't even shut its blast doors, so it was frustrating. What really pointed at the problem was the traditional cybersecurity risk calculations. The team wasn't concerned about equipment that could blow up or emit toxic gas. Meanwhile, there were three different vendors remotely connecting to the power generation equipment alone, and no one knew it until we found the modems. At least one more vendor was connecting remotely to the DCS network too. During several calls back to the other author, Ed pointed out that the risk calculations should already have been completed for the safety functions in their PHA or HAZOP. The local engineers gave me a PHA from 1969. This is how we got to the ideas that are conveyed in the ISA book Security PHA Review for Consequence-based Cybersecurity. By focusing on the possible consequence of a loss-of-control scenario, we were able to determine whether the causes and safeguards for that scenario were vulnerable to cyber attack.
If they were vulnerable, we devised a method to determine the Security Level Target (SL-T) from the ISA/IEC 62443 standard, or devised an alternative safeguard to prevent the attack from causing the consequence.
- After six years we have definitely reached an exciting stage for (CS)²AI!
By Derek Harp, (CS)²AI Founder, Chairman and Fellow April, 2021 Dear Members, After six years we have definitely reached an exciting stage for (CS)²AI, with significant growth every month across the community engagement metrics we are tracking. The global operations team wants to thank you for attending the CS2AI Online sessions in record numbers and for contributing your input. All your positive feedback is also serving as rocket fuel for us personally, and we sincerely thank you for that. We will continue to experiment, innovate and work hard to support your critical endeavors. At this juncture we can only reach our organization's true potential by growing our team and starting more Members Helping Members initiatives. One way to do that is to activate more global volunteer positions. For those of you who have already notified us of your interest, look for an email soon regarding opportunities. If you have not already, please tell us you want to GET INVOLVED and we will be in touch. Today, I would also like to announce that a few new part-time global team positions will qualify for some compensation. If you are interested in learning more and have the flexibility to consider that option, please email us at input at cs2ai.org and we will add you to our list of candidates to contact about those positions as they become available. Regards, Derek
- OT Cyber Risk Management – You’re Doing It Wrong
The 3 Most Common Problems That Nearly ALL Cyber Risk Management Programs Have, and How to Solve Them. Submitted by: Clint Bodungen, President & CEO at ThreatGEN and (CS)²AI Fellow, August 11, 2021. This article was previously posted on LinkedIn and can be found here. In this article, I will discuss the 3 most common mistakes people still make when assessing and addressing OT cyber risk management (hint: most of you are still doing it backwards), and ways that you can make your process more efficient and effective (including more cost-effective). I get it. Dealing with risk is not exciting… or easy. Risk is a term that turns most people off. It is certainly not as exciting as “penetration testing” and “threat hunting”. More common subjects like threat monitoring, vulnerability assessments, and incident response also definitely have greater mind-share for most of you tasked with OT cybersecurity. While every single one of these elements is important (critical, even), the truth of the matter is that all these things feed into the bigger picture that is your overall risk: risk to safety, risk to production, and ultimately, risk to the business. Furthermore, not one of these things on its own gives you everything you need to see the entire picture. They each provide a different piece of the puzzle; a puzzle that, when completed, gives you what you need to (“stop me if you’ve heard this one”) create an accurate, targeted, and cost-effective risk mitigation strategy. (Eesh... I know, right? We have all heard this before…) I think that most people probably understand this overall concept. The problem is that there are so many aspects to managing risk, and so many moving parts, that (a.) most organizations do not have the resources (people and/or budget) to do it all (so they are forced to choose one or two areas that will give them the biggest “bang for their buck”), and/or (b.)
they don’t truly understand the differences between these tasks (most notably the assessment tasks), what information and value they should be getting out of them, and how that feeds into their risk management. So, let’s unwrap the problems, identify solutions, and simplify this whole thing.

Problem 1: Confusion Over Assessment Terminology. I cannot tell you how many times I’ve had a customer come to us asking for a penetration test when what they really wanted was more comprehensive vulnerability identification (a vulnerability assessment) with risks identified and prioritized (a risk assessment). With all the different terms and types of assessments, it can be understandably overwhelming. Not understanding each of the assessment types and steps, and what you actually need, can also lead to an incomplete risk assessment. I will cover more specifics about each of the different assessment types and steps in subsequent articles, but for now, let’s break down and define all the terms commonly used to describe the different assessments.

Common Cyber Risk Assessment Terminology

· Breach Assessment (“Threat Hunting”) A search for evidence that you have been breached. This can be performed as an initial step or as part of threat monitoring. It is also part of incident response, to identify where, how, and when a breach may have occurred.

· Vulnerability Assessment An examination of your attack surface through vulnerability identification. It is sometimes incorrectly referred to as a vulnerability scan, but that is only part of it. A vulnerability scan is an automated search for technical vulnerabilities. A complete vulnerability assessment also looks for procedural/process and human vulnerabilities.

· Vulnerability Scan Part of a vulnerability assessment that uses an automated tool to identify technical vulnerabilities such as software bugs, configuration weaknesses, and missing patches.
· Gap Assessment (Gap Analysis) Part of a vulnerability assessment that identifies missing security controls and procedures which are outlined (or required) by best practices, standards, or regulations.

· Audit A formal gap assessment, often carrying penalties for failures.

· Penetration Test Part of a vulnerability assessment that uses hands-on adversarial (“hacker”) techniques to validate findings (exploit feasibility), identify more complex vulnerabilities, or test existing security controls (breach feasibility). Note: A penetration test, on its own, is not meant to provide the level of comprehensive vulnerability identification that a complete vulnerability assessment would.

· Red Team Exercise (Red Team Test, “Red Teaming”) An exercise that simulates a realistic cyber attack (sometimes also deploying physical intrusion), where the red team (the attackers) deploys actual offensive techniques and strategies in an attempt to breach the target network and system(s). Such exercises are meant to test an organization’s overall security defenses, including threat monitoring and response capabilities (the blue team).

· Purple Teaming A more recent term to better emphasize the blue team’s engagement during a red team exercise. (red + blue = purple)

· Risk Assessment The entire process of identifying assets, vulnerabilities, and the consequences and impacts of incidents, and prioritizing them based on a risk score. Note: If the final result of the process does not include some level of risk scoring or prioritization, it’s not a risk assessment. Additionally, vulnerability scoring (i.e., CVE/CVSS) is not necessarily risk scoring. Risk scoring should be specific to your organization and consider the impact to your business (financial, production, safety, etc.). Again, I will cover more specifics about each of these in subsequent articles.
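To illustrate the note above that CVSS is not risk scoring, here is a toy calculation. The weighting formula and the 1-5 business-impact scale are invented for illustration; they are not a standard, and a real program would derive impact from its own financial, production, and safety criteria.

```python
# Illustrative only: a CVSS base score rates a vulnerability's technical
# severity, but it only becomes a risk score once combined with impact
# specific to your organization. This formula is a made-up example.

def org_risk_score(cvss_base: float, business_impact: int) -> float:
    """business_impact: 1 (negligible) .. 5 (safety/production critical)."""
    # Normalize CVSS (0-10) and scale by organizational impact, onto 0-100.
    return round((cvss_base / 10.0) * business_impact * 20, 1)

# The same vulnerability carries very different risk depending on where it lives:
print(org_risk_score(9.8, 1))   # e.g. an isolated kiosk PC -> 19.6
print(org_risk_score(9.8, 5))   # e.g. a safety-instrumented system -> 98.0
```

The takeaway is in the last two lines: an identical CVSS 9.8 produces very different organizational risk once business impact is factored in.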
For now, hopefully this helps clear up some of the confusion between the different terms and assessment types, and what you actually need for your situation.

Problem 2: Most Risk Assessment Processes Are Backwards

Most risk assessment processes I see go like this:

Step 1. Identify vulnerabilities (in some cases, organizations just perform a penetration test or a gap assessment).
Step 2a. Attempt to prioritize remediation based on the vulnerability CVSS scores.
Or
Step 2b. In some cases, organizations assign a “criticality” score to assets and then prioritize remediation by asset risk, calculated from the asset criticality and vulnerability scores. (Good job! But you’re still doing it wrong.)

In the end, you’re still left with a heap of problems to fix, but at least they’re prioritized. Right? So, what’s wrong with this process, and why am I calling it backwards? Most organizations spend too much time and money performing exhaustive vulnerability assessments on most, if not all, of their assets, prioritizing every finding, and trying to tackle all the remediation. Risk prioritization is good and should be performed. However, by reordering some of the steps normally performed in the risk assessment process, you may find that you can significantly reduce the amount of vulnerability assessment work needed, as well as the number of high-priority “fixes” on your list, saving you time and money. Here is the process I recommend:

1. Identify your assets and organize them by “criticality”. Criticality describes the level of impact that would result if a given asset were out of service. Performing this step at the beginning of your assessment, rather than near the end, can save you a lot of time and effort throughout the rest of the process. Case in point: assets with low criticality (i.e., low risk) can be moved lower down the list, or in some cases even eliminated from the assessment altogether.

2.
Identify all the access vectors to each of those assets, starting with the highest-criticality assets. This is a step I see missed in most assessments, yet it is probably the step that could save you the most time in the long run. Access vectors are the ways in which humans or other assets can access an asset, whether through a network connection or physical access. Identifying the number and type of access vectors can lower the overall risk rating of an asset, thus reducing the level of risk assessment and management effort needed for that asset. For example, if there are no network connections to an asset, the vulnerability assessment, and especially the remediation, for that asset only needs to consider local access for now. Note: In this case we are only considering network and physical access vectors; logical access control is not considered in this step.

3. Prioritize based on criticality plus the number of access vectors. Assets with fewer access vectors and/or those inside trusted/private zones would be lower on the priority list (requiring less attention and effort later) than those with more access vectors and/or in less trusted zones. Trust zones and internet access should also be considered; for example, assets without internet access and/or in more trusted zones could be lower on the list.

4. Perform vulnerability assessments, starting with your highest-priority assets for the technical assessments. This step is where you should see your earlier work really start to pay off. Vulnerability assessments typically make up the bulk of the work of a risk assessment, so by starting the prioritization process at the beginning of the assessment, you should be able to postpone, if not eliminate, some of the vulnerability assessment work here. Remember, your vulnerability assessments should include human, process and procedure, and technical vulnerability identification.
But for asset-specific evaluation, especially technical inspection, you can move lower-risk assets (i.e., those with lower criticality and fewer access vectors) further down the list. Assets lower on the list can be assessed later in the process, moved to a time when resources allow it, or in some cases eliminated from the assessment process altogether. Note: I recommend avoiding automated vulnerability scanning on most production OT networks.

5. Prioritize risk and remediation by asset, not by vulnerability. At the end of a vulnerability assessment, many organizations prioritize every vulnerability (usually by CVSS score) and remediate each vulnerability according to priority. Even prioritized, this can be a huge undertaking, especially if you used automated vulnerability scanning! Instead, create a cumulative vulnerability score for each asset. I prefer to create a score based on the number and severity level of critical and high vulnerabilities, as well as the number of vulnerabilities that would allow an attacker to gain access to the system (e.g., remote code execution, remote access, etc.). I then use the asset criticality rating and number of access vectors as modifier values to create a final overall risk score for each asset, which I use for the final asset prioritization. The details of the formula I use are not important for the purposes of this article; you can use whatever formula or calculation method makes sense to you. The important thing is that you use the same calculation method for every asset, and that the final result allows you to prioritize assets based on the values that are important to you.

Hint: When you’re preparing to remediate vulnerabilities and you see hundreds of them, remember that many of them share a common fix. A common OS patch, or removing unneeded applications like Adobe, for example.
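To make the asset-first idea concrete, here is a minimal sketch of what such a calculation could look like. The asset names, weights, and formula below are invented purely for illustration; as noted above, the exact calculation method is up to you, as long as it is applied identically to every asset.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    criticality: int    # 1 (low impact if out of service) to 5 (severe impact)
    access_vectors: int  # count of network + physical access paths to the asset
    vulns: list = field(default_factory=list)  # (severity, allows_access) pairs

def risk_score(asset: Asset) -> float:
    """Cumulative vulnerability score, modified by criticality and access vectors."""
    # An asset with no identified network or physical access vector is
    # deprioritized entirely for now, per step 2 above.
    if asset.access_vectors == 0:
        return 0.0
    # Weight findings that grant an attacker access (e.g., remote code
    # execution) more heavily than plain severity alone.
    base = sum(sev + (3 if allows_access else 0)
               for sev, allows_access in asset.vulns)
    # Criticality and access-vector count act as multipliers (illustrative weights).
    return base * asset.criticality * (1 + 0.25 * asset.access_vectors)

# Hypothetical inventory for the sketch.
assets = [
    Asset("historian", criticality=4, access_vectors=3,
          vulns=[(9, True), (7, False), (7, False)]),
    Asset("eng-workstation", criticality=3, access_vectors=2,
          vulns=[(8, True), (5, False)]),
    Asset("standalone-plc", criticality=5, access_vectors=0,
          vulns=[(10, True)]),
]

# Prioritize remediation by asset, not by individual vulnerability.
ranked = sorted(assets, key=risk_score, reverse=True)
for a in ranked:
    print(f"{a.name}: {risk_score(a):.1f}")
```

Note how the standalone PLC drops to the bottom of the list even though its criticality is highest, because no access vector was identified; that is exactly the effort reduction that performing steps 1 and 2 first is meant to produce.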
I even use a spreadsheet to help simulate the amount of attack surface reduction I get by applying these fixes one at a time. In summary, prioritization should start at the beginning of the entire assessment process, not wait until the end.

Problem 3: Assessments Are Treated as a “One-Off”

Almost every organization I speak with treats vulnerability assessments, and even risk assessments, as a “one-off” project or annual occurrence. Some of those organizations at least compare subsequent assessments with previous ones, to hopefully show progress. That is what you should be doing at the very least. However, maintaining an understanding of your risk profile at any given time, and showing progress throughout the year rather than from year to year, is much more effective. This doesn’t mean you need to perform multiple risk assessments throughout the year. What I do recommend is a framework that can ingest data regularly and always provide an active, “living” risk profile as you make progress. Having such a framework also helps keep assessment data and remediation tasks organized. Most of the top OT-specific threat monitoring platforms (e.g., Forescout, Claroty, Dragos, Nozomi, Tenable.ot, Microsoft Azure Defender for IoT, etc.) have risk scoring and tracking mechanisms built in to varying degrees. However, I also recommend using a platform dedicated to OT risk tracking and management that correlates data from multiple sources, such as SecurityGate.io and GPAidentify.

Conclusion

OT risk management isn’t easy, and for most people it probably doesn’t seem very glamorous. To make matters worse, most organizations spend more time, effort, and money on the process than they need to.
By understanding all the terms and steps involved in a risk assessment, rethinking how and when you prioritize your assessments and findings, and using an ongoing process combined with a risk management framework, your risk management process and program will be more efficient, more effective, and won’t tax your resources nearly as much.

About the Author

Clint Bodungen is a world-renowned industrial cybersecurity expert, public speaker, published author, and gamification pioneer. He is the lead author of Hacking Exposed: Industrial Control Systems, and creator of the ThreatGEN® Red vs. Blue cybersecurity gamification platform. He is a United States Air Force veteran, has been a cybersecurity professional for more than 25 years, and is an active part of the cybersecurity community, especially in ICS/OT. Focusing exclusively on ICS/OT cybersecurity since 2003, he has helped many of the world’s largest energy companies, worked for cybersecurity companies such as Symantec, Kaspersky Lab, and Industrial Defender, and has published multiple technical papers and training courses on ICS/OT cybersecurity vulnerability assessment, penetration testing, and risk management. Clint hopes to revolutionize the industry approach to cybersecurity education and help usher in the next generation of cybersecurity professionals, using gamification. His flagship product, ThreatGEN® Red vs. Blue, is the world’s first online multiplayer cybersecurity computer game, designed to teach real-world cybersecurity.

About ThreatGEN

ThreatGEN bridges the “Operational Technology (OT) cybersecurity skills gap” utilizing the ThreatGEN® Red vs. Blue cybersecurity gamification platform and our OT Security Services, both powered by our world-renowned OT cybersecurity experts and published authors. The ThreatGEN® Red vs. Blue cybersecurity gamification platform uses cutting-edge computer gamification to provide an exciting and modernized approach to OT cybersecurity training, both practical and cost effective!
Our OT Security Services use our decades of industry experience combined with strategically chosen partnerships to create a holistic service offering. For more information, visit our company website at https://ThreatGEN.com, follow us on LinkedIn at https://www.linkedin.com/company/threatgenvr/, or follow us on Twitter: @ThreatGEN_RvB. For further sales information, send an e-mail to sales@threatgen.com.

Derezzed Inc. D/B/A ThreatGEN
14090 Southwest Freeway #300
Sugar Land, Texas 77478
+1 (833) 339-6753

#OT #cybersecurity #riskmanagement
- Free Admission to Virtual Official Cyber Security Summit Featuring FBI, NSA, Google, Verizon & More
Earn 8 CPE Credits

(CS)²AI is proud to continue to partner with the Official Cyber Security Summit throughout its Official 2021 Virtual Cyber Security Summit Series. Admission is normally $95, but we have secured exclusive FREE admission. Secure your pass to your respective region’s Official Cyber Security Summit with code CS2AI21 at CyberSecuritySummit.com. Join us virtually and learn about the latest cyber security threats facing your company, best cyber hygiene practices, solutions to protect against a cyber attack, and much more – all from the comfort and safety of your home/office.

Silicon Valley / Northern California - June 9
Seattle, WA / Portland, OR / Pacific Northwest - June 23
Philadelphia, PA - June 29
St. Louis, MO / Oklahoma City, OK - July 7
Detroit, MI - July 14
DC Metro - July 21
Chicago, IL - August 24
Miami / South Florida - September 16
Charlotte / Carolinas - September 23
Columbus, OH - September 30
Scottsdale, AZ - October 13
New York Metro - October 20
Los Angeles - October 27
Boston / New England - November 17
Houston / San Antonio - December 2

Hear from thought leaders from the NSA, U.S. DHS / CISA, Center for Internet Security, Verizon, Darktrace, Google, IBM, Cybercrime Support Network, and many more. Please note: These virtual events are for C-Suite/Senior Level Executives, Directors, Managers, and other Cyber Security Professionals & Business Leaders. Those in Sales/Marketing and Students are not permitted. You are welcome to share this invitation with your IT Security Team & Colleagues. Attendance is limited, so please RSVP today to confirm your participation. If you are interested in speaking/sponsoring at an upcoming Cyber Security Summit, please contact Megan Hutton at MHutton@CyberSummitUSA.com. For full details and registration, please visit https://CyberSecuritySummit.com/
- Colonial Pipeline Cyberattack
Submitted by: Steve Mustard, President & CEO at National Automation, Inc. and (CS)²AI Fellow
May 16, 2021

REGISTER HERE FOR OUR SPECIAL INCIDENT DEBRIEF, THURSDAY, MAY 20 @ 11:30AM - 1:00PM EST

If your incident response plan for recovering from a ransomware attack is to pay the ransom, you need to rethink your plan. Reporting indicates that Colonial Pipeline did just this, and still ended up recovering their billing system from backups. Some voices in the ICS security community have pointed out that Colonial Pipeline’s ransomware incident involved IT, and not ICS, equipment. While this is true, in a critical infrastructure organization this distinction is surely meaningless. The IT and ICS equipment exists to provide a series of services that allow the company to operate, and impacting any equipment to the point where operations are shut down has serious implications for the nation. In this case, that meant panic buying and gas shortages across the southeast of the country, and the potential for interruption of critical services, such as airports, that are dependent upon fuel supply. Whenever there is an attack on a critical infrastructure organization, we in the ICS security community should be concerned. We should also help organizations like Colonial Pipeline learn from such incidents to improve their response plans for all scenarios, IT and ICS. Will this incident trigger some long-awaited action from critical infrastructure operators to improve their security posture? Back in 2005 I organized a conference in the UK on the security of distributed control systems and said: “Process automation systems are key to the organizations behind the UK’s Critical National Infrastructure (CNI) as they both monitor and control critical processes involved in the production and transportation of gas, electricity and water. As these systems become more ‘open’ – using Ethernet, TCP/IP and web technologies – they become vulnerable to the same threats that exist for normal IT systems”.
Sixteen years later we are still saying the same thing, but given that incidents like Colonial Pipeline, Oldsmar, Ellsworth, and others continue to happen, it appears we are still not adequately addressing this problem. It is unfair to say that all critical infrastructure operators have the same security posture. Many operators are taking action, but the type of incidents we see indicates that we still collectively have a long way to go. As cyber incidents like Oldsmar, Ellsworth, and Colonial Pipeline continue to make the news, along with non-cyber incidents like the Texas freeze, there will be increasing pressure on the government to take action. Regulations already exist in some sectors, notably electricity and chemical. Views vary on the effectiveness of the regulatory approach. Many see it as a check-box exercise, and even the threat of fines for non-compliance does not deter some operators – Duke Energy was fined $10M in 2019 for 127 violations of NERC CIP, many of which were easily addressed, such as providing awareness training to employees. Reporting indicates that Colonial Pipeline did have cyber insurance with $15M of cover. Although there is no confirmed reporting that their insurer did pay, it is likely. Perhaps this is one reason why some critical infrastructure operators are not making more effort to reduce their risk. This form of risk transfer may, on the surface, seem effective: Colonial Pipeline may have incurred little or no financial loss as a result of this incident, depending on what the insurance policy covered. This raises the question of how long insurers will be prepared to support this transfer of risk within the current parameters. Perhaps policies will become prohibitively expensive, or not offered at all, to operators who cannot demonstrate a basic level of cybersecurity preparedness, such as a good incident response plan supported by regular validation exercises.
While there may not be regulations for all critical infrastructure sectors, there are international standards that can be used to define the reasonable expectations of an operator. The ISA/IEC 62443 standard, security for industrial automation and control systems, defines the requirements for the cybersecurity management system needed to manage cybersecurity risk in critical infrastructure organizations that depend on ICS equipment. Some sectors already base their internal policies on this standard, but it is clearly far from universal across all sixteen sectors in the US. Some may say that the likelihood of a cybersecurity incident in an ICS environment is vanishingly small. Even if this is true, the consequences of such an incident are extremely serious, and high-impact, low-probability events must be properly managed – they cannot be dismissed simply because they have either never happened or seem unlikely. In many cases, even moderate expectations such as the application of basic cyber hygiene are not being met in our critical infrastructure operations. We are long past the point where this is acceptable.
- Making the National Cyber Director Operational With a National Cyber Defense Center
Submitted by: Daryl Haegley, Director, Mission Assurance & Cyber Deterrence at the DOD and (CS)²AI Fellow
Original Source: https://www.lawfareblog.com/making-national-cyber-director-operational-national-cyber-defense-center
By James N. Miller, Robert Butler
Wednesday, March 24, 2021

The Biden administration has doubled down on cybersecurity, adding two senior positions in the Executive Office of the President: a new deputy national security advisor for cyber and emerging technology and a new national cyber director. To avoid churn within the administration and confusion elsewhere, the administration should clearly define the roles of these two positions. Perhaps the most critical role for the Office of the National Cyber Director (ONCD), one unsuited for the deputy national security advisor, is to lead interagency planning and operational coordination for cyber defense; it should fulfill this role through a new National Cyber Defense Center (NCDC). The United States needs a proactive whole-of-nation cyber defense campaign to bolster national security in the face of adversaries’ sustained efforts to steal U.S. intellectual property, sow disinformation, gather sensitive intelligence, and prepare to disrupt or destroy U.S. critical infrastructure through cyberspace. This cyber defense campaign should have four key elements: cyber deterrence, active cyber defense, offensive cyber actions in support of national cyber defense, and incident response. Planning and coordinating such a cyber defense campaign is an inherently interagency task, but it would fit poorly in the National Security Council (NSC) because of the NSC’s past difficulties with operational roles and its staff ceiling of 200. Such interagency planning and coordination would also fit poorly in the departments of Homeland Security, Defense, or Justice, or in the intelligence community, because none of these institutions has the full range of authorities necessary for the task.
An NCDC needs to comprise personnel detailed from key departments and agencies, and liaisons from the private sector. Fortunately, such staffing is implicitly authorized in the recent legislation creating the ONCD. The degree of whole-of-government and whole-of-nation planning and coordination we propose for the NCDC would go beyond what the Cyberspace Solarium Commission has specifically recommended for the ONCD. This comprehensive approach is essential in the face of adversaries who are deliberately exploiting the seams between U.S. departments’ and agencies’ authorities, and between the U.S. government and the private sector. Without an NCDC, the ONCD will fail to move the needle in improving the U.S. cybersecurity posture.

Roles of the National Cyber Defense Center

The NCDC would conduct cyber defense campaign planning and coordinate U.S. government actions below the level of armed conflict, while also conducting contingency planning for cyber defense in the event of crisis or war. In support of each of these roles, the NCDC would plan and coordinate four intertwined lines of effort: cyber deterrence, active cyber defense, offensive cyber actions in support of defense, and incident management. On a day-to-day basis, below the level of armed conflict, the NCDC would plan and coordinate a sustained cyber defense campaign across the U.S. government, while also enabling appropriate coordination with the private sector, state and local governments, and key allies and partners. This cyber defense campaign would focus particular attention on China and Russia, as the most capable cyber adversaries of the United States, but would also address North Korea, Iran, the Islamic State and other cyber adversaries. The importance and urgency of having a proactive, coordinated and sustained whole-of-nation cyber defense campaign is difficult to overstate. The stakes in the ongoing competition below the level of armed conflict include the health of U.S.
democracy, social cohesion, and U.S. technology advantages that undergird the nation’s military edge and economic growth. The NCDC would also lead interagency contingency planning for cyber defense of the United States in the event of a crisis or conflict. The most important work would focus on China and Russia, which have extensively infiltrated U.S. critical infrastructure with implanted cyber capabilities of a scale and sophistication that far exceed any other potential U.S. adversaries. In the event of a severe crisis or conflict, China and Russia could use cyber weapons to hobble the U.S. military, cripple the U.S. economy, and sabotage systems that deliver life-critical services— all while conducting cyber-enabled disinformation and deception efforts to sow discord among the American people. Contingency planning for cyber defense in crisis or conflict would improve the U.S. posture to deter aggression or coercion and would also inform cyber defense campaign efforts below the level of armed conflict. On the one hand, an overly passive U.S. approach below the level of armed conflict could invite adversaries to keep pushing out the limits until U.S. leaders finally feel compelled to respond with decisive force. On the other hand, an overly aggressive approach by the United States could cause a spiral of escalation. A well-calibrated approach in peacetime—based on an assessment of adversary interests and goals, and an explicit assessment of escalation risks (which requires contingency planning for crisis or conflict)—is needed to minimize the prospects of both failed deterrence and inadvertent war. More broadly, U.S. cyber defense activities in peacetime provide the essential foundation for cyber operations in crisis or conflict, and so are essential to improving the U.S. ability to deter war. The organizations, processes, and trust relationships needed to inform and shape an effective active cyber defense of U.S. 
critical infrastructure, rapid decision-making for coordinated countermeasures at home and offensive cyber operations overseas, and cyber incident management cannot be created instantaneously when a crisis arises—they must be developed, exercised, and matured in peacetime if they are to be available in the event of crisis or conflict. U.S. peacetime cyber activities, including private-public partnerships that enable the real-time sharing of sensitive information and coordination of actions, provide a “platform” for cyber operations in crisis and conflict; adversary perceptions of these U.S. capabilities in action can help to reduce the risk of great power war. In furtherance of these two roles, the NCDC would plan and coordinate four interrelated lines of effort. Cyber deterrence aims to reduce adversaries’ perceived benefits and increase the perceived costs of major cyber intrusions, attacks or cyber-enabled campaigns. Such sustained adversary efforts have included China’s theft of intellectual property and Russia’s efforts to sow domestic discord in the United States. Because of the extensive vulnerabilities of existing U.S. networks, deterrence by denial will not be adequate against advanced adversaries, particularly China and Russia. Deterrence by cost imposition will be essential; this requires intelligence-driven planning to help policymakers assess what responses may be sufficient to promote deterrence but not so strong as to lead to undesired escalation. Shifting from a reactive to a proactive cyber deterrence posture will require integrating diplomatic, informational, military, financial, intelligence, and law enforcement tools, as well as coordination with the private sector and U.S. allies and partners. Active cyber defense presumes that advanced adversaries, China and Russia in particular, have substantial resources and highly skilled teams that will allow them to penetrate even well-protected U.S. networks and systems. 
Active cyber defense aims to rapidly detect and mitigate intrusions, increase the attacker’s “work factor” (the time and resources required to achieve its aims by expanding laterally, exfiltrating information, and the like), and reduce the attacker’s confidence that intrusions have succeeded and that any information extracted is accurate. Examples of active cyber defense tactics include “hunting” for cyber intrusions on one’s own (and partners’) networks, creating “honeypots” and “tarpits” to lure and trap cyber intruders in decoy servers, embedding false information on networks that may mislead intruders, and publicly releasing insights into adversary cyber tools and tradecraft. Active cyber defense is increasingly being conducted by both the U.S. government and the private sector, but not in a comprehensive, coordinated campaign approach. There is much room for improved sharing of operationally relevant (timely and specific) information, intelligence and insights. Offensive cyber actions in support of cyber defense can be both necessary and appropriate, as exemplified by U.S. Cyber Command’s reported operations to thwart the Russian Internet Research Agency troll farm in the 2018 and 2020 U.S. elections. While the Defense Department would retain the lead for offensive cyber operations, embedding its cyber defense-focused efforts in an interagency campaign would better posture the U.S. to deal with the reality that cyber adversaries are operating increasingly from within U.S. territory as well as overseas (as was reportedly the case in the expansive SolarWinds and Microsoft Exchange cyber penetrations). U.S. Cyber Command’s actions in support of the 2018 and 2020 U.S. elections have been widely applauded for being carefully considered and well coordinated. However, as adversaries increasingly buy, lease, or hijack U.S.
infrastructure to conduct subsequent cyberattacks, the United States will need greater interagency coordination between the actions taken domestically and abroad to be successful. Cyber incident response will always remain a key part of U.S. cyber defense efforts, quite simply because the United States faces capable and committed cyber adversaries. Unlike the other lines of effort proposed for the NCDC, a well-rehearsed interagency process for cyber incident response is already established. However, because cyber incident response is so intertwined with the other NCDC lines of effort, the NCDC should provide oversight of interagency Cyber Unified Coordination Groups. These were established under Presidential Policy Directive 41 to coordinate U.S. government responses to major cyber incidents. In parallel, the NSC would be able to shift its focus from operational coordination to strategic decision-making and oversight, including prioritizing U.S. government support in the event of widespread cyber intrusions or attacks, and holding the NCDC and the national cyber director accountable for conducting its operational role. To enable the NCDC’s planning and coordination efforts, it will need to share information and provide a shared perspective of the current situation including a visualization of potential developments, and do so at appropriate classification levels. This requires not only a platform for secure information sharing but also a platform for conducting (human and machine) simulations and analyses aiming to anticipate the most likely and most dangerous future adversary courses of actions—including responses to actions that the United States might take. Providing shared perspectives on the current situation and future developments, through tailored visualization tools based on a wide range of data sources, and at various classification levels, would be a key role of the NCDC. 
Such a continuous net assessment process could not “predict” precisely what the adversary will do, but over time with continued reality-testing it would help improve the United States’s ability to anticipate, deter, defeat and/or respond swiftly to potential adversary courses of action. A continuous net assessment process for cyberspace would be supported by intelligence/counterintelligence assessments and informed by tabletop war-gaming, modeling and simulation, and results from cyber range activities. Such a net assessment process would help highlight areas where additional information and intelligence is most needed. Because adversaries are adapting as they exploit emerging cyber vulnerabilities, this net assessment process could also generate testable hypotheses regarding next adversary moves, so that intelligence assets can be directed appropriately, defensive measures taken and offensive measures preplanned. To counter adapting adversaries, this net assessment process must exploit new technologies such as artificial intelligence and machine learning.

NCDC Organizational Structure and Staffing

Figure 1, which shows a potential organizational structure for the National Cyber Defense Center, illustrates the need for interagency planning and coordination outside the U.S. government to improve U.S. cyber posture.

Figure 1. Potential organizational structure of the National Cyber Defense Center.

The director of the NCDC would report to the national cyber director. The organizational structure of the NCDC could, and probably should, evolve over time, but a few guiding principles should be followed: The NCDC director should be a senior civilian with both senior-level U.S. government and private-sector experience, and the support of the national cyber director and the deputy national security advisor for cyber and emerging technology.
The vice director should also be an experienced public leader, with complementary expertise and background, and would likely be active duty, a reservist, or a member of the National Guard. Deputy directors should, as a group, have experience across all key departments and agencies, including the departments of Homeland Security, Defense, Justice, State, and Treasury, as well as various elements of the intelligence community. Offices should be organized not by department or agency but by function, with each having an interagency composition comprised mainly of detailees from key departments and agencies. To ensure a continued focus on cyber adversaries, critical planning and coordination activities should be organized as “country cells” (China, Russia, and the like), with staffing for each drawn from multiple departments and agencies. Federal cyber centers, such as the Federal Bureau of Investigation’s National Cyber Investigative Joint Task Force and the Department of Homeland Security’s Cyber and Infrastructure Security Agency (CISA) Central, would continue their work, while supporting planning and coordinated campaigns orchestrated by the NCDC. Similarly, the intelligence community’s Cyber Threat Intelligence Integration Center would see the NCDC as a critically important customer—even as it continued to provide strategic intelligence to the NSC—as it would build on its capacity to provide operationally relevant and timely intelligence to the NCDC. The NCDC would be an extraordinarily lucrative target for cyber espionage and attack and, as such, it would need a top-notch chief information officer and chief information security officer, who would both need to exemplify as well as enable a diverse set of advanced tools and techniques for active cyber defense.
How a National Cyber Defense Center Would Operate

The NCDC’s interagency staff would conduct planning, coordinate already-approved interagency actions, and raise new proposals and any concerns regarding department or agency noncompliance with the NSC. The NCDC director would also request approval for new activities from the department(s) or agency head(s) with the requisite authorities, sending the request simultaneously to the deputy national security advisor for cyber and emerging technology for interagency consideration. The deputy national security advisor would have the prerogative—and the responsibility—to determine whether to call for NSC meetings, and if so with what urgency at what level (full NSC chaired by the president, Principals Committee chaired by the national security advisor, Deputies Committee chaired by the deputy national security advisor, or a supporting interagency working group). For extremely urgent decisions, department and agency heads could approve actions prior to interagency consideration. In this case, an operation could be initiated before the NSC had given its concurrence. The relevant department or agency head would be accountable to the president for justifying the choice to proceed. In cases that involved both urgency and limited escalation risks, this decision authority could be delegated further over time, with the objective of having as many actions as reasonably possible delegated to department and agency heads. Of course, the president may still direct execution, or nonexecution, of a proposed new activity at any time.

Making Use of the Office of the National Cyber Director’s Authorities

The legislation creating the Office of the National Cyber Director (ONCD) specifies a range of responsibilities that would be appropriately executed by the NCDC.
Table 1 shows that the enabling legislation for the ONCD already provides authorities for each of the four key lines of effort proposed for the NCDC, as well as for the coordination of U.S. government engagement with the private sector.

Table 1. NCDC-related responsibilities and the associated statutory text highlighting the national cyber director’s relevant responsibilities.

Why the ONCD Is the Right Place for the NCDC

The NCDC would not fit in the NSC, quite literally, given the legislative cap of 200 personnel on NSC staffing. Even if the cap were increased, the NSC staff should be focused on coordinating and overseeing the implementation of strategy and policy, not conducting ongoing campaign planning and coordinating operations. Placing the NCDC in CISA, or in another department or agency, would be a prescription for failure. Developing and coordinating the execution of national campaign and contingency plans for cyber defense—plans that really matter—will require departments and agencies to share sensitive operational planning and intelligence; a standing interagency body in the Executive Office of the President is needed to make this work. In addition to the question of location is the question of seniority: An NCDC director reporting to the CISA director would sit two levels below the Deputies Committee, whereas an NCDC director reporting to the (Principal-level) national cyber director could operate at the Deputies level. Anyone with experience working in the U.S. interagency process understands how important these differences of location and seniority would be in practice. This reality raises a bit of a conundrum: In the same defense authorization bill that created the ONCD, Congress mandated the creation of a Joint Cyber Planning Office (JCPO) in CISA with the mission of developing plans for cyber defense operations.
Congress might in principle be persuaded to reverse itself, but there is another viable option: The director of the JCPO could be dual-hatted as the lead for private-sector and state/local government engagement in the NCDC. Wearing the CISA “hat,” this person could make use of all Homeland Security authorities as JCPO director; wearing the NCDC “hat” with presidential top-cover, this person would have significant additional influence with others beyond the reach of Homeland Security authorities, including national security departments and agencies, and U.S. allies and partners.

Setting a Course for Success

Like all organizations, an NCDC will have growing pains and will make mistakes. The goal for U.S. cyber posture should be to advance to a new national cyber defense culture and organization within the next 18-24 months. This timeframe will allow mistakes and learning to arise from war games and simulations, rather than in the real world. The NCDC could achieve an initial operating capability with fewer than 100 personnel, perhaps with as few as 30 to 40. Although the enabling legislation for the Office of the National Cyber Director caps total personnel at 75, the legislation specifically allows the ONCD to “utilize, with their consent, the services, personnel, and facilities of other Federal agencies.” Thus, for example, a 100-person NCDC that was 60 percent detailees would count against only 40 of the allowed 75 ONCD slots. To succeed over time, the NCDC will need to compete successfully for its share of talented cyber professionals. Given the importance of this national center, the president might direct department and agency heads to provide their best personnel to field an all-American cyber defense “dream team” and could further make a personal appeal to industry CEOs. Over the course of a decade or so, after there had been five or more rotations of detailed/assigned personnel from the U.S.
government and private sector, there could be a cadre of 250 or more highly trained, experienced, and networked personnel who had rotated through the NCDC. This reality creates an important opportunity for the NCDC to serve as a flywheel for interagency and national-level training and education on cyber defense (including in particular experiential learning through exercises and real-world operations). An NCDC leadership team would work to maximize this benefit, through training and education efforts, and the encouragement of continued professional relationships among those who had served in the NCDC. If an NCDC existed today and functioned reasonably well in its planning and operational coordination missions, and in its net assessment function, any proposal for its elimination would clearly create a major gap in the ability of the U.S. government to compete in cyberspace below the level of armed conflict and, if necessary, to coordinate national cyber defense in the context of a crisis or war. That gap exists today, and is evident to U.S. competitors and adversaries, thus putting U.S. national security at avoidable risk. This piece is based on research supported by the Johns Hopkins University Applied Physics Laboratory (APL), where the authors serve as senior fellow (Miller) and consultant (Butler). The views expressed are solely those of the authors, and not of any U.S. governmental agencies or departments, of APL, or of any other organizations.
- Host Dave Whitehead delves into the future of electric power in his podcast: Schweitzer Drive
Submitted by: Daryl Haegley, Director, Mission Assurance & Cyber Deterrence at the DOD and (CS)²AI Fellow

Original Source: https://selinc.com/company/podcast/

Episode 14: Safeguarding Civilization: A Few Thoughts on Cybersecurity with Robert Lee

In our increasingly technology-dependent world, cybersecurity threats have become an unfortunate feature of our daily lives. So many of us have been victims of identity theft or data breaches. But what happens when the target is an industrial control system like those that control large campuses, industrial operations, or the power grid? In this episode, Dave Whitehead talks about industrial control system cybersecurity with Robert M. Lee, the CEO of Dragos and a leading expert in the fields of industrial security incident response and threat intelligence. You might also be interested in Episode 5 - Supply Chain Management: Getting Parts to Make Parts, and Episode 4 - The Need for Speed and the Future of Power System Protection.
- The United States has a major hole in its cyber defense. Here’s how to fix it.
Submitted by: Daryl Haegley, Director, Mission Assurance & Cyber Deterrence at the DOD and (CS)²AI Fellow

Original Source: https://www.washingtonpost.com/opinions/2021/03/28/united-states-has-major-hole-its-cyberdefense-heres-how-fix-it/

Opinion by Robert M. Gates, March 28, 2021 at 8:00 a.m. EDT

Robert M. Gates served as director of central intelligence from 1991 to 1993 and as defense secretary from 2006 to 2011.

In recognition of the danger posed by foreign cyberattacks against the U.S. military, economy, infrastructure and political system, I directed the creation of U.S. Cyber Command on May 21, 2010. I concluded that the mission to defend “the nation from significant cyberattacks” required a new, overarching military command, consolidating previously disparate units into one integrated command structure. For Cyber Command to be able to respond instantly to attacks, the commander also had to be in charge of the National Security Agency, the only U.S. institution with the capability to defend the country against such attacks and retaliate. Cyber defense and cyber offense, I was convinced (and still am), needed to be commanded by one person. The commander of Cyber Command could not be in the position of having to ask for or negotiate NSA support, thus increasing the danger of delays in our response time. Even in 2010, we recognized a fundamental legal and structural problem in defending the United States against cyberattack: The Defense Department and NSA had limited legal authority to defend against such an attack originating inside the United States. By law, primary responsibility for defending against domestic-based attacks belonged to the Department of Homeland Security. Unfortunately, DHS had the authority but little capability.
More than 10 years later, that conundrum continues to make the country vulnerable to attacks initiated from abroad but launched from within this country, such as the SolarWinds attack (likely of Russian origin) and those against Microsoft’s Exchange servers (likely of Chinese origin). Some contend the solution is for the government to partner with private-sector companies. Others argue that Congress should give NSA additional authority to conduct cyber defense domestically — thus breaking the decades-long prohibition against intelligence agencies operating inside the United States. The latter path is almost certainly not politically feasible. And any kind of formal partnering with the private sector is likely to encounter resistance from most such companies and, in any case, would be challenging to operationalize in such a way as to provide the necessary rapid responses. (That said, improved informal cooperation between the government and private cybersecurity companies could enhance protection of the U.S. private sector.) The NSA is the only U.S. government organization with the vast capabilities to conduct both cyber defense and cyber offense at home and abroad. Civil libertarians and privacy advocates might hope to see creation of a purely domestic organization to defend against attacks launched from within the United States — with appropriate legal safeguards — but that is a fantasy. There is not enough money, human talent or time to establish a domestic equivalent to the NSA. We recognized this dilemma in 2010 within weeks of establishing Cyber Command. In an attempt to resolve it, I reached out to then-DHS Secretary Janet Napolitano with a proposal that would organizationally empower her department to draw directly on NSA resources to deal with cyberattacks originating inside the United States. 
Recognizing DHS’s legal authority and responsibility for cyber defense internally, I proposed that we agree to appoint a “dual hat” senior DHS officer who would also serve as a deputy NSA director with the authority to task the NSA in real time to defend against cyberattacks of domestic origin. That deputy director would have her or his own legal staff and general counsel, and we would create firewalls and regulations to ensure that DHS tasking would be kept separate from and follow different rules than the foreign intelligence operations of the NSA. Napolitano and I took this proposal to President Barack Obama, who, after proper vetting by the Justice Department and White House lawyers, authorized us to implement this proposal. Sadly, the initiative came to naught, mainly because of bureaucratic foot-dragging and resistance. I still believe the most expeditious path to an effective U.S. defense against cyberattacks launched from within the United States — through servers located here or other means — is to return to the initiative of a decade ago: to enable DHS to fulfill its domestic cyber defense responsibility through new arrangements giving it authority to use NSA’s incomparable resources with appropriate structural and regulatory safeguards. The challenge for DHS Secretary Alejandro Mayorkas and Defense Secretary Lloyd Austin would be to ensure that their designees make the arrangement work. SolarWinds and the attack on Microsoft make clear that prompt action is necessary. The approach we devised in 2010 would not require new legislation and could be implemented quickly. We are under attack. There might be a more elegant solution to our vulnerability, but a better means of defense is available now.