
  • Q & A Follow-Up - Cyber Security for Energy - Electric Sector Symposium January 19, 2022 (1 of 2)

    By Branko Terzic, Former FERC Commissioner. February 1, 2022. We hosted a (CS)²AI Online™ symposium on January 19, 2022 that focused on Cyber Security for Energy: Part 2 - Electric Sector. Here is a bit about the event:

Part 2 of the Symposium on Control System Cyber Security for Energy will provide tangible recommendations and best practices for electric utilities to address current and upcoming compliance and cybersecurity challenges. First, attendees will gain a detailed understanding of the latest government regulations that have been pushed by recent changes in the threat landscape. Second, industry practitioners will share their experience on technology solutions and process improvements to mitigate risk faster and build a strong culture of cyber resiliency. The symposium will provide ample opportunities throughout the event to interact, ask questions, and leverage the shared expertise of the (CS)²AI community.

Speakers:
• Melissa Hathaway (President, Hathaway Global Strategies): Keynote
• Marc Rogers (VP of Cybersecurity at Okta): Hands-on experience with exploits
• Ben Sooter (Principal Project Manager, EPRI): Responding to High Impact Cyber Security Events in Operations
• Branko Terzic (Former FERC Commissioner): Challenges for electric utilities
• Philip Huff (Univ. of Arkansas): Vulnerability Management for electric utilities
• Todd Chwialkowski (EDF-RE): Implementing Electronic Security Controls
• Robin Berthier (Network Perception): NERC CIP Firewall Change Review Workflow
• Saman Zonouz: Threats to Programmable Logic Controllers (PLCs)

As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. Do you want to have access to more content like this?
Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions.

******************************

QUESTION: What additional challenges will green technologies bring to the operators?

ANSWER: Challenges will come not from the fact that the technologies will be “green”, but from the fact that many of the solar units will be small, distributed on customer premises and customer owned rather than at large utility-owned facilities. California utilities are looking to address the problem of secure communications with residential solar by application of HardSec.

QUESTION: What are your thoughts on improving transmission capability between the Eastern and Western grids to aid resilience, and on ERCOT's unreliability in winter (as demonstrated in February 2021)?

ANSWER: I am all for it. The problem is that transmission siting and expansion is under state regulation and not under the FERC. It is very difficult to obtain licensing for new transmission from those states between the power sources and the remote market region. Congress has to address the problem but is reluctant to remove authority from the states.

QUESTION: For self-regulated rural utilities, what have you seen to be the best framework to follow for cybersecurity?

ANSWER: I would follow whatever guidelines are set out by the National Rural Electric Cooperative Association (NRECA) and other similar organizations. The NRECA has, for example, filed joint comments at the FERC with the EEI, the trade organization of investor-owned utilities.

QUESTION: Grid operators with high penetration of intermittent resources, such as Ireland, have shifted capacity acquisition to specific "essential grid services". How important is it for US grid operators to also change from acquiring "plain old capacity" to the acquisition of specific grid services, like ramping?
ANSWER: The question goes to the point that electricity is an instantaneous “service” and not a bulk commodity to be stored, repackaged and delivered when convenient to the marketer. The various US wholesale power markets have already moved to identifying specific electricity “ancillary services” which need to be recognized, measured and priced to ensure adequate and reliable service. Texas's ERCOT ignored this fact by not having a capacity market and relying only on an “energy market”.

QUESTION: Your comment about a status quo in terms of vulnerabilities suggests that something new needs to enter the picture. Does that include federal funding of capabilities and capacity in "secure" microelectronics manufacture within the boundaries of the United States?

ANSWER: It's always nice to get federal funding rather than spend your own funds, I suppose. The microelectronics problem is a slightly different one from the problem of vulnerability of electric utilities to hacking. My suggestion was that utilities look at the new HardSec claims and capabilities, especially for OT systems.

QUESTION: Are operators just accepting they will not be able to block attackers and so focusing on how to manage the risk and minimize the blast radius?

ANSWER: That seems like the current standard for cybersecurity services: a recognition that either the computer systems are already infected or that an intrusion can only be identified, not blocked. The job of the cybersecurity firms is then one of rapid identification and recovery.

QUESTION: With more security services being offered on the cloud, is FERC/NERC moving towards allowing cloud services for the energy sector? And what are the major concerns of using a cloud service for the energy sector?

ANSWER: FERC and NERC regulation is somewhat limited, as state Public Service Commissions have significant authority over electric utility budgets, for example, among their ratemaking powers.
If use of “cloud services” is demonstrably cost effective versus alternatives, then it is likely state PSCs would approve them. I do not know about FERC's position.

QUESTION: When a new safety mechanism is introduced, like NERC CIP-013, is that a FERC lead and NERC follow, or vice versa?

ANSWER: The NERC can lead, but it is under the authority of the FERC, which means that the FERC can approve, modify or supersede NERC regulations.

QUESTION: How do you recommend mitigating the security issues of legacy systems in the utilities sector?

ANSWER: That sector is perfect for the capabilities of the new HardSec option, which is indifferent to software type or age.

QUESTION: Is it a problem to implement compliance for power companies because of the different sizes of the power companies?

ANSWER: I think the problem has more to do with the management priorities of power companies than the actual size of the utility. Even the smaller investor-owned utilities are large enough to have significant budgets to address cyber security issues.

QUESTION: If this sector is heavily regulated, why not force the supply chain vendors to adhere to regular upgrade cycles?

ANSWER: That can be done by the utilities themselves in their purchasing practices.

QUESTION: How should/could a regulator incentivize good cybersecurity practices?

ANSWER: The regulator can make cybersecurity performance an explicit indicator of management performance and of utility service quality. Then both penalties and rewards, in the form of financial incentives and disincentives, can be adopted after the necessary regulatory procedures. Of course, the regulator has to approve utility budgets commensurate with the new cyber security requirements.

  • Q & A Follow-Up - Control System Cyber Security 2021 Annual Report: The Future of Industrial Security

    By Andrew Ginter, VP Industrial Security, Waterfall Security Solutions. January 19, 2022. We hosted a (CS)²AI Online™ seminar on January 12, 2022 that focused on the (CS)²AI - KPMG Control System Cyber Security Annual Report 2021. Here is a bit about the event:

This session presents key findings from the (CS)²AI-KPMG 2021 Annual Control System Cyber Security Report. Each Report is the culmination of a year-long research project led by the Control System Cyber Security Association International and draws on input from our 21,000+ global membership and thousands of others in our extended (CS)² community. It is based on decades of Control System (CS) security survey development, research and analysis led by (CS)²AI Founder and Chairman Derek Harp and Co-Founder and President Bengt Gregory-Brown, and backed with the support and resources of our Global Advisory Board members, the (CS)²AI Fellows, our Strategic Alliance Partners (SAPs), and many other SMEs. We asked key questions about personal experiences on the front lines of operating, protecting, and defending Operational Technology (OT) systems and assets costing millions to billions in capital outlay, impacting as much or more in ongoing revenues, and affecting the daily lives and business operations of enterprises worldwide. Over five hundred and fifty of them responded to our primary survey, and many others participated in the numerous secondary data gathering tools which we run periodically. This pool of data, submitted anonymously to ensure the exclusion of organizational politics and vendor influences, has offered insights into the realities faced by individuals and organizations responsible for CS/OT operations and assets beyond what could fit into this Report. We hope the details we have selected to include serve the decision support need we set out to answer.
Speakers:
• Derek Harp: (CS)²AI Founder and Chairman
• William Noto: Director OT Product Marketing, Fortinet
• Andrew Ginter: VP of Industrial Security, Waterfall Security Solutions
• Brad Raiford: Director, Cyber Security, KPMG in US

As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions.

****************

Many questions came in that there was no time to address in the recent (CS)²AI webinar giving a preview of the annual report / survey results. Here are some questions I would really like to have addressed, all of them in the theme of "looking forward", i.e., what does today's survey (and other context) tell us about the future of industrial security?

Cloud / Internet / Remote Connectivity

First let's look at three closely related questions: (1) What makes the Internet of Things so susceptible to being compromised? (2) How do you see the future of OT security with the emergence of cloud services in ICS and the increase of remote connection due to Covid? (3) Where does the trade-off between convenience (remote access) and security (protecting the ICS) balance out? Which technologies or products can prevent remote exploits or additional cyber attack vectors?

Well, to start with, all data flows from outside control-critical networks into those networks represent attack vectors. This is because all cyber-sabotage attacks are information, and every information flow can encode attacks.
So, when we look at online communications - out to Internet-based remote access laptops, or cloud services, or vendor support websites - too many people assume that encryption makes us secure. They think encryption gives us "secure communications." This is a mistake. In fact, encryption & authentication give us a degree of protection from Man-in-the-Middle (MitM) attacks. Encryption does nothing to protect us from compromised endpoints. If malware has taken over our remote access laptop, or the cloud service that monitors and controls our ICS "edge" devices, or the trusted vendor's website, then encryption buys us nothing. The attack information comes at the ICS targets INSIDE the encrypted connections we have open to the compromised endpoint on the Internet.

To question (1): attacks inside cloud/Internet connections are the single biggest new cyber-sabotage risk with the IIoT. This is a huge impediment to adoption of cloud services or the IIoT in many enterprises, and indeed in entire industries.

To questions (2) and (3), we deal with this new risk by asking the right questions. If we ask the wrong questions, we get meaningless answers. The right questions include:

a) What is the benefit of cloud/Internet/remote connectivity? Almost always, the benefit is increased efficiencies that reduce the cost-per-output of the industrial process, or that reduce the time-to-output and so indirectly reduce the cost-per-output. "Convenience" is never the driver. When was the last time you heard a large industrial enterprise say "our top priority this year is increasing convenience for our employees and contractors"? This doesn't happen. Almost always, the benefit and motive for cloud/Internet/remote connectivity is increased efficiencies / cost reductions.

b) Next question: with the benefits of connectivity clearly established, what are the costs? Cost is tricky. Think about it.
Nowadays most industrial sites already have a security program that these new cloud/Internet/remote connections have to fit into. That security program already mitigates certain risks and accepts other risks. The level of residual risk the organization is willing to accept is something the organization has already decided on, acted on, and deployed security solutions and procedures for. So when we connect to the Internet for cloud / IIoT or remote access services, we have to understand what new risks we add to that residual risk mix. And then we have to ask how much we will have to spend to change our security program to once again reduce all of our risks back to the point we've decided is acceptable. Cloud/Internet/remote connections increase risks materially. If we are not careful, the cost of reducing total residual risk back to the level we've decided is acceptable can exceed the efficiency benefits we hoped to gain from the new automation and connectivity.

c) The last question we should ask is: what alternatives are there to these very expensive new security measures? An alternative that many sites are deploying is unidirectional gateway technology between edge/ICS systems and cloud/Internet systems. Most of the time, all or nearly all efficiency benefits of cloud/Internet/remote connections come from data that flows OUT of the control-critical network. A unidirectional gateway supports that flow, and physically prevents any cyber attack from pivoting from the cloud/Internet/remote laptop back into the ICS target.

The bottom line: neither cloud/Internet connectivity, nor remote access, nor convenience are ends unto themselves. Nobody says "the top priority for my large industrial enterprise this year is increased connectivity." Connectivity is a means to an end. The end is efficiency.
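The outbound-only pattern described above can be sketched in a few lines. This is an illustrative simulation only: the names `replicate_out` and `ot_historian` are invented for the example, and a real unidirectional gateway enforces one-way flow in hardware, not in software.

```python
# Illustrative sketch of outbound-only data replication: selected OT data is
# copied OUT of the control network, and there is deliberately no code path
# that writes back in. Names here are hypothetical, not any vendor's API.

def replicate_out(ot_historian: dict, allowed_tags: set) -> dict:
    """Build a copy of whitelisted OT tags for the IT/cloud side."""
    # Only whitelisted tags ever leave the control-critical network.
    return {tag: value for tag, value in ot_historian.items() if tag in allowed_tags}

ot_historian = {"pump1.flow": 42.7, "pump1.setpoint": 50.0, "plc.firmware": "v2.1"}
it_view = replicate_out(ot_historian, allowed_tags={"pump1.flow"})
print(it_view)  # only the whitelisted tag appears on the IT side

# Tampering with the IT-side copy never propagates back into the OT data.
it_view["pump1.flow"] = 0.0
assert ot_historian["pump1.flow"] == 42.7
```

The point of the sketch is the absence of any write path: whatever happens to the IT-side copy, nothing flows back into the control network.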
Modern approaches such as unidirectional gateways give us the benefits of cloud connectivity without the security risks and the associated costs of material changes to security programs.

Compliance

Another long question had to do with compliance, arguing that compliance limits innovation, and that "by tightening the hands of developers, scientists and engineers, the company is limiting how far they can go and invent or discover new things."

For starters, one point I made in the webinar comes from an upcoming Industrial Security podcast with Suzanne Black of Network Security Technologies. She points out, rightly, that security programs are worthless without compliance. Compliance programs measure whether the business is doing what that business decided had to be done security-wise. If nobody complies with the security program the business so carefully created, then the program is not reducing risks, is it?

When it comes to security vs. innovation, I recommend episode #25 of the Industrial Security Podcast. The guest was Kenneth Crowther, a product security leader at GE Global Research. Kenneth fights the security vs. innovation fight every day - he works with engineers to embed security capabilities into GE products. His conclusion: you have to watch what's coming out of these innovators very closely to figure out when is the right time to intervene and start inserting security into their designs. Start too early and yes, you slow innovation. That means your competition comes out with product before you do, and you lose the first-mover advantage. But intervene too late, and it can take enormous time and effort to insert security into a design after the fact, again losing first-mover advantage. So yes, security slows innovation, but lack of security renders great innovations unmarketable. So compliance experts in innovative companies have to walk a fine line.
Security Program Cost

One last question: "Can Cyber Security Control System Programs be applied to companies in third-world countries which have critical SCADA systems but do not have the possibility of investing a lot of money, as companies in the United States or Europe do? Many of the brands related to cybersecurity do not see it as profitable to work with several countries, due to the size of their companies. What are your recommendations?"

There are a couple of answers to this question. One is that cybersecurity concerns only arise when industrial operations have been automated with computers - usually to reduce costs. My earlier point applies here: when organizations anywhere in the world deploy automation, no matter how much money they have, they need to look at benefits vs. costs. They need to compare the efficiency gains of the automation to the cost of deploying security programs strong enough to keep cybersecurity risks at an acceptable level. If an organization can't afford the security, well, they should reconsider deploying the automation.

That said, though, there is a real lack of advice out there as to how "poor" organizations can secure their systems. To help make progress in this arena, Waterfall Security has volunteered me to work with a government agency right now to put some advice together for small water utilities - ETA for the report is Q3/22. Even in wealthy countries, the smallest water utilities might have fewer than 5,000 customers, no IT people on staff, and certainly no industrial security people on staff. But these same utilities constitute critical infrastructures. I mean, if a hacktivist decides to "take revenge" or something on an unsuspecting population, which is a better target for making lots of people sick: a large, well-defended water system, or a couple of tiny, poorly defended ones? I won’t go into detail, but let me say that the principle of the report and its advice is the same as above.
If we want to avoid spending a lot of money on a security program that can maintain residual risks at a level appropriate to critical infrastructures, well then, we must be prepared to give up at least some of the least-valuable benefits of indiscriminate automation and its associated connectivity.

Further Reading

That’s probably enough for now. Anyone who would like to follow up with me one-on-one is welcome to connect with me on LinkedIn, or submit a “contact me” request at the Waterfall website. And for more information about Waterfall Security Solutions or Unidirectional Security Gateways, please do visit us at https://waterfall-security.com

  • Control System Cyber Security Books I'm Currently Reading

    By Derek Harp, (CS)²AI Founder, Chairman and Fellow. November 2021. During a recent presentation on the Key Findings of the (CS)²AI-KPMG Control System Cyber Security Annual Report 2021 at SecurityWeek's 2021 ICS Cyber Security Conference & virtual expo, I mentioned a few books that are on my nightstand right now. Some of you reached out for a list, so I have included links below to a few I am currently reading:

• SECURE OPERATIONS TECHNOLOGY by @Andrew Ginter
• SECURITY PHA REVIEW FOR CONSEQUENCE-BASED CYBERSECURITY by @Jim McGlone and @Edward M. Marszal
• COUNTERING CYBER SABOTAGE: INTRODUCING CONSEQUENCE-DRIVEN, CYBER-INFORMED ENGINEERING (CCE) by @Andrew Bochman and @Sarah Freeman
• CRITICAL INFRASTRUCTURE RISK ASSESSMENT: THE DEFINITIVE THREAT IDENTIFICATION AND THREAT REDUCTION HANDBOOK by @Ernie Hayden

  • Q&A Follow-Up with Rick Kaun: Navigating the New TSA Directive for Pipelines

    By Rick Kaun, VP Solutions at Verve Industrial Protection. October 2021. We hosted a (CS)²AI Online™ seminar on September 22, 2021 that focused on Navigating the New TSA Directive for Pipelines. Here is a bit about the event:

Navigating the new TSA directive for pipelines (and other future industry targets): lessons learned from a regulated industry. The recent increase in ransomware events, coupled with one of the targets being a large pipeline company, has compelled the TSA to issue a new cyber security directive. This means many OT organizations are now scrambling (some more or less than others) to stand up a multi-disciplined security program for a very diverse, distributed OT environment. This looks and feels a lot like what the Power Industry was confronted with when NERC CIP was first introduced, and so we, as security practitioners, can learn a great deal from an industry that has already run down this path. Challenges in understanding scope, standing up multiple security initiatives, organizational changes for responsibility, maintenance and response activities, and most notably day-to-day maintenance and compliance can be significant obstacles for operating companies to overcome. Join us to review a number of security learnings around setting up and maintaining an OT security compliance program, such as:

• A multi-disciplined approach is key - treating individual security tasks as silos will create gaps, increase effort and decrease efficiency
• Remediation is a key consideration - simply mapping vulnerabilities or enabling perimeter/network monitoring is just a drop in the bucket; you need to be able to reduce risk and attack surface as well as react to emerging situations
• Monitoring - as risk is reduced and new threats emerge, the current risk status is always in flux.
Being able to monitor and report on current status, changes to the threat landscape, or progress/compliance are key components of a sustainable program
• Automation - the more of these tasks and insights that can be automated, the better. OT staff is spread too thin, and traditional OT risk reduction approaches are far too manual to provide meaningful and consistent risk management

As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions.

*******************

QUESTION: What are the cyber risks facing Phasor Measurement Unit (PMU) devices in the power sector, and how can these cyber attacks be mitigated?

ANSWER: This is a very specific question that I do not have a very specific answer for, other than to suggest that ALL OT/IIoT devices are gaining much more scrutiny from both ‘good’ and ‘bad’ guys. Any class of ‘embedded’ asset (or asset in general) can be a target for anyone looking to do harm. The explosion of IIoT devices and connectivity in general has supersized our technical footprint, and therefore our security debt as well. Most IIoT devices rush to market with ‘problem solving’ and ‘ease of access/connectivity’ as primary promises to the consumer. Security is a distant consideration, if present at all. So we see many organizations wanting to stitch data together but doing so in a less than secure way. So to answer your specific question - what can be done? Work with your vendor to provide secure-by-design solutions.
Add data (if possible) from these systems and the networking gear connecting them to a SIEM or alerting tool set. Use this feedback to tune your endpoint deployment and incident response programs and, above all else, feed security language into your procurement language and projects, as well as back to the original vendors.

QUESTION: Awareness of Zero Trust and Secure Access Service Edge concepts is growing across industries. What parts of the TSA directives can be addressed through Zero Trust/SASE implementation (likely in the 2nd generation approach)?

ANSWER: Interestingly enough, the second TSA directive relative to cyber security explicitly calls out the need for Zero Trust. The specific ask is generic - they say ‘Implement a Zero Trust policy that provides layers of defense to prevent unauthorized execution’ and then point to specific details in the directive document. And Zero Trust, according to Palo Alto, "is not about making a system trusted, but instead about eliminating trust." What the end user needs to know is that this is a concept that can and should be applied to as many policies, procedures and technologies as possible. Specific examples include network segmentation and firewall rules, and remote access technologies and policies (i.e., implement two-factor authentication, minimize applications/permissions on the ‘jump server’, include logging and monitoring of that system, the subnet it is on and the logged-in users, etc.). This concept needs to be incorporated into every layer of security throughout an entire program, from inventory to protection, detection and response/recovery.

QUESTION: This is a great model and case study. What happens when the OT team and the IT team do not get along? I have seen client environments where the two teams do NOT collaborate: demarcation at the air gap. Tips on getting past that situation?

ANSWER: This is not an ideal situation but is unfortunately all too common.
Worst case, you can have a ‘negotiation’ whereby you do a RACI chart or an accountability matrix that shows who gets to (or has to) do/decide on what topic. For example, IT can monitor the landscape for vulnerabilities (i.e., what is in the wild) but just for identification. OT then needs to take the baton to map known risks to their assets and come up with a plan. IT can be expected to help understand/navigate the actual risk and possible compensating controls (i.e., what can we do if we can't actually patch?) but then OT needs to roll out the remediation. Ideally this type of forced collaboration *should* lead to a better working relationship over time as both sides get to know one another better. Maybe starting with an exercise to understand the other perspective would work too? We hosted a whole webinar about it here if you want to dig in more. The reality, though, is that you need organizational support, meaning senior management needs to identify, facilitate and, if necessary, support a collaborative environment. We wrote about that recently too, since so many organizations are currently examining their manual patch work efforts and wondering how to plan a better solution.

QUESTION: Any specific thoughts for medium to smaller companies who may not have nearly the budget of enterprise-sized companies? It can be a bit daunting on smaller budgets. Why do you state that moving IT to an OT security role will not work?

ANSWER: There are two questions in here! First: small and medium companies can absolutely participate without enterprise-wide budgets. Though I must put in the obligatory plug for educating the finance department that our mission is to ensure the safe, reliable, expected operation of the facility - the very facility that makes the company money. We can get money for spare equipment (sometimes very expensive equipment) because it relates to uptime. Why can't we see the same correlation to cyber? Anyway.
Small/medium companies can be smart and build automated tools into a multi-phased approach over multiple budget cycles. They can also find a growing list of professional (managed) services that cater specifically to OT. These options help to move security along in increments without breaking the budget. Really what you need is a view of what you are trying to achieve (i.e., what does ‘done’ look like) and then to break that journey into multiple, achievable phases. And where you can invest up front in tools that provide returns on multiple fronts, or that introduce OT-safe automated remediation and management, you really get the benefit. Remember, a technology without an owner is a waste of money!

Second question: I did not necessarily say that moving IT to an OT security role would not work. What I wanted to point out was that both disciplines (IT and OT) are very specialized and can be very technical. To expect a single person to be an expert at both is asking too much. When I built my team at Matrikon I only had traditionally trained IT types to draw from and had to teach them the basic nuances of OT. Essentially: do no harm, and do not touch unless OT says you can. My point was simply that if you want the best possible solution you are better off pairing an IT expert with an OT expert to work together towards an OT-safe program.

QUESTION: How have OT organizations reacted to the TSA Cybersecurity Directive? Acceptance? Reactive?

ANSWER: The answer to this is yes - all of the above and some others in between. Like most regulatory initiatives, there are leaders and laggards. A select few are looking at this with acceptance and building appropriate responses.
(Read: not just a knee-jerk reaction to tasks like “Within 120 days, implement and complete a mandatory password reset of all passwords within OT systems, including PLCs”.) While this does need to be done, it is not a one-time task (or at least not in the context of building a repeatable, sustainable program); it is part of what should be a new policy/procedure. And to incorporate this, along with many other existing or pending controls, the opportunity is now to build a proper maintenance program. Unfortunately some do take the approach of ‘I don’t want to be first, I don’t want to be last - I just want to put this behind me and move on’.

QUESTION: What are your thoughts on the deployment of NERC CIP versus the Pipeline Guidelines? Do you see any possible lessons learned? Specifically the prescription of security functions.

ANSWER: I indeed do see many parallels. And the biggest challenge NERC CIP had (and pipelines likely will have) is in maintenance. (There are a host of other challenges NERC CIP had around enforcement, language, in- or out-of-scope, etc. We saw large power companies physically and logically split their generation units into smaller, independent units to avoid megawatt thresholds!) But the single biggest lesson the pipeline sector needs to learn is that whatever path they choose to meet these requirements, they really need to build in a maintenance plan. Expecting already overburdened OT staff to add inventory, user, software verification and patch/remediation tasks to their list is not sustainable. Automated tools, management support and budget are all fundamentally required. Just like a safety program!

QUESTION: Is it advisable to have an in-house central team or a third party?

ANSWER: In-house is good if you can afford it and find the team. Third-party expertise and offerings continue to grow, so they are a good option if that is a better fit for your organization.
However, be sure to articulate in your procurement and contract language exactly what is or is not being provided, and know that in the NERC CIP world, in-house or contracted, mistakes or oversights are the fault of the owner/operator.

QUESTION: What is Rick's take on alternative measures in Section 5 of the directive, and how can we use that if we can't meet the intent of a directive?

ANSWER: Alternative measures at first glance appear to me to be what NERC CIP built in as ‘Technical Feasibility Exceptions’. Essentially this is a ‘we can't technically do this on this specific asset’ exemption. Usually you need to prove what you can't do and why, for example legacy or low-functioning networking gear not being able to provide logging data. The second thing these types of exemptions require is a plan/path to remediate these exceptions at the next possible opportunity. You also need to submit what you are doing as compensating controls in lieu of the intended directive. So if you can't patch a system for a specific risk but you can block the port/service it targets upstream at a firewall or layer 3 switch, then that needs to be submitted as an alternative.

QUESTION: What is the consequence of non-compliance with TSA directives for owner/operators?

ANSWER: At the moment there is not a formal fine/violation/audit function for pipelines, but given that the TSA is already a regulatory body and has other powers, it won't be long before some form of enforcement is added to the mix.

QUESTION: How can due care and due diligence on the TSA directives help an owner/operator that is not able to fulfill all directives but has a program in place to fix that?

ANSWER: I think this is an extension of the question about alternative measures listed above. In general, more regulatory programs are looking for you to take the intent and do your best to achieve it.
Specific, granular tasks/technologies are not typically prescribed, but doing nothing due to a lack of budget or support is not an option. Those who show due care and due diligence, and tie existing measures as well as alternate measures back to directive specifics, should be in a pretty good spot to pass a ‘compliance’ check. QUESTION: My question is on the interpretation of TSA fines: if we show due diligence in following TSA directives but are not able to fulfill all of a directive, what is Rick's take on that? ANSWER: Again, this is about alternate measures. What I can say from what I have seen with NERC CIP is that if your organization is doing all that is reasonable (meaning you do free up someone’s time to change all passwords, or to research, purchase and implement multi-factor authentication technologies) and you have a timeline for completion, you should be OK. What is not likely to be well received is if you were to try to defer multi-factor authentication (as an example) to Q1 or Q2 of next year because there are immediate operational projects underway. They have given direction and timelines, and only technical limitations are likely to be acceptable excuses for non-compliance. QUESTION: Does Rick envision that the TSA will issue more directives, taking a cue from NERC CIP, so that it becomes a second-generation compliance program from the regulator? ANSWER: Crystal-ball type of question here. My bet is yes. My sense is the TSA is merely starting with this specific set of directives. You can see within them (like the documentation of privileged accounts) that there are requirements for regular review. This is indicative of the expectation that pipeline companies need to be building a repeatable program. Not just change passwords and add multi-factor authentication once and never look back. QUESTION: Patching timelines are very aggressive at 35 days for testing and 35 for implementation of security updates on OT assets; how does Rick see that being achieved, keeping operational disruption/outages in mind? 
ANSWER: For NERC CIP it is a bit more nuanced than that. And not everyone even patches. If you dig into the details of the NERC CIP language it can get complex, but the general process is as follows: within 35 days of a patch being released, you need to first review it for applicability and assess its potential impact on your specific organization. You then have another 35 to either deploy it or deploy compensating controls in lieu of the patch. All of this needs to be documented. Now, some NERC CIP entities deferred all patches and pointed to strong network controls (i.e., data diodes, physical security, etc.) to justify the lack of patching, but that is not a long-term view (in my honest opinion). QUESTION: What is the importance or the place of OPC UA for systems and security convergence across the industry? ANSWER: Any security tool or standard is welcome and would be an improvement. And historically, OPC broke a lot of security by virtue of its function of tying together otherwise unconnected systems. However, as technology continues to be expanded upon and proliferates within operating environments, technologies like OPC or IIoT appliances and apps absolutely need to be selling their virtues alongside their security capabilities. For all of you owner/operator types reading this: PLEASE put minimum compliance security language into your procurement language and your project specifications going forward. Otherwise the minimum-compliant functional bid will be cost-effective but not likely secure. QUESTION: There is no one-size-fits-all "standard" solution to security, so what does regulation do except add change to an already difficult job? ANSWER: Great question. And all security practitioners always say ‘compliance is not security’. I am not in favor of ‘the stick’ and would prefer ‘the carrot’ myself, but the challenge is that many organizations just do not take security seriously. So does regulation help get more organizations moving towards improved security? I hope so. 
But what it can do is at least provide a guideline as to what they *should* be doing within security. Again, it is not ideal, but I have been doing this for 20 years now, and for every 1 proactive security organization I see, I see 3 that are half as good, 3 that have not started but are dipping their toes in the water, and another dozen who have not and will not do anything. (Until they are hit by ransomware, perhaps?)
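The two 35-day windows described in the patching answer above (review for applicability, then deploy the patch or compensating controls) can be tracked mechanically. A minimal sketch; the function and field names are illustrative assumptions, not part of any actual NERC CIP tooling:

```python
from datetime import date, timedelta

# Hypothetical deadlines per the general process described above:
# 35 days to assess a released patch, then another 35 to mitigate.
REVIEW_WINDOW = timedelta(days=35)    # review applicability and impact
MITIGATE_WINDOW = timedelta(days=35)  # deploy patch or compensating controls

def cip_deadlines(patch_released: date) -> dict:
    """Return the assessment and mitigation deadlines for a released patch."""
    assess_by = patch_released + REVIEW_WINDOW
    mitigate_by = assess_by + MITIGATE_WINDOW
    return {"assess_by": assess_by, "mitigate_by": mitigate_by}

d = cip_deadlines(date(2022, 1, 1))
print(d["assess_by"])    # 2022-02-05
print(d["mitigate_by"])  # 2022-03-12
```

Feeding every vendor patch-release date through a tracker like this, and documenting the outcome of each window, is one way to turn the requirement into the repeatable program the answer calls for.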

  • BOOK Synopsis: Security PHA Review for Consequence Based Cybersecurity- Jim McGlone & Edward Marszal

By Jim McGlone, Chief Marketing Officer at Kenexis Consulting Corporation, co-author of Security PHA Review We are looking forward to hearing from Jim McGlone on September 15, 2021 at the OT Cyber Risk - Taking it Down symposium at 1:00PM - 5:00PM EDT. During a cybersecurity project on an oil refinery in Europe, the ideas used in this book became a reality. The project was being led by IT-focused people, and the industrial control systems work was subcontracted to our company. Everyone on the project wanted to do what they knew how to do, and I recognized that the safety functions and the industrial control system were at great risk. Trying to explain to them that they were focused on the wrong thing was frustrating. Local engineering didn’t want us there, and the control systems were all cross-connected: essentially a flat network for an entire refinery and all the auxiliary processes. This facility couldn’t even shut its blast doors, so it was frustrating. What really pointed at the problem was the traditional cybersecurity risk calculations. The team wasn’t concerned about equipment that could blow up or emit toxic gas. Meanwhile, there were three different vendors remotely connecting to the power generation equipment alone, and no one knew it until we found the modems. At least one more vendor was connecting remotely to the DCS network too. During several calls back to the other author, Ed pointed out that the risk calculations should already have been completed for the safety functions in their PHA or HAZOP. The local engineers gave me a PHA from 1969. This is how we got to the ideas that are conveyed in the ISA book Security PHA Review for Consequence-based Cybersecurity. By focusing on the possible consequence of a loss-of-control scenario, we were able to determine whether the cause and safeguards for that scenario were vulnerable to cyberattack. 
If they were vulnerable, we devised a method to determine the Security Level – Target (SL-T) from the ISA/IEC 62443 standard, or to devise an alternative safeguard to prevent the attack from causing the consequence.
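The consequence-based screening described above can be illustrated in rough code form. This is only a hypothetical sketch, not the book's actual method: the scenario names, the severity scale and the per-safeguard "hackable" flag are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One PHA/HAZOP scenario: a consequence guarded by safeguards."""
    consequence: str
    severity: int  # illustrative scale, e.g. 1 (minor) .. 4 (catastrophic)
    safeguards: list = field(default_factory=list)  # (name, hackable) pairs

def screen(scenarios):
    """Flag scenarios where every safeguard could be defeated by cyberattack,
    so an SL-T (or a non-hackable alternative safeguard) must be assigned."""
    return [s for s in scenarios
            if all(hackable for _, hackable in s.safeguards)]

pha = [
    Scenario("Vessel overpressure", 4,
             [("SIS high-pressure trip", True), ("BPCS alarm", True)]),
    Scenario("Tank overflow", 2,
             [("Mechanical relief valve", False)]),  # not hackable
]

for s in screen(pha):
    print(s.consequence)  # only "Vessel overpressure" is flagged
```

The point of the screening is visible in the example: the overflow scenario drops out because a purely mechanical safeguard breaks the cyberattack path, while the overpressure scenario, guarded only by programmable systems, stays in scope.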

  • How do We Knock Down OT Risk? Authors Unite at (CS)²AI Symposium Sept. 15, 2021

By Derek Harp, (CS)²AI Founder, Chairman and Fellow September, 2021 Of course, we all want to mitigate risk in our environments. It goes without saying. However, HOW we do that does not. There are so many different products, services, approaches, guidance documents, regulations and frameworks. Some are broad and some are tailored to specific types of asset owners and operators. And then we have to ask ourselves, “Is what we have been doing working?” Are we effectively mitigating or “knocking down” the risk to our OT systems? It is believed that Albert Einstein said, “Insanity is doing the same thing over and over and expecting different results.” Not to paint that broad a brush stroke against all that we are doing, as clearly there is far more new work to be done than just repeating the old. However, there are old methodologies and thought processes that plague our consciousness and leak into our plans for improving cyber security. As we prepared for our next Symposium focused on cyber security risk to operating technology, an idea emerged to bring together authors who are writing about consequence-based cybersecurity methodologies that we all can learn from. These are methodologies unique to OT networks and physical operations – approaches that don’t make sense on enterprise networks or in the cloud, and approaches that are robust even in the face of a constantly evolving threat landscape. I personally am fired up to learn from Andrew Ginter, author of Secure Operating Technology; Andy Bochman and Sarah Freeman, authors of Countering Cyber Sabotage (Introducing Consequence-Driven Cyber-Informed Engineering); and Jim McGlone, co-author of Security PHA Review (for consequence-based cyber security). Each of these authors is collaborating to make this (CS)²AI Symposium a valuable education opportunity by opening our minds to new ways of thinking about HOW we address our collective OT cyber security challenges. For me, adding even more industry veterans and true pioneers, Dr. 
William (Art) Conklin, Bryan Owen, and Mark Fabro to an event closeout panel at the end of the day is icing on an already great cake. I think about the years that some of these very people have been working on the unique challenges of cybersecurity in operating technology systems, and I am in awe of the persistence I know it required of them. We are only just now entering a time where a broader segment of industry and business leadership is taking the threat to OT systems seriously. Now that this is occurring, HOW we go about mitigating risks or “taking them down” is everything. Per our mandate and commitment to support the entire control system cyber security workforce everywhere we can, this event has no cost, and due to the generous support of our Symposium title sponsor, Waterfall Security Solutions, we are able to give away a copy of each of these authors’ books to 12 winners who participate in our Quality Question submission raffle the day of the event. In addition, this time we are also able to give each of the first 400 attendees to register a copy of Andrew Ginter's book Secure Operating Technology, a super useful pen, and a practical gift that I think everyone will find useful instead of taking up space we don’t have on our desks 😊 Stay safe and be well, my friends and colleagues, Derek Harp

  • Q&A Follow-Up with Mark Bristow: Developing & Leading a Top ICS Incident Response Team

By Mark Bristow, Branch Chief, Cyber Defense Coordination (CDC) at the Cybersecurity and Infrastructure Security Agency (CISA), (CS)²AI Fellow August, 2021 We hosted a (CS)²AI Online™ seminar on August 11, 2021 that focused on Stop Tomorrow's Crisis Today - Developing and Leading a Top ICS Incident Response Team. Here is a bit about the event: Incident response can be one of the most challenging times a process may face. The key to success is pre-coordination, preparation and training. (CS)²AI founding fellow Mark Bristow will take you through strategies for setting up and training your ICS incident response capability to make sure you are ready for this challenging day. With the right staffing model, incident response plan, pre-arranged internal and external partnerships, pre-built mitigation strategies and the right frame of mind, responding to an OT cyber incident can be effectively managed. Mark has worked on hundreds of incident response efforts impacting or threatening process control environments in his long career with CISA’s Threat Hunting teams (formerly ICS-CERT). Speaker: Mark Bristow is Branch Chief, Cyber Defense Coordination (CDC) at the Cybersecurity and Infrastructure Security Agency (CISA). He previously served as Director of the US Department of Homeland Security's (DHS) National Cybersecurity and Communications Integration Center (NCCIC), responsible for the incident response efforts of the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) and the United States Computer Emergency Readiness Team (US-CERT). As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. Do you want to have access to more content like this? 
Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. ******************* On August 11th, 2021, (CS)²AI hosted me in a conversation about developing and leading incident response teams. In the webinar we covered how incident response can be one of the most challenging times an organization may face. We covered some keys to success and strategies for setting up and training your ICS incident response capability to make sure you are ready for this challenging day. You can see the presentation in full in the member library (https://www.cs2ai.org/member-resource-library), but we had far more questions than we had time to answer. We decided to take some of the best questions we were not able to get to and post them in blog format to continue this important conversation. QUESTION: Can you have Mark comment on his guidance on incident disclosure - what MUST be disclosed, what might need to be disclosed, and what might be able to be kept internal? There is an intersection of ethics and business reputation that may need to be dealt with, and also specific compliance requirements depending on the client - i.e. a federal client vs. a commercial client. ANSWER: This is a really complicated topic and there are no “one size fits all” solutions. Each organization needs to decide internally what, if anything, it is required to disclose, what it plans to disclose, and to whom. Business and reputational risk must be carefully weighed and agreed on by management. Not having a pre-built communications plan for common incident scenarios is one of the biggest mistakes I see organizations make, and trying to build such a plan during an incident leads to rushed and not fully contemplated decisions. In my view, more disclosures are needed and details of incidents need to be shared with the broader community. 
Too often, organizations are hit by similar activity because others have not shared their experiences. This is where disclosure to external organizations like the Cybersecurity and Infrastructure Security Agency (CISA) or an Information Sharing and Analysis Center (ISAC) can be really helpful in communicating the relevant tactics and observables in a way that keeps the identity of the organization confidential. Ultimately, many incidents go undisclosed. I recently completed a survey of OT/ICS practitioners for SANS where this topic was explored more in depth. Most respondents indicated that they had an incident in the last 12 months, with many organizations having multiple incidents with operational impacts. Outside of this and (CS)²AI’s own annual surveys and reports, I’m not aware of many of these incidents being publicly acknowledged or disclosed. Until we get over the stigma of reporting, and view public incidents with empathy and understanding instead of snide remarks and scorn, incidents will continue to be under-reported, and future victims will continue to not benefit from the hard-earned lessons of those already impacted. QUESTION: Do you have suggestions for practice lab environments? ANSWER: Building out an ICS lab is an important step in training and maturing your incident response team. The best plan for building out a lab is to use equipment representative of your current, as-built process so you can test how your response tactics will perform on the equipment you have in the field. I like to recommend taking virtual copies of the Windows-based systems in your ICS and standing them up in a virtualized environment to aid in your lab setup. While the system will not have all of the expected IO for the process, this will provide a mostly realistic environment for how an adversary may view your process control network and allow you to test detections without the investment of setting up an entire mirror process. 
If your environment isn’t yet ready to build out a fully replicated lab, you can work with external organizations such as the ICS Village to leverage labs already built in the community and accelerate your lab buildout. QUESTION: How can we build relationships with other security professionals in order to share intel? ANSWER: This is definitely a key challenge faced by many and can be solved in a few key ways. The first is to always ask questions. This may be tough for many, but you’ll find that even when approaching ICS cyber “legends,” almost all are willing to support you and help, so don’t hesitate to reach out. Anyone not willing to help out or answer a reasonable question isn’t really worth YOUR time anyway. Joining communities like (CS)²AI is also a great way to meet others of like mind in the industry. Conferences and events can really help build your network of trusted partners as well, even while most are currently virtual. The best advice is to not be afraid to ask questions of others and to be receptive to others when you are asked. QUESTION: What changes would you recommend specifically to a corporate cyber incident response plan to ensure it has the correct inclusions for OT? ANSWER: If you have a corporate IR plan, you are off to a great start! The next step is to ensure that OT considerations and operations are reflected in the plan. I like to run a tabletop exercise (TTX) with the IT IR team and a few people from operations. It’s best to use whatever is in the news recently, so ransomware is a perfect scenario. Try running the IT plan on an incident impacting OT and identify the gaps that emerge (there should be a few). Then take a look at any safety plans and emergency plans that the OT team already has on the shelf, and evaluate how the new plan can incorporate and complement the existing safety plans. Finally, add some OT-specific scenarios to your overall IR plan to ensure that when you have an incident you are not starting from a blank page. 
QUESTION: What recommendations do you have for performing red team exercises? ANSWER: Red teaming is an important step in the maturity of an ICS IR team; however, to be effective it requires a level of maturity from the IR team and a red team that is proficient in emulating adversary behavior in an OT network. Make sure you have clearly defined exercise goals and rules of engagement, and that you’ve accomplished some prerequisite actions: a comprehensive, continuous asset identification program, a developed collection framework to support your ICS security monitoring, correlation of security monitoring with process data, integration with IT monitoring, and a robust risk management program. Without these, emulating a high-order adversary will not be particularly effective, and resources would be better invested in ensuring fundamentals and security postures are prepared for such an exercise. Finally, for an effective exercise it’s best to conduct it in an environment that replicates your production control system network; some tips were covered above (Question 2) when addressing building a lab environment. QUESTION: Do you have (or can you point me to) a standard/document that can be used to design the turtle mode you described? Also, do you have an estimate of what percentage of the industry is already using this technique? ANSWER: I’m not aware of anything out there that describes “turtle mode,” but I know other ICS security experts have a similar concept in their repertoire. Developing a defensible cyber position is really something that needs to be tailored to the specific process, threat model, and management decisions on acceptable impacts from a defensible position. In the end, “turtle mode” is about having a plan to temporarily reduce your cyber risk surface area while minimizing process impacts. Some key things to consider are: 1) What connectivity can you temporarily disable safely? 
2) Can you disconnect all pathways to the internet from the ICS? 3) Are there functional elements that might be able to be temporarily disabled (perhaps with an impact on efficiency)? 4) Can you temporarily limit the accounts that access the ICS from on or off site? 5) Can you temporarily move to a secure out-of-band communications framework? 6) Can you temporarily limit process changes to ensure your baselines are validated? QUESTION: Where do you see plans fall apart the most? Is it right at the start, or is there a common point along the plan execution where you see it fall apart the most? Bit of an abstract question, but any insight would be great. ANSWER: There are two areas where I see organizations stumble: communications and experience. Most IR engagements begin to fall apart when it becomes necessary to communicate with partners, customers or the public, and few plans have a communications plan as a key part of the IR plan. Every organization believes that all aspects of an incident can be handled either “in-house” or with existing resources; this is almost never the case for an effective response. Finally, often motivated by the above, organizations attempt to leverage only internal resources without the needed expertise and experience to handle an incident. This often leads not only to the response not being completed correctly, but also to the destruction of evidence or unnecessary impact to the process/business. It’s OK to admit you are in over your head; organizations who do this well know what they can handle in-house, know who they can call when they can’t, and have the self-awareness to know the difference. QUESTION: When looking for partners, what are the key skill sets that you feel are hardest to find? ANSWER: Find partners who are strong where you are weak. If you have a really great host analysis team, make sure your partners complement you with strong network analysis capabilities. 
Perhaps you have a really mature team that doesn’t have the time to build and maintain an ICS testing lab. The community has really grown over the last few years and there are some really great organizations out there who can fill a lot of gaps. The hardest part is making an honest assessment about your capabilities and where you need to reach out for help.
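The six "turtle mode" questions above lend themselves to a pre-built checklist, decided on with management before an incident rather than during one. A hypothetical sketch; the action names and the pre-approval flags are illustrative assumptions, not a standard:

```python
# Hypothetical "turtle mode" plan: each action from the list above, paired
# with whether management has pre-approved its temporary operational impact.
TURTLE_MODE_CHECKLIST = [
    ("Disable non-essential external connectivity", True),
    ("Disconnect all internet pathways from the ICS", True),
    ("Disable efficiency-only functional elements", False),
    ("Restrict ICS access to a minimal set of accounts", True),
    ("Move to secure out-of-band communications", True),
    ("Freeze process changes to keep baselines validated", False),
]

def plan(checklist):
    """Split actions into those pre-approved for immediate use and those
    still awaiting a management decision on acceptable impact."""
    ready = [action for action, approved in checklist if approved]
    pending = [action for action, approved in checklist if not approved]
    return ready, pending

ready, pending = plan(TURTLE_MODE_CHECKLIST)
print(len(ready), len(pending))  # 4 2
```

The value is less in the code than in the exercise: every "pending" item is a conversation that is far cheaper to have before an incident than in the middle of one.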

  • Q&A Follow Up with Jules Vos: Deciphering the Value of Zero Trust & CARTA in Operational Technology

By Jules Vos, Head of OT cyber security services - NL at Applied Risk - Critical Infrastructure Made Secure August 2021 We hosted a (CS)²AI Online™ seminar on August 26, 2021 that focused on Deciphering the Value of Zero Trust & CARTA in Operational Technology. Here is a bit about the event: IT and OT are increasingly becoming one and the same entity, and are approaching a common set of business goals and objectives for the future of many industries. Driven by the rise of the Industrial Internet of Things (IIoT), Industry 4.0 and new business opportunities presented by digital transformation, many organizations in the energy sector are already entering the IT/OT integration journey and embracing the benefits as well as the risks associated with such business models. This integration introduces new dynamics, especially for IT and OT cybersecurity teams, and a consolidation of responsibility for strategy. In light of emerging cyber threats, the need for a proven security approach beyond traditional defense in depth is becoming a necessity for many organizations. Modern concepts that have been gaining traction over the last few years are Forrester’s Zero Trust model and Gartner’s Continuous Adaptive Risk and Trust Assessment (CARTA). • What are Zero Trust and Continuous Adaptive Risk and Trust Assessment (CARTA) in OT? • Why are these models a game changer for OT? • What are the key benefits, and how do you embrace this journey for your OT? • Case study: applying zero trust to include IIoT and OT at a major energy company Speaker: A forward-thinking industrial cyber security expert with over 30 years of outstanding experience in engineering, consulting and mastery of industrial automation. With a hybrid skill set in detailed control system engineering (DCS/SIS) and consultancy, Jules Vos has been involved in a number of complex oil and gas production and power generation environments, in addition to cyber security and standardisation processes. 
Jules is an ICSJWG panel member and has collaborated closely with EUROSCSI and the Dutch NICC cyber security initiatives. As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. ******************* To view the full recording of his talk, please visit https://www.cs2ai.org/sponsored-seminars. Question: You state OT monitoring is important. Do you have experience with these tools, and how would you connect them to the SOC? Answer by Applied Risk: Basically, any feasible kind of monitoring should be considered. The more monitoring the better. Options are: 1) Use vulnerability management tools to catch anomalies, like Nozomi, Claroty, CyberX, Forescout, Tenable, etc. These tools can integrate (at the API level) with SIEM solutions (LogRhythm, Splunk, ArcSight, etc.). 2) Collect syslog files from devices such as network devices (firewalls, routers) or non-Windows machines like Linux. 3) Collect Windows log files using WMI/WEF/WEC types of solutions. 4) Use smart attack-path vulnerability scanners to find blind spots or network configuration weaknesses (Skybox, Tufin). These tools are not OT-specific but are commonly used in IT. Because the path from the internet to OT often runs through IT networks, they may be useful. These tools often utilize firewall monitoring consoles as well, like Palo Alto Panorama, etc. 5) Virus scanner (McAfee, Symantec) orchestrators (e.g. ePO) integrated with SIEM. Question: Thanks for hosting the event; if we are interested in learning more, how would you recommend we do so? Books, videos, etc.? 
Answer by Applied Risk: There is a lot to be found on the internet about Zero Trust: Gartner CARTA, Forrester ZT, and Microsoft ZT articles. Many solution providers also have insightful articles about the subject. Question: A key balance in cyber security is enabling the business to continue to do business. Does ZT reduce the ability of a business to do business, and if it does, how do you limit that impact? Answer by Applied Risk: This is a very good and fundamental question. Cyber security must enable the business and help business continuity. So, any measures taken shall not be a blocker. Of course, measures like strict identity management, meaning giving up the commonly used ‘OT group accounts’, may be perceived as annoying or a blocker. But they shouldn’t be. Identity and access management shall be architected in such a way that it makes the use of named users easy for the business. Next to that, awareness sessions shall be held for the business to explain the rationale behind zero trust and behind identity management. I haven’t come across a street where everybody is using the same front door key, so why would this approach be acceptable for critical production processes! Question: In what ways can a Zero Trust architecture simultaneously improve security through more authentication/authorization/observation/intervention, yet at the same time potentially reduce security through the acceptance of riskier endpoints whose risk is underestimated? Answer by Applied Risk: One of the key elements of zero trust is strong segmentation. If riskier endpoints means, for example, obsolete ones (e.g., Windows XP), they will be put in a separate segment to manage access best. A fundamental part of any architecture is to understand the risks and to design against business objectives, business criticality and the technical capability of components in that architecture. Question: How do we ensure engineers agree to zero trust tactics, i.e. 
provide them enough comfort that it will not interfere with essential or critical functions (safety, loss of control, etc.)? Answer by Applied Risk: First and foremost: ZT doesn’t introduce new solutions in the OT per se. It improves the use of existing features like identity management. Monitoring solutions are all proven in use, so they are not new either. Engineers in general lack deep cyber security knowledge and cyber security risk understanding, so education is the first step to take. Next, solution designs have to be developed collaboratively. OT engineers are very disciplined in how they design hardware (e.g. cabinets, auxiliary rooms), safety controls (although the safety system key switch is often not well managed) and sometimes also physical access. But in general, they are far too relaxed when it comes to digital access and controls, because the risks are not well understood. This needs to change. Question: Do you see Zero Trust as something you achieve and are done with, or is it a constant journey that will continue to change? Answer by Applied Risk: ZT will definitely be a journey. The principle will remain the same; however, the solutions will develop rapidly. Maybe in the future, based on these principles, we will move to self-controlling connections between devices, so network controllers (like firewalls) become redundant. That would be the ideal world. Question: What are the unique challenges of the industrial segment regarding the adoption of zero trust and CARTA? Answer by Applied Risk: The introduction of stricter identity management in the OT and integration with the corporate identity management system (based on a zero trust connection) will be the main challenge. Question: Can zero trust be applied to existing legacy systems? If yes, can you share your best practices? Answer by Applied Risk: Yes, it definitely can. Applying network segmentation is one thing. Identity management can be applied by implementing named users in each and every system. 
So get rid of group accounts as much as possible. Next, extract user IDs from these systems (manually or in an automated way) to a central management system and manage identities from there. Preferably incorporate this data into the corporate identity system, but if this is not yet feasible, manage identities in the OT, so that if people move or leave the company, the identity can be removed from OT systems. Next to that, network vulnerability monitoring can already be applied (see the earlier reply, including potential solutions). Question: Do you think all the cloud providers in the market today think about ZT and CARTA? Any examples you can share of those initiatives? I know of Microsoft working on Defender XDR. How about AWS and GCP? Answer by Applied Risk: Yes, they do; however, I don’t have specific examples. Please keep in mind that ZT is very much about how we as end users design our IT and OT. Basically, all solutions are available; however, as end users, we have to architect ZT to our needs. Question: Do you see a prioritized list of areas to start with? Network segmentation vs. user identity (ACLs, least privileges), etc.? Answer by Applied Risk: It may have become clear from my previous answers that my focus is very much on architecting strong identity management and network segmentation. Identity management is complex and requires a lot of time to design the right solution. So don’t underestimate this, but it is fundamental and really needs to be protected against current cyber threats. Question: How often are manufacturers' devices checked for security and spying? Answer by Applied Risk: Manufacturers continuously update their vulnerability databases and check devices for new vulnerabilities. However, it is up to end users to ensure that continuous monitoring and remediation of vulnerabilities, and measures against new threats, are being executed. The governance and evergreening are the end user's accountability, and contracts with suppliers need to be managed. 
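The account-reconciliation step described above (extracting user IDs from OT systems and managing them centrally) could be sketched roughly as follows. All account names, and the `grp_` prefix used to spot shared group accounts, are illustrative assumptions:

```python
# Hypothetical sketch: reconcile locally defined OT accounts against a
# central identity list, so leavers/movers and shared group accounts
# are caught and can be removed from OT systems.

def reconcile(ot_accounts, central_identities):
    """Return (orphaned, group_accounts): accounts unknown to the central
    identity system, and shared group accounts slated for elimination."""
    orphaned = sorted(set(ot_accounts) - set(central_identities))
    group_accounts = sorted(a for a in ot_accounts if a.startswith("grp_"))
    return orphaned, group_accounts

ot = ["jsmith", "grp_operators", "avries", "pjanssen"]  # extracted from OT
central = ["jsmith", "avries"]                          # central identity list

orphaned, groups = reconcile(ot, central)
print(orphaned)  # ['grp_operators', 'pjanssen']
print(groups)    # ['grp_operators']
```

Run periodically, a comparison like this approximates the "manage identities in the OT" fallback until full integration with the corporate identity system is feasible.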
Question: Does Zero Trust also apply to IoT, and more specifically to the Edge Computing model? Answer by Applied Risk: Absolutely. If IoT is really used as IoT, meaning a direct internet connection into the cloud, ZT and strict identity management are essential. Remember that an IoT device is also an identity that needs strict management. Question: Could Zero Trust be applied to brownfield OT (given its rigid and conservative nature), or is it only applicable to new deployments? Answer by Applied Risk: Definitely. Segmentation, strict identity management, vulnerability monitoring, and logging and integration with a SOC/SIEM, as examples, have been done in many brownfield cases. These activities are not easy and need comprehensive design and preparation, but they are critical and necessary. Question: Are you familiar with the Open Process Automation alliance, and how suitable would you consider that to be for a Zero Trust implementation? Answer by Applied Risk: OPAF has adopted IEC-62443, so I don’t see an issue with OPAF and Zero Trust. Remember, Zero Trust is very much about architecting a ZT OT using existing technology. ZT is not a technical ‘solution’ but an architecture and a way of working and managing your OT. Question: Operationally, if the business does not have a SOC operation (as you mentioned already), how would operations staff react to cybersecurity issues even if monitoring tools generate an alert? Who on the shop floor can understand the alert and decide how to react? As security tool developers, we need to understand our audience to develop appropriate tools. Answer by Applied Risk: Companies and OT management need to understand that investments in OT cyber security are inevitable. Incident and event management procedures and processes need to be developed, amongst many other things. 
It could help to develop smart tools that help shop floor users or OT cyber security focal points, if no SOC is implemented, run effectively through the incident/event management process, also providing guidance on what to do to reduce the impact as much as possible, so that you don’t need to be a specialist to contain an incident as quickly as possible. Question: Is it possible to do threat evaluation by correlating OT environment risk with user risk? If yes, any recommendations? Answer by Applied Risk: This is something for the future, I think; I haven’t seen it implemented yet. But yes, CARTA, for example, is based on the adaptive risk principle, meaning that users in certain environments or circumstances can be trusted more or less, so they get more or fewer privileges. This approach will definitely be developed further in the future, and may replace or enhance role-based access control (RBAC) or attribute-based access control (ABAC), both of which are difficult approaches to implement and maintain. Question: Can the process adopted for a Zero Trust strategy in IT be used or referenced while defining one for OT? Answer by Applied Risk: I don’t fully understand this question. IT and OT must collaborate and further integrate. OT continuing to stay independent and ‘isolated’ will degrade security levels instead of protecting the OT. Digitalization is moving fast, and so does the need for OT data and optimization. There is much in common between IT and OT despite the substantial differences. So combining efforts and architecting a zero-trust-based integrated IT-OT is the way forward, in our opinion. Question: Is it true that OT is changing fast, with greater IoT online and in the cloud? I think Colonial Pipeline (CP) was a perfect example of why OT should attempt to maintain some isolation! By isolation, I don't mean complete disconnection, but isolation plus zero trust operation seems to be key. 
We want access to monitor and do supervisory control, but not a lot of IoT operations moved online into clouds. And, in my opinion, the idea of digital twins hosted in the cloud is :-p. Answer by Applied Risk: Colonial Pipeline clearly was not an OT issue; failing billing (office IT) systems forced the company to stop production. Zero Trust indeed is the way to go, in my opinion, when it comes to integrating IT and OT. It means that nothing is allowed unless explicitly approved. This applies to all OT elements, including the inevitable cloud integration. IoT, but also specific OT functions like advanced process control, as well as services like condition monitoring, will be cloud-based more and more. Question: Preventing lateral movement is important, but we have to develop technology to prevent exfiltration in general. There are too many ways for botnets to do command and control from the Internet once a quorum of IoT devices has been co-opted. We've seen this especially with router devices (some of which have been in operation way beyond their expected lifetimes). Old hardware/software solutions are a concern when they've been networked but are not being updated against attacks. Answer by Applied Risk: Fully agree. This is why end users must develop a comprehensive governance framework and operating model to manage compliance and devices and keep the OT evergreen. Cyber security must be a part of daily operations. ANSWERS PROVIDED BY:

  • Q&A Follow-Up with Peter Lund & Chris Duffey: Why Hasn’t SOAR Taken Off in ICS?

By Peter Lund, Vice President of Product Management at Industrial Defender and Chris Duffey, OT/ICS Specialist at Splunk We hosted a (CS)²AI Online™ seminar on August 18, 2021 that focused on Why Hasn’t SOAR Taken Off in ICS? Here is a bit about the event: Besides the typical reluctance to embrace new technology in the ICS world, security orchestration, automation and response (SOAR) tools haven’t been as widely adopted as they probably should be because of the contextual data deficiency found in most security alerts. To create an appropriate automated response, you need to know exactly which devices are compromised and whether you can/should isolate them, which up until recently has been extremely difficult to do for industrial control systems. Let’s say you’re alerted that an HMI has a banking Trojan. That’s not great, but not likely something you’d feel compelled to take offline. However, if there was ransomware in an HMI, you have a serious problem. So, what should you do? Well, if you have 7 HMIs, it’s likely fine to just disconnect the infected one to stop the spread, but if that’s your only one, then it’s definitely not ok. This is a prime example of why having access to contextual data about both the threat AND the affected asset is so critical to informing automated security management. In this seminar, you’ll learn: • Why security orchestration and automation reduce the risk of operational downtime from a cyberattack • What type of contextual security information is critical to powering a next-gen program • How feeding the right ICS asset data into your SIEM + SOAR helps demonstrate ROI across your security ecosystem As always, we encourage our audience to participate throughout the event by contributing feedback and questions for our speakers. We weren't able to answer all of your questions, so we have asked some of our speakers to answer a few of them. Below are some answers to a few questions posed during the event. 
Do you want to have access to more content like this? Register for our next (CS)²AI Online™ seminar or symposium here, or consider becoming a member of (CS)²AI to access our entire library of recorded sessions. ******************* To view the full recording of their talk, please visit https://www.cs2ai.org/sponsored-seminars. QUESTION: Is it like SNMP management/sniffer software? Does it interrogate the network devices to get device info? Could it interrogate an RTU that talks Modbus or another industrial protocol? RESPONDENT: Pete ANSWER: We have numerous active and passive data collection methods, which include SNMP, network traffic analysis, and interrogating via Modbus. QUESTION: How do you make sure the IDS system wouldn’t flag it as an intrusion? RESPONDENT: Pete ANSWER: The SOAR/SOC team should be working in conjunction with the OT Security team to ensure that the network path and traffic are allowed/whitelisted by the IPS sensor. A test-and-check approach is always a good idea, especially if you have a QA/sandbox environment. QUESTION: How does an organisation ensure that all devices are registered? Especially with a mobile (e.g. vehicles) or work-from-home workforce where things may be vulnerable and diffuse? RESPONDENT: Pete ANSWER: There are ways to monitor/instrument these types of devices, and if for some reason that is not possible, you can monitor the ingress point to your company network for all these devices. QUESTION: Would IT and OT incident response protocols come into play if a pen test session knocked equipment offline? RESPONDENT: Chris ANSWER: It would depend on the scenario whether this should be involved. Typically, in this use case, you would want additional contextual data, for example, maintenance records, so you could determine whether a device going offline is expected. In most cases, you would want to detect something like a network scan; if that happened combined with a device going offline, then it might be suspicious. 
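Chris's point about combining signals — a device going offline is only suspicious when it is not covered by a maintenance record and a network scan was seen around the same time — could be sketched as a simple correlation rule. The one-hour scan window, the data shapes, and the triage labels are illustrative assumptions, not from any real SIEM/SOAR API.

```python
# Illustrative correlation sketch: escalate an offline event only when it
# falls outside planned maintenance AND a recent scan was seen on the device.
from datetime import datetime, timedelta

SCAN_WINDOW = timedelta(hours=1)  # assumed "recent scan" correlation window

def triage_offline_event(device, offline_at, maintenance_windows, scan_events):
    """Return 'expected', 'investigate', or 'suspicious' for an offline event.

    maintenance_windows: {device: [(start, end), ...]}
    scan_events: {device: [timestamp, ...]} of detected network scans
    """
    for start, end in maintenance_windows.get(device, []):
        if start <= offline_at <= end:
            return "expected"  # covered by planned maintenance
    recent_scan = any(
        abs(offline_at - t) <= SCAN_WINDOW for t in scan_events.get(device, [])
    )
    return "suspicious" if recent_scan else "investigate"

now = datetime(2021, 8, 18, 14, 0)
print(triage_offline_event(
    "rtu-7", now,
    maintenance_windows={},                               # no planned work
    scan_events={"rtu-7": [now - timedelta(minutes=20)]}, # scan 20 min earlier
))
```

A real deployment would pull the maintenance and scan data from the CMDB and the detection platform rather than passing dictionaries around, but the decision logic is the same.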
In most use cases, it'll involve someone going "onsite" to fix the device. QUESTION: You've mentioned a little about this, but I would imagine integrating SOAR in a live environment is difficult. How much do sandboxing, simulation, and virtualization help the process work? RESPONDENT: Chris ANSWER: Implementing SOAR in live environments can be challenging, which is typically why there need to be some kind of use case development and incident response plans in place first. This gives you a good place to start, and because it has typically been validated, there is minimal impact to operations. Sandboxing, simulations, or virtual processes could help during the testing scenario, but in OT environments that may be difficult to perform due to the lack of a true testing environment. QUESTION: We're using a tool that focuses on identifying devices by listening to traffic. Do you have any ideas for merging that with the CMDB that the IT guys maintain? RESPONDENT: Pete ANSWER: Yes, we at Industrial Defender do this for many of our customers. It's as simple as exposing the asset data and relevant changes via a method supported by the CMDB. For modern systems it's normally an API; older systems usually have file-based import methods. QUESTION: How does SOAR address securing physically separate legacy systems? RESPONDENT: Chris ANSWER: How much can be done in a legacy environment will often depend on what is possible with the OS, equipment interfaces, etc. A good example is that the logging of USB devices is much easier and more verbose in newer OSes. Making sure you have some way to get that data in and make it usable is typically the larger challenge, as not all products may support older operating systems and equipment. What you automate may be limited to information gathering to help an analyst make a decision faster. QUESTION: In distributed environments (pipelines, water treatment, power distribution) the bandwidth to the facility may be very limited. 
In some cases, older equipment may have limited communication throughput capacity before device operability starts to be affected. What "levers" are available to be able to control traffic? ANSWER: This question was answered during the seminar. Please watch the recording to learn more. QUESTION: Any best practices and experience that you can share on implementing SOAR on legacy systems? RESPONDENT: Chris ANSWER: There are several best practices: 1) Make sure you can actually gather data from those systems, 2) Have well-defined use cases for those legacy systems, and 3) Try to leverage existing tools or scripts you may have. Gathering data from legacy systems is often a challenge, but teams will usually have existing scripts to handle problems beforehand. For example, one customer had a service on a legacy system that would simply stop logging, which was the only indication there was a problem (the service would continue running). The customer had already deployed a script that would run when this event occurred, but it was very ad hoc and kludgey, and they were never sure whether it succeeded. Running this same kind of action in Phantom allowed them to get feedback from the script, so they could validate that the script succeeded and then perform an additional step to make sure the service was actually functioning correctly. In this case, it made an after-hours call-out necessary only if things did not restart appropriately. QUESTION: Is it safe to say that the 'safety' concern with SOAR has more to do with 'what' you automate vs. whether or not you deploy a SOAR solution at all? In other words, the implementation dictates the safety 'risk' more than the technology itself? RESPONDENT: Chris ANSWER: Yes, in most cases it depends on the potential impact or effect of the action the SOAR platform takes. One exception, though, would be a SOAR workflow that was not properly implemented and then caused an issue. 
For example, say a second action should only occur if the first succeeded, and this was never validated, which then resulted in the second command causing a problem. However, validating that an automation won't impact safety should be part of building workflows and playbooks in the SOAR solution. QUESTION: Can there be a problem where the IoT is vendor-provided on a proprietary basis, insofar as vendors may "drag their feet" on providing some of the asset inventory information because they consider it their proprietary business information? RESPONDENT: Pete ANSWER: Yes, this is a scenario that can happen when dealing with an OEM. Don't be afraid to push back; as the equipment owner and operator, you are the one that is responsible for the risks related to not knowing your assets' inventory and vulnerability posture. We have helped customers through these types of conversations, often ending in mutual benefits for all parties. QUESTION: Are procurement contracts now including provisions that third-party vendors be required to participate actively in their customers' SOAR efforts? ANSWER: This question was answered during the seminar. Please watch the recording to learn more. QUESTION: Isn't it a problem to start with SOAR, given Chris said you need to know everything on the network, and Paul said that often people had retired and left systems running on the network that nobody understands? It seems it'd be difficult to get started. RESPONDENT: Chris ANSWER: This partly depends on the purpose of the SOAR. In most cases, SOAR is implemented with known scenarios and use cases; hence you need to know what systems you want to use SOAR on. Using SOAR on an unknown asset or system could be risky and affect safety and operations. However, many platforms like Splunk might be able to do some detection of unknown systems so you can at least identify their presence (especially if they are networked). 
In almost all the deployments I have been involved with, we found systems and interfaces that were not properly documented. So whether you know an asset's characteristics should be considered when implementing the workflow, and the workflow adjusted based on that information. For example, your SOAR platform could do something as simple as creating a ticket about an unknown asset and then not performing any additional actions. QUESTION: For the asset owners currently using SOAR, can we get an idea of what industries they are in (e.g. electric utilities or O&G) and what they are throwing into Splunk currently? RESPONDENT: Chris ANSWER: Personally, I have seen SOAR most often deployed in the power utility and oil and gas industries. I think those industries have been high profile and a heavy focus in the past (either in regulation or in the media), so their security practices tend to be more mature, and as a result, they are looking at how to extend their capabilities. Manufacturing has a heavy interest but is, I would say, generally less mature. QUESTION: Please pardon my rather mundane question, but coming from an academician's perspective, do you foresee a time at which we might transfer/transition this concept to home-based IoT (security automation) applicability? ANSWER: This question was answered during the seminar. Please watch the recording to learn more.
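The validation pattern Chris described earlier — run an action, confirm it actually succeeded, and only then continue, escalating to a human otherwise — is the heart of a safe SOAR playbook. A minimal, platform-agnostic sketch (the action and verify functions are placeholders for whatever a real SOAR platform would execute):

```python
# Hedged sketch of a "validate before the next action" playbook runner:
# each step is (name, action, verify); the chain stops and escalates to a
# human at the first step whose verification fails.
def run_playbook(actions, escalate):
    """Run steps in order; return the names of steps that completed."""
    completed = []
    for name, action, verify in actions:
        action()
        if not verify():
            escalate(name)  # after-hours call-out only when needed
            return completed
        completed.append(name)
    return completed

# Toy example modeled on the legacy-service story: restart the service,
# verify it is up, then confirm logging actually resumed.
state = {"service_running": False, "logging": False}

def restart_service():
    state["service_running"] = True

def service_up():
    return state["service_running"]

def resume_logging():
    state["logging"] = state["service_running"]

def logging_ok():
    return state["logging"]

alerts = []
done = run_playbook(
    [("restart", restart_service, service_up),
     ("check-logging", resume_logging, logging_ok)],
    escalate=alerts.append,
)
print(done, alerts)  # both steps complete; no human call-out needed
```

The point is structural: because each step is verified before the next runs, a "second command causing a problem" can't happen silently, which is exactly the failure mode discussed above.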

  • How Do You Ask Your CISO for OT Cybersecurity Budget?

George Kalavantis, Industrial Defender COO August 23, 2021 Getting budget approval is clearly a challenge for many in our community. Getting budget for OT cybersecurity can be even more challenging in some companies, depending largely on executive awareness and “buy in”. Your success most often comes from how you engage the conversation. So first let’s consider why this is even an issue at all. It seems like not too long ago, there was a time when OT was not even on the security roadmap, and CISOs couldn’t spell OT if you spotted them the “T”. But then Stuxnet, Ukraine, NotPetya, and most recently the ransomware attack on Colonial Pipeline happened. These events have accelerated the learning curve, and many CISOs have had a crash course on the criticality of OT to their business and the lack of visibility into OT environments within the enterprise cybersecurity stack. It is now clear that siloed, uncoordinated teams across the same enterprise are not a recipe for success. Though there are a number of preparatory steps one should consider, I want to share a few of my own tips on how to make the case for OT cybersecurity budget to your CISO. Read them here: https://www.industrialdefender.com/how-to-ask-your-ciso-for-ot-cybersecurity-budget/ Keep charging, George

  • After six years we have definitely reached an exciting stage for (CS)²AI!

By Derek Harp, (CS)²AI Founder, Chairman and Fellow April 2021 Dear Members, After six years we have definitely reached an exciting stage for (CS)²AI, with significant growth every month across the community engagement metrics we are tracking. The global operations team wants to thank you for attending the CS2AI Online sessions in record numbers and for contributing your input. All your positive feedback is also serving as rocket fuel for us personally, and we sincerely thank you for that. We will continue to experiment, innovate and work hard to support your critical endeavors. At this juncture we can only reach our organization’s true potential by growing our team and starting more Members Helping Members initiatives. One way to do that is to activate more global volunteer positions. For those of you that have already notified us of your interest, look for an email soon regarding opportunities. If you have not already, please tell us you want to GET INVOLVED and we will be in touch. Today, I would also like to announce that a few new part-time global team positions will qualify for some compensation. If you are interested in learning more and have the flexibility to consider that option, please email us at input at cs2ai.org and we will add you to our list of candidates to contact about those positions as they become available. Regards, Derek

  • OT Cyber Risk Management – You’re Doing It Wrong

The 3 Most Common Problems That Nearly ALL Cyber Risk Management Programs Have, and How to Solve Them Submitted by: Clint Bodungen, President & CEO at ThreatGEN and (CS)²AI Fellow August 11, 2021 This article was previously posted on LinkedIn and can be found here. In this article, I will discuss the 3 most common mistakes people still make when assessing and addressing OT cyber risk management (hint: most of you are still doing it backwards), and ways that you can make your process more efficient and effective (including more cost-effective). I get it. Dealing with risk is not exciting… or easy. Risk is a term that turns most people off. It is certainly not as exciting as “penetration testing” and “threat hunting”. More common subjects like threat monitoring, vulnerability assessments, and incident response also definitely have greater mind-share for most of you tasked with OT cybersecurity. While every single one of these elements is important (critical, even), the truth of the matter is that all these things feed into the bigger picture that is your overall risk: risk to safety, risk to production, and ultimately, risk to the business. Furthermore, not one of these things on its own gives you everything you need to see the entire picture. They each provide a different piece of the puzzle; a puzzle that, when completed, gives you what you need to (“stop me if you’ve heard this one”) create an accurate, targeted, and cost-effective risk mitigation strategy. (Eesh... I know, right? We have all heard this before…) I think that most people probably understand this overall concept. The problem is, there are so many aspects to managing risk, and so many moving parts, that (a) most organizations do not have the resources (people and/or budget) to do it all (so they are forced to choose one or two areas that will give them the biggest “bang for their buck”), and/or (b) 
they don’t truly understand the differences between these tasks (most notably the assessment tasks), what information and value they should be getting out of them, and how that feeds into their risk management. So, let’s unwrap the problems, identify solutions, and simplify this whole thing. Problem 1: Confusion Over Assessment Terminology I cannot tell you how many times I’ve had a customer come to us asking for a penetration test, when what they really wanted was more comprehensive vulnerability identification (a vulnerability assessment), with risks identified and prioritized (a risk assessment). With all the different terms and types of assessments, it can be understandably overwhelming. Not understanding each of the assessment types and steps, and what you actually need, can also lead to an incomplete risk assessment. I will cover more specifics about each of the different assessment types and steps in subsequent articles, but for now, let’s break down and define all the terms commonly used to describe the different assessments. Common Cyber Risk Assessment Terminology · Breach Assessment (“Threat Hunting”) A search for evidence that you have been breached. This can be performed as an initial step or as part of threat monitoring. It is also part of incident response, to identify where, how, and when a breach may have occurred. · Vulnerability Assessment This is an examination of your attack surface through vulnerability identification. It is sometimes incorrectly referred to as a vulnerability scan, but that’s only part of it. A vulnerability scan is an automated search for technical vulnerabilities. A complete vulnerability assessment also looks for procedural/process and human vulnerabilities. · Vulnerability Scan Part of a vulnerability assessment that uses an automated tool to identify technical vulnerabilities such as software bugs, configuration weaknesses, and missing patches. 
· Gap Assessment (Gap Analysis) Part of a vulnerability assessment that identifies missing security controls and procedures, which are outlined (or required) by best practices, standards, or regulations. · Audit A formal gap assessment, often carrying penalties for failures. · Penetration Test Part of a vulnerability assessment that uses hands-on adversarial (“hacker”) techniques to validate findings (exploit feasibility), identify more complex vulnerabilities, or test existing security controls (breach feasibility). Note: A penetration test, on its own, is not meant to provide the level of comprehensive vulnerability identification that a complete vulnerability assessment would. · Red Team Exercise (Red Team Test, “Red Teaming”) An exercise that simulates a realistic cyber-attack (sometimes also deploying physical intrusion), where the red team (the attackers) deploy actual offensive techniques and strategies in an attempt to breach the target network and system(s). Such exercises are meant to test an organization’s overall security defenses, including threat monitoring and response capabilities (the blue team). · Purple Teaming A more recent term to better emphasize the blue team engagement during a red team exercise. (red + blue = purple) · Risk Assessment The entire process of identifying assets, vulnerabilities, consequences and impacts of incidents, and prioritization based on a risk score. Note: If the final result of the process does not include some level of risk scoring or prioritization, it’s not a risk assessment. Additionally, vulnerability scoring (i.e., CVE/CVSS) is not necessarily risk scoring. Risk scoring should be specific to your organization and consider the impact to your business (financial, production, safety, etc.). Again, I will cover more specifics about each of these in subsequent articles. 
For now, hopefully this helps clear up some of the confusion between the different terms, assessment types, and what you actually need for your situation. Problem 2: Most Risk Assessment Processes Are Backwards Most risk assessment processes I see go like this: Step 1. Identify vulnerabilities (in some cases they just perform a penetration test or a gap assessment). Step 2a. Attempt to prioritize remediation based on the vulnerability CVSS score. Or Step 2b. In some cases, organizations will assign a “criticality” score to assets, and then prioritize remediation by asset risk, calculated using the asset criticality and vulnerability scores. (Good job! But you’re still doing it wrong.) In the end, you’re still left with a heap of problems to fix, but at least they’re prioritized. Right? So, what’s wrong with this process, and why am I calling it backwards? Most organizations spend too much time and money performing exhaustive vulnerability assessments on most, if not all, of their assets, prioritizing every finding, and trying to tackle all the remediation. Risk prioritization is good and should be performed. However, by reordering some of the steps normally performed in the risk assessment process, you may find that you can significantly reduce the amount of vulnerability assessment work needed, as well as the number of high-priority “fixes” on your list. In doing so, you save time and money. Here is the process I recommend: 1. Identify your assets and organize them by “criticality”. Criticality is a term used to describe what the impact level would be for a given asset if that asset were out of service. Performing this step at the beginning of your assessment, rather than near the end, can potentially save you a lot of time and effort throughout the rest of the process. Case in point, assets with a low criticality (i.e., low risk) can be moved lower down on the list, or even eliminated from the assessment altogether in some cases. 2. 
Identify all the access vectors to each of those assets, starting with the assets with the highest criticality. This is a step I see missed in most assessments. Yet, it is probably the step that could save you the most time in the long run. Access vectors are the ways in which humans or other assets can access another asset. This could be a network connection or physical access. Identifying the number and type of access vectors can lower the overall risk rating of an asset, thus reducing the level of risk assessment and management effort needed for that asset. For example, if there are no network connections to an asset, the vulnerability assessment, and especially the remediation, for that asset only needs to take into consideration local access for now. Note: In this case we are only considering network and physical access vectors. Logical access control is not considered in this step. 3. Prioritize based on criticality plus the number of access vectors. Assets with fewer access vectors and/or those inside trusted/private zones would also be lower on the priority list (requiring less attention and a lower level of effort later) than those with more access vectors and/or in less trusted zones. Trust zones and internet access should also be considered. For example, assets without internet access and/or in more trusted zones could be lower on the list. 4. Perform vulnerability assessments, starting with your highest priority assets first for the technical assessments. This step is where you should see your earlier work really start to pay off. Vulnerability assessments typically make up the bulk of the work of a risk assessment. So, by starting the prioritization process at the beginning of the assessment, you should be able to postpone, if not eliminate, some of the vulnerability assessment work here. Remember, your vulnerability assessments should include human, process/procedural, and technical vulnerability identification. 
But for asset-specific evaluation, especially technical inspection, you can move lower-risk assets (i.e., those with lower criticality and fewer access vectors) further down on the list. Assets lower on the list can be assessed later in the process, moved to a time when resources allow it, or eliminated from the assessment process altogether in some cases. Note: I recommend avoiding automated vulnerability scanning on most production OT networks. 5. Prioritize risk and remediation by asset, not by vulnerability. At the end of a vulnerability assessment, many organizations prioritize every vulnerability (usually by CVSS score) and remediate each of the vulnerabilities according to priority. Even prioritized, this can be a huge undertaking. Especially if you used automated vulnerability scanning! Instead, create a cumulative vulnerability score for each asset. I prefer to create a score based on the number and severity level of critical and high vulnerabilities, as well as the number of vulnerabilities that would allow an attacker to gain access to the systems (e.g., remote code execution, remote access, etc.). I then use the asset criticality rating and number of attack vectors as modifier values to create a final overall risk score for each asset, which I can use to provide the final asset prioritization. The details of the formula I use are not important for the purpose of this article. You can use whatever formula or calculation method makes sense to you. The important thing is that you use the same calculation method for every asset, and that the final result allows you to prioritize assets based on the values that are important to you. Hint: When you’re preparing to remediate vulnerabilities and you see hundreds of them, remember that most of them share a common fix. A common OS patch or removing unneeded applications like Adobe, for example. 
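The per-asset scoring in step 5 could be sketched as below. The article deliberately leaves the exact formula open, so the severity weights, the access-granting bonus, and the way criticality and access vectors modify the score are all illustrative assumptions; the only requirement carried over from the text is that the same calculation is applied consistently to every asset.

```python
# Illustrative asset risk score: cumulative vulnerability severity, weighted
# toward access-granting vulns (e.g. remote code execution), then modified by
# asset criticality and the number of access vectors. Weights are assumptions.
SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}
ACCESS_GRANTING_BONUS = 5

def asset_risk_score(vulns, criticality, access_vectors):
    """vulns: list of (severity, grants_access) tuples.
    criticality: 1 (low impact) .. 5 (high impact).
    access_vectors: count of network/physical paths to the asset."""
    base = sum(
        SEVERITY_WEIGHT.get(sev, 0) + (ACCESS_GRANTING_BONUS if grants else 0)
        for sev, grants in vulns
    )
    return base * criticality * (1 + access_vectors)

def prioritize(assets):
    """assets: {name: (vulns, criticality, access_vectors)} -> names, riskiest first."""
    return sorted(assets, key=lambda a: asset_risk_score(*assets[a]), reverse=True)

fleet = {
    # A fully isolated PLC with a serious vuln ends up below a networked
    # historian, reflecting steps 2-3: fewer access vectors, lower priority.
    "plc-isolated": ([("critical", True)], 5, 0),
    "historian":    ([("high", True), ("medium", False)], 3, 3),
}
print(prioritize(fleet))  # ['historian', 'plc-isolated']
```

The multiplicative modifiers here are one choice among many; an additive or table-driven formula works just as well, as long as it is applied uniformly and reflects the business impacts you care about.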
I even use a spreadsheet to help simulate the amount of attack surface reduction I get by applying these fixes one at a time. In summary, prioritization should start at the beginning of the entire assessment process, not wait until the end. Problem 3: Assessments Are Treated as a “One-Off” Almost every organization I speak with treats vulnerability assessments, and even risk assessments, as a “one-off” project or annual occurrence. Some of those organizations are at least comparing subsequent assessments with previous ones, to hopefully show progress. That is what you should be doing at the very least. However, always having an understanding of your risk profile at any given time, and showing progress throughout the year, rather than from year to year, is much more effective. This doesn’t mean you need to be performing multiple risk assessments throughout the year. Having a framework, on the other hand, that can ingest data regularly and always provide an active “living” risk profile as you make progress, is recommended. Having such a framework also helps keep assessment data and remediation tasks organized. Most of the top OT-specific threat monitoring platforms (e.g., Forescout, Claroty, Dragos, Nozomi, Tenable.ot, Microsoft Azure Defender for IoT, etc.) have risk scoring and tracking mechanisms built in to varying degrees. However, I also recommend using a platform that is dedicated to OT risk tracking and management by correlating data from multiple sources, such as SecurityGate.io and GPAidentify. Conclusion OT risk management isn’t easy, and for most people it probably doesn’t seem very glamorous. To make matters worse, most organizations spend more time, effort, and money on the process than they need to. 
By understanding all the terms and steps involved in a risk assessment, rethinking how and when you prioritize your assessments and findings, and using an ongoing process combined with a risk management framework, your risk management program will be more efficient, more effective, and far less taxing on your resources.

About the Author

Clint Bodungen is a world-renowned industrial cybersecurity expert, public speaker, published author, and gamification pioneer. He is the lead author of Hacking Exposed: Industrial Control Systems and creator of the ThreatGEN® Red vs. Blue cybersecurity gamification platform. He is a United States Air Force veteran, has been a cybersecurity professional for more than 25 years, and is an active part of the cybersecurity community, especially in ICS/OT. Focusing exclusively on ICS/OT cybersecurity since 2003, he has helped many of the world's largest energy companies, worked for cybersecurity companies such as Symantec, Kaspersky Lab, and Industrial Defender, and has published multiple technical papers and training courses on ICS/OT cybersecurity vulnerability assessment, penetration testing, and risk management. Clint hopes to revolutionize the industry's approach to cybersecurity education and help usher in the next generation of cybersecurity professionals using gamification. His flagship product, ThreatGEN® Red vs. Blue, is the world's first online multiplayer cybersecurity computer game, designed to teach real-world cybersecurity.

About ThreatGEN

ThreatGEN bridges the operational technology (OT) cybersecurity skills gap with the ThreatGEN® Red vs. Blue cybersecurity gamification platform and our OT Security Services, both powered by our world-renowned OT cybersecurity experts and published authors. The ThreatGEN® Red vs. Blue platform uses cutting-edge computer gamification to provide an exciting, modernized approach to OT cybersecurity training that is both practical and cost-effective.
Our OT Security Services combine our decades of industry experience with strategically chosen partnerships to create a holistic service offering.

For more information, visit our company website at https://ThreatGEN.com, follow us on LinkedIn at https://www.linkedin.com/company/threatgenvr/, or follow us on Twitter: @ThreatGEN_RvB. For sales inquiries, send an e-mail to sales@threatgen.com.

Derezzed Inc. D/B/A ThreatGEN
14090 Southwest Freeway #300
Sugar Land, Texas 77478
+1 (833) 339-6753

#OT #cybersecurity #riskmanagement
