Securing the Software Development Lifecycle (SDLC) in Healthcare

About the Podcast: The CyberPHIx is a regular audio podcast series that reports and presents expert viewpoints on data security strategy for organizations handling patient health or personal information in the delivery of health-related services. These timely programs cover trends and data security management issues such as cybersecurity risk management, HIPAA and OCR compliance strategy and vendor risk management. Meditology Services, the healthcare industry's leading security and compliance firm, moderates the discussions with leaders in healthcare data security.

Breaches continue to balloon for healthcare applications as the industry continues to drive innovations in virtual care, personalized medicine, and digital healthcare. Organizations that deploy robust application development security programs create the opportunity to identify and correct security weaknesses before products hit the market. 

Software Development Lifecycle (SDLC) security programs provide the tools, processes, and training required to design products with security in mind to reduce the likelihood of breaches of sensitive information. 

Join us for this episode of The CyberPHIx podcast where we hear from Ed Adams, CEO of Security Innovation. Security Innovation provides application security services, training, testing, and consulting to healthcare and other industries. 

Topics covered in this session include:

  • Application development security trends 
  • The latest threats and vulnerabilities impacting healthcare application development 
  • Best practices for securing AppDev, DevOps, and DevSecOps teams and processes 
  • Common development misconceptions and missteps that lead to security exposures 
  • Security training approaches for healthcare app developers 
  • Frameworks and external resources for SDLC security including OWASP and others 
  • Healthcare-specific vulnerabilities and risk exposures identified during application development 
  • Third-party and fourth-party risks including open-sourced code and IoT devices 
  • Budget priorities for SDLC security investments 

PODCAST TRANSCRIPT

Brian Selfridge: [00:00:20] Hello. Welcome to The CyberPHIx, your audio resource for cybersecurity, privacy, risk, and compliance for the healthcare industry. I'm your host, Brian Selfridge. In each episode, we bring you pertinent information from thought leaders in healthcare, cybersecurity, and risk management roles. And in this episode, we'll be speaking to Ed Adams. Ed is the CEO of Security Innovation Incorporated, an organization that provides software development lifecycle security services, training, and consulting alongside other related security capabilities, products, and services. I'll be speaking with Ed about current risks and best practices associated with developing and deploying applications in the healthcare setting. There are so many breaches that could be avoided if we get security baked in right up front in these development processes. So I'm excited to learn from Ed about how we can best accomplish that based on his extensive experience in this space. So let's dive into another great conversation with yet another amazing guest, Ed Adams. 

Brian Selfridge: [00:01:26] Hello. Welcome to CyberPHIx, the leading podcast for cybersecurity risk and compliance, specifically for the healthcare industry. I'd like to welcome my guest, Ed Adams. Ed is the CEO of Security Innovation Inc., an application security company offering award-winning, Gartner Magic Quadrant-endorsed training, as well as expert security assessments and consulting. He is also a board member and Treasurer of the ICMCP, which stands for the International Consortium of Minority Cybersecurity Professionals, and serves as a distinguished research fellow with the Ponemon Institute, which put out a report this past week that we love and was great stuff for the industry, also near and dear to my heart. 

Brian Selfridge: [00:02:04] Ed is also the host of EdTalks, a monthly panel of cybersecurity experts that discuss approaches to secure software-based IT systems. So always exciting to have a chance to talk to a fellow podcast host in our space. So that's excellent. I'm excited to be speaking to Ed today about securing the software development lifecycle as it applies to healthcare organizations and more broadly across industries. As breaches continue to balloon for healthcare applications and organizations, it's imperative that we focus on baking security into apps and solutions during the development cycle rather than after products have already hit the market, as much as we can at least. So we have a lot of ground to cover today on the topic of the software development lifecycle, or SDLC; I may slip into using that acronym from time to time, so you'll forgive us for that upfront. So with that, Ed, thank you so much for taking the time to be a guest on The CyberPHIx today. 

Ed Adams: [00:02:55] Thank you, Brian. I appreciate you having me on and I'm looking forward to talking with you. 

Brian Selfridge: [00:03:00] Excellent. So just to set the stage here a little bit, you know, software security and development is one of those specialized areas in information security that I think a lot of general security and risk practitioners may not be as familiar with. It's also a topic that can get really complicated really quickly, especially for practitioners who don't have a coding or application development background. And when you add that to the growing trend that organizations more and more are embracing DevOps and DevSecOps and all these terms that we'll throw around as they move away from traditional on-premises IT systems onto major cloud providers, you know, the possibilities of problems, misunderstood requirements, control deficiencies, all the bad things, right? They just abound and are surrounding us these days. So with that, let's just talk about trends in software development, particularly around the security aspects, that you've seen play out in recent months or even recent years. 

Ed Adams: [00:03:55] Yeah. So software is an ever-changing field, something that, you know, I've been in now for over 25 years, and for the past 20 years, specifically in software quality and software security. And as you'll quickly learn during the course of our discussion, I love to use analogies because I find that they help when I'm explaining things to folks like my mom. So I'll use them as well here whenever appropriate. So as far as current trends in software development, one of the things that you touched upon in your opening statement is this mad dash toward major cloud providers and moving away from the on-premises traditional IT. And software is much the same. You know, today's software is pretty much assembled. Very few software applications are actually coded from scratch anymore. And that's a big change from when I first entered the industry 25 years ago. It's even a big change from just, you know, five or six years ago, when a lot of software and business applications were actually coded from scratch. You know, today, software is much more assembled, kind of like a car. And just like, you know, if I'm Ford Motor Company and I want to build a Mustang, I'm not the one that is making the tires. I'll go purchase the tires from Continental, you know, I'll go purchase the seats from Lear, I'll go purchase the infotainment system from Panasonic, and then I assemble that into my specific design, or in the case of software, business logic, and it becomes an application. 

Ed Adams: [00:05:33] But just like the Mustang, for each one of those components, you have to assure that they are secure in and of themselves, that they are functional in and of themselves, and then that they work together. So if you look at a car, Ford is responsible for making sure that those tires from Continental have gone through some type of due diligence, and the seats from Lear have gone through some type of due diligence. And as practitioners, when we are assembling an application or even just buying an application, we need to understand what the risks are and what the security implications are. And frankly, this is where organizations like CORL become very valuable to, you know, hospitals and insurance providers that might not necessarily build a lot of their own applications, but they're sourcing them from a lot of different third parties. So the short answer to my very long ramble here is that the biggest trend in software development over the last five years is more toward the use of things that are not developed in-house: open-source libraries, commercial off-the-shelf software. Software is very much assembled these days. There are a lot of third-party dependencies that folks have to be really concerned about. 

Brian Selfridge: [00:06:52] And just to connect some dots for maybe some of our listeners, is that why we had such a freak out around Log4j and some of these libraries and things where the average person may say, I don't know what Apache Log4j is or care what it is? But is it because it's one of those pieces of the car, to your analogy earlier? 

Ed Adams: [00:07:11] Exactly. And Log4j specifically. This is one of these, you know, Java libraries that happens to be built into a lot of business applications and got used in a whole bunch of different ways. And the challenging part is when you discover that one of your components is faulty, just like you do a recall in a car, you have to figure out exactly where that piece is. And I heard an analogy I thought was a good one: discovering where exactly Log4j is used is like trying to find a very specific screw inside the engine of a car. It's not really easy to do, especially after the car is completely assembled. And it's not easy to do unless you have a complete list of all the components that have gone into the car. And when it comes to software, very, very seldom do we have a complete list of everything that is in a software application. So, yes, Log4j was one of those third-party components that all of a sudden we discovered was vulnerable, and the industry had a panic, had to figure out what to do about it. Same thing with Heartbleed from years before, when a specific SSL library version was found to be vulnerable and all of a sudden people freaked out. 
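That screw hunt can be sketched in code. What follows is only an illustrative scan, not a production scanner: it matches jar file names of the hypothetical form log4j-core-X.Y.Z.jar sitting on disk, and the version cutoff is hardcoded for the example, so real advisories should be consulted for the authoritative fixed versions. Real tools also unpack bundled archives, which is exactly why the screw is so hard to find.

```python
import re
from pathlib import Path

# Versions below 2.17.1 are treated as vulnerable here purely for
# illustration; check current advisories for the real cutoff.
FIXED = (2, 17, 1)

def parse_version(jar_name):
    """Extract (major, minor, patch) from a name like log4j-core-2.14.1.jar."""
    m = re.match(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", jar_name)
    return tuple(map(int, m.groups())) if m else None

def find_vulnerable_jars(root):
    """Walk a directory tree and report log4j-core jars older than FIXED."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        version = parse_version(jar.name)
        if version is not None and version < FIXED:
            hits.append(str(jar))
    return hits
```

The catch Ed is pointing at is that this only finds jars sitting loose on disk; log4j-core was very often buried inside other applications' archives, so a simple filename walk misses most of the car.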

Ed Adams: [00:08:36] Are we vulnerable? Are we not? And then once you discover that you are vulnerable, what do you have to do about it? Well, we have to swap it out. Well, how do we swap it out? You have to figure out where it is and then update it with the latest and greatest version. So it leads me toward some of the soapboxes that I love to stand on and preach about, two of them being threat modeling and software bills of materials. Threat modeling, specifically, is something that we do in our personal lives every single day, and we don't even realize it. When we leave our house, do we lock our front door? Do we make sure all the windows are closed? Well, yes. Why? Because we're worried about people breaking in. Why are we worried about people breaking in? Because there are valuable things in the house. Well, what's valuable? And if you kind of back it up and start from the assets that you're looking to protect, I want to protect my loved ones, my jewels, my big screen TV, whatever it is. Then you work outwards. Well, how do I do that? Well, I want to make sure that I have doors and locks so no one can walk in. 

Ed Adams: [00:09:35] And I want to make sure that those locks are the best that's available to me. But there are some things that I decide I don't want to necessarily secure. For example, I might have an alarm system on all of my first-floor doors and windows, but not my second-floor doors and windows. Well, why? Because I determine that the risk of someone getting in the second-floor window is much lower than someone getting in the first-floor window, and I'm not willing to take on that cost to secure the second-floor window. So those are the kinds of trade-offs that I make in my personal life, trying to protect the assets in my home. And if only we all did that with our software and IT systems: starting from the assets we're looking to protect and then working outwards, figuring out what are the threats to those assets, how do we go about protecting those assets, and then implementing those protections that we wish and accepting the risks we decide we are not going to protect against proactively. It would make our lives a lot easier. But I'll get off that soapbox for now and get back to our regularly scheduled programming. 

Brian Selfridge: [00:10:37] Well, I'll stay off the regularly scheduled programming for a second, because I do think it's fascinating how we have to constantly be reevaluating risk, right? Using the home analogy, my wife and I were just talking about this. We went away for a weekend somewhere last weekend with our family, and I kind of came to the realization, I was like, I don't think we have that much of value in our house anymore. Like, how much do TVs cost anymore? They're like a couple hundred bucks. I mean, the computers and tablets are coming with us because the kids need them. And don't get me wrong, I don't want anybody in my house, because that would be a bad outcome. But I don't think it's as bad as it used to be. It would be unfortunate, but I think we'd recover, right? So let's calm down about checking the second and third-floor windows; are they just going to fly in there and take I don't know what from us? So that was one follow-up on that. But I would like you to expand a little bit if you can on SBOMs, software bills of materials, because we have covered that on the show a little bit, and you and I have talked about that offline a little. But maybe just a quick primer for folks of what that is, and how does that help with Heartbleed and Log4j and all that stuff? 

Ed Adams: [00:11:39] Absolutely. Absolutely. And SBOMs, for me, are near and dear to my heart because my training is as a mechanical engineer. And in mechanical engineering, it's a very rigid, formal process by which you define what you want to build, and then you design it, and then you test the design, and then you go into the lab and prototype it, and then you test the prototype and make changes. And only after you've gone through all that process do you actually go and build the thing. And all along, you're documenting exactly what is going in it, what materials, what type of materials, all that sort of stuff. So I went from that world into the software world, and I thought everyone had absolutely lost their minds, because it was, let's build it, let's ship it, the customers will figure out if there are problems, and then we can address it later on. So the concept of a bill of materials for me is as old as my schooling, but in the software industry it's very, very foreign. And the best way I like to help folks think about a bill of materials is like a can of soup. 

Ed Adams: [00:12:40] If you hold up a Campbell's can of soup and you turn it around and look at the back, it lists all the ingredients, and it doesn't just list all the ingredients, it lists them in order of volume. So the most frequent ingredient is first and the least frequent is last. Well, that's great for me to know if I'm trying to track things like my sodium, or if I don't want any high fructose corn syrup in my body. Well, that concept of ingredients in a can of soup, when you apply it to software, it's almost unheard of, and it's very, very difficult these days, especially with the advent of movements like DevOps, as you mentioned, because the whole purpose of DevOps is speed and agility and being able to change things in a very rapid fashion. Well, if your software is changing on a rapid basis, on a regular basis, how do you know what's in it at any point in time? It's a very, very difficult thing to do, but it's a really important piece.  

Ed Adams: [00:13:55] When you apply the SBOM concept to the software world, it's very uncommon. But it's also very important, because when something like Heartbleed comes out or we become aware of Log4j, we do need to know, are we using that specific library? So the concept of an SBOM is extremely valuable, but actually pulling it off can be very difficult, especially in a DevOps world. Now, when it comes to healthcare, and specifically using third-party products, it actually is a lot easier, because a lot of healthcare products have very long lifespans. Look at a medical infusion pump, for example. It might be built by Edwards or Philips or someone like that, but it might have an actual lifespan of maybe 10 to 15 years. And the components in that medical pump don't necessarily change that often. In fact, they may not change at all. So it's a lot easier for a medical device manufacturer to create a bill of materials and have it be published with their medical device, as opposed to a cloud-native application that gets a software push and an update every single day. It's much more challenging in that rapid environment than it is in something with a longer lifespan, but it's just as relevant in both; it's just more difficult to pull off in the DevOps environment. 
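The soup-can idea can be made concrete. Below is a deliberately simplified, hypothetical SBOM in the spirit of real formats like CycloneDX or SPDX (which carry far more detail: suppliers, hashes, licenses), with a lookup that answers the Heartbleed/Log4j question, "are we using that specific library?". The product and component entries are invented for illustration.

```python
# A simplified, hypothetical SBOM: the product plus its component list.
sbom = {
    "product": "infusion-pump-controller",
    "components": [
        {"name": "openssl", "version": "1.0.1f"},
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "sqlite", "version": "3.36.0"},
    ],
}

def uses_component(sbom, name):
    """Return every entry for a named library, empty list if absent."""
    return [c for c in sbom["components"] if c["name"] == name]

# When an advisory lands, the question becomes a one-line query
# instead of an archaeology project.
affected = uses_component(sbom, "log4j-core")
```

The point of the example is the contrast Ed draws: with the list in hand, the advisory response is a lookup; without it, it's the screw-in-the-engine hunt.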

Brian Selfridge: [00:15:31] So I want to dig a little bit more into some of the specific vulnerabilities, like that specialized screw you talked about, where maybe it has some structural weakness, a fissure in it that needs to be addressed, and then we've got to dig for that and fix it and patch it. When you look at your work with all these organizations, training their security people and doing assessments, what are some of the most common, I'll say, vulnerabilities, whether driven by human error, or just coding missteps, or some evolution of hackery where people are breaking in? What are some of the things that you're seeing more and more prevalent these days in software exposures? 

Ed Adams: [00:16:09] Yeah. So, you know, human error is still probably the number one problem when it comes to software, whether you're assembling software, using software, or deploying software. It's really just the type of errors that has been changing over the years. So presently, the most common errors that we see are misconfigurations, and specifically misconfigurations of cloud services or APIs, application programming interfaces. The cloud service providers, you know, the GCPs of the world, the AWSes of the world, the Azures of the world, they've actually gotten very, very good at securing the services and features and APIs that they make available. So again, I'll use an analogy. For me, it's like walking into a candy store as a child, and I've got all these different, you know, 200 different types of candy I can choose from. That's kind of like what the cloud service providers offer. You've got 200 different types of candy, and so there's a lot to choose from. But just like when I was a kid, I can't possibly eat all the different kinds. I don't need all the different kinds. There's really just, you know, a small set that I want. And when it comes to cloud services and APIs, once you choose, oh, I really like the malted milk balls and I really like these bull's eyes, I'm going to use those, I'm going to put them in my bag. But with cloud services and APIs, you have to make sure that you're actually configuring those properly, because even though they're made available, you still have to consume them appropriately. 

Ed Adams: [00:17:40] So I can't take that malted milk ball and just swallow it whole. I'm probably going to choke. So just like with a web application firewall service that's offered from AWS, the onus is on you to configure that properly. And if you misconfigure it, you could be allowing in attacks that are very easily automated, like a SQL injection attack, which a web application firewall can prevent if you configure it properly. However, if you misconfigure a web application firewall, you could actually block good traffic. You could block all traffic. So that's the number one issue today, misconfigurations of a lot of these cloud services. And probably the simplest one is not securing a data store, which is the cloud equivalent of a database. So if you're in a data center, you have a database, and you're connecting a server or an application to that database. And all of the data in that database is completely unencrypted, and anyone can see it just by asking the database, please give me this list of usernames. Well, that is unfortunately a very common misconfiguration with a lot of cloud services. And you will hear about things like S3 buckets, which is basically just a data store in a box, the equivalent of a database. But you as the consumer are responsible for configuring and securing the data that's in that data store. So number one is misconfigurations, above all, honestly. 
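The kind of data-store check Ed is describing can be sketched as a rule over configuration records. The field names below (public_access, encryption_at_rest) are hypothetical and simplified, not any provider's real API, but cloud security posture tools apply rules of exactly this shape to S3-style buckets.

```python
# Hypothetical bucket configuration records; the field names are
# illustrative, not an actual cloud provider API.
def misconfigurations(bucket):
    """Flag the classic data-store mistakes: public access, no encryption."""
    findings = []
    if bucket.get("public_access", False):
        findings.append("bucket is publicly readable")
    if not bucket.get("encryption_at_rest", False):
        findings.append("data is not encrypted at rest")
    return findings

buckets = [
    {"name": "patient-exports", "public_access": True, "encryption_at_rest": False},
    {"name": "audit-logs", "public_access": False, "encryption_at_rest": True},
]

report = {b["name"]: misconfigurations(b) for b in buckets}
```

The design point is that the provider secures the service, but the consumer owns the configuration; automated checks like this are how that responsibility gets enforced at scale.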

Brian Selfridge: [00:19:09] So let's say organizations get it. They see that there are these software security risks, and they have their development teams who perhaps understand security, but it's not the number one priority versus getting the product out and having it functional. I always like to start with the worst stories or the bad outcomes first, because I feel like they're more instructive. So what are some common misconceptions or missteps that organizations will take once they realize they need to get their hands around this, where maybe they go a little bit too far in one direction or handle it differently? Maybe you could say if you've ever encountered anybody that's done it not quite right the first time out. What does that look like? 

Ed Adams: [00:19:50] I would say most organizations don't get it right the first time out, and it's not for lack of trying. If organizations accept that this is fundamentally a human problem, not necessarily a technology problem, that naturally leads you to think, well, why don't I educate my teams, get them some awareness? And whether it's teams that are building applications, teams that are operating applications, or teams that are defending applications, you know, they all need a certain type of education and training. So the two biggest mistakes that I see organizations make: one, they will procure something, a high-level training, and they'll roll it out to all of the different teams. It's sort of like taking peanut butter and spreading it on a piece of bread, right? We're in these food analogies now; we've moved on from cars and houses and now we're talking about food. But that peanut butter spreading, a lot of times it's not appropriate for most of your audience. So, for example, the OWASP Top Ten. OWASP is the Open Web Application Security Project. It's a very well-known, well-adopted community of people. And for a number of years, they've published something called the Top Ten, which is the top ten threats, and they have a top ten for web threats, a top ten for IoT threats, a top ten for API threats. But the OWASP Web Top Ten is probably the most popular and well-cited list of known vulnerabilities or threats in software. 

Ed Adams: [00:21:34] So what a lot of organizations will do is they'll either develop or procure training on the Top Ten and roll that out to all of their software stakeholders, whether it's an architect, a developer, an operator, a database administrator, a cloud engineer, a penetration test engineer, a vulnerability analyst, all these different roles. They say, oh, here's "application security training" (they're air quoting, but you can't see it because it's a podcast), and they give it to everyone. And it generally doesn't work well, because an architect says, well, this doesn't really apply to my job function. And the penetration test engineer says it doesn't apply to my job function. And the database administrator says there's nothing in here about databases. So that's the first mistake: this peanut butter spreading, trying to have the same type of training for all your stakeholders. The second mistake is folks go in completely the opposite direction. They think, oh, application security, that means software; software means developers; developers mean coding. So then they go out and they procure training just on secure coding, and they hyper-focus just on secure coding training. And that does great for folks that code for a living. But as we talked about earlier, most applications today are not coded from scratch. And think about all the stakeholders on all your software teams, you know, those architects, those product managers, the program managers, yes, the developers, the quality assurance test engineers, the DevOps engineers, database administrators, the vulnerability analysts. 

Ed Adams: [00:23:00] Most of those folks don't code for a living. So folks are going in the opposite direction. They procure secure code training and then they roll it out, and you've got a test engineer saying, I don't code, so this doesn't mean anything to me. And you have someone on the information security team saying, I don't code, so this means nothing to me. So those are the two most common mistakes that we see organizations making. And if I might, I'll cite two industry reports, both actually published in 2021. Gartner published one called Integrating Security into DevOps. And the second is from the Ponemon Institute, actually, and it's called Cybersecurity Training Benchmarks. And both of these industry reports highlighted basically good practices for organizations that want to adopt an application security training program, and I'll just highlight them very briefly. What Gartner recommended is that organizations adopt a belt system with blended learning, and I'll explain that. The belt system is analogous to martial arts, where you earn a yellow belt, and then as you get more advanced, it might be a brown belt, and as you become an expert, it's a black belt. What Gartner suggests is that you don't need all of your software stakeholders to be expert black belt ninja warriors. You might get away with most of your build team or developers being just yellow belts, and some of your security champions or your DevOps team. 

Ed Adams: [00:24:28] Maybe you want them to be brown belts, and those on the InfoSec team, the vulnerability analysts and penetration test engineers, you want them to be black belts. So, different levels based on what you need them to be in terms of security awareness and acumen. And then the second thing that Gartner suggested is blended learning. So as you're rolling out your belt system, use different types of training elements. And this is just borrowed from adult learning 101. And you can pick almost any analogy, but I'll use golf in this case. So we're going from cars and houses to food, and now we're going out to sports. So with golf, you can get a lesson from the golf pro. And just because the golf pro will tell you, keep your left arm straight and shift your weight in your backswing, you don't know if you can actually hit the darn ball straight. So what do you do? You go to the driving range and you practice with a real ball and a real club, and at the driving range you can use all the different clubs in your bag except for the putter. There's a separate type of practice range for putting; it's called a putting green. So you take the lesson from the pro, the driving range, and the putting green. You've got three different types of learning elements; that's blended learning. So you learn something and then you practice it, converting that knowledge to skills mastery. And that's what Gartner is suggesting for software teams that want to adopt security as well: take online courses or instructor-led courses, but then do some practice activities in something like capture the flag or cyber range activities, where you're actually putting hands on keyboards and trying to implement and practice what you might have learned in that online course or the instructor-led course, and then also apply it on your job. 

Ed Adams: [00:26:05] So that's the blended learning that Gartner is suggesting here, and I think they really got it spot on with that. So that's the Gartner report. The second report is from the Ponemon Institute. What the Ponemon Institute did is they actually went out and surveyed about 400 different companies across the globe, and they analyzed their cybersecurity training programs on 17 different elements. And what they determined is that out of all of those 17 elements, there were two that yielded a significantly higher SES score, the security effectiveness score, much more than all of the other 15. And those two elements were training that has content relevant to job function, and training that includes realistic simulation. So to summarize that 30-page report, what the Ponemon Institute is recommending is, if you want to roll out effective security training for software teams, look for role-based training with realistic simulation and you'll have much better success. So, very long answer to your question, but hopefully it's rich in context. 

Brian Selfridge: [00:27:08] Oh, tremendously rich. And I suspect a lot of those recommendations would apply more broadly outside of just software development training into broader security topics, audiences, and levels. And I do have to note that the topics and analogies we've used so far, I think, are going to resonate really well with my six-year-old, so I'm going to put this episode on for him. We've got candy, we've got peanut butter, absolutely. And he's a little ninja guy; he's got his yellow stripe belt. So if you can just find a way to weave in maybe Minecraft or Transformers somehow, I think we'd fully cover his life and all the things he's going to connect with. But here's one that he won't get or care about. I want to dig back into OWASP for a second, because that has been around so long and that top ten list moves up and down. What does that top ten look like these days? I know you mentioned SQL injection, which I think has been on there for like 25 years, which is really upsetting; it's always hovering in the top one or two. Not to quiz you on OWASP, I guess, but what are some of the other more common vulnerabilities that you see being pervasive out there in 2022, or maybe just historically? 

Ed Adams: [00:28:16] Yeah, so historically, you know, the ones that have been on, I think, every single top ten list, SQL injection is one of them, or injection flaws in general. And what's amazing, though, is the fact that misconfigurations appeared in the last couple, when it hadn't been on the list for almost, you know, 15 to 20 years. And now it's here, and it's really because of the advent of the cloud and DevOps. But what's crazy to me is the fact that things like SQL injection and cross-site scripting, which are two very common attacks, are so easily preventable just by following basic security principles. One security principle is sanitizing user input; basically, don't trust any user input. You know what all the marketing hype is talking about right now? Zero trust. And zero trust is nothing new. It's just the latest name that we're giving to the good practice of don't trust anything you get from someone else. And just like, you know, do you take candy from a stranger? No. Well, why would you assume that user input is safe and secure? Sanitizing user input will help protect you from most SQL injection and most cross-site scripting vulnerabilities. Yet we don't do it. So, you know, thank you for mentioning my EdTalks show that I've been running for the last couple of years. The very first one that we did was on security principles. And that was absolutely by design, because I felt it was important to kick off an entire series talking about cybersecurity with what I feel is the most important and most neglected thing, and that is the lack of implementation of security principles. 
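The "don't trust user input" principle is easiest to see in code. Here is a minimal sketch using Python's built-in sqlite3 module with an invented users table: the concatenated query lets a classic injection payload rewrite the WHERE clause and return every row, while the parameterized query binds the same payload as plain data and matches nothing.

```python
import sqlite3

# Toy in-memory database with two invented rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

# Untrusted input: the classic injection payload.
user_input = "x' OR '1'='1"

# BAD: string concatenation lets the payload become part of the SQL,
# so the query returns every user instead of none.
injectable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# GOOD: a parameterized query binds the input as a value, not as SQL;
# the payload matches no user, so nothing comes back.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Running this, injectable contains both rows while safe is empty, which is the whole 25-year-old lesson in two queries: parameterization is the database-level form of sanitizing user input.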

Ed Adams: [00:30:01] And it's too easy for us in the IT world to always be chasing the latest shiny object, whether it's the latest Transformer toy or the latest update to Minecraft. We want the latest and greatest, but we don't necessarily know that it's going to be good for us, and we don't necessarily know that it's going to be appropriate for our needs. So long as we apply those principles to what we're doing, as we're looking to maybe go and buy that latest Transformer toy, we're making decisions on whether or not it's appropriate for us and taking a risk-based approach to IT consumption and IT development. A risk-based approach is something that I've been recommending to all my clients for 20 years now, and the smart ones will actually ask, well, what does that mean? As soon as I hear the question, I think, oh, this is great. It's just what I wanted to hear. But most of the others just sort of nod their heads. Yeah, sure, sure, sure. We take a risk-based approach. Of course we do. But they don't really appreciate what that means. And again, to me, it just gets back to those security principles and threat modeling. Coming from a formal engineering background, and looking at the way that software is built, even in the days of DevOps and using all these cloud services, I'll go back to the house analogy that we started with. In order to build a house, first you have to define what you want. 

Ed Adams: [00:31:30] Those are business requirements. Oh, I want a four-bedroom colonial with two bathrooms. Then you have to design it. That's your blueprint. Once the blueprint is done, it has to actually be built and constructed. After it's constructed, it has to be tested: does the roof leak, does the plumbing work, all that sort of stuff. And after it's been tested, then you can move in and start using it; you get your occupancy permit. Well, software is the same. You define business requirements, you design things, you decide what components you want to use, and you assemble it. Hopefully your dev team is just implementing the requirements and design that you've very well documented and defined, which of course never happens, but they build it. Then you test it to make sure that it is built according to the spec and no extra stuff has found its way in there. And once that's done, okay, it's now shipped, it's available and ready. Sure, you can update it and patch it along the way and add additional features. But just following that good practice of defining it, designing it, constructing it according to the requirements, testing it, and then deploying it will save us from most of the top ten. And the top ten, regardless of whether it's for IoT or Web or APIs, and whether it's the OWASP top ten from 2021 or 2017 or 2013: those security principles and adherence to that type of discipline will save us from almost all of those types of vulnerabilities. 

Brian Selfridge: [00:33:00] And I'll reinforce your perspective there on the fundamentals. Like you, I've been on the security conference speaking circuit for years, and I'm always the guy that shows up and talks about the basics. Because you get onsite, you look at the agenda, and it's all about some really unique, specific threat that's hit one organization that everybody needs to watch out for. It's like, well, yeah, okay, put that on your radar. But above that, put all these really fundamental things like input validation and your basic access controls, things that aren't sexy because we know about them. As professionals, I always feel like we need to be reminded that those are really where the rubber hits the road and where you get the most return on your investment of actually protecting stuff. But I want to ask you a little bit about automation. Are there tools, are there things that can help us reduce the human fallibility aspect of identifying these types of weaknesses, either in the development process or even in regression testing afterward? Any thoughts you have there on what kind of automation is out there in the market and what does or doesn't work well? 

Ed Adams: [00:34:09] Yeah. The short answer is, you know, what's that old Apple slogan? There's an app for that. In software, in the SDLC world, there's a tool for that. Whatever it is, there's a tool for it. So it's not a question of finding automation. It's a question of finding the right automation for what you're trying to do. It's very easy to over-automate, and when you over-automate, you end up missing a lot of stuff. So again, we'll go back to analogies. I was going to use the analogy of a surgeon and a scalpel, but instead I'll go back to Minecraft. With Minecraft, you have to start with the basics. What do you do? You start with putting a block down, or you want to make a hole, and then as you go along, you obtain tools that allow you to do things a little bit better or a little bit easier. But first you have to know what you're doing. Then the tool can help you automate it. So going back to my scalpel analogy, if you hand me a scalpel and say, go do something on this person, it's going to be ugly, because I don't know how to use a scalpel and I'm probably going to hurt that person. I don't want to hurt anybody, but I'm not trained on how to use a scalpel. But you put the scalpel in the hand of a surgeon who has gone through the appropriate education, and they could do a tracheotomy on the side of the road if need be, because they know how to use that tool. So tools, in the absence of education, are very, very dangerous. 

Ed Adams: [00:35:43] And there are two main dangers. One, all tools will generate false positives, because tools, in general, are pattern checkers. You program the tool to look for something specific. If it finds what it thinks is a SQL injection vulnerability or a cross-site scripting vulnerability, it will flag it. But then a human actually has to go and validate: is it an actual vulnerability or not? So tools will generate false positives, which could potentially slow down a process they're meant to speed up. But tools are also completely fallible, so they will also have false negatives, meaning they will miss stuff that's actually there. So overreliance on tools can generate a false sense of security. But there are tools that are very, very good at each phase. Gartner has, for probably 15 years, maintained a magic quadrant on, I think it's called, application security testing. It really should be called application security testing tools, because they only cover tool vendors. And then there's the hype cycle for application security. Gartner actually does a really good job of documenting the various tools in the industry. The most popular is something called SAST, which is static application security testing. It will scan source code to find security vulnerabilities that a developer may have put in, but it's also applicable to open source. So if you use open source that you integrate into your application, you can use static application security testing tools, or SAST, on that. The other most popular is DAST, which is dynamic application security testing, and that basically looks from the outside in. 
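To make the "pattern checker" point concrete, here is a deliberately naive toy rule. This is not how any real SAST product works internally, and all of the names and snippets below are hypothetical, but even this tiny checker exhibits exactly the false-positive and false-negative behavior Ed describes.

```python
import re

# A toy "SAST" rule: flag string concatenation that appears to build a SQL query.
SQLI_PATTERN = re.compile(
    r'(SELECT|INSERT|UPDATE|DELETE)[^"\']*["\']\s*\+', re.IGNORECASE
)

def scan(source_lines):
    """Return the 1-based line numbers the pattern checker flags."""
    return [i for i, line in enumerate(source_lines, 1) if SQLI_PATTERN.search(line)]

code = [
    'query = "SELECT * FROM users WHERE id = " + user_id',  # true positive
    'log("SELECT clicked: " + button_name)',                # false positive: not SQL at all
    'query = build_query("users", user_id)',                # false negative: risk hidden in a helper
]

print(scan(code))  # flags lines 1 and 2, misses line 3
```

A human still has to triage line 2 (wasted time) and will never be shown line 3 (a false sense of security), which is why tool output needs educated review.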

Ed Adams: [00:37:34] So once your application is developed and deployed, and it's sitting maybe in a cloud or on a web server, you can use a dynamic application security testing tool to find security vulnerabilities in the completely built application. One of them, in the security world, as you know, is called white-box testing. That's when you have access to the source code. That's SAST. And then DAST is the dynamic piece; that's more like black-box testing, when you don't have access to the source code. You're looking from the outside, kind of looking as an attacker. So those are the two most popular, but a number of different technologies have evolved as well, including IAST, interactive application security testing, which will actually do testing as users are going through and using the product itself. There's also runtime application self-protection, or RASP, which is basically protection that you can build into an application so it can defend itself. Now, I still think that's more promise, and a little bit more hype, than it is useful. But there's a lot of technology around that. And my personal favorite, getting back to the whole SBOM world, is software composition analysis, or SCA, and there are a number of good SCA tools. What this does is basically allow you to walk down the path toward an SBOM, toward documenting everything that is in a specific application. And again, that's also fallible. It's not great, but it gets you a long way toward being able to list what ingredients are in your soup. 
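As a sketch of the SCA/SBOM idea, listing the "ingredients" in your soup, the snippet below simply enumerates the Python packages installed in the current environment using the standard library. Real SCA tools go much further, resolving transitive dependencies and mapping each component to known-vulnerability databases; this shows only the first step, under the assumption of a Python environment.

```python
from importlib import metadata

# A first, minimal step toward an SBOM: enumerate the components
# (package name and version) visible in the current Python environment.
components = sorted(
    (dist.metadata["Name"] or "unknown", dist.version)
    for dist in metadata.distributions()
)

for name, version in components:
    print(f"{name}=={version}")
```

Even this crude inventory is the raw material an SCA tool would cross-reference against advisories to tell you which ingredients are known to be spoiled.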

Brian Selfridge: [00:39:05] Well, first off, I have to acknowledge the brilliance of weaving in the analogies for my six-year-old. I am going to play this for him, and I know it's going to be a winner. I'm almost tempted to give you some more analogy challenges, because you're so good at nailing that. 

Ed Adams: [00:39:21] I don't know Minecraft very well, but I have seen my nephew play, and, you know, whenever he gets, like, a diamond axe, I know he gets very, very excited. 

Brian Selfridge: [00:39:30] So, you know, just that statement alone tells me you know more than the average folks. I'm still learning myself, but that resonated with me even at this point. I am glad you used the scalpel analogy, because we are in healthcare, we do specialize in healthcare, and our audience is predominantly healthcare folks; although if you're not, don't be offended, audience, you don't have to be. Is there anything specific about the healthcare industry, or all of these apps and products that we're putting out to digitize healthcare? Have you seen anything different for healthcare apps in terms of vulnerabilities or the SDLC process that might be at variance with other industry segments? Or is it all pretty much the same? 

Ed Adams: [00:40:10] There's actually a lot that's very specific to healthcare that we could talk about. We could probably do a separate podcast just on that alone. The interesting piece is that a lot of the underlying tech stack is the same. However, there are big differences in healthcare. Twenty years ago, I used to say that no one would really take software security seriously until someone dies because of it. Well, unfortunately, we've passed that threshold, and in healthcare the cyber-safety line is completely blurred. There is no more safety without security in healthcare when it comes to IT. We've had healthcare systems and hospitals shut down because of ransomware, and as a result, people died because they couldn't get to the hospitals in time, because the ambulances had to be rerouted. So the stakes are so much higher in healthcare, with that cyber-safety line that is completely blurred. Another piece with healthcare is that, especially in the US, most of the electronic medical records are built on ancient, ancient technology that came out of Massachusetts General Hospital and MIT years and years ago. The system is called MUMPS, and that's no accident; it was intentionally named MUMPS because it was specifically for healthcare. A lot of the electronic medical records are still using this very, very dated and old protocol. And in fact, just at DEF CON, is it this week? It might be this week, actually, this week or last week, there is a talk on MUMPS and electronic medical records at DEF CON, which is the preeminent hacker conference that happens in Las Vegas at the end of July and early August every single year. 

Ed Adams: [00:41:57] So there's a lot of very, very old technology that keeps on getting built upon in the healthcare system, and that causes a lot of fragility and puts a lot of electronic protected health information at risk, either intentionally or unintentionally. Another piece that I find fascinating, and this is not just in the most commonly used types of electronic medical records, the Epics of the world, is that a lot of them will have this sort of automatic info sharing. From a healthcare perspective, it's meant to make patient care better. So your primary care physician, who is in a different location or facility than your cardiologist, and they're in a different facility from your podiatrist, can all see your information, shared automatically. Well, that sounds great from a healthcare perspective, but it's an absolute nightmare from a security perspective, because sharing that information, and again, doing so in a manner that most likely is not encrypted, or not encrypted properly, is setting those organizations up for a lot of potential data breaches and exposure of what should be protected health information. So there are some unique differences in healthcare in those respects. 

Ed Adams: [00:43:15] But a lot of the time the underlying technology stack is the same. One other piece that I'll mention in the healthcare system, and it's more so in hospitals than anything else, is that a lot of times the hospital environment will create white space between the technology that they procure and use and the vendor that actually makes it. Things like radiation systems, for example: if a radiation system is known to be vulnerable and the vendor actually wants to patch it, a lot of times there's no way for the vendor to implement that patch unless they go onsite, physically, to the device, and apply the patch and the update. And I understand why the hospitals are doing that. They're trying to create that sort of white space from a protection perspective. But it makes updating and patching sometimes very, very challenging. So as organizations are going through and doing their third-party risk assessments, their supply chain analysis and risk assessments, that's one thing that should definitely be considered. And I know that's something CORL can help a lot of organizations with: understanding, I'm going to call it "patchability," which is not really a word, but I'm just going to make it up. The assets that you're procuring and using, can you update them, can you patch them, and can you do so in a time-effective manner if need be? 

Brian Selfridge: [00:44:45] So we've covered a lot of ground here, and I really appreciate the healthcare-specific aspects. Even though we work in this field, it's almost a little surprising to realize how special and different we are. I always talk to healthcare leaders, and we all think healthcare is a big snowflake compared to other industries; in a lot of ways we are, and we are special. But I know we've got to get back to being security leaders in our respective organizations, Security Innovation, and CORL and Meditology Services, our companies here. So I just want to close things out with any additional thoughts you have on any of the topics we've covered today, any closing thoughts to wrap all this up, because we've covered so much good energy here, that you'd like to share with the audience before we close out. 

Ed Adams: [00:45:29] Absolutely. And I'll finish by referencing Aretha Franklin. There are two songs that Aretha sings that I think are very applicable and should be considered routinely from a security perspective and an IT perspective. The first is "Think." I remember the Blues Brothers movie, one of my favorite movies from a long time ago; she actually sang it as part of the movie. If we take time to stop and think about what we're doing, and it's a very difficult thing to do in the IT world because we're moving at such a fast pace, I believe it will necessarily force us into making better decisions. It sounds so simple, and it isn't in a lot of ways, but it is really important. "Think" is the first thing. The second Aretha Franklin song I'm referring to is "Respect": respect for coworkers, and trying to understand that even though you have a job to do, whether it's getting a piece of functionality out the door at a certain time, or defending a certain piece of functionality, or keeping a piece of equipment operating and functional, understand those around you, what their responsibilities are, and what their job tasks are. And if you can do your job in a way that makes someone else's job maybe a little bit easier, all of a sudden you're building an amazing thing called a team. Respect is the biggest catalyst to building a team, and a team is the biggest way to have success in the IT world. So, Aretha, God rest her soul: "Think" and "Respect." 

Brian Selfridge: [00:47:15] Now you've hit me where I live. We've covered my six-year-old, but as a musician, I very much appreciate the final closing musical analogy, and that certainly hits home both in content and in spirit. Fantastic stuff. I can't thank you enough. I'd like to thank my guest Ed Adams, CEO of Security Innovation, for just a fantastic conversation today. Ed, thank you for taking the time to share all these insights with our listeners. We look forward to hopefully having you back again sometime to dig a little deeper. But thanks so much. 

Ed Adams: [00:47:45] Thank you so much for having me on The CyberPHIx, Brian. I appreciate it. 

Brian Selfridge: [00:48:07] Again, I would like to thank my guest, Ed Adams, CEO of Security Innovation, Inc. I really appreciated Ed's insights on securing the software development lifecycle overall, and his perspectives very much resonated with me about the importance of covering the basics and fundamentals of security in the SDLC process in general. Of course, his points about tailoring training for specific teams, and about using industry standards and frameworks like OWASP, automation, and so much more, are so critical for getting this right. So I hope you got as much out of that as I did. I learned a ton and really got some great information there from Ed. As always, we'd love to have your feedback and hear from you. Feel free to drop us a note about any topics you'd like to hear about or any thought leaders you'd like to hear from. Our email address is [email protected]. Thanks again for joining us for this episode of The CyberPHIx, and we look forward to having you join us for the next session coming up soon.