Can We Get Better at Cyber Resilience?
I have racked my brain to understand how America, long regarded as a model of resilience, could let itself be digitally dissected. We are taking a beatdown from nation-states and criminal hackers. The damage tears at the fabric of democracy, which depends on trusted systems. Worse, citizens are quickly coming to expect that everything will eventually be breached — so why should they care?
This is serious. Cybersecurity experts predict that the increasing success of illegal access to systems will have a severe impact on the global economy. Meanwhile, people are surrendering their privacy and won't even change default passwords. This moves vulnerabilities from systems and networks to a more personal place: our daily lives. The question becomes: how do we stop the attacks and blunt the severity of the consequences? And how do we minimize vulnerabilities and mount a defense if the citizenry won't, or can't, help?
If we used all the best practices and tools currently available, we could eliminate 80 percent of all breaches. The problem is actually getting government, industry, and academia to adopt those practices. We can safely predict that a mass awakening of the people will never happen, for a multitude of reasons, and the abilities of hackers grow daily. So perhaps our next best hope is to unleash computer systems that can artificially do some of the things we humans should be doing.
While the need for a well-trained workforce is evident, security professionals believe artificial intelligence (AI) may be a viable way to thwart maturing cyber-attacks. The experts think smart computers will rise to challenges that business professionals won't. They believe the expanding role of AI in cybersecurity can help us gain a better and faster understanding of growing threats in today's shifting security landscape. Yet, due to several biases, AI can also lead us astray, making us feel secure when we are not.
AI, also known as machine intelligence, is an extension of computer science that aims to generate human-like responses, faster than any human, by recognizing threats, detecting and solving problems through automated analysis, and learning how to act accordingly. It plays, and will play, an integral role in developing the cybersecurity technology required to assure we are not one day sitting in the dark. AI can create abilities and capacity a human workforce could only dream of delivering.
AI system designers are literally seeking to give a system "a mind of its own." To get there, however, AI requires humans who are responsible for writing the algorithms and mapping the capabilities. They tell the machine what to look for, where to look for it, how to look for it, and what to do once the target has been identified. But all of this brilliance can be corrupted from the start: human bias can be faulted when we don't receive the outcomes we expect. The same is true when we apply AI to cybersecurity. These AI biases fall into three areas: program, data, and people.
Successfully identifying cybersecurity threats is contingent upon knowing what to look for. An AI cybersecurity program focused on the wrong vulnerabilities will surely fail to detect the real security dangers. Regardless of its accuracy, if an algorithm is programmed to solve a faulty requirement, a real solution will not be identified, and a company or organization would still be at risk of falling victim to a cyber-attack.
AI's success in cybersecurity also depends on representative data that paints the whole picture. Biased data trains the AI system toward a partial understanding of the problem, so its reaction is based on a narrow perception that may not meet the goal. Given our highly diversified risk ecosystem, it's important to gain a full perspective of what's at stake. Increasingly sophisticated hackers are using many different avenues to penetrate our systems, so capturing all of that information in the training data is very important.
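The data-bias problem can be shown with a deliberately simplified sketch. The feature (packet size), the numbers, and the 10-percent margin below are all hypothetical, and a real detector would use a proper model rather than a single threshold — the point is only that a system trained on data covering one attack pattern will miss a technique its data never represented.

```python
# Toy illustration of biased training data (hypothetical values throughout):
# the "detector" learns a packet-size cutoff from benign traffic, because
# the only attacks in its training data were oversized-packet floods.

def train_threshold(benign_sizes):
    # Learn a cutoff: anything 10% above the largest benign packet is "suspicious".
    return max(benign_sizes) * 1.1

def is_flagged(size, threshold):
    # Flag traffic only when it exceeds the learned cutoff.
    return size > threshold

benign = [100, 120, 150, 130]          # training data: benign traffic only
threshold = train_threshold(benign)

oversized_attack = 500                  # the pattern the data anticipated
low_and_slow_attack = 110               # a different technique, never seen in training

print(is_flagged(oversized_attack, threshold))    # flagged: matches the known pattern
print(is_flagged(low_and_slow_attack, threshold)) # not flagged: looks "normal"
```

The second attack sails through not because the code is buggy, but because the training data gave the system a narrow perception of what "malicious" looks like — exactly the partial understanding described above.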
Lastly, human bias is one of the biggest factors inhibiting AI. Cybersecurity is an evolving field with continuously emerging threats, risks, and technologies. People who all come from the same background or culture cannot possibly have a well-rounded perspective on today's diverse threat landscape. Their limited scope of knowledge translates into biased algorithms, and the vicious cycle starts over again.
Eradicating all these biases is not realistic; minimizing them certainly is. Rigorous planning and testing stages are key to charting the path forward for cyber threat identification and response. Not everyone is tech savvy, and most people don't choose to be. Training AI systems, however, will require experts from a host of fields to assure that technologies like AI enhance our lives rather than become artificial barriers to our cybersecurity growth.