Despite heightened interest in enterprise deployment of artificial intelligence, only 40 percent of respondents to ISACA’s second annual Digital Transformation Barometer express confidence that their organizations can accurately assess the security of systems based on AI and machine learning.
This becomes especially striking given the potential for serious consequences from maliciously trained AI; survey respondents identify social engineering, manipulated media content and data poisoning as the types of malicious AI attacks that pose the greatest threat within the next five years.
Cybersecurity has become a race between white hats and threat actors. Artificial intelligence (AI) has been touted as a potential solution which could learn to detect suspicious behavior and stop cyberattackers in their tracks. However, the same technology can also be used by threat actors to augment their own attack methods.
According to IBM, the “AI era” could result in weaponized artificial intelligence. In order to study how AI could one day become a new tool in the arsenal of threat actors, IBM Research has developed an attack tool dubbed DeepLocker that is powered by artificial intelligence.
Read more about DeepLocker and learn how AI can be weaponized on ZDNet.
According to Ankur Laroia, Leader Solutions Strategy at Alfresco, Artificial Intelligence (AI) could provide an extra level of support in the fight against data breaches. AI could not only help identify and alert on breaches, but also assist in predicting them and in post-event analysis.
Artificial Intelligence can provide solutions that seek to replicate and automate some human behaviors and functions. Within an enterprise security context, this could involve the automation of time-intensive processing work, decision making and, potentially, facial and speech recognition. AI could also have a significant impact on data processing.
Read about the potential deployments and benefits of AI in enterprise IT security according to Ankur Laroia, on Information Security Buzz.
Machine learning is a form of AI that interprets massive amounts of data, applying algorithms to the material and making predictions based on its observations. Businesses typically use machine learning to locate and process large data sets, but some organizations are implementing it for a narrower purpose: cybersecurity.
While many assume machine learning makes cybersecurity professionals’ lives much easier, that’s not necessarily the case. Just like any new technology, machine learning still has its flaws—problems that turn the tech into more of a headache than a helping hand in the security space.
Read more about why machine learning may make things harder on cybersecurity pros, on TechRepublic.
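The learn-from-data pattern described above can be sketched with a toy anomaly detector: it fits a baseline from historical event counts and then flags values that deviate sharply from it. Everything here (the data, the function names, and the 3-sigma threshold) is invented for illustration and is not drawn from any of the products or surveys discussed in this article:

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from
    historical event counts, e.g. failed logins per hour."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations above
    the learned mean -- a toy stand-in for an ML anomaly detector."""
    mu, sigma = baseline
    return (value - mu) / sigma > threshold

# Hypothetical failed-login counts per hour
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
baseline = fit_baseline(history)

print(is_anomalous(5, baseline))    # typical hour -> False
print(is_anomalous(60, baseline))   # suspicious spike -> True
```

A real ML-based detector would learn far richer features than a single count, but the shape is the same: fit on past behavior, then score new events against that fit, which is also where the flaws mentioned above (noisy baselines, false positives) creep in.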
New research from ESET reveals that three in four IT decision makers (75%) believe that AI and ML are the silver bullet to solving their cybersecurity challenges.
In the past year, the amount of content published in marketing materials, media and social media on the role of AI in cybersecurity has grown enormously. ESET surveyed 900 IT decision makers across the US, UK and Germany on their opinions and attitudes to AI and ML in response to this growing hype.
Last year, Gartner predicted that almost every new software product would implement AI by 2020. The advancements in AI and its ability to make automated decisions about cyber threats are revolutionizing the cybersecurity landscape as we know it.
According to Rodney Joffe, SVP and Senior Technologist at Neustar, AI is a double-edged sword for cybersecurity, as AI is revolutionizing cybersecurity not just for defenders, but for attackers as well.
Read more about why Rodney Joffe fears that AI is creating a seemingly never-ending cybersecurity arms race between security experts and hackers on DarkReading.
A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI).
AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It is also handy for financial institutions such as banks and credit card providers that are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can significantly enhance their SIEM systems.
Read why Tomas Honzak, Director of Security and Compliance at GoodData, believes that while AI has certain advantages, it is not the cybersecurity silver bullet that everyone wants you to believe, on DarkReading.
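The fraud-detection point above can be made concrete with a minimal sketch: a scorer that learns each cardholder's average transaction amount and flags charges far above it. The `FraudScorer` class, its 5x ratio, and the sample data are hypothetical stand-ins for the trained models that would feed alerts into a SIEM, not anything from GoodData or Cisco:

```python
from collections import defaultdict

class FraudScorer:
    """Toy fraud scorer: learns each cardholder's average transaction
    amount and flags charges far above it."""

    def __init__(self, ratio=5.0):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)
        self.ratio = ratio

    def train(self, user, amount):
        """Record one historical, known-good transaction."""
        self.totals[user] += amount
        self.counts[user] += 1

    def flag(self, user, amount):
        """Return True if the charge looks suspicious."""
        if self.counts[user] == 0:
            return True  # no history: treat as suspicious
        avg = self.totals[user] / self.counts[user]
        return amount > self.ratio * avg

scorer = FraudScorer()
for amount in (20.0, 35.0, 25.0):
    scorer.train("alice", amount)

print(scorer.flag("alice", 30.0))    # near her average -> False
print(scorer.flag("alice", 900.0))   # far above it -> True
```

Production systems replace the single per-user average with models trained on many features (merchant, location, time of day), but the "train on history, score new events" loop is the same, which is also why well-poisoned training data is such a potent attack.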
Red teaming, or the practice of detecting network and system vulnerabilities by taking an attacker-like approach to system, network or data access, has become a popular cybersecurity testing process across a wide swath of organizations. Sometimes referred to as “ethical hacking,” red teaming helps organizations be more self-aware and prepares IT teams for swiftly recovering and rebuilding in the event that systems become infiltrated.
To make red teaming truly effective, organizations need to help overwhelmed IT teams efficiently sift through large amounts of data so they can determine what’s important and better understand the breach (or potential breach) in question. They can do so by incorporating artificial intelligence (AI)-based cybersecurity solutions into their red teaming exercises.
Read more about the technological benefits of injecting AI-powered technology into red teaming activities on Security Magazine.
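One way AI-assisted triage helps red teams sift through large amounts of data is by ranking findings so analysts see the highest-risk items first. The sketch below scores each finding by severity times asset criticality as a toy proxy for an ML-based prioritizer; the field names, weights, and sample findings are assumptions for illustration only:

```python
def triage(findings, top=3):
    """Rank red-team findings so analysts review the highest-risk
    items first. Severity x asset criticality is a toy scoring proxy
    for the ML-based prioritization described above."""
    ranked = sorted(findings,
                    key=lambda f: f["severity"] * f["criticality"],
                    reverse=True)
    return ranked[:top]

# Hypothetical findings from a red-team exercise
findings = [
    {"id": "open-port-8080",  "severity": 2, "criticality": 1},
    {"id": "weak-admin-pwd",  "severity": 5, "criticality": 5},
    {"id": "stale-tls-cert",  "severity": 3, "criticality": 2},
    {"id": "exposed-bucket",  "severity": 4, "criticality": 4},
]

for f in triage(findings, top=2):
    print(f["id"])   # weak-admin-pwd, then exposed-bucket
```

The value for an overwhelmed IT team is in the ordering, not the arithmetic: whatever model produces the scores, surfacing the two or three findings that matter most is what turns a pile of red-team output into an actionable breach response.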
Machine learning (ML) and artificial intelligence (AI) are not what most people imagine them to be. Far removed from R2-D2 or WALL-E, today’s bots, sophisticated algorithms, and hyperscale computing can “learn” from past experiences to influence future outcomes.
This ability to learn gives cybersecurity-focused AI and ML applications unrivaled speed and accuracy over their more basic, automated predecessors. This might sound like the long-awaited silver bullet, but AI and ML are unlikely, at least in the near future, to deliver the much-heralded “self-healing network.” The technology does, however, bring to the table a previously unavailable smart layer that forms a critical first-response defense from hackers.
Read why Craig Hinkley, CEO of WhiteHat Security, thinks that AI and ML would be complete game changers for cybersecurity teams if not for the fact that hackers have also embraced the technologies, which actually makes them more of a double-edged sword, on DarkReading.
According to the market research report published by P&S Market Research, the global artificial intelligence (AI) in cyber security market is projected to grow at a CAGR of 36.0% during 2017-2023, to reach $18.1 billion by 2023.
The rising number of cyber frauds and malicious attacks is the major factor driving demand in the AI in cyber security market. Additionally, the adoption of Bring Your Own Device (BYOD) is fueling demand for artificial intelligence, as several reported incidents of data leakage, unauthorized access, and the downloading of unsafe applications and content on employees’ personal devices have left organizations’ data vulnerable to external threats.
Read more about the findings of the market research report by P&S Market Research on GlobeNewswire.