Increasingly sophisticated artificial intelligence (AI) capabilities make headlines daily, as companies pioneer the use of AI for everything from analyzing hospital test results to speeding up the hiring process to replacing human-based customer service.
As with any new tech, however, AI also comes with a range of security vulnerabilities, some of which can substantially exacerbate a company’s exposure not only to cybersecurity issues, but also geopolitical risk.
Many companies feel pressure to add AI features to platforms or products without prioritizing the security of those features, exposing sensitive information and proprietary data to attackers.
Frequently, companies that purchase AI features to add to their existing products or platforms do not know the origins or full capabilities of the code. Code that has not been properly vetted may go unpatched against data leaks, or may contain backdoors added by programmers who use the feature to steal and profit from the data.
In other cases, AI features continuously train neural networks, which can create new vulnerabilities as they learn or magnify biases already present in the code or training data.
Recently, news of Amazon's use of an AI tool to screen job applicants triggered a reputational scandal when it was revealed that the tool was biased against women applying for technical positions, magnifying a bias present in the historical hiring data used to train it.
In many cases, conclusions drawn by AI need to be examined to ensure that mistakes are not compounded and that errors are caught before they cause harm; in fields such as medicine, engineering, and security, flawed recommendations can lead to injury or death.
As AI continues to touch many aspects of daily life, there is a growing realization that the technology demands an interdisciplinary approach.
The Massachusetts Institute of Technology, for example, has announced a $1 billion college of computing where researchers from a variety of fields will study AI in close collaboration.
While an academic focus on AI is valuable, it will take time to yield the conclusions necessary to address AI’s effects. Businesses need to address the challenges today.
To perform their due diligence and effectively address their exposure, organizations must approach their use of AI from both cybersecurity and geopolitical risk perspectives, understanding not only the intricacies of the code and its capabilities, but also the extent to which the AI further exposes the firm to data breaches, strategic errors, and reputational risks.
As the only cybersecurity firm with a Geopolitical Risk practice, VerSprite has the technical understanding and the geopolitical insight to help firms meet these two demands successfully today.
VerSprite offers a range of services designed to help companies assess, analyze, and address their exposure to geopolitical risk. Geopolitical Risk consulting can help you further unlock your organization's potential by uncovering previously unforeseen opportunities to flourish in the global economy.