When ChatGPT entered the market early this year, it brought Artificial Intelligence (AI) into the mainstream and changed the way the world works. From software engineering to education, law, and finance, very few sectors have not been impacted by ChatGPT. In addition to my role at MetricStream, I also teach at a university, and I have seen firsthand how ChatGPT is changing the way we teach and evaluate. I have had to remove several of my open-book questions since the answers were now available on ChatGPT! As concerns about such new technologies continue to mount, enterprises, regulators, and governments need to focus on simultaneously managing the risks posed by these technologies and accelerating the opportunities they present.
At the 2023 GRC Summit in Miami, I moderated an engaging discussion with eminent industry experts Brian Fricke, Managing SVP, CISO, City National Bank of Florida, and Alex Gacheche, Global Head of Information Security, Technology Infrastructure and Emerging Technology Audit, Meta, on how AI, automation, and emerging technologies are impacting risks and opportunities.
Here are the key areas discussed during the session.
Watch Now: How AI, Automation and Emerging Technologies are Impacting Risk and Opportunities
ChatGPT’s impact on the world cannot be overstated. But it is important to remember that, despite the hype, it is a tool in the toolbox, designed to make work easier. There will be other, better AI-powered platforms in the future, each of which will impact the way we work in its own way. But none of these platforms can replace human jobs – people will be replaced by other people who can use the platforms better, and subject matter expertise will remain vitally important. A foundational understanding of the technology, creative problem solving, and the agility to adapt and apply the results produced by an AI platform will be increasingly valuable in the years to come.
Viewing AI as a tool in the toolbox also makes it easier for organizations to shape policy, define acceptable risks, and establish technology controls. Organizations also need to develop robust capabilities across the three lines of defense to maintain a comprehensive risk posture in the era of AI. Once the front line is empowered to use AI platforms effectively, the second and third lines will need to adapt quickly as well.
There is, of course, no denying that the rapid emergence of AI-powered platforms poses immediate and long-term risks.
At the same time, AI offers a number of opportunities for enhancing productivity, improving key processes, and driving responsible business growth. AI-powered platforms can significantly improve threat analysis by quickly and accurately assessing large volumes of threats against existing mitigation capabilities. AI can also help bridge skills shortages. For example, the cybersecurity field is projected to have a 50% shortage of talent by 2025. AI platforms can help companies maintain and even improve their cybersecurity practices despite these resource constraints.
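To make that idea concrete, here is a minimal, hypothetical sketch of the kind of triage such a platform might automate: matching incoming threat descriptions against an existing library of mitigations by text similarity, and flagging only the uncovered threats for analyst review. The threat and mitigation text and the 0.3 similarity threshold are illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch: triage incoming threats against an existing
# mitigation library by text similarity, so analysts review only the
# threats with no strong existing mitigation. All text and the 0.3
# threshold below are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

mitigations = [
    "Multi-factor authentication enforced for all remote access",
    "Endpoint detection and response deployed on all workstations",
    "Quarterly phishing awareness training for all employees",
]

threats = [
    "Credential stuffing attack against the VPN gateway",
    "Spear-phishing campaign targeting finance staff",
    "Zero-day exploit in a third-party file-transfer appliance",
]

# Vectorize both sets of descriptions in a shared TF-IDF space.
vectorizer = TfidfVectorizer().fit(mitigations + threats)
scores = cosine_similarity(
    vectorizer.transform(threats), vectorizer.transform(mitigations)
)

# Flag threats whose best-matching mitigation falls below the threshold.
for threat, row in zip(threats, scores):
    best = row.max()
    status = "covered" if best >= 0.3 else "REVIEW: no strong mitigation"
    print(f"{threat:55s} -> {status} (score={best:.2f})")
```

A real AI-driven platform would use far richer models than TF-IDF similarity, but the workflow it illustrates (score, rank, and route only the gaps to human experts) is exactly the productivity lever described above.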
Understandably, organizations, regulators, and even governments need to work towards addressing some of the concerns around AI tools and platforms.
At the end of the day, it is important to remember that AI is a technology much like all the transformative technologies that came before it. The world has contended with a number of disruptive technology trends over the last couple of decades, ranging from cloud computing to digital banking and cryptocurrencies. And it has collectively regulated, secured, and governed each of them effectively. As we gear up to contend with AI, we must remember that information and data lie at the foundation of any AI-driven platform, and consider ways to adapt existing data privacy laws, such as GDPR and CCPA, to cover AI risks.
AI is here to stay and will continue to evolve and shape the nature of business, work, and even life as we know it. Like any new technology, it presents a range of risks, and enterprises may be tempted to ban the use of AI altogether. But a transformative and accessible technology cannot and should not be banned in its entirety. Attempting to do so would not only hold the enterprise back from truly exploring its potential but also result in unauthorized and unregulated usage. It would be far more effective to focus on building an AI-aware risk culture by training employees on the responsible use of AI. Active discussions at the board level, and even the establishment of a risk committee to monitor AI risk, are good ideas. By integrating AI into the risk register, organizations can prepare to build policies for it. And most importantly, they must engage with younger generations studying and preparing for careers in a world already transformed by AI. Their fresh perspectives, unique approaches, and understanding can help drive better policies for what is undoubtedly going to be a new era of Artificial Intelligence-driven development and growth.
AiSPIRE, an industry-first, state-of-the-art, cloud-based product offering from MetricStream, can empower your organization’s GRC functions with proactive intelligence backed by powerful AI algorithms.
By leveraging large language models, GRC ontology-based knowledge graphs, and generative AI capabilities, AiSPIRE harnesses the full potential of an organization’s existing GRC and transactional data. Unlike other GRC tools that rely on manually defined rules and workflows, AiSPIRE uses your organization’s data to train advanced machine learning and AI models.
AiSPIRE can empower your organization to:
Interested in learning more? Request a demo today!
Download Product Overview: MetricStream AiSPIRE