Behind the Ballot: Risk and Governance

  • 05 Nov 2025

Our Members asked board election candidates (Question 1 of 2): AI introduces new risks and opportunities. How should the BCI incorporate these considerations into its guidance and standards?

Atiq Bajwa FBCI

The BCI should integrate AI considerations comprehensively into the Good Practice Guidelines by establishing dedicated modules and sections addressing AI-specific risks, including algorithmic bias and data integrity failures, alongside practical guidance for leveraging AI to enhance predictive capabilities and automate recovery processes. The BCI can also develop risk assessment frameworks specific to AI-dependent resilience programs, helping members evaluate when AI adds value versus when it introduces unacceptable dependencies. The standards should address AI across the entire resilience lifecycle, from risk identification and the BIA through to plan activation and testing.

Desmond O'Callaghan Hon FBCI

The BCI has always published guidance and influenced standards, and is globally recognized for these contributions to the wider community. AI does not change this professional purpose. Updated guidance is an ongoing, member-valued output, incorporating global leading practice gathered via BCI chapters and special interest groups, as well as partnerships with academia and other organizations. The dissemination of information to members should include analysis of relevant risks and opportunities relating to AI, both in how businesses are using it to operate and in how BCM practitioners are using it in their work. The AI SIG, as a specialized group of expert practitioners with this focus, should be well positioned to lead this analysis.

Federica Maria Rita Livelli MBCI

It is very simple, and something I already do successfully with the other organizations I serve: create ad hoc single-topic publications on standards and regulations to make innovation easier to adopt. These documents should provide simple, didactic guidance and explanations for better interpretation and implementation. My publications on AI are appreciated and shared on social media in Italy and abroad because they help professionals and organizations understand what is required to implement AI, face the risks and challenges it implies, and meet regulatory requirements.

Gregory Descamps MBCI

Within BCI standards, AI should be recognized as an enabler across the resilience space. Guidance can share best practices for its reasonable use while highlighting risks such as data privacy, reduced vigilance, and over-reliance on AI-generated content. As AI becomes pervasive across sectors and geographies, BCI can help members navigate its use responsibly, breaking down silos and connecting the global resilience community. By balancing innovation with accountability, the BCI can provide practical advice that strengthens operational, crisis, and business continuity practices, enabling organizations to leverage AI effectively while maintaining high standards of awareness, rigor, and resilience.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response

Kelly Blakeley MBCI

The BCI connects diverse organisations across sectors utilising AI to different degrees. Therefore we have huge potential to further track the risks and opportunities associated with AI. To ensure the BCI incorporates these into guidance and standards, we should continue to enhance our:

  • Representation of skills
  • Strategic partnerships with related professions
  • Governance structures for connecting Special Interest Groups with technical committees for the Good Practice Guidelines
  • Representation on standards committees, such as ISO, for AI and related standards

Our role as a Board is to provide governance and oversight of the workings of the BCI, and of the associated risks and opportunities outlined in the Strategy.

Maura Santunione MBCI

In alignment with new regulations and international standards for risk management, the BCI could incorporate AI by focusing on four key areas:

  1. Guidance Integration: Supporting the application of appropriate principles and integrating them directly into the core BCI guidance documents.
  2. Professional Education: Developing dedicated training programs and certifications to upskill members on AI governance and resilience risks.
  3. Awareness & Outreach: Promoting awareness through consistent webinars, events, and shared content that highlight emerging threats and opportunities.
  4. Tool & Resource Development: Cooperating with professionals and organizations to facilitate the creation and dissemination of supporting tools and frameworks.

Mohamed Hassan MBCI

I believe that any new technology introduced to enhance agility, automation, and performance will also introduce new risks that must be identified and mitigated through appropriate controls. To address this, the BCI could consider the following actions:

  • Develop an AI ethics framework or guideline outlining all key considerations for using AI, particularly in activities that produce formal or critical outcomes. This framework should be made accessible and circulated among all BCI members, chapter leaders, and stakeholders.
  • Instruct the existing technical review committee to incorporate AI and emerging technology considerations, including related risks and opportunities, into future updates of formal BCI publications and releases.
  • Publish an annual report on AI and emerging technologies, similar to other BCI reports, capturing diverse global perspectives from industry leaders and organizations, along with key statistics and trends shaping the resilience and continuity landscape.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response

Rajesh Pillai MBCI

AI introduces both transformative opportunities and complex risks that must be embedded into BCI’s guidance and standards with rigor and foresight. The Institute should establish a dual-framework approach: one that accelerates innovation while safeguarding ethical, legal, and operational integrity. This includes defining governance principles for responsible AI adoption, integrating risk assessment models that address algorithmic bias, cybersecurity vulnerabilities, and data privacy concerns. BCI should also develop scenario-based resilience guidelines that incorporate AI-driven automation and predictive analytics, ensuring members can leverage technology without compromising continuity fundamentals. By promoting transparency, accountability, and interoperability standards, BCI can help organizations balance agility with trust. Furthermore, fostering collaboration with regulators and technology leaders will enable the profession to anticipate emerging risks and codify best practices. Through these measures, BCI will position itself as the global authority on resilient, ethical, and future-ready continuity strategies.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response. It was also used to assist in drafting the responses. AI served strictly as a writing aid, not for generating original ideas or decision-making. Ethical use and transparency were prioritized.

Sanjay Vijayaraghavan KV MBCI

AI brings both exciting opportunities and new kinds of risks for resilience professionals. I think the BCI should help members make sense of this change by offering simple, practical guidance on how to use AI responsibly. It can show how AI supports better prediction, faster response, and smarter decision-making, while also reminding us to watch for challenges like bias, data privacy, and system dependence. The BCI could include AI considerations within its existing standards and share real examples of what’s working well. Most importantly, it should encourage members to approach AI with curiosity but also with caution, keeping human judgment, ethics, and transparency at the centre of everything we do. That balance will help the profession grow with confidence in a technology-driven future.

Simon Contini FBCI

AI has both risks and opportunities for business continuity and resilience. The BCI should strive to continue to engage its membership and the market in these areas and work together to study them through further research and discussion. This process should be open to broad and diverse input; iterative (last year's iteration could look much different from the next, and this doesn't matter, as it's the actual process of re-examination and iteration that matters); and informed by real-world application, so that it can remain relevant and resilient in a rapidly changing landscape. This is where the human context will be able to demonstrate value with AI and leverage both to lead the way in business continuity and resilience.

Disclaimer: An AI tool was used to conduct a grammar and spell check in this response.

Question 2 of 2: What governance measures would you advocate for to ensure responsible and secure use of AI in continuity planning?

Atiq Bajwa FBCI

To ensure responsible AI use in continuity planning, I advocate for comprehensive governance measures centered on accountability, validation, and human oversight.

  • Clear ownership: AI tools used in resilience programs must have a designated owner within the resilience team who is accountable for tool performance, outputs, and integration into BC plans.
  • Rigorous validation: Organizations must proactively test AI systems through tabletop exercises and simulations, specifically examining AI recommendations to ensure accuracy before real incidents occur.
  • Human oversight: Rather than being blindly followed, AI must serve as a tool, with governance requiring that critical decisions, such as invoking BCPs or recovery plans, be reviewed and authorized by trained resilience professionals.

Desmond O'Callaghan Hon FBCI

With AI increasingly in use to perform some BCM activities within organizations, I don't think it eliminates those activities; it just changes the way we do them. To the extent good governance is already in place for these activities, we must look at current governance mechanisms to ensure they are not diluted by AI: reviewing analyses and plans, reporting on planning progress and status to top management, running validation exercises, assessing compliance with internal and external standards and guidelines, and so on. Management risk acceptance and plan sign-off does not go away with AI. The use of AI in planning should be openly disclosed. AI must not be allowed to become self-governing.

Federica Maria Rita Livelli MBCI

The responsible and safe use of AI in continuity planning requires a risk-based and resilience-based approach, which implies implementing the principles of risk management, business continuity, and cybersecurity, as well as complying with current regulatory requirements on privacy, data security, cybersecurity, and AI. Therefore, organizations should consider: the establishment of a multidisciplinary committee with defined roles; the creation of policies aligned with the ISO 22301, 31000, 27001, and 42001 standards and current regulations; the definition of ethical principles; and data quality assurance and bias management. For better continuity planning, organizations should also consider real-time dashboards, specific KPIs, and periodic audits in order to detect degradation of AI performance, plan for AI failure scenarios, ensure redundancy, and avoid vendor lock-in. It is fundamental to involve the entire organization and provide adequate training to develop awareness of the limitations and risks of AI.

Gregory Descamps MBCI

AI should be treated as a cross-cutting subject across all aspects of resilience, not limited to specific domains. As a key governance measure, the BCI should provide clear usage guidance within its standards, frameworks, and educational materials, ensuring members understand both opportunities and risks. Embedding AI as a recurring topic across BCI content promotes consistent awareness, ethical application, and informed decision-making.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response

Kelly Blakeley MBCI

According to the BCI Vision 2030 Report, the top three factors shaping the evolving role of business continuity managers were:

  • Increased awareness of regulations (79.7%)
  • Integration of cutting-edge technology to support the role (78.8%)
  • More importance given to cross-functional collaboration (77.9%)

This signals an increased need for better governance in general: protocols around information sharing, committees, strategic leadership, a shift from tech literacy to tech fluency reaching our leadership, ethics, and effective risk management. What we've observed as a Board over the last three years is a race of nations, organisations, and professions to govern and regulate AI with varying approaches. Continuity planning is also subject to these. Our role as a Board is to set the strategic direction for the BCI and, through oversight, ensure that it is operating in alignment with that.

Maura Santunione MBCI

In alignment with international standards (e.g., ISO/IEC 23894) and critical regulations (e.g., the EU AI Act), I advocate for a holistic governance approach. This approach is built on measures that address both technical and ethical security, including:

  • Establishing clear AI Accountability Frameworks
  • Mandating AI-specific Business Impact Analysis (AI-BIA)
  • Creating Ethics and Bias Mitigation Committees
  • Implementing Required Explainability (XAI) and Audit Trails

In parallel, an appropriate Security Awareness and Culture should be built.

Mohamed Hassan MBCI

If I understood the question correctly, it concerns setting AI governance measures when developing continuity plans. I think there are some important governance measures that can ensure the responsible use of AI, such as:

  • Developing audit logs, or keeping a human in the loop, to validate all critical inputs related to continuity planning, such as critical resources, recovery strategies, steps, and actions.
  • Developing appropriate measures to make sure AI meets data privacy requirements for any sensitive data.
  • Developing appropriate measures for validating key BC decisions proposed by AI with expert judgment.
  • Recognizing that if AI is used to review and update continuity plans, it may miss new threats, process changes, or tech shifts; validation measures should be triggered after each update or change occurs.
  • Fine-tuning thresholds, pairing AI with rule-based logic, and simulating/testing AI decision-making.
  • Continuing the traditional work of developing continuity plans.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response

Rajesh Pillai MBCI

To ensure responsible and secure use of AI in continuity planning, I would advocate for governance measures anchored in transparency, accountability, and risk mitigation. First, establish a robust AI Governance Framework that defines ethical principles, compliance requirements, and clear roles for oversight. This should include mandatory risk assessments addressing algorithmic bias, cybersecurity vulnerabilities, and data privacy before deployment. Second, implement audit trails and explainability standards to ensure decisions made by AI systems are traceable and defensible. Third, enforce data protection protocols aligned with global regulations, coupled with continuous monitoring for emerging threats. Fourth, mandate human-in-the-loop controls for critical continuity decisions to prevent over-reliance on automation. Finally, promote training and certification programs for practitioners to build awareness of responsible AI practices. These measures will safeguard trust, resilience, and ethical integrity while enabling innovation in continuity planning.

Disclaimer: An AI tool was used to conduct grammar checks and refine the language in this response. It was also used to assist in drafting the responses. AI served strictly as a writing aid, not for generating original ideas or decision-making. Ethical use and transparency were prioritized.

Sanjay Vijayaraghavan KV MBCI

For AI to be used responsibly in continuity planning, good governance starts with clear accountability and transparency. We need to know who owns the decisions AI makes and ensure people stay in control. Regular reviews, data privacy checks, and bias testing should be part of everyday practice and not afterthoughts. I’d also encourage simple, clear policies that help teams understand how AI supports them rather than replaces them. Responsible use is really about balance, using technology to strengthen trust, not weaken it.

Simon Contini FBCI

Current governance can be adapted, just as the BCI has asked me to answer these questions. It gets to the ethics and integrity of people. This is my dyslexic attempt:

"As the BCI has asked questions about whether AI was used; why and how. This is going to become a more common question and declaration."

This is Copilot:

"As the BCI has asked whether AI was used and why, it's important to explore how AI adoption is becoming more common across industries. Understanding the motivations behind its use and the contexts in which it's applied will help shape relevant and future-ready guidance."

In conclusion, the second answer is far more wordy but better than my human answer; both, in my view, say the same thing. I understand both, which is also an important takeaway. AI and ethics should not be grammar and ethics tests.

Disclaimer: An AI tool was used to conduct a grammar and spell check in this response. Its use was also declared in the second response.

Voting is still open and closes on 12 November 2025 (midnight GMT).

 
