At its core, AI is simply software that can ‘think’, ‘learn’ and ‘make decisions’, somewhat like we humans do. AI systems aren’t programmed in the traditional way; instead, to an extent, they program themselves.
Generative AI is a specific type of AI that can create content that didn’t exist before, in the same way that a human can come up with a melody that has never been heard. Generative AI can pull something out of nothing, just like us.
Large Language Models (LLMs), meanwhile, are a specific type of generative AI that focuses on understanding and generating text. Interacting with one is a bit like having a chat with the world’s most extensive library.
Both types of AI are increasingly deployed today, across all sectors. Unfortunately, this impressive technology is not without risk.
Without stringent regulatory frameworks, these powerful technologies can be exploited for nefarious purposes, ranging from sophisticated cyber-attacks to the manipulation of public opinion and behaviour. The potential for AI and LLMs to autonomously generate disinformation at scale presents a particularly insidious threat. The worst-case scenario is… not good: the foundations of democratic discourse undermined, societal divisions deepened and exploited. Moreover, without ethical guidelines, these technologies may perpetuate and amplify biases present in their training data, leading to discriminatory outcomes that entrench existing social inequalities.
Furthermore, the uncontrolled application of AI and LLMs poses significant privacy and security risks. The capability of AI to analyse and predict human behaviour with increasing accuracy raises the spectre of a surveillance state in which individual freedoms are profoundly compromised. Additionally, the lack of ethical constraints could lead to the development of autonomous weapons systems, escalating the risk of unaccountable and potentially catastrophic military engagements.
Deploying an AI management system, such as one based on ISO 42001, is an effective way to manage the risks attendant on AI systems. While it’s impossible to remove all risk, a management system lets you know what issues you may face and stay in control of a technology that is, by its very nature, amorphous.
Artificial Intelligence (AI) is now ubiquitous across all sectors. Yet despite the soaring use of this technology, there are very few guidelines on how to use it securely.
AI can bring real productivity benefits to your organisation. However, deploying it raises specific considerations, namely:
One sad fact about AI systems is that, like much of the human-generated data that fuels them, they can carry unfortunate biases. We must work hard to ensure that AI systems do not emphasise, entrench or deepen existing social fissures.
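To make this concrete, here is a minimal sketch of one simple bias check: comparing the rate of positive outcomes across protected groups and taking the ratio of the lowest to the highest. The data, group labels and the 0.8 “four-fifths” threshold are illustrative assumptions, not a recommendation or a legal test.

```python
# A minimal sketch of a fairness check, assuming a binary classifier whose
# predictions can be paired with each record's protected attribute.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions per protected group."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical data: 1 = approved, 0 = rejected
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))        # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(preds, groups)) # ~0.33, well below the 0.8 heuristic
```

A check like this is only a first signal; a low ratio is a prompt to investigate the training data and model, not a verdict in itself.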
Privacy is sacrosanct, and we must ensure it is protected when using AI systems. ISO 42001 puts the protection of data centre stage, and for good reason. The misuse or disclosure of personal and sensitive data (e.g. health records) can have a harmful effect on data subjects, not to mention the legal and reputational effects on any organisation caught mishandling such information.
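As one illustration, here is a hedged sketch of redacting obvious identifiers before text leaves your organisation for an external AI service. The regex patterns are illustrative assumptions and nowhere near exhaustive; a real deployment needs proper PII-detection tooling and a data protection review.

```python
# A minimal sketch of redacting obvious personal identifiers before a prompt
# is sent to an external AI service. Patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # UK health ID format
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Patient John (john.smith@example.com, +44 7700 900123) asked about results."
print(redact(prompt))
# Patient John ([EMAIL], [PHONE]) asked about results.
```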
On the regulatory front, the approach so far has been fairly hands-off: international collaboration and national regulations are not yet law, but they are coming soon.
Let’s finish up with eleven objectives your organisation should consider as you adopt AI with security, safety and impacts in mind.
Accountability: The use of AI can change existing accountability frameworks. Where previously a person would be held accountable for their actions, those actions may now be supported by, or based on, the use of an AI system.
AI expertise: Assessing, developing and deploying AI systems requires dedicated specialists with interdisciplinary skill sets and expertise.
Availability and quality of training data: AI systems based on machine learning (ML) need training, validation and test data to train the system and verify its intended behaviour (see the first sketch after this list).
Environmental impact: The use of AI can have both positive and negative impacts on the environment.
Fairness: The inappropriate application of AI systems for automated decision-making can be unfair to specific persons or groups of people.
Maintainability: Maintainability relates to the organisation’s ability to handle modifications of the AI system, whether to correct defects or to adjust to new requirements.
Privacy: The misuse or disclosure of personal and sensitive data (e.g. health records) can have a harmful effect on data subjects.
Robustness: In AI, robustness means the system’s ability to perform comparably on new data as it does on the data on which it was trained (or the data of typical operations); the first sketch after this list shows a simple check.
Safety: Safety relates to the expectation that a system does not, under defined conditions, lead to a state in which human life, health, property or the environment is endangered.
Security: In the context of AI, and in particular for AI systems based on ML approaches, new security issues must be considered beyond classical information and system security concerns, for example data poisoning of training sets or adversarial inputs crafted to mislead a model.
Transparency and explainability: Transparency relates both to the characteristics of an organisation operating AI systems and to those systems themselves. Explainability means providing interested parties, in a way understandable to humans, with an explanation of the important factors influencing the AI system’s results (see the second sketch after this list).
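To ground the data and robustness objectives above, here is a minimal sketch using scikit-learn on synthetic placeholder data: keep training, validation and test sets separate, then treat a large gap between training and held-out accuracy as a warning sign. The dataset, model and threshold are assumptions for illustration, not a recommended configuration.

```python
# A minimal sketch of train/validation/test discipline and a first-pass
# robustness signal. Placeholder data and model throughout.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Carve off a final test set first, then split the rest into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
val_acc = accuracy_score(y_val, model.predict(X_val))
print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")

# An arbitrary example threshold; tune to your own risk appetite.
if train_acc - val_acc > 0.05:
    print("Warning: performance gap suggests the model may not generalise.")

# The untouched test set is scored once, immediately before release:
# test_acc = accuracy_score(y_test, model.predict(X_test))
```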
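And for the transparency and explainability objective, a second sketch: permutation importance, one widely used technique, measures how much held-out accuracy drops when each input feature is shuffled. Again, the data and model are placeholders chosen only to make the example self-contained.

```python
# A minimal sketch of permutation importance as an explainability aid.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts accuracy: a plain-language
# answer to "which inputs most influence this system's results?"
ranked = sorted(enumerate(result.importances_mean), key=lambda p: p[1], reverse=True)
for idx, drop in ranked[:5]:
    print(f"feature {idx}: mean accuracy drop {drop:.3f}")
```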
When developing your AI governance programme, you should consider the available standards and choose the one that best fits your organisation.
There are two published AI frameworks to consider. ISO/IEC 42001 focuses on standardised governance, whilst NIST AI RMF emphasises flexible risk management and trustworthy AI practices.
ISO 42001 AI Management System
This globally recognised standard provides a systematic approach to AI management, much as ISO 27001 does for information security. The ISO/IEC 42001 framework guides organisations in designing, implementing and maintaining AI systems that meet security, transparency and ethical standards.
NIST AI Risk Management Framework
Developed by the U.S. National Institute of Standards and Technology, this framework emphasises assessing, managing, and mitigating AI-related risks, focusing on ethics, transparency and accountability. Adopting this framework helps organisations establish a risk-aware approach to AI that supports regulatory compliance and promotes trust.
Both standards provide a foundation for organisations to craft a governance framework that reflects industry best practices and ensures responsible, secure and compliant AI usage.
The essence of an AI management system is continual improvement: your organisation should continually improve the suitability, adequacy and effectiveness of the system.
Non–conformity & Corrective Action: When a non–conformity occurs, your organisation should:
Ultimately, AI should be managed like everything else in your organisation. We are not suggesting you avoid AI – quite the opposite. Rather, you should approach it from a risk-based perspective, and with a management system around it.
Speak to one of our experts to better understand the governance around AI and the process of ethically adopting AI.