Testing AI regulation
08 January 2024

Artificial intelligence | Nuclear sector

Artificial intelligence is starting to enter the nuclear sector, backed by substantial investment. The UK nuclear regulator, the Office for Nuclear Regulation (ONR), convened its first regulatory sandbox to learn more about how it might manage the technology.

This project piloted a nuclear regulatory sandbox process, using AI as the test case. It is the first application of a regulatory sandbox in the nuclear industry and draws on the work of the Civil Aviation Authority, which has been using sandboxing for three years. The process, diagrammed in the full report, involved sprint workshops and regulatory sessions.

Two challenging applications of AI in a nuclear environment were discussed: AI to ensure appropriate and targeted plant maintenance, and real-time use of AI to facilitate the safe operability of robots in constrained spaces (not covered here).

Participants sought to explore the use of AI and plant data to inform structural integrity claims in a safety and environment case, to help demonstrate reliability. This would be beneficial because it could support the development of digital twins and probabilistic assessment to demonstrate an asset’s in-service operational life. They said that they currently struggle to use AI in this way because of the uncertainty associated with predicting the location and progression of defects, and the nuclear safety significance of certain structures and components.

To initiate and structure the discussion in the sprint workshop, licensees and contractors from industry developed a mock initial safety and environmental case structure. The structural integrity AI system envisioned in the task is intended to support structural integrity claims in the plant safety and environment case. It would be used to support the claim that inspection techniques can adequately detect defects early in a component’s life, before failure occurs. Currently, these inspections are carried out by human inspectors, supported by modelling, with around a 90% probability of detection.

Initially the workshop considered that the use of AI would be offline (that is, the AI would be a tool whose output could not affect the plant without human action). The focus of the group was therefore to consider the assurance necessary for robust data interpretation across a number of potential users.

The sandboxing focused on using AI to inform a mathematical model and identify potential new correlations that could inform further research.

It is likely that an AI system and its operational environment would change with time, so the group considered what changes could occur and how they could affect the validity of an AI system’s output. These included ageing of system and environmental components, evolution of organisational culture, changes in attitude to defect tolerance over time, and hardware changes. Controls and associated procedures are therefore needed to clarify any assumptions being made, define learning protocols and calibrate the AI system’s outputs against a benchmark, so long as the benchmarking model remains valid.
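
As a hedged illustration of that calibration step (the tolerance, data and function below are hypothetical, not from the report), a periodic check might compare AI-derived defect estimates against a validated benchmark model and flag the system for recalibration and engineering review when they diverge:

```python
import numpy as np

def check_against_benchmark(ai_predictions, benchmark_predictions, tolerance=0.1):
    """Compare AI-derived defect-size estimates (mm) with a validated benchmark model.

    If the mean relative deviation exceeds the tolerance, the AI output is
    flagged for recalibration and engineering review.
    """
    ai = np.asarray(ai_predictions, dtype=float)
    bench = np.asarray(benchmark_predictions, dtype=float)
    relative_deviation = np.abs(ai - bench) / np.maximum(np.abs(bench), 1e-9)
    drift = float(relative_deviation.mean())
    return {"mean_relative_deviation": drift, "recalibration_required": drift > tolerance}

# Example: the AI output has drifted roughly 13% from the benchmark, so recalibration is flagged.
print(check_against_benchmark([2.3, 1.1, 4.0], [2.0, 1.0, 3.5], tolerance=0.1))
```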

The group considered that it would be difficult to train the system to identify failures of its own components (such as cameras or lighting), so non-AI protection systems should be implemented.

The starting point for demonstrating that risks are reduced and safety, security and environmental protection is adequate is ensuring the normal requirements of good practice in engineering, operation and safety management are met. These requirements should include the intended use and benefits, functional requirements, and data definition of any AI component. The operating domain of the AI component (that is, the whole system) can have a significant impact on operation and should also be clearly specified. In addition, the approach to AI training should be robustly defined and recorded in clear requirements.

UNDERSTANDING OUTPUTS

The AI output is an interpretation of the system and of the uncertainty of its inputs. This means that determining the precise overall level of uncertainty of the system will be difficult or even impossible, and therefore arrangements are needed to ensure AI outputs are used appropriately. Given this, the group considered that it would be difficult to define and understand the level of conservatism in the AI output. Limiting an AI system’s output to best-estimate applications may not help with the safety and environment case arguments because of the high levels of uncertainty, but best-estimate AI-derived data may assist in operations.

ONR stated that its current guidance encourages a deterministic approach for higher reliability structures, systems and components (SSC), but that there may be cases where using AI and plant data gives new insights to support the structural integrity safety case. In the case of lower-reliability SSC, where the known degradation mechanisms dominate, using AI and plant data in probabilistic approaches may underpin the safety case more directly.

The group suggested a graded approach to the use of machine learning (ML) for structural integrity:

  • SSC with low safety significance: ML could be used to inform automated processes and prove concepts in a safe environment
  • SSC with a higher safety significance: AI could be used to optimise margins if the level of uncertainty associated with the application was treated in a conservative manner
  • SSC with the highest safety significance: ONR considered it difficult to use ML as a primary argument within a safety case, but it was recognised that ML may provide insight from component data and help understand the level of conservatism used in safety margins.

Throughout the sandboxing, all those involved recognised the value in using AI in many applications. However, due to the complexities and uncertainties associated with the use of AI, the benefits need to be clear and justified at the outset for any specific application of safety, security or environmental significance. Such benefits should be clearly articulated and compared with alternative, more traditional, techniques as part of the decision-making process – for example, as part of optioneering to demonstrate that AI represents the best available technique and that risks are reduced as low as reasonably practicable (ALARP).

These risks need to be understood and managed through robust arrangements that deal with the uncertainties, and include:

  • The level of authority associated with the AI system – for example, being used in an advisory capacity as input to a decision-making process, making decisions in a supervisory capacity that are checked by a human, or making decisions as part of an autonomous control system
  • The safety, security and environmental significance of the application
  • The level of continuous learning – whether the AI is deployed as a static model or continues to learn in service
  • The complexity of the application.

Given these uncertainties, deployments of AI systems with potential safety, security or environmental consequences should be undertaken in a phased manner to build up confidence and experience. It may also be prudent to treat AI as a black-box system.

FAILURE

Where the safety, security and environmental consequences of failure of AI components are significant, or the unintended consequences of their use could be significant, the user should assume that failure or those unintended consequences have been realised (that is, a probability of failure of the AI component set to 1). This approach would apply a reasonable level of conservatism when analysing systems containing AI until techniques and measures are available to assess AI’s inherent uncertainty.
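
As a worked illustration of this conservatism (the probabilities and system structure below are hypothetical, not from the report), setting the AI component’s failure probability to 1 means any claimed risk reduction rests entirely on the non-AI protection measures:

```python
def system_failure_probability(p_ai_fails, p_backup_fails):
    """Failure probability of a simple one-out-of-two arrangement in which either
    the AI-based check or an independent non-AI protection measure must work.
    Assumes the two failure modes are independent (an illustrative simplification)."""
    return p_ai_fails * p_backup_fails

# Best-estimate claim for the AI component (hypothetical figure).
print(system_failure_probability(p_ai_fails=0.05, p_backup_fails=1e-3))  # 5e-05

# Conservative treatment: assume the AI component has failed (probability 1),
# so the claimed protection reduces to the non-AI measure alone.
print(system_failure_probability(p_ai_fails=1.0, p_backup_fails=1e-3))   # 0.001
```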

The use of diverse monitoring systems should be considered to help identify any drift in behaviour (including drift associated with ageing). These should be accompanied by arrangements to oversee drifts in the AI system’s behaviour, to ensure it is still delivering its safety, security and environmental requirements and not inadvertently defeating any protection measures. These monitoring systems should be able to place the AI component into a known safe state (for example, a safe or slow mode) if AI operation goes beyond a defined safety, security and environmental protection envelope.
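
A minimal sketch of such an envelope check, assuming a hypothetical independent monitor that reverts the function to a pre-defined safe (or slow) mode whenever the AI output leaves an agreed operating range; the bounds and names below are illustrative, not taken from the report:

```python
from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    """Agreed bounds within which the AI output may be acted on (illustrative units)."""
    min_value: float
    max_value: float

def monitor_ai_output(ai_output: float, envelope: OperatingEnvelope) -> str:
    """Return the mode the independent (non-AI) monitor would command.

    Inside the envelope the AI output can be used normally; outside it, the
    monitor places the function into a known safe state, regardless of what
    the AI component recommends.
    """
    if envelope.min_value <= ai_output <= envelope.max_value:
        return "normal"
    return "safe_mode"

envelope = OperatingEnvelope(min_value=0.0, max_value=80.0)
print(monitor_ai_output(42.0, envelope))   # normal
print(monitor_ai_output(95.0, envelope))   # safe_mode
```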

The introduction of diversity may assist and provide a level of independent challenge to any AI system. This diversity could come from diverse AI systems, digital twin comparators, voting systems based on multiple models, or independent AI or conventional systems. One potential application of AI may be to look for deviations in data that may provide an early indication of failure.
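
As a hedged sketch of such a voting arrangement (the models, tolerance and figures below are invented for illustration), an AI-derived assessment might be accepted automatically only when independent estimates agree, and otherwise referred for human review:

```python
from statistics import median

def diverse_vote(estimates, agreement_tolerance):
    """Combine defect-size estimates (mm) from independent models or systems.

    If the spread between the estimates exceeds the tolerance, the case is
    referred to a human engineer instead of being accepted automatically.
    """
    spread = max(estimates) - min(estimates)
    if spread > agreement_tolerance:
        return {"status": "refer_to_engineer", "spread": spread}
    return {"status": "accepted", "estimate": median(estimates), "spread": spread}

# Two AI models plus a conventional (physics-based) comparator as the diverse inputs.
print(diverse_vote([2.1, 2.3, 2.2], agreement_tolerance=0.5))  # accepted, estimate 2.2
print(diverse_vote([2.1, 4.8, 2.2], agreement_tolerance=0.5))  # referred to an engineer
```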

SKILLS

The regulatory sandboxing identified a need for requirements relating to the development of skills and experience needed to deploy AI effectively and safely.

First, access to AI expertise: essential skills will include experience of AI systems development, software development and data science.

Second, operational experience: end users of systems containing AI that could impact safety, security and environmental protection should have clear responsibilities. They should have the operational and application knowledge, including understanding the limits and conditions of operation, to ensure any inherent uncertainty in the AI systems leads to decisions that maintain conservatism.

Third, behaviour and culture: the deployment of innovation (including AI) should be accompanied by a challenging safety, security and environmental culture. Such a culture should take a phased, precautionary approach to deployment.

One key element of the sandboxing discussions relating to AI was the importance of understanding the complexity of the interaction between humans and the systems containing AI. Considerations included enunciating faults clearly and having a human represent the output of the system containing AI in engineering panels.

This article is an edited version of the report ‘Regulators’ Pioneer Fund (Department for Science, Innovation and Technology): Pilot of a regulatory sandbox on artificial intelligence in the nuclear sector’, available as a download from www.is.gd/bulazi.

BOX: AI TRUST LOW

China and India are on course to realise AI’s potential to be a force for good in areas including healthcare, food safety and sustainability, while other major economies, including the UK, France and Germany, face a greater confidence gap linked to low levels of public trust in the technology and risk losing out on this opportunity, a study by BSI reveals.

BSI’s Trust in AI Poll of 10,000 adults across nine countries, carried out by Censuswide in August 2023, identifies global attitudes towards AI’s potential to improve our society. More than half (52%) report feeling excited about how AI can shape a better future for everyone by improving the accuracy of medical diagnosis, and nearly half (49%) welcome help from the technology in reducing food waste. Some 52% say AI can help create a more energy-efficient built environment.

Yet while people are aware of the opportunity for AI, there are low levels of trust globally. For example, just a quarter have more confidence in AI than in people to detect food contamination issues; 69% say patients need to be made aware when AI tools are being used in diagnosis or treatment; and 57% feel vulnerable consumers need protections around AI. Equally, while many of us currently use AI technology (57% use facial recognition for banking), only half of respondents recognise that these technologies use AI. There is a clear opportunity for education to build understanding of AI and empower people to collectively harness its capabilities.

The research was commissioned to launch the Shaping Society 5.0 essay collection, which explores how AI innovations can be an enabler that accelerates progress (see also www.is.gd/tusecaw). It highlights the importance of building greater trust in the technology, as many expect AI to be commonplace by 2030, for example automated lighting at home (41%), automated vehicles (45%) or biometric identification for travel (40%). Over a quarter (26%) expect AI to be regularly used in schools within just seven years.

Harold Pradal, BSI chief commercial officer, said: “AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. Closing the AI confidence gap is the first necessary step. It has to be delivered through education.”

BSI’s digital trust expertise includes ISO/IEC 23894 Information technology – Artificial intelligence – Guidance on risk management, and the forthcoming AI governance standard (ISO/IEC 42001), drawn from existing guidelines.

BOX: AI-POWERED DESIGN

University of Birmingham Enterprise has launched EvoPhase, which delivers services to optimise existing and new process equipment that mixes, blends, stores or stirs granular materials.

EvoPhase will use evolutionary AI algorithms, coupled with simulations of particulates in systems such as industrial mixers, to evolve an optimised design for the mixing blade and the shape or size of the blending vessel. This AI-led approach is applicable to a diverse range of process equipment, including mills, dryers, roasters, coaters, fluidised beds and stirred tanks.

Chief executive officer Dominik Werner said: “Up to 50% of the world’s products are created by processes that use granular materials, but granules are difficult to characterise or understand. If you consider coffee, its granules are solid when they are contained, liquid-like when poured out of the container, and become gas-like and dispersed if you blow on them. This type of variability means granules are the most complex form of matter to process.”

The team will use a novel AI technology called highly autonomous rapid prototyping for particulate processes (HARPPP), which works like natural selection, testing out the designs it evolves to find the best one. It allows the user to set multiple parameters for optimisation, allowing evolution of a design that will meet, for instance, targets on power draw, throughput and mixing rate, rather than trading these parameters off against each other.
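
As a rough sketch of the general idea of evolutionary design optimisation (this is not HARPPP itself; the design parameters, fitness function and figures below are invented for illustration), candidate geometries are scored against several objectives at once, the best survive, and mutated copies form the next generation:

```python
import random

# Each candidate design is a pair (blade_angle_deg, blade_count). The fitness
# rewards mixing rate and penalises power draw in a single score; the formulas
# are invented purely for illustration.
def fitness(design):
    angle, blades = design
    mixing_rate = blades * max(0.0, 1.0 - abs(angle - 35.0) / 45.0)
    power_draw = 0.05 * blades * (1.0 + angle / 90.0)
    return mixing_rate - power_draw

def mutate(design):
    """Perturb a surviving design slightly to produce an offspring."""
    angle, blades = design
    new_angle = min(90.0, max(0.0, angle + random.gauss(0, 5.0)))
    new_blades = max(2, blades + random.choice([-1, 0, 1]))
    return (new_angle, new_blades)

random.seed(0)
population = [(random.uniform(0, 90), random.randint(2, 8)) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))  # best design found; angle converges towards 35 degrees
```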

EvoPhase will also use a numerical method called the discrete element method (DEM), which predicts the behaviour of granular materials by computing the movement of all particles.
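
A minimal sketch of a DEM-style time step, kept deliberately simple (two particles, a linear contact spring and explicit integration; all values are illustrative, not EvoPhase’s implementation):

```python
import numpy as np

def dem_step(pos, vel, radius, mass, dt, stiffness=1e4, gravity=np.array([0.0, -9.81])):
    """Advance particle positions and velocities by one explicit time step.

    Spheres of equal radius interact through a linear repulsive spring when they
    overlap, and every particle also feels gravity.
    """
    n = len(pos)
    forces = np.tile(gravity * mass, (n, 1))
    for i in range(n):
        for j in range(i + 1, n):
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta)
            overlap = 2 * radius - dist
            if overlap > 0:                      # particles in contact
                normal = delta / dist
                contact_force = stiffness * overlap * normal
                forces[i] -= contact_force
                forces[j] += contact_force
    vel = vel + forces / mass * dt
    pos = pos + vel * dt
    return pos, vel

# Two particles released slightly overlapping: they push apart while falling under gravity.
pos = np.array([[0.0, 0.0], [0.009, 0.0]])
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = dem_step(pos, vel, radius=0.005, mass=0.001, dt=1e-4)
print(pos)
```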
