Horizons article 01 June 2023

AI and autonomy: finding the right path through 16 billion options

  • Digital transformation
  • Autonomy
  • AI
Issue 65

With the marine autonomy and AI assurance market increasing in size by more than 50% in a year, LR is leading a multi-year, multi-million-pound project to assess the safety of autonomous navigation systems. LR’s experts explain the challenge ahead.

James Fanshawe CBE FNI

Chair of the UK Maritime Autonomous Systems Regulatory Working Group | Member of the UK Marine Autonomous Systems Steering Group

While autonomous shipping and AI’s potential impact on shipping become important talking points, a significant barrier in the way of their development remains: “In classification terms (i.e. larger vessels), there is no class society-assured autonomy yet. Full stop.”

That stark statement by Tony Boylen, LR’s Principal Specialist, Assurance of Autonomy, underlines the importance of work he is undertaking to develop a Maritime Autonomy Assurance Testbed, which will enable assessments of high-end autonomous monitoring and control systems.

He is focusing on autonomous navigation and outlined the complexity that a control system would have to embrace to manage a crossing situation involving two other vessels. Tracking and modelling their movements, taking into account variables such as speed, sea state and heading changes, he estimated there could be more than 300 million navigational iterations.

Add in the risk of system failure and other non-navigational issues and exceptions and the number of discrete state spaces rises to 16.7 billion, just for that single crossing event.
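A back-of-the-envelope enumeration shows how quickly such counts multiply. The sketch below is purely illustrative: the discretisations (speed bands, heading sectors, range bands, exception modes) and their sizes are hypothetical assumptions chosen only to land in the same ballpark as the figures quoted above, not LR's actual model.

```python
# Hypothetical state-space estimate for a single crossing situation
# involving two other tracked vessels. All discretisation counts are
# illustrative assumptions, not the figures used in LR's analysis.

def count_states(per_vessel_factors, n_vessels, exception_modes):
    """Multiply out the discretised variables per tracked vessel,
    raise to the number of vessels, then fold in non-navigational
    exception states (system failures and other issues)."""
    per_vessel = 1
    for levels in per_vessel_factors.values():
        per_vessel *= levels
    navigational = per_vessel ** n_vessels
    return navigational, navigational * exception_modes

# Illustrative discretisations per tracked vessel.
factors = {
    "speed_bands": 18,
    "heading_sectors": 36,
    "range_bands": 27,
}

nav, total = count_states(factors, n_vessels=2, exception_modes=55)
print(f"navigational states: {nav:,}")    # roughly 300 million
print(f"with exceptions:     {total:,}")  # roughly 16.8 billion
```

Even with coarse, three-variable tracking of each vessel, the combinations reach hundreds of millions; one multiplicative factor for failure modes pushes the total into the billions, which is the point Boylen is making.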

With autonomous navigation systems having to operate at that level of complexity, the question that concerns him is how to evidence which systems are safe to use. At the moment, he said, no one can answer that.

His project involves a number of contributing UK-based organisations, including the National Physical Laboratory (NPL); WMG, part of the University of Warwick; the Alan Turing Institute, which describes itself as the UK’s national institute for data science and artificial intelligence; the UK Hydrographic Office; the Meteorological Office; and a number of universities.

Meanwhile, there has been some industry interest in the project, with proposals for LR to use its outcomes to provide support to significant owners and shipyards.

Operational design domain

Before that, a structure is being put in place, starting with an operational design domain (ODD), from which formalised functional and non-functional technical requirements will emerge. These ODD descriptions and technical requirements will benefit maritime stakeholders, including developers, during the engineering and implementation of the system. LR and regulators will use the ODD and derived technical requirements to carry out verification and validation as part of system certification. Verification will involve a range of tests and evaluations to make sure that the technical requirements are satisfied by the different components of the system. This is crucial, given that operators and vessel owners will depend on the correct, consistent implementation of the ODD and technical requirements in their day-to-day operations. This work also has to be coherently undertaken for Remote Operating Centres.
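One reason an ODD supports verification is that it can be expressed as explicit, machine-checkable operating limits, so a test harness can decide whether any given scenario falls inside the declared domain. The sketch below is a minimal, hypothetical encoding; the field names and limit values are assumptions for illustration, not LR's ODD schema.

```python
from dataclasses import dataclass

# Hypothetical, minimal encoding of an operational design domain (ODD)
# as explicit operating limits. Field names and values are illustrative
# assumptions, not an actual certification schema.

@dataclass(frozen=True)
class OperationalDesignDomain:
    max_wind_speed_kn: float          # sustained wind, knots
    max_significant_wave_m: float     # significant wave height, metres
    min_visibility_nm: float          # visibility, nautical miles
    max_traffic_density: int          # tracked targets within sensor range

    def contains(self, wind_kn, wave_m, visibility_nm, traffic):
        """True if the current conditions fall inside the ODD."""
        return (wind_kn <= self.max_wind_speed_kn
                and wave_m <= self.max_significant_wave_m
                and visibility_nm >= self.min_visibility_nm
                and traffic <= self.max_traffic_density)

odd = OperationalDesignDomain(30.0, 3.5, 2.0, 12)
print(odd.contains(18.0, 1.2, 5.0, 4))   # inside the domain -> True
print(odd.contains(45.0, 1.2, 5.0, 4))   # wind exceeds limit -> False
```

Once limits are explicit like this, verification scenarios can be generated at, inside, and just outside each boundary, which is what turns an ODD description into testable technical requirements.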

LR’s Maritime AI Applications Innovation Leader, Joseph Morelos, explained why this approach – and the need for it – is very different from assessing physical components. Take valves, for example. There may be many different valve types for various applications made from a range of materials, all of which define how their design might be reviewed and what physical tests are needed. “Everything is explicit and clear,” he said.

But when it comes to autonomy, “describing the navigation state space and safe behaviours can be very challenging,” he said. There may be a lot of relevant rules, but much of the practical knowledge that informs navigational decisions is held by individuals based on their experience, so new elicitation techniques are needed for autonomous navigation systems, he said. “It’s not easy, but now we are implementing an advanced method to address this challenge.”

The collision regulations (COLREGS) make his point. For example, they use the word ‘appropriate’ in a number of places, including stating that watches and lookouts should be “appropriate to the prevailing circumstances and conditions.” But Boylen points out a machine cannot interpret what ‘appropriate’ means, “so that’s one thing we are looking to resolve in the very near term,” he said.
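One way to make a qualitative word like 'appropriate' machine-evaluable is to replace it with explicit, condition-dependent thresholds. The sketch below is a hypothetical illustration of that idea for lookout/scan frequency; the function name, the threshold values, and the halving rule are all assumptions made up for this example, not an actual COLREGS formalisation.

```python
# Illustrative sketch: replacing COLREGS' qualitative 'appropriate'
# with explicit, condition-dependent thresholds a machine can evaluate.
# All threshold values here are hypothetical assumptions.

def required_scan_interval_s(visibility_nm: float, traffic_count: int) -> float:
    """Return the maximum allowed interval between full situational
    scans: shorter in poor visibility or dense traffic."""
    base = 60.0  # seconds between full scans in clear, open water
    if visibility_nm < 2.0:
        base /= 2            # restricted visibility: scan twice as often
    if traffic_count > 5:
        base /= 2            # dense traffic: scan twice as often again
    return base

print(required_scan_interval_s(10.0, 1))  # clear water -> 60.0
print(required_scan_interval_s(1.0, 8))   # fog, heavy traffic -> 15.0
```

The hard part, as Boylen notes, is not writing such a rule but justifying the specific thresholds, which is exactly the elicitation problem the project is working on.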

IMO action needed

The International Maritime Organization (IMO) has been looking at how Maritime Autonomous Surface Ships (MASS) should be reflected in its regulatory framework and aims to have a non-mandatory goal-based MASS Code take effect in 2025. This is expected to be followed by a mandatory code on 1 January 2028.

Yet IMO is struggling to define the functions and performance standards that autonomous navigation software should provide, Morelos commented. The technologies needed to build that software are largely mature, Boylen said, underlining the importance of the work LR is doing. “What hasn’t been done is prove that the systems are effective, dependable and safe,” he said. Once safety is proven, adoption will follow.

But this is neither quick nor cheap. Boylen estimates it will take three and a half years to complete the programme. In collaboration with NPL, engagement with the UK Government Department for Science, Innovation and Technology is progressing.  

By developing and leading an unrivalled consortium of deep technical subject-matter organisations across the broad spectrum required for autonomy, the hope is that the expertise being amassed can be developed in time to capitalise on rapidly developing interest in autonomy across the maritime sector.

AI jumps out of the box in LR report

Maritime autonomy and AI assurance services are estimated to be worth a combined $3.7 billion this year, up 57% in just a year, according to a report published jointly in April 2023 by Lloyd’s Register and the research company Thetius.

The report is titled Out of the Box and calls for clarity around normal and emergency use cases, pointing to the need for traditional assurance measures to become increasingly integrated.

Speaking to Horizons, Dipali Kuchekar, Lloyd’s Register’s Product Manager, Autonomous Systems and Novel Technologies, said that one of its purposes was to clarify what she saw as confusion between ‘automation’ and ‘autonomous’ applications, with the latter referring to systems that will learn while in use and “start doing a lot of things on their own.”

Introducing autonomous operations “is not something that will happen overnight,” she added, and an ‘autonomous’ ship does not have to be an unmanned ship. So, the report will help operators – especially small and medium-sized enterprises (SMEs) – understand the benefits and challenges that AI and autonomy will bring, because these technologies are inevitable. SMEs in particular will have to adapt to them, she said.

They will form what she described as an ecosystem that spans not just the technology but also the regulatory framework, crew training and the necessary infrastructure to support autonomous operation and reduce any risks associated with it. The report encourages awareness of that ecosystem and presents its challenges as opportunities, she said.

In one sense, the report itself is an example of both a challenge and an opportunity. Its foreword remarks that, “as we explore these new technologies and their potential applications, striking a balance between embracing innovation and upholding comprehensive safety procedures is crucial.”

Yet that sentence – and the rest of the foreword – was written by ChatGPT, an AI-based language-generating algorithm that was provided with the report’s text as its raw material.

AI and autonomous technologies are already here. The LR/Thetius report provides a preparatory text for an industry on the cusp of change.


Technology, risk and insurance – the right perspective

There is a powerful argument that humans are responsible for the majority of navigational accidents and incidents; 70% is not an unreasonable figure, although some form of mechanical or electrical failure may have contributed to the event.

It follows that taking humans off ships should raise safety standards, with an obvious reduction in the number and scale of claims against the various forms of insurance. It is early days, but the rise in the number of Maritime Autonomous Surface Ships (MASS) operating around the world, with a coincidental fall in the number of events, supports this theory. There is a very strong awareness within the MASS industry that results must continue to prove this point, and there is no room for complacency.

Nor is there a place for outdated thinking across the maritime world. Technology has as much importance for conventionally manned vessels as it does for MASS. But people are right to question the efficacy of the risk management behind MASS operations, given the emphasis on using technology to make them work more safely than manned ships, with less, or even no, human intervention.

For comparison purposes, step on to the bridge of a relatively mature ship where a newly qualified officer is in charge at 0300 with the standard array of sensors (radar, ECDIS etc), with one deckhand to support them, navigating across the North Sea in a gale in January. Their displays are not necessarily optimally tuned. The officer is tired and has several professional and domestic pressures bearing down on them. Fatigue management is not top of the list of command imperatives.

Shift now to the Remote Control Centre (RCC) responsible for monitoring and controlling a MASS. This vessel has been designed from the outset to incorporate all the latest technology with the full integration of Artificial Intelligence (AI) and Machine Learning (ML), an area which has not yet achieved the recognition it deserves. By way of example, consider your next mobile phone. It has all the basics you need but has to ‘machine learn’ how you expect to use it, which is a fairly complicated process. If learning processes are contained within machines, while they may not be perfect, they will certainly be swifter and more efficient and can always be updated.

To achieve Class and Flag certification, a MASS vessel must have an extensive suite of sensors, communications and other monitoring technologies which enable the fully qualified RCC staff to do their job in whatever level of control, or degree of autonomy, they are operating the vessel. In reality, this equipment may well exceed the capabilities of the equipment fitted in the average conventional ship. Some may argue that this enhances the cyber risk, but this will have been accounted for from the outset and every possible step will have been taken to reduce this threat, with measures in place to deal with an attack should it prove to have been even partially successful. The chances of this, however, are relatively small given the redundancies incorporated within the key systems and it is important to remember that every movement of a MASS will be monitored 24/7 from ashore.

There are various reasons for the use of autonomous systems. They are correctly associated with the removal of humans from a vessel, with all that this implies for new ship designs, including a significant reduction in power demand and space. Crucially, autonomy should be seen as a significant step on the path to achieving zero emissions in ships, which certainly drives the demands of the customers who are lining up to use them.

The reduction in emissions is very important but not at the expense of safety; increased safety will always be the key driving factor. Effective risk management, fully embracing the technology we now have available, is critical to achieving enhanced safety.
