
Artificial intelligence in the maritime sector could expose critical vulnerabilities – threatening not just economies but also national security, argues Graham Gosden of the AIMPE union
Whether we like it or not, artificial intelligence (AI) continues to evolve. This places the maritime industry – the backbone of global trade and defence – at critical risk, a vulnerability that could be exacerbated by rapid AI adoption without proper oversight.
Protecting our industry is not just a matter of economic importance but a core aspect of safeguarding national interests, wherever we live.
The dangers
Cybersecurity vulnerabilities: AI-driven systems in marine operations, including autonomous vessels, port logistics and communication networks, are susceptible to cyberattacks.
Malicious actors could exploit AI to disrupt shipping routes, sabotage supply chains, or compromise national defence operations. For example, hacked AI systems on autonomous vessels could redirect military or commercial cargo, posing a threat to both economic and national security.
AI systems could also face ransomware attacks, paralysing ports or naval fleets and disrupting operations.
Loss of human oversight: Over-reliance on AI systems may reduce human involvement in critical operations. In times of conflict or crisis, a lack of human oversight could lead to decisions or actions misaligned with national interests. For example, AI could misinterpret a routine naval manoeuvre as a threat, escalating tensions unnecessarily.
In high-stakes naval operations, human judgement is vital for de-escalation, and is something that AI can't replicate.
Dependence on foreign AI technology: Relying on AI systems developed by foreign entities can create vulnerabilities. Backdoors or intentional flaws in foreign-built systems could allow adversaries to monitor or disrupt critical operations – for example, by accessing sensitive data or seizing control of maritime infrastructure.
Economic warfare: AI could enable economic sabotage, such as manipulating global shipping markets or targeting maritime infrastructure. A coordinated attack on maritime logistics using AI could cripple national economies.
AI could manipulate shipping routes or disrupt supply chains, delaying essential goods like fuel or military equipment and weakening national resilience in countries like Australia and the UK.
The positives
On the flip side, AI can also strengthen the industry when aligned with national security priorities. For example, AI can predict engine failures, keeping naval vessels mission-ready and reducing downtime.
Another positive is that AI-powered satellite imagery and drones can spot illegal fishing or smuggling in real time, securing our waters.
AI can improve the efficiency of naval supply chains, ensuring timely delivery of resources critical to defence operations. AI can also monitor marine ecosystems for pollution or illegal dumping, protecting fisheries vital to our economy.
Making AI work for us
If we are to meet these challenges and seize these opportunities, we need to develop secure, national AI systems by investing in technology developed in our own nations. These can be tailored to marine applications, reducing reliance on foreign technology. To do this, we should establish partnerships between defence, academia and industry to create secure, purpose-built AI solutions.
We must maintain human oversight in critical operations by training marine personnel to work alongside AI, ensuring human judgement drives key decisions. As part of this, we must implement stringent protocols for AI deployment in military and commercial applications.
We should enhance cybersecurity measures by building robust defences against AI-driven cyber threats targeting the marine industry. Further, we should mandate regular cybersecurity audits for marine AI systems, develop AI-specific security protocols, and regularly test AI systems to identify and mitigate vulnerabilities.
We must regulate AI in maritime applications by enacting policies that limit the deployment of high-risk AI systems in sensitive operations until thoroughly vetted for national security implications. National bodies should be created to oversee AI in the sector, enforcing strict safety and security standards.
It is also important that we adopt ethical guidelines focusing on transparency and accountability in marine AI systems. And we should incentivise the development of regenerative marine technologies, protecting natural ecosystems and aligning with long-term national interests.
A call to action
New legislation and updated standards must mandate manual override capabilities and the ability to physically disconnect systems from networks – often referred to in computing as an 'air gap' – in the event of a cybersecurity threat or an AI malfunction or corruption. This is essential for protecting national security.
It should be a legislative requirement that key personnel are physically present onboard all vessels entering Australian waters. These personnel must have the authority and capability to isolate and assume full manual control of the vessel, independent of any AI systems or remote command from the vessel's originating source or 'mothership'.
To protect national control and resilience, the adoption of AI in the maritime industry must be approached with caution, strategic planning, and robust safeguards. While AI offers transformative benefits, it also introduces critical risks. Many other industries are grappling with the same balance between the opportunities AI offers and the control it demands, including the academic sector, where even our leading universities are currently navigating how best to teach and regulate AI responsibly. A proactive, security-focused approach will ensure AI enhances, rather than compromises, the maritime industry's critical role in national defence.
In summary, to safeguard national security, legislation must at the very least mandate manual override systems and onboard personnel capable of isolating vessels from AI and external control. As many industries struggle with AI's rapid advancement, a cautious, security-first approach is vital to ensure technological innovation strengthens, rather than undermines, national resilience.