AI on Watch - Insights from the DNV-BRAIN Hackathon 2025

Summary

In March 2025, DNV partnered with BRAIN NTNU to host a 24-hour hackathon focused on advancing situational awareness (SitAw) systems for autonomous ships. As AI becomes increasingly embedded in complex systems such as autonomous vessels, ensuring its safety and reliability is a growing challenge. DNV, a global leader in assurance and risk management, is committed to developing rigorous testing methodologies for AI-enabled systems. This hackathon was part of that mission — designed to explore innovative approaches to object detection in maritime environments, and to foster collaboration with the next generation of AI talent.

Safe, responsible and effective use of LLMs

This article elaborates on the strengths and limitations of large language models (LLMs), based on publicly available research and our experience working with language models in an industrial context. We consider a range of concerns, including unreliable and untrustworthy output, harmful content, cyber security vulnerabilities, intellectual property challenges, environmental and social impacts, and the business implications of rapid technology advancements. We discuss how, and to what extent, these risks may be avoided, reduced or controlled, and how language models can be harnessed effectively and responsibly in light of them.

AI + Safety Position Paper

Artificial Intelligence (AI) and data-driven decisions based on machine-learning (ML) algorithms are making an impact in an increasing number of industries. As these autonomous and self-learning systems become increasingly responsible for decisions that may ultimately affect the safety of personnel, assets, or the environment, ensuring the safe use of AI in such systems has become a top priority.