Safe, responsible and effective use of LLMs

This article elaborates on the strengths and limitations of large language models (LLMs), drawing on publicly available research and our experience working with language models in an industrial context. We consider a diverse set of concerns, including unreliable and untrustworthy output, harmful content, cyber security vulnerabilities, intellectual property challenges, environmental and social impacts, and the business implications of rapid technological advancement. We discuss how, and to what extent, these risks may be avoided, reduced or controlled, and how language models can be harnessed effectively and responsibly in light of them.

AI + Safety Position Paper

Artificial Intelligence (AI) and data-driven decisions based on machine learning (ML) algorithms are making an impact in a growing number of industries. As these autonomous and self-learning systems become increasingly responsible for decisions that may ultimately affect the safety of personnel, assets, or the environment, ensuring the safe use of AI in such systems has become a top priority.