The promise of AI is tantalising: from automating routine writing tasks to modelling health outcomes for large populations, the technology is rich with potential. But we can’t benefit from that promise if we don’t know how to secure these systems. Some risks, like prompt injection attacks on LLMs (large language models) and deepfake audio and video hacks, get all of the headlines. But are they the biggest threats to trustworthy AI? Find out how to assess real versus perceived or overblown risks in AI and ML.