Exploring LLM Hallucinations - Insights from the Cisco Research LLM Factuality/Hallucination Summit

Feb 22, 2024

LLMs have many impressive business applications, but a significant challenge remains: how can we detect and mitigate LLM hallucinations?

Cisco Research hosted a virtual summit to explore current research in the LLM factuality and hallucination space. The session includes presentations from university professors collaborating with the Cisco Research team: William Wang (UCSB), Kai Shu (IIT), Danqi Chen (Princeton), and Huan Sun (Ohio State).

Timestamps:

00:00 Introduction to Cisco’s Responsible AI Research
06:14 What is LLM hallucination?
15:01 “Principles of Reasoning: Compositional and Collaborative Generative AI” with William Wang
45:14 “Combating Misinformation in the Age of Large Language Models (LLMs)” with Kai Shu
1:15:40 “Enabling Large Language Models to Generate Text with Citations” with Danqi Chen
1:46:04 “Say Correctly, See Wrongly: Hallucination in Large Multimodal Models” with Huan Sun

Outshift is Cisco’s incubation engine, innovating what's next and new for Cisco products and sharing our expertise on emerging technologies. Discover the latest on cloud-native applications, cloud application security, generative AI, quantum networking and security, future-forward tech research, our latest open source projects, and more.

Keep up with the speed of innovation:
→ Learn more: http://cs.co/6050psmui
→ Read our blog: http://cs.co/6051psmuc

Connect with us on social media:
→ LinkedIn: http://cs.co/6052psmuY
→ Twitter / X: http://cs.co/6053psmul
→ Subscribe to our YouTube channel: http://cs.co/6054psmum