Sean welcomes Dr. Siddharth Srivastava to the pod to talk about AI safety: how to enable users to engage with AI systems safely without sacrificing productivity, and how to address some of the key challenges in preempting safety-related issues that arise from using AI systems.
In this episode of the Learning Futures Podcast, Dr. Siddharth Srivastava, Associate Professor in the School of Computing and Augmented Intelligence at Arizona State University, discusses the need for responsible development of AI systems that keep users informed of their capabilities and limitations. He highlights exciting research on learning generalizable knowledge to make AI more robust and data-efficient. However, dangers arise from overtrusting unproven systems, so regulation and oversight are needed even as innovation continues. By prioritizing users, the current explosion in AI research can drive responsible progress.
Key topics discussed:
- Dr. Srivastava discusses his background in AI research and the journey that led him to focus on developing safe and reliable AI systems.
- The recent explosion of interest in and adoption of generative AI like ChatGPT took many researchers by surprise, especially its accessibility and the breadth of applications people found for these narrow systems.
- It's important to distinguish narrow AI applications like generative models from general AI. Overuse of the term "AI" can lead to misconceptions.
- Considerations around safety, bias, and responsible use need to be built into AI systems from the start. Keeping users informed of a system's capabilities and limitations is key.
- Exciting new research directions include learning generalizable knowledge to make AI systems more robust and data-efficient.
- Dangers arise from overtrusting unproven AI systems. Regulation and oversight will be needed, but should not stifle innovation.
- Overall, it's an exciting time in AI research. With a thoughtful, practical approach focused on user needs, AI can be developed responsibly.