Podcasts

  1. Abstracts: May 6, 2024

    May 6, 2024 | Michel Galley and Gretchen Huizinga

    Researcher Michel Galley explores how he and fellow researchers combined new and existing data to create MathVista, an open-source benchmark for measuring the mathematical reasoning capabilities of foundation models in scenarios that involve text and images.
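
    For readers who want to poke at the benchmark itself, MathVista is distributed through the Hugging Face Hub. A minimal sketch of loading its evaluation split (the `AI4Math/MathVista` dataset ID, the `testmini` split name, and the field names follow the public release and should be treated as assumptions here):

    ```python
    # Minimal sketch: load MathVista's evaluation split from the Hugging Face Hub.
    # The dataset ID "AI4Math/MathVista", the "testmini" split, and the field
    # names follow the public release; verify them before relying on this.
    from datasets import load_dataset

    dataset = load_dataset("AI4Math/MathVista", split="testmini")

    # Each record pairs an image with a math question; inspect one example.
    example = dataset[0]
    print(example["question"])
    print(example["answer"])
    ```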

  2. Abstracts: April 16, 2024

    April 16, 2024 | Gretchen Huizinga and Tusher Chakraborty

    Tusher Chakraborty talks about the paper “Spectrumize: Spectrum-efficient Satellite Networks for the Internet of Things,” including a method for supporting communication between a large IoT-satellite constellation and devices on Earth within a limited spectrum.

  3. Abstracts: March 21, 2024

    March 21, 2024 | Chang Liu and Gretchen Huizinga

    Senior Researcher Chang Liu discusses M-OFDFT, a variation of orbital-free density functional theory (OFDFT) that leverages deep learning to identify molecular properties while easing the longstanding tradeoff between accuracy and efficiency.
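
    As background, orbital-free DFT works directly with the electron density rather than with molecular orbitals: the ground-state density minimizes a total energy functional, and the hard part is approximating the kinetic term without orbitals, which is where M-OFDFT applies a deep model. A generic sketch of the standard formulation (the notation below is textbook OFDFT, not the paper's exact parameterization):

    ```latex
    % Standard orbital-free DFT decomposition (generic notation, not the
    % paper's): the ground-state density minimizes the total energy
    % functional subject to an electron-count constraint.
    E[\rho] = T_\mathrm{S}[\rho] + E_\mathrm{H}[\rho] + E_\mathrm{xc}[\rho]
            + \int v_\mathrm{ext}(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r},
    \qquad
    \rho^\star = \operatorname*{arg\,min}_{\rho \,\ge\, 0,\ \int \rho \,=\, N} E[\rho]
    ```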

  4. Abstracts: February 29, 2024

    February 29, 2024 | Lev Tankelevitch and Gretchen Huizinga

    Can how we think about our thinking help us better incorporate generative AI into our lives and work? Explore metacognition’s potential to improve the technology’s usability on “Abstracts,” then sign up for Microsoft Research Forum for more on this and other AI work.

  5. Abstracts: January 25, 2024

    January 25, 2024 | Gretchen Huizinga, Jordan Ash, and Dipendra Misra

    On “Abstracts,” Jordan Ash and Dipendra Misra discuss the parameter reduction method LASER. Tune in to learn how the selective removal of stored data alone can boost LLM performance, then sign up for Microsoft Research Forum for more on LASER and related topics.
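
    The operation at LASER's core is easy to sketch: replace a chosen weight matrix with a truncated-SVD approximation of itself, discarding its higher-order components. A minimal PyTorch illustration of that single rank-reduction step (the layer selection and the evaluation loop that guides it are not shown; `rank_fraction` is an illustrative name, not the paper's API):

    ```python
    # Minimal sketch of LASER-style rank reduction on one weight matrix:
    # keep only the top singular components of W and drop the rest.
    # How layers/matrices are selected for reduction is not shown here.
    import torch

    def low_rank_approx(W: torch.Tensor, rank_fraction: float) -> torch.Tensor:
        """Truncated-SVD approximation of W keeping a fraction of its rank."""
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        k = max(1, int(rank_fraction * S.numel()))
        return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

    # Example: reduce a stand-in projection matrix to 10% of its full rank.
    W = torch.randn(4096, 1024)
    W_reduced = low_rank_approx(W, rank_fraction=0.10)
    print(torch.linalg.matrix_rank(W_reduced))  # 102
    ```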

  6. Abstracts: December 12, 2023

    December 12, 2023 | Gretchen Huizinga, Tao Qin, and Lijun Wu

    Members of the research community at Microsoft work continuously to advance their respective fields. “Abstracts” brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements. In this episode, Senior Principal Research Manager Tao Qin and Senior Researcher Lijun Wu…

  7. Abstracts: December 11, 2023

    December 11, 2023 | Gretchen Huizinga and Alessandro Sordoni

    By treating language models as layers in a network and prompts as learnable parameters, researchers aim for more adaptable, reusable LLM architectures. Check out the work, presented at NeurIPS 2023, in the “Abstracts” podcast series with guest Alessandro Sordoni.
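
    A toy sketch of the layered view, with a stubbed-in `call_llm` standing in for a real model API (the function, class, and prompts below are illustrative, not the paper's implementation, and the prompt-optimization step itself is omitted):

    ```python
    # Toy sketch: stacked "LLM layers" whose prompts are the learnable
    # parameters. call_llm is a placeholder; swap in a real model client.

    def call_llm(prompt: str, text: str) -> str:
        """Placeholder for a real LLM call; just echoes its inputs."""
        return f"[{prompt}] {text}"

    class PromptLayer:
        def __init__(self, prompt: str):
            self.prompt = prompt  # this layer's learnable "parameter"

        def forward(self, text: str) -> str:
            return call_llm(self.prompt, text)

    # A two-layer "language network": each layer's output feeds the next,
    # and training means rewriting prompts rather than updating weights.
    layers = [
        PromptLayer("Summarize the passage in one sentence."),
        PromptLayer("Turn the summary into a yes/no question."),
    ]

    x = "The report says solar capacity doubled between 2019 and 2023."
    for layer in layers:
        x = layer.forward(x)
    print(x)
    ```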

  8. Abstracts: December 6, 2023

    December 6, 2023 | Gretchen Huizinga and Xing Xie

    "Abstracts”—your source for world-class research in brief—welcomes Senior Principal Research Manager Xing Xie to the podcast series to discuss his paper on evaluating general-purpose AI with psychometrics.

  9. Abstracts: October 23, 2023

    October 23, 2023 | Gretchen Huizinga, Andy Gordon, and Carina Negreanu

    Today on “Abstracts,” Partner Research Manager Andy Gordon and Senior Researcher Carina Negreanu explore new work introducing co-audit, a term for any tool-assisted experience that helps users of generative AI find and fix mistakes in AI output.

  10. Abstracts: October 9, 2023

    October 9, 2023 | Gretchen Huizinga and Sheng Zhang

    Researcher Dr. Sheng Zhang joins “Abstracts”—your source for cutting-edge research in brief—to discuss a recent paper on distilling large language models into smaller, more efficient ones capable of excelling in broad application classes.
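
    As background, one common mechanism for distilling a large model into a small one is training the student to match the teacher's softened output distribution. A minimal PyTorch sketch of that classic logit-matching loss (the paper's own recipe may differ, for example by training the student on teacher-generated examples instead):

    ```python
    # Minimal sketch of classic logit-matching knowledge distillation:
    # the student is trained toward the teacher's temperature-softened
    # output distribution. This is the textbook formulation, not
    # necessarily the recipe used in the paper discussed in the episode.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence, scaled by T^2 as in the standard formulation.
        return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

    # Example with random logits over a 32k-token vocabulary.
    student = torch.randn(8, 32000)
    teacher = torch.randn(8, 32000)
    print(distillation_loss(student, teacher).item())
    ```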