Podcast

  1. Abstracts: May 6, 2024

    May 6, 2024 | Michel Galley and Gretchen Huizinga

    Researcher Michel Galley explores how he and fellow researchers combined new and existing data to create MathVista, an open-source benchmark for measuring the mathematical reasoning capabilities of foundation models in scenarios that involve text and images.

  2. Abstracts: April 16, 2024

    April 16, 2024 | Gretchen Huizinga and Tusher Chakraborty

    Tusher Chakraborty talks about the paper “Spectrumize: Spectrum-efficient Satellite Networks for the Internet of Things,” including a method for supporting communication between a large IoT-satellite constellation and devices on Earth within a limited spectrum.

  3. Abstracts: March 21, 2024

    March 21, 2024 | Chang Liu and Gretchen Huizinga

    Senior Researcher Chang Liu discusses M-OFDFT, a variation of orbital-free density functional theory (OFDFT) that leverages deep learning to help identify molecular properties in a way that minimizes the tradeoff between accuracy and efficiency.

  4. Abstracts: February 29, 2024

    February 29, 2024 | Lev Tankelevitch and Gretchen Huizinga

    Can how we think about our thinking help us better incorporate generative AI in our lives & work? Explore metacognition’s potential to improve the tech’s usability on “Abstracts,” then sign up for Microsoft Research Forum for more on this & other AI work.

  5. Abstracts: January 25, 2024

    January 25, 2024 | Gretchen Huizinga, Jordan Ash, and Dipendra Misra

    On “Abstracts,” Jordan Ash & Dipendra Misra discuss the parameter reduction method LASER. Tune in to learn how selective removal of stored data alone can boost LLM performance, then sign up for Microsoft Research Forum for more on LASER & related topics.

  6. Abstracts: December 12, 2023

    December 12, 2023 | Gretchen Huizinga, Tao Qin, and Lijun Wu

    Members of the research community at Microsoft work continuously to advance their respective fields. Abstracts brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements. In this episode, Senior Principal Research Manager Tao Qin and Senior Researcher…

  7. Abstracts: December 11, 2023

    December 11, 2023 | Gretchen Huizinga and Alessandro Sordoni

    By treating language models as layers in a network and prompts as learnable parameters, researchers aim for more adaptable, reusable LLM architectures. Check out the work in the “Abstracts” podcast series with guest Alessandro Sordoni and at #NeurIPS2023.

  8. Abstracts: December 6, 2023

    December 6, 2023 | Gretchen Huizinga and Xing Xie

    “Abstracts”—your source for world-class research in brief—welcomes Senior Principal Research Manager Xing Xie to the podcast series to discuss his paper on evaluating general-purpose AI with psychometrics.

  9. Abstracts: October 23, 2023

    October 23, 2023 | Gretchen Huizinga, Andy Gordon, and Carina Negreanu

    Today on “Abstracts,” Partner Research Manager Andy Gordon & Senior Researcher Carina Negreanu explore new work introducing co-audit, a term for any tool-assisted experience that helps users of generative AI find and fix mistakes in AI output.

  10. Abstracts: October 9, 2023

    October 9, 2023 | Gretchen Huizinga and Sheng Zhang

    Researcher Dr. Sheng Zhang joins “Abstracts”—your source for cutting-edge research in brief—to discuss a recent paper on distilling large language models into smaller, more efficient ones capable of excelling in broad application classes.