News & Features
News | VentureBeat
When AI reasoning goes wrong: Microsoft Research shows more tokens can mean more problems
Large language models (LLMs) are increasingly capable of complex reasoning through “inference-time scaling,” a set of techniques that allocate more computational resources during inference to generate answers. However, a new study from Microsoft Research reveals that the effectiveness of these…
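As a rough illustration of what inference-time scaling means in practice, the sketch below implements a simple best-of-n strategy: sample several candidate answers and keep the highest-scoring one. The `generate` and `score` functions are hypothetical placeholders, not the methods evaluated in the study.

```python
# Minimal sketch of one inference-time scaling strategy: best-of-n sampling with
# a scoring step. `generate` and `score` are hypothetical stand-ins for a model's
# sampling call and a verifier/reward model.
import random
from typing import List

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Placeholder: in practice this would call an LLM sampling endpoint.
    return f"candidate answer ({random.random():.3f})"

def score(prompt: str, answer: str) -> float:
    # Placeholder: in practice a verifier or reward model scores each candidate.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference compute by sampling n candidates and keeping the best."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

if __name__ == "__main__":
    # Larger n means more tokens spent at inference time; the study examines
    # when that extra spend stops paying off.
    print(best_of_n("Prove that the sum of two even numbers is even.", n=8))
```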
News | TheSequence
One of the Best Agent Frameworks in the Market Just Got Way Better
AutoGen has undergone significant evolution since its inception, driven by the need for more efficient, flexible, and scalable agentic AI systems. The release of AutoGen v0.4 introduces a fundamental architectural shift, addressing prior inefficiencies and enhancing its capabilities.

Research Focus: Week of December 16, 2024
NeoMem: hardware/software co-design for CXL-native memory tiering; Chimera: accurate retrosynthesis prediction by ensembling models with diverse inductive biases; GA4GH task execution API enables multicloud task execution.
News | TechCrunch
Microsoft launches Phi-4, a new generative AI model, in research preview
Microsoft has revealed the newest addition to its Phi family of generative AI models. Called Phi-4, the model improves in several areas over its predecessors, Microsoft claims, particularly in solving math problems. That’s partly the result of better training data…
Microsoft launched a new artificial intelligence model today that achieves remarkable mathematical reasoning capabilities while using far fewer computational resources than its larger competitors. The 14-billion-parameter Phi-4 frequently outperforms much larger models like Google’s Gemini Pro 1.5, marking a significant…
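For readers who want to try the model, the snippet below is a hedged sketch of querying a Phi-4-class checkpoint with the Hugging Face transformers pipeline. The model identifier "microsoft/phi-4" is assumed here; consult the official model card for the exact ID, license, and recommended chat template, and note that chat-style message input requires a recent transformers release.

```python
# Hedged sketch: prompting a Phi-4-class checkpoint via Hugging Face transformers.
# The model ID "microsoft/phi-4" is an assumption; verify it on the model card.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-4",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]

# Recent transformers versions accept chat-style message lists for chat-tuned models.
output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"])
```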
News | Tech Brew
Microsoft researcher on the future of AI agents in 2025
One of the key questions driving Ece Kamar’s research as managing director of Microsoft’s AI Frontiers Lab is how to coordinate networks of AI agents, systems that can perform autonomous tasks beyond the scope of chatbots. Late last year, her…

We’re excited to be a part of #NeurIPS2024! Explore the future of AI with over 100 groundbreaking papers, including oral and spotlight sessions, on reinforcement learning, advanced language model training, and multilingual, culturally inclusive benchmarks.

Orca-AgentInstruct: Agentic flows can be effective synthetic-data generators
| Arindam Mitra, Ahmed Awadallah, and Yash Lara
Orca-AgentInstruct, from Microsoft Research, can generate diverse, high-quality synthetic data at scale to post-train and fine-tune base LLMs for expanded capabilities, continual learning, and increased performance.
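To make the idea of an agentic flow for synthetic data more concrete, here is an illustrative-only sketch in which one agent drafts an instruction from a seed document, a second refines it, and a third writes a target response. The role prompts and the `call_llm` helper are hypothetical placeholders; this is not the AgentInstruct pipeline itself.

```python
# Illustrative sketch of an agentic flow for generating synthetic training pairs.
# All role prompts and the `call_llm` helper are hypothetical.
from dataclasses import dataclass
from typing import List

def call_llm(system: str, user: str) -> str:
    # Placeholder for an actual LLM API call.
    return f"[{system[:24]}...] response to: {user[:40]}"

@dataclass
class Sample:
    instruction: str
    response: str

def generate_sample(seed_text: str) -> Sample:
    # Generator agent: draft a task instruction from the seed document.
    draft = call_llm("You write challenging task instructions from source text.", seed_text)
    # Editor agent: make the instruction harder and unambiguous.
    refined = call_llm("You rewrite instructions to be harder and unambiguous.", draft)
    # Solver agent: produce the target response used for post-training.
    answer = call_llm("You answer instructions carefully, step by step.", refined)
    return Sample(instruction=refined, response=answer)

def build_dataset(seeds: List[str]) -> List[Sample]:
    # Scale comes from running the same flow over many seed documents.
    return [generate_sample(s) for s in seeds]

if __name__ == "__main__":
    corpus = ["Raw document text about solving quadratic equations..."]
    for sample in build_dataset(corpus):
        print(sample.instruction, "->", sample.response[:60])
```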

Research Focus: Week of October 28, 2024
New Research | FLASH: Workflow automation agent for diagnosing recurring incidents; METAREFLECTION: Learning instructions for language agents using past reflections; Boosting LLM training efficiency through faster communication between GPUs; and more.