Announcing the o1 model in Azure OpenAI Service: Multimodal reasoning with “astounding” analysis

We are pleased to announce that the o1 model is coming soon to Microsoft Azure OpenAI Service. This multimodal model brings advanced reasoning capabilities and improvements that will significantly enhance your AI applications and solutions. The o1 model supports both text and vision inputs, making it ideal for a wide range of applications from complex problem-solving to contextual AI experiences.

Customers using the o1 model in Azure OpenAI Service

Rohirrim: o1 model analysis “astounding”

Rohirrim Inc. is a fast-growing AI startup that built Rohan, a secure, knowledge-driven generative AI platform designed for Fortune 1000 companies and the Aerospace and Defense sector. Rohan streamlines complex RFP responses, eliminating the need to sift through unstructured data and enabling quick, accurate answers in a company’s unique voice. Hosted as a dedicated platform as a service (PaaS), it ensures data control, strong compliance, and low governance risk, enhancing productivity and reducing burnout.

With o1, we’ve unlocked a layer of contextual understanding and nuanced analysis that is, quite frankly, astounding. This evolution means we can now apply AI to the intricate, detail-heavy aspects of proposal management, while freeing human experts to focus on higher-order, intuitive thinking—precisely the kind of insight and creativity that truly helps organizations grow. —Steven Aberle, CEO, Rohirrim

Wrtn Technologies: Response accuracy “has grown exponentially”

Wrtn Technologies is an AI-powered solutions provider, recognized for its innovative approach and commitment to excellence. Leveraging the advanced capabilities of the o1 model, Wrtn Technologies has revolutionized its digital presence and performance.

With the o1 model, our success rate of accurate answers to queries has grown exponentially, according to our measures. —Dongjae “DJ” Lee, Chief Product Officer, Wrtn Technologies

Harvey: o1 is a “sophisticated reasoner”

Harvey is a generative AI platform trusted by law firms and professional service providers. It helps lawyers and other professionals by automating complex legal tasks, such as summarizing and comparing documents, performing due diligence, and referencing case law. By leveraging Azure AI infrastructure, Harvey enhances efficiency and productivity in the legal industry.

[W]hat you have is an incredibly powerful model, a sophisticated reasoner. It can create plans from scratch to solve complex tasks. And there’s also a measure of controllability there through instruction following. … [Y]ou can insert a human into the loop and actually steer the model towards higher quality outputs. —Niko Grupen, Head of Applied Research, Harvey

See o1 in action as Harvey uses it to analyze a series of complex legal documents.

New o1 model features for developers

The o1 model introduces several new features that can elevate your reasoning-based development experience. Some of these capabilities have appeared in other models before, while others are introduced for the first time with o1. They include:

  • Vision input: With the addition of vision input, o1 can now reason over images as well as text, processing and analyzing visual data to extract valuable insights and generate comprehensive text outputs.
  • Developer messages: Developer messages work like system messages in GPT models, letting you pass instructions and context for specific use cases via the “role”: “developer” attribute, as shown in the sketch after this list.
  • Reasoning effort parameter: The reasoning effort parameter allows you to adjust the model’s cognitive load with options for low, medium, and high reasoning levels, offering greater control over the model’s performance.
  • Expanded context window: The o1 model now boasts an expanded context window of 200K tokens and a maximum output of 100K tokens, providing ample space for complex and detailed responses.
  • Structured outputs: o1 now supports structured outputs, simplifying the generation of well-defined responses constrained by a JSON schema.
  • Tools: o1 now supports function calling, so the model can generate inputs for your functions just as previous models do.
  • Lower latency: On average, o1 uses 60% fewer reasoning tokens than o1-preview for a given request, resulting in faster responses.
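
To make these features concrete, here is a minimal sketch of an o1 chat call on Azure OpenAI Service that combines a developer message, the reasoning effort parameter, and a structured output constrained by a JSON schema. The endpoint, API version, deployment name, and schema are placeholders rather than values from this post; check the Azure OpenAI documentation for the versions available in your region.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder endpoint
    api_key="YOUR-API-KEY",                                   # placeholder key
    api_version="2024-12-01-preview",                         # assumption: a preview API version exposing o1 features
)

response = client.chat.completions.create(
    model="o1",  # assumption: the name of your o1 deployment
    messages=[
        # Developer messages take the place of system messages for reasoning models.
        {"role": "developer", "content": "You are a proposal analyst. Answer concisely."},
        {"role": "user", "content": "Summarize the compliance requirements in this RFP excerpt: ..."},
    ],
    reasoning_effort="medium",       # low | medium | high
    max_completion_tokens=4000,      # budget for visible output plus hidden reasoning tokens
    # Structured outputs: constrain the reply to a JSON schema.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "rfp_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "requirements": {"type": "array", "items": {"type": "string"}},
                    "risk_level": {"type": "string", "enum": ["low", "medium", "high"]},
                },
                "required": ["requirements", "risk_level"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON matching the schema above
```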

With the introduction of tools support, o1 can supercharge agentic AI applications, enabling the model to plan, reason, and then call tools to automate business processes; a minimal function-calling sketch follows below. These newly added o1 features give developers greater customization, control, capacity, and insight into reasoning-based AI scenarios. Building on Azure OpenAI Service lets developers leverage this cutting-edge technology while maintaining enterprise-grade security and global compliance.
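
As a companion to the sketch above, here is a minimal illustration of the tools (function-calling) flow behind that agentic pattern. The get_order_status function, its schema, and the deployment name are hypothetical; in a real application you would execute the tool call and return its result to the model in a follow-up message.

```python
import json

# Reuses the AzureOpenAI `client` from the earlier sketch.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical business function
        "description": "Look up the fulfillment status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1",  # assumption: your o1 deployment name
    messages=[
        {"role": "developer", "content": "Use the available tools to answer operational questions."},
        {"role": "user", "content": "Has order 8213 shipped yet?"},
    ],
    tools=tools,
)

# If the model chose to call a tool, inspect the generated arguments.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
    # e.g. get_order_status {'order_id': '8213'}
```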

Agility and global reach with privacy and security

At Microsoft, we are committed to bringing the latest OpenAI models to our customers with same-day access. The o1 model is no exception, offering cutting-edge reasoning capabilities that are now available on Azure. Our service is designed to provide broad global reach and extensive local data residency choices while keeping your data private and secure.

  1. Global reach and local data: Azure OpenAI Service offers 99.9% reliability with access in 28 regions, ensuring data residency and compliance with local regulations.
  2. Latest innovations: We provide same-day access to the latest OpenAI models, ensuring you stay ahead of the curve.
  3. Enterprise-grade security: Built-in security features, including private networking, managed identity, and Azure monitoring, ensure your data is protected.
  4. Developer-friendly integration: Seamless integration with GitHub, Visual Studio, and Copilot Studio makes it easy for developers to build and deploy AI solutions.
  5. Responsible AI: Our service includes built-in responsible AI features, such as content filters, prompt shields, and groundedness detection, to support safe and ethical AI usage.
  6. New features with o1: The o1 model introduces developer messages, reasoning effort parameters, and extended response objects, enhancing functionality and customization for developers.

Azure OpenAI Service is also introducing several new fine-tuning features, including support for the o1-mini model through reinforcement fine-tuning. This allows organizations to customize AI models to their specific needs, enhancing performance and reducing costs. The service also introduces Direct Preference Optimization (DPO) for adjusting model weights based on human preferences (sketched below), and Stored Completions for capturing and storing input-output pairs for fine-tuning. These advancements demonstrate our commitment to providing robust, flexible, and efficient AI solutions with Azure OpenAI Service.
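
For illustration, the following is a minimal sketch of submitting a DPO fine-tuning job with the same Python client. The file name, base model, and hyperparameter are assumptions for the example; model support, parameter shapes, and regional availability for DPO and reinforcement fine-tuning vary, so consult the Azure OpenAI fine-tuning documentation before relying on them.

```python
# Upload a preference dataset (pairs of preferred vs. non-preferred completions).
preference_file = client.files.create(
    file=open("preferences.jsonl", "rb"),  # placeholder file of preference pairs
    purpose="fine-tune",
)

# Create a fine-tuning job that uses Direct Preference Optimization.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",  # assumption: a base model that supports DPO fine-tuning
    training_file=preference_file.id,
    method={
        "type": "dpo",
        "dpo": {"hyperparameters": {"beta": 0.1}},  # beta weights how strongly preferences shift the model
    },
)

print(job.id, job.status)
```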

Beyond these updates, we recently introduced a suite of developer-centric capabilities designed to enhance your AI development experience within your preferred environments, like GitHub and Visual Studio Code. With Azure AI Foundry, GitHub Models, the new Azure AI Agent Service, and built-in safety tools like evaluations and monitoring, developers can integrate advanced AI functionalities directly into their applications. These tools provide robust support for building, testing, deploying, and managing AI models, ensuring that you have the flexibility and control needed to innovate and excel in AI systems you can trust. By leveraging these new features, you can streamline your workflows, improve efficiency, and unlock new possibilities in AI-powered development. 

Join us on this journey

We invite you to explore the capabilities of the o1 model and see how it can transform your business. With Azure OpenAI Service, you have access to the latest innovations, enterprise-grade security, and a global reach that ensures your data is private and secure. Join us on this journey and discover the potential of AI to drive your success.

To get access to the latest models like o1, sign up in Azure AI Foundry today.
