
AWS Adds OpenAI Models to Bedrock and SageMaker


For the first time, developers and businesses on Amazon Web Services can access OpenAI’s new open-weight models, gpt-oss-120b and gpt-oss-20b, directly inside AWS platforms such as Amazon Bedrock and Amazon SageMaker JumpStart. These models represent a major shift, opening up powerful AI reasoning tools under a permissive Apache 2.0 license on AWS’s trusted infrastructure.

Open-weight models deliver accessibility and fine-tuning freedom on AWS

OpenAI’s two new models let users inspect and modify the model weights freely, unlike closed systems that hide their inner workings. The larger gpt-oss-120b delivers advanced reasoning and runs on a single 80 GB GPU, while the smaller gpt-oss-20b is optimized for machines with just 16 GB of memory, making it well suited to running locally or in more modest environments.
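The memory figures above suggest a simple rule of thumb for picking a variant. The helper below is an illustrative sketch, not an official AWS or OpenAI API; it only encodes the 80 GB and 16 GB thresholds quoted here.

```python
# Illustrative helper (not an official API): pick a gpt-oss variant
# from the memory figures quoted above.
def pick_gpt_oss_model(memory_gb: float) -> str:
    """Return the largest gpt-oss variant that fits in `memory_gb`."""
    if memory_gb >= 80:   # gpt-oss-120b targets a single 80 GB GPU
        return "gpt-oss-120b"
    if memory_gb >= 16:   # gpt-oss-20b runs in about 16 GB of memory
        return "gpt-oss-20b"
    raise ValueError("Not enough memory for either gpt-oss variant")

print(pick_gpt_oss_model(80))  # gpt-oss-120b
print(pick_gpt_oss_model(16))  # gpt-oss-20b
```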


Developers now enjoy full control: they can fine-tune the models on their own data, run them securely behind their own infrastructure, and experiment without sending data to external parties. This gives organizations both flexibility and privacy.

Performance and cost benefits compared to other models on AWS

When deployed in Amazon Bedrock, these open-weight models stand out for cost-efficiency. AWS reports that gpt-oss delivers roughly 10 times better price-performance than Gemini models, 18 times better than DeepSeek-R1, and about 7 times better than OpenAI’s own o4 model.
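One way to read those reported multiples: a model with k-times better price-performance handles the same workload at roughly 1/k of the cost. The baseline figure below is hypothetical; the multiples are the ones AWS reports above.

```python
# Back-of-the-envelope reading of the price-performance multiples
# reported by AWS: k-times better price-performance implies roughly
# 1/k of the cost for the same workload.
multiples = {"Gemini": 10, "DeepSeek-R1": 18, "o4": 7}

baseline_cost = 100.0  # hypothetical spend on a competing model, in dollars
for name, k in multiples.items():
    print(f"vs {name}: ~${baseline_cost / k:.2f} for the same workload")
```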

For enterprises concerned about budget and efficiency, that represents a huge advantage, letting them run advanced AI applications at lower cost without sacrificing capability.

Advanced reasoning and tool usage built into open-weight models

What sets these models apart is their capability to perform chain-of-thought reasoning and support external tool usage. They include an adjustable reasoning level (low, medium, or high) and a 128K-token context window, enabling users to process large documents or complex instructions.

These models can also integrate web search, code execution, and general computation tasks within agentic workflows, making them ideal for building smart assistants, research bots, or intelligent automation systems.
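A request combining these two features might look like the sketch below. The field names follow the OpenAI chat-completions style; the `web_search` tool and the exact reasoning-effort knob Bedrock exposes are assumptions for illustration, so check the current API reference before relying on them.

```python
# Hedged sketch: an OpenAI-style request that sets a reasoning level and
# declares a (hypothetical) web-search tool for an agentic workflow.
def build_agent_request(question: str, effort: str = "high") -> dict:
    """Build a chat-completions payload with reasoning effort and one tool."""
    assert effort in ("low", "medium", "high")
    return {
        "model": "gpt-oss-120b",
        "reasoning_effort": effort,  # low / medium / high
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "web_search",  # hypothetical tool name
                "description": "Search the web for recent information.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
    }

req = build_agent_request("What changed in the latest AWS release?")
print(req["reasoning_effort"])  # high
```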

Seamless integration within AWS tools: Bedrock and SageMaker JumpStart

AWS users familiar with Amazon Bedrock or SageMaker JumpStart can now access these models with minimal setup. In Amazon Bedrock, the models appear in the model selector and can be tested via an OpenAI-compatible endpoint: developers simply update the endpoint and API key to start using them. Similarly, SageMaker JumpStart lets teams compare, fine-tune, and deploy gpt-oss models with a few clicks.
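The endpoint-and-key swap can be sketched as follows. This builds a standard OpenAI-compatible chat payload; the base URL format and the Bedrock model ID shown are assumptions for illustration, so confirm the exact values for your region in the Bedrock console.

```python
# Sketch of pointing an OpenAI-style chat request at Bedrock.
# The endpoint path and model ID below are assumed for illustration.
import json

BEDROCK_OPENAI_BASE = (
    "https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1"  # assumed format
)

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("openai.gpt-oss-120b-1:0", "Summarize this document.")
print(json.dumps(payload, indent=2))
# POST this to f"{BEDROCK_OPENAI_BASE}/chat/completions" with your Bedrock
# API key in the Authorization header to get a completion.
```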

This means no custom infrastructure or complex configuration: just choose the right model, test, and launch.
