For the first time, developers and businesses on Amazon Web Services can access OpenAI's new open-weight models, gpt-oss-120b and gpt-oss-20b, directly inside AWS platforms such as Amazon Bedrock and Amazon SageMaker JumpStart. These models represent a major shift, opening up powerful AI reasoning tools under a permissive Apache 2.0 license, now available on AWS's trusted infrastructure.
Open-weight models deliver accessibility and fine-tuning freedom on AWS
OpenAI's two new models let users inspect and modify the model weights freely, unlike closed systems that hide their inner workings. The larger gpt-oss-120b delivers advanced reasoning power and runs on a single 80 GB GPU, while the smaller gpt-oss-20b is optimized for machines with just 16 GB of memory, making it a good fit for running models locally or in more modest environments.

Developers now enjoy full control: they can fine-tune the models on their own data, run them securely behind their own infrastructure, and experiment without sending data to external parties. This gives organizations both flexibility and privacy.
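To make the privacy point concrete, here is a minimal sketch of talking to a self-hosted gpt-oss-20b behind an OpenAI-style chat completions endpoint. The local URL and default parameters are illustrative assumptions, not values from the article; the point is that the request payload is built, and would be sent, entirely within your own infrastructure.

```python
import json

# Assumed local endpoint for a self-hosted gpt-oss-20b serving stack
# that speaks the OpenAI chat completions format (hypothetical URL).
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="gpt-oss-20b"):
    """Build an OpenAI-style chat completions payload.

    Nothing leaves your machine until you POST this payload to your
    own endpoint, so no data is sent to external parties.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # illustrative default
    }

payload = build_chat_request("Summarize this contract clause.")
print(json.dumps(payload, indent=2))
```

Because the weights are open, the same payload shape works whether the model runs on a laptop, on-premises, or inside a private VPC.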
Performance and cost benefits compared to other models on AWS
When deployed in Amazon Bedrock, these open-weight models outperform competing AI systems in cost-efficiency. AWS reports that gpt-oss delivers roughly 10 times better price-performance than comparable Gemini models, 18 times better than DeepSeek-R1, and about 7 times better than OpenAI's own o4 model.
For enterprises concerned about budget and efficiency, that represents a huge advantage, letting them run advanced AI applications at lower cost without sacrificing power.
Advanced reasoning and tool usage built into open-weight models
What sets these models apart is their capability to perform chain-of-thought reasoning and support external tool usage. They include adjustable reasoning levels (low, medium, and high) and a 128K-token context window, enabling users to process large documents or complex instructions.
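The reasoning level is typically selected per request. A minimal sketch, assuming the gpt-oss convention of stating the effort level in a system message (exact wiring may differ by serving stack, and the model name here is illustrative):

```python
def build_reasoning_request(prompt, effort="medium"):
    """Build a chat payload that requests a reasoning effort level.

    gpt-oss models expose three effort levels: low, medium, high.
    Higher effort spends more chain-of-thought tokens before answering.
    """
    assert effort in ("low", "medium", "high")
    return {
        "model": "gpt-oss-120b",
        "messages": [
            # Assumed convention: effort is declared in the system prompt.
            {"role": "system", "content": f"Reasoning: {effort}"},
            {"role": "user", "content": prompt},
        ],
        # The 128K-token context window leaves room for long documents
        # plus the model's reasoning trace.
        "max_tokens": 2048,
    }

req = build_reasoning_request("Compare these two 50-page reports.", effort="high")
print(req["messages"][0])
```

Dialing effort down to "low" trades reasoning depth for latency and cost, which suits simple lookups; "high" suits multi-step analysis.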
These models can also integrate web search, code execution, and general computation tasks within agentic workflows, making them ideal for building smart assistants, research bots, or intelligent automation systems.
Seamless integration within AWS tools: Bedrock and SageMaker JumpStart
AWS users familiar with Amazon Bedrock or SageMaker JumpStart can now access these models with minimal setup. In Amazon Bedrock, the models appear in the model selector and can be tested via an OpenAI-compatible endpoint; developers simply update the endpoint URL and API key to start using them. Similarly, SageMaker JumpStart lets teams compare, fine-tune, and deploy gpt-oss models with a few clicks.
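The "update the endpoint and API key" step can be sketched with nothing but the standard library. The regional URL pattern and model identifier below are assumptions for illustration; check the Bedrock console for the exact values in your account. The request is constructed but deliberately not sent, so the sketch runs without AWS credentials.

```python
import json
import urllib.request

# Assumed Bedrock OpenAI-compatible endpoint (region and path are
# illustrative; verify against your Bedrock console).
BEDROCK_URL = "https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1/chat/completions"
API_KEY = "YOUR_BEDROCK_API_KEY"  # placeholder; never hard-code real keys

body = json.dumps({
    "model": "openai.gpt-oss-120b-1:0",  # assumed Bedrock model ID
    "messages": [{"role": "user", "content": "Hello from Bedrock"}],
}).encode("utf-8")

req = urllib.request.Request(
    BEDROCK_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # swap in your key
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send the request;
# it is omitted here so the sketch stays self-contained.
print(req.full_url)
```

An existing OpenAI-SDK-based application can often be pointed at such an endpoint just by swapping the base URL and key, which is what makes the migration low-effort.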
This means no custom infrastructure or complex configuration: just choose the right model, test, and launch.