The AWS AI League, a hands-on programme run in collaboration with Atos, is reshaping enterprise AI skill development. By pairing competitions with practical projects, it builds confidence and operational expertise in fine-tuning generative AI models, offering a blueprint for scalable, cost-effective AI adoption.
Organisations that want to scale AI capability are discovering that conventional training alone rarely moves teams from theoretical knowledge to confident, hands-on practice.
The League combines instructor-led workshops, guided low-code tooling and a structured competition to accelerate practical skills. Participants used Amazon SageMaker Studio and SageMaker JumpStart to fine-tune pre-trained foundation models, focusing on transfer learning approaches that adapt large language models to narrow, domain-specific tasks rather than training models from scratch. According to AWS documentation, the initiative was introduced in mid-2025 to help enterprises and individual developers build skills in fine-tuning, model customisation and prompt engineering, with AWS offering credits and a championship prize pool to incentivise participation.
Atos chose an underwriting assistant as the competition's exemplar: an Intelligent Insurance Underwriter trained to assess risk, recommend policy conditions and explain its reasoning in industry-appropriate language. Built on a cost-conscious stack (fine-tuned open-source models managed in SageMaker, with data stored in Amazon S3 and tooling for dataset creation), the project aimed to show how specialist knowledge can be embedded into smaller models for faster, cheaper inference. The AWS blog reports that Atos staff now hold over 5,800 AWS certifications and 11 Golden Jackets, and that the company is working toward 100% AI fluency across its workforce by 2026.
The League's three-phase format was designed to maintain momentum and surface measurable outcomes: an initial immersive workshop; an intensive development period in which teams iterated on datasets and hyperparameters; and a live finale judged by experts, audience voting and an automated LLM evaluator. Gamification proved pivotal: Atos recorded 409 active participants who produced more than 4,100 fine-tuned models during the two-week virtual stage, according to Atos's account of the event. The highest-performing submissions illustrated a central point of the programme: domain-specific fine-tuning can allow a relatively compact model to rival much larger baselines. The AWS blog notes that some 3 billion-parameter fine-tuned models achieved win rates above 93% against a 90 billion-parameter reference model on the competition's unseen-question benchmark.
The contest also exposed common practical pitfalls. Overfitting emerged when teams trained models too tightly on their datasets, producing repetitive or irrelevant answers on novel prompts. Participants used evaluation loss and perplexity metrics to monitor generalisation and adjusted hyperparameters such as epochs, learning rate, batch size and Low-Rank Adaptation settings to strike a balance between underfitting and memorisation. The organisers supplied tools to simplify dataset creation, in Atos’s case a PartyRock application that output JSONL-formatted instruction–response pairs, and some teams augmented or remodelled that output to increase variety and coverage. Atos reported that iterative dataset refinement and disciplined hyperparameter sweeps were among the most important levers for success.
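The workflow described above can be sketched in a few lines of Python. The field names, file name and divergence threshold below are illustrative assumptions, not the schema of Atos's actual PartyRock tooling; the perplexity and overfitting checks simply restate the textbook relationships the teams relied on:

```python
import json
import math

# Illustrative instruction–response pairs in the JSONL shape commonly used
# for instruction fine-tuning (field names are an assumption, not the
# PartyRock application's actual output schema).
pairs = [
    {"instruction": "Assess the underwriting risk for a 45-year-old non-smoker "
                    "applying for term life cover.",
     "response": "Low to moderate risk; standard rates are likely, subject to "
                 "a review of medical history."},
    {"instruction": "Explain why a coastal property may attract a flood exclusion.",
     "response": "Proximity to flood zones raises expected claim frequency, so "
                 "insurers add exclusions or load the premium."},
]

# JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")

def perplexity(avg_cross_entropy_loss: float) -> float:
    """Perplexity is the exponential of the average cross-entropy loss."""
    return math.exp(avg_cross_entropy_loss)

def overfitting_signal(train_loss: float, eval_loss: float,
                       gap: float = 0.5) -> bool:
    """Flag a run whose eval loss diverges from train loss by more than `gap`
    (the 0.5 threshold is an arbitrary illustration)."""
    return (eval_loss - train_loss) > gap

print(round(perplexity(2.0), 2))        # e^2.0
print(overfitting_signal(1.2, 1.3))     # losses tracking: still generalising
print(overfitting_signal(0.4, 1.6))     # losses diverging: memorisation
```

In practice teams would feed `train.jsonl` to their fine-tuning job and watch the eval-loss curve across epochs, lowering the learning rate, epoch count or LoRA rank when the divergence check fires.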
From an operational standpoint, the exercise highlighted cost-efficiency gains from specialisation. The AWS blog states that fine-tuned 3B models ran effectively on ml.g5.4xlarge instances, while much larger base models required ml.g5.48xlarge hardware, implying substantial savings for inference at scale. Post-event surveys cited by Atos indicated an 85% increase in participant confidence when discussing and implementing generative AI with customers, suggesting the short, hands-on format compressed what might otherwise take months of conventional training into a matter of weeks.
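The scale of those savings can be roughed out with a back-of-the-envelope calculation. The hourly rates below are assumptions for the sake of arithmetic, not AWS's published prices, but the structure of the comparison holds for any pair of rates:

```python
# Hypothetical on-demand hourly rates — illustrative assumptions only,
# not AWS's actual pricing for these instance types.
RATE_G5_4XLARGE = 2.00    # serving a fine-tuned 3B model
RATE_G5_48XLARGE = 16.00  # serving a much larger base model

HOURS_PER_MONTH = 24 * 30  # one always-on endpoint

small_monthly = RATE_G5_4XLARGE * HOURS_PER_MONTH
large_monthly = RATE_G5_48XLARGE * HOURS_PER_MONTH

print(f"3B fine-tuned endpoint: ${small_monthly:,.0f}/month")
print(f"Large base endpoint:    ${large_monthly:,.0f}/month")
print(f"Cost ratio:             {large_monthly / small_monthly:.0f}x")
```

Under these assumed rates a single always-on endpoint for the compact model costs an eighth of the larger one, and the gap multiplies with every additional endpoint deployed at scale.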
The AWS AI League has been rolled out beyond private partner events. According to AWS announcements and press releases, the programme launched publicly in July and August 2025 with regional events, such as a Jakarta edition, feeding into global finals at AWS re:Invent, where prize incentives and credits were used to drive engagement. AWS materials position the League as a repeatable model for enterprises to host internal tournaments, while individual developers can use the format at AWS Summits and other live events to sharpen practical skills.
The Atos experience sits inside a wider partnership with AWS. In October 2024 the two companies opened a GenAI Innovation Studio in Pune to co-develop industry-focused generative AI solutions, and in July 2025 Atos listed its Polaris AI Platform in the new AI Agents and Tools category of the AWS Marketplace, signalling a strategic push toward deployable agentic and generative products. Together these moves reflect a shift from teaching concepts to operationalising specialised models and agentic architectures across business workflows.
For organisations designing AI enablement programmes, the Atos–AWS pilot surfaces several actionable lessons. Structured, hands-on exercises that abstract infrastructure complexity but preserve model mechanics enable cross-role participation; gamified competition increases sustained engagement; careful dataset design and methodical hyperparameter tuning are more important than raw dataset size; and domain-specific fine-tuning can make smaller models both performant and cost-effective components within larger agentic systems.
The AWS AI League case shows how an experiential, measurement-driven learning format can convert foundational training into deployable skills and demonstrable business value, while also revealing the technical and operational choices that determine whether fine-tuning yields robust, generalisable models rather than short-lived gains on a narrow benchmark. According to Atos and AWS, the pilot’s combination of tooling, competitive structure and real-world use cases has helped accelerate the journey from certification to confident, customer-ready AI practice.
Source: Noah Wire Services