Monica Caldas, Liberty Mutual’s CIO, emphasises a strategic “defence and offence” approach to integrating generative AI, combining rigorous governance with targeted modernisation amid rapid technological change.
Monica Caldas casts the chief information officer’s remit as a careful balancing act between protecting the business and pressing forward with technological change, a stance that has guided Liberty Mutual’s recent push to harness generative artificial intelligence.
Caldas, who joined Liberty Mutual in 2018 and was named executive vice president and global CIO with effect from January 2023, frames the role through what she calls a “defense and offense” lens. “The job is to protect data and make sure that you have systems that are secure and stable so that the company can operate. But you simultaneously have to play offense by building new features and functionality,” she told MIT Sloan Management Review. That duality underpins the insurer’s attempt to move quickly with AI while meeting the controls expected in a highly regulated field.
According to the company announcement of her appointment, Caldas oversees the technology roadmap for Liberty Mutual’s three main business units. Her background includes 17 years at General Electric, where she led digital transformations across operations and supply chains. Industry reporting and company materials show she has combined that operational experience with a public emphasis on diversity and workforce development.
Liberty Mutual has established multiple governance and capability-building initiatives to channel AI adoption. According to the report by MIT Sloan Management Review and earlier interviews with CIO magazine, the firm created a Responsible AI Steering Committee to map risks and set boundaries. The insurer has also rolled out internal education programmes and an “experimentation framework” so employees can learn how to use generative tools safely; training is mandatory before staff may incorporate these systems into their work, company notices state.
Those internal tools include an AI help-desk agent, Libby, linked to Liberty Mutual’s knowledge base and instrumented to surface operational issues before they escalate. The company says Libby has automated manual tasks and freed help-desk employees to address backlogged work. Liberty Mutual has also promoted an internal generative AI platform, LibertyGPT, as a sandbox for staff to experiment with models in a controlled environment; Forbes reported in January 2026 that LibertyGPT has seen rapid uptake and is part of a wider effort to make data and AI central to the business strategy. Company materials further describe an Executech programme aimed at raising technology fluency among leaders.
Caldas emphasises that generative models are accelerants rather than substitutes for sound architecture. “We have a variety of different technology stacks, and modernization is not just about lifting and shifting,” she said. In practice, Liberty Mutual is combining selective retirement of old systems with targeted re‑engineering. Caldas warned against simplistic translation of legacy code into modern languages, noting that attempts to convert COBOL into Java can result in what she called “Jobol.” “GenAI is not a magic wand where you press a button and new code comes out. Yeah, code comes out, but it’s not ready for production,” she added, stressing the need to embed nonfunctional requirements such as security and resilience.
The company has sought to quantify where AI can add value across development lifecycles. Caldas told MIT Sloan Management Review that Liberty Mutual identified roughly 35% of its software development life cycle as amenable to generative assistance, and that more experienced engineers extract greater productivity gains while junior staff require additional mentoring. Her comments mirror the firm’s broader stance that productivity gains from AI should be judged across multiple dimensions: quality, speed of decision-making and improved customer outcomes.
Regulatory and ethical guardrails have accompanied the technical roll-out. Liberty Mutual’s notices to employees and consumers, effective January 2026, describe commitments to transparent and responsible AI use, insist on human oversight for deployed systems and clarify that personal data will not be sold or used by third parties to train external models. The company’s public statements and reporting emphasise training, oversight and a careful calibration of risk appetite.
Caldas’s approach reflects a common executive dilemma as enterprises move from pockets of experimentation to broader operational use of AI: how to preserve trust and stability while delivering the efficiency and innovation these technologies promise. Industry commentators note Liberty Mutual’s strategy mixes governance, internal platforms for safe learning and selective modernisation, an architecture designed to allow the firm to move forward without sacrificing controls.
As Liberty Mutual continues to integrate AI into claims handling, customer service and engineering workflows, the leadership message has been consistent: build secure foundations, educate the workforce and use AI to enhance, not replace, disciplined engineering and oversight. The company says those measures have already enabled redeployment of staff into higher-value work and accelerated internal adoption, while maintaining a cautious posture on production readiness and risk management.
Source: Noah Wire Services



