As nanoscale manufacturing challenges mount, semiconductor makers are adopting advanced, secure data-collaboration methods to improve yield, reliability and predictive analytics across the supply chain despite intellectual-property concerns.
Semiconductor makers are confronting a simple but urgent reality: solving increasingly subtle manufacturing and packaging problems requires sharing detailed data across companies, yet doing so without exposing valuable intellectual property remains difficult.
Nanoscale device behaviour is being shaped by phenomena such as line-edge roughness, EUV stochastic defects and emerging backside power parasitics, placing tighter demands on process control and measurement. At the same time, chiplet-based systems assemble components from multiple foundries and vendors, creating failure modes that straddle front‑end process variation and assembly operations. According to Teradyne’s Eli Roth, “We can predict the yield of a device downstream, and that’s becoming very interesting in heterogeneous packaging.” He argued that identifying dies likely to fail before packaging can save many otherwise good parts.
The benefits of cross‑company data flow are already visible in several areas. Secure sharing accelerates failure analysis and returned‑goods investigations by allowing layout, test vectors and environmental context to be compared against production “fingerprints,” helping to trace latent defects that escaped conventional screening. Industry practitioners point to yield improvement, faster time‑to‑qualify and lower cycle times as primary outcomes when design, fabrication, test and field operation data are correlated end‑to‑end. Eidan Mendelsohn of proteanTecs highlights per‑chip Vdd‑min prediction as a deployed example where cloud‑trained models run at the tester to set safe operating voltages, cutting test time and power use while preserving reliability.
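As a rough illustration of the kind of workflow described above, the sketch below trains a regression model on synthetic per-chip monitor readings and applies a guard band at test time. It is not proteanTecs' implementation; the feature names, model choice and guard-band value are assumptions made purely for illustration.

```python
# Minimal sketch (not proteanTecs' system): predict per-chip Vdd-min from
# on-die monitor readings, then add a guard band when setting the operating
# voltage at the tester. All names and values here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for historical per-chip telemetry:
# columns ~ ring-oscillator frequency, leakage, on-die temperature.
X_train = rng.normal(size=(500, 3))
# Synthetic "true" Vdd-min in volts, loosely tied to the monitor readings.
y_train = 0.55 + 0.03 * X_train[:, 0] - 0.02 * X_train[:, 1] + rng.normal(0, 0.005, 500)

model = GradientBoostingRegressor().fit(X_train, y_train)   # the "cloud-trained" step

# At the tester: predict Vdd-min for a new die and apply a safety guard band.
new_die = rng.normal(size=(1, 3))
predicted_vdd_min = model.predict(new_die)[0]
GUARD_BAND_V = 0.02                       # assumed margin, set by reliability policy
operating_vdd = predicted_vdd_min + GUARD_BAND_V
print(f"Predicted Vdd-min: {predicted_vdd_min:.3f} V, operating Vdd: {operating_vdd:.3f} V")
```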
Machine learning and advanced analytics underpin many of these gains, but their appetite for large, clean, and contextualised datasets is what forces parties to confront data sharing. Roberto Colecchia at Advantest describes predictions that enable adaptive test flows, power optimisation and early detection of final‑test failures during wafer sort. Such models work best when they can draw from wafer maps, HTOL results, spatial indicators and field telemetry so that early indicators can be mapped to later outcomes across the product lifecycle.
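The prerequisite for such models is aligning early indicators with later outcomes. The fragment below is a minimal sketch of that step, assuming a shared (lot, wafer, die) key exists across stages; the column names are illustrative and do not reflect any vendor's schema.

```python
# Minimal sketch: join wafer-sort parametrics with later outcomes so that
# sort-time signatures can be related to final-test and field results.
import pandas as pd

# Early-lifecycle measurements captured at wafer sort (illustrative columns).
sort = pd.DataFrame({
    "lot": ["A1", "A1", "A1"],
    "wafer": [3, 3, 3],
    "die": [101, 102, 103],
    "idd_ua": [412.0, 455.0, 820.0],
    "ro_freq_mhz": [1210.0, 1195.0, 1020.0],
})

# Later outcomes: final-test result and a field-return flag.
outcomes = pd.DataFrame({
    "lot": ["A1", "A1", "A1"],
    "wafer": [3, 3, 3],
    "die": [101, 102, 103],
    "final_test_pass": [True, True, False],
    "field_return": [False, False, True],
})

# Align early indicators with later outcomes; a model (or an engineer) can then
# learn which sort-time signatures precede downstream failures.
lifecycle = sort.merge(outcomes, on=["lot", "wafer", "die"], how="inner")
print(lifecycle[["die", "idd_ua", "ro_freq_mhz", "final_test_pass", "field_return"]])
```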
That capability depends on data being prepared and protected. Practitioners emphasise two front‑loaded tasks: securing the transport and access controls for data, and transforming raw outputs into structured, machine‑ready formats. “The first part is securing the data, so making sure that the connection and the pipe is all secure,” said Ranjan Chatterjee of PDF Solutions, who also stressed the need for context and standardisation so recipients can act on shared information. Data cleaning, which involves removing duplicates, filling gaps and aligning formats, is widely cited as the most time‑consuming step; one industry estimate has data preparation consuming a majority of data scientists’ time.
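A minimal sketch of that cleaning stage, with assumed column names and units, might look like the following: duplicates are dropped, naming is aligned and gaps are filled with a documented default.

```python
# Minimal cleaning sketch: dedupe, align formats and fill gaps before sharing.
# Column names, units and the fill policy are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "Die ID": [101, 101, 102, 103],
    "Leakage (mA)": [0.41, 0.41, None, 0.55],
    "temp_C": [25.0, 25.0, 24.5, 24.8],
})

clean = (
    raw.drop_duplicates()                                                    # remove duplicate records
       .rename(columns={"Die ID": "die_id", "Leakage (mA)": "leakage_ma"})  # align naming conventions
       .assign(leakage_ma=lambda d: d["leakage_ma"].fillna(d["leakage_ma"].median()))  # fill gaps
)
print(clean)
```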
To reduce risk, the sector is adopting layers of protection and new collaboration models. Anonymisation techniques and governance rules strip or mask identifying details before data leaves an owner’s domain, enabling statistical and ML workflows without exposing device‑specific secrets. According to an industry white paper from an enterprise anonymisation vendor, such techniques also help firms meet regulatory obligations while enabling cross‑organisational analysis. Other approaches include federated learning and secure multi‑party computation, which allow models to be trained on distributed data without aggregating raw records, a method highlighted by data‑governance experts as a practical compromise between utility and confidentiality.
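To make the federated idea concrete, the sketch below shows the simplest form of federated averaging: each party fits the same linear model on its own data and only parameters, never raw records, are sent to a coordinator. It is a toy illustration of the general technique, not any particular deployment.

```python
# Minimal federated-averaging sketch: parties share model parameters only.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.8, -0.3])

def local_fit(n_samples):
    """One party's private data and its locally fitted least-squares weights."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(0, 0.01, n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

# Each participant (e.g. fab, OSAT, test house) trains on its own data locally.
local_results = [local_fit(n) for n in (200, 500, 300)]

# The coordinator averages parameters weighted by sample count; raw data never moves.
weights = np.array([w for w, _ in local_results])
counts = np.array([n for _, n in local_results], dtype=float)
global_w = (weights * counts[:, None]).sum(axis=0) / counts.sum()
print("Federated estimate:", global_w, "vs. true:", true_w)
```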
Commercial hubs and third‑party platforms are emerging to provide controlled environments for sharing. PDF Solutions’ secureWISE, the Athinia collaboration between Merck and Palantir, and other vendor offerings position themselves as manufacturing connectivity layers that enforce role‑based access, encryption and auditability while normalising diverse tool outputs. A blockchain‑style ledger product aimed at supply‑chain traceability has also been marketed as a way to maintain tamper‑evident provenance without revealing sensitive process detail, though those assertions remain vendor claims rather than independently verified results.
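The underlying tamper-evidence mechanism is straightforward to sketch: each custody record is hashed together with the previous record's hash, so any retroactive edit breaks the chain. The example below is a generic illustration of that idea, not a description of any vendor's ledger product.

```python
# Minimal hash-chained custody ledger: tamper-evident, but reveals only the
# coarse records it is given. Record fields here are illustrative assumptions.
import hashlib
import json

def append_entry(chain, record):
    """Append a record whose hash covers both its content and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify(chain):
    """Recompute the chain; any altered earlier record changes every later hash."""
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["record"], sort_keys=True)).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"lot": "A1", "step": "wafer sort", "site": "fab-1"})
append_entry(ledger, {"lot": "A1", "step": "assembly", "site": "osat-2"})
print(verify(ledger))  # True until someone alters an earlier record
```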
Standards and public‑sector engagement are beginning to supplement vendor work. The National Institute of Standards and Technology has convened workshops to prioritise data standards supporting trust and assurance in the semiconductor supply chain, signalling that interoperability and agreed practices are priorities for industry resilience. Industry groups such as SEMI continue to work on compatibility for metrology and test formats, though participants caution that standards development often trails fast‑moving deployment needs.
Not all data are treated equally. Companies tend to guard device‑specific test results most closely, treat wafer process data as next‑most sensitive, and expose equipment health metrics more readily. That gradation shapes what gets shared first: preventive‑maintenance telemetry and tool‑health signals are commonly pooled to improve uptime and avoid yield loss from worn probe needles or miscalibrated handlers, while more detailed process or design data remain shielded behind anonymisation or governance controls.
Beyond secure exchange and cleaning, some teams are experimenting with digital twins and virtualised devices to extend predictive capability. Onto Innovation’s Sean King notes renewed interest in digital twins applied to MES optimisation, tool visualisation and process simulation, with AI enabling more dynamic models. Roth said industry groups are exploring the idea of virtual device twins that, if realised, could compress development time by allowing more validation before silicon, though such initiatives raise new questions about model security and traceability.
Looking ahead, vendors and users foresee higher levels of automation and smarter user interfaces once connectivity, governance and model‑management are mature. PDF Solutions’ Chatterjee envisions agentic AI coordinating ERP, MES and test systems so routine decisions need less specialised operator intervention. Yet successful scaling will require not only technology but also contractual clarity, common data taxonomies and continued investment in anonymisation and federated techniques to reassure IP owners.
Secure, governed data collaboration is thus positioned as both a technical and an organisational challenge. Industry players argue that the potential to lift yields, cut test costs and prevent field escapes is substantial, but realising those benefits depends on rigorous data practices, interoperable platforms and confidence that sharing will not expose competitive knowledge. As device physics and packaging complexity tighten tolerances, the balance between openness for analytics and protection for IP will determine how quickly the sector can turn shared insight into more reliable, lower‑cost products.
Source: Noah Wire Services