
Scalable Compute Architecture for AI applications in the Semiconductor Industry

Modern enterprise artificial-intelligence applications require a scalable and cost-effective compute architecture, especially those used to optimize scheduling processes for semiconductor wafer manufacturing.

Together with our partner Microsoft, we recently published an article describing the compute architecture of the minds.ai Maestro semiconductor manufacturing optimization suite and its DeepSim training engine. The article describes a reference Azure implementation architecture that can be used for a wide variety of workloads, as detailed below.

minds.ai solutions use high-performance computing (HPC) environments to execute reinforcement learning and supervised learning at scale. This architecture (or a similar version) ensures that our customers receive a product that is dynamically scalable and cost-effective. The scalable nature of these solutions allows compute resources to be shut down when not in use and enables selection of the most cost-effective resources currently available in the Azure cloud, including spot nodes and GPUs.
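To illustrate the idea of cost-driven resource selection, the sketch below picks the VM SKU with the lowest effective hourly price, preferring discounted spot capacity when it is available. The SKU names, prices, and the `pick_cheapest` helper are hypothetical examples, not the actual minds.ai scheduler logic or Azure pricing.

```python
# Minimal sketch of cost-aware SKU selection. All SKU names and hourly
# prices are illustrative only, not real Azure pricing data.

def pick_cheapest(skus):
    """Return the SKU with the lowest effective hourly price.

    Spot-capable SKUs are costed at their discounted spot price;
    the remainder at the on-demand price.
    """
    def effective_price(sku):
        return sku["spot_price"] if sku.get("spot_available") else sku["price"]
    return min(skus, key=effective_price)

catalog = [
    {"name": "gpu-node-a", "price": 0.90, "spot_price": 0.27, "spot_available": True},
    {"name": "cpu-node-b", "price": 0.38, "spot_price": 0.11, "spot_available": False},
    {"name": "gpu-node-c", "price": 0.53, "spot_price": 0.16, "spot_available": True},
]

best = pick_cheapest(catalog)
print(best["name"])  # prints "gpu-node-c": spot at 0.16 beats 0.27 and 0.38
```

In practice the catalog would be populated from live Azure availability and pricing, but the selection principle is the same.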

The following is a non-exhaustive list of successful projects that have utilized this architecture:

  • Novel scheduling and dispatching solutions to help semiconductor manufacturing companies optimize wafer fabrication KPIs.
  • Models for measuring tool reliability and predictability, including Equipment Health Index (EHI), Remaining Useful Life (RUL), and preventive maintenance.
  • The automation and optimization of existing fab scheduling and dispatching for semiconductor manufacturing workloads.
  • A combination of the above where tool models are used to improve the accuracy and efficiency of scheduling and dispatching solutions, which are critical for manufacturers.
  • Yield and productivity optimization.

Diving deeper into the system architecture, we use the Azure Kubernetes Service (AKS) to deploy, manage, and scale container-based applications in a cluster environment. Integration with Azure Key Vault and Azure permission models ensures that the solution is secure and accessible only to the customer's engineers who require access. AKS is connected to Azure Files to store input and output data. Our REST API provides both a user-friendly web interface to minds.ai Maestro for end users and product managers and a powerful Python API for developers and power users. All of this is placed within a dedicated VNet to ensure network isolation and enable selective access for additional security.
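As a rough sketch of what driving a scheduling service over REST from Python can look like, the snippet below builds an authenticated job-submission request. The endpoint path, payload fields, and `SchedulerClient` class are hypothetical illustrations, not the actual minds.ai Maestro API.

```python
# Illustrative sketch of a Python wrapper around a scheduling REST API.
# Endpoint, payload fields, and class name are hypothetical.
import json
import urllib.request


class SchedulerClient:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def build_request(self, fab_id, horizon_hours):
        """Construct (but do not send) a POST request submitting a scheduling job."""
        payload = json.dumps({
            "fab_id": fab_id,
            "horizon_hours": horizon_hours,
        }).encode()
        return urllib.request.Request(
            f"{self.base_url}/jobs",
            data=payload,
            headers={
                "Authorization": f"Bearer {self.token}",
                "Content-Type": "application/json",
            },
            method="POST",
        )


client = SchedulerClient("https://example.invalid/api", "demo-token")
req = client.build_request(fab_id="fab-1", horizon_hours=24)
print(req.full_url)  # prints "https://example.invalid/api/jobs"
```

In a real deployment the request would be sent with `urllib.request.urlopen` (or a library such as `requests`) to an endpoint inside the VNet, with the token retrieved from Azure Key Vault rather than hard-coded.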

Beyond the semiconductor industry we’ve also implemented this architecture for applications across other sectors:

  • Industry 4.0
  • Travel and transportation (application development)
  • Pharma and healthcare
  • Renewable energy control and multivariate site design

All of minds.ai’s solutions are customized to the customer’s specific hardware and software needs. For example, our solutions run on Linux as well as Windows to ensure seamless integration of the augmented workflows into the customer’s existing infrastructure and processes.

Feel free to contact us if you would like to see a demo of our solutions or want to know more about specific aspects of the architecture.

The architecture and additional details can be found on the Microsoft Learn website.

Fab Scheduling Architecture
