With the ever-increasing demand for semiconductors to power automotive, high-performance computing, and artificial intelligence applications, methods for improving throughput and process efficiency are in high demand. The use of Digital Twins (DT) has attracted particular attention as a potential framework to achieve unprecedented performance improvements. This is reflected in the growing body of scientific literature as well as in federal and industry investments in the development of digital twins and the infrastructure required to deploy them. While semiconductor wafer fabrication has received important contributions, limited research has been conducted on digital twins, particularly inline process digital twins, in semiconductor Assembly and Testing (A/T). In this work, we introduce the enabler for the Semiconductor Tester digital twin agent, which addresses key challenges faced when implementing a practical digital twin framework in the context of semiconductor assembly and testing. Key challenges in utilizing digital twins in semiconductor A/T include: (i) large resource and time commitments to generate and sustain models, due to the presence of autonomous robots and frequent changes in test cell configurations and loading policies; (ii) as in the manufacturing case, high-fidelity models are expensive; and (iii) domain expertise in the process and equipment, as well as in optimization and simulation, is required to modify the models once the system is deployed. In our proposed infrastructure, a digital twin agent can generate several models that differ widely in accuracy and execution complexity (fidelity) and that can be adopted to provide insights both offline and at runtime. At the core of our architecture is a high-fidelity Discrete Event System (DES) model that supports offline optimization and analytics tasks. The DES model is simulated with the open-source Python package SimPy, providing an open solution for A/T; the modular discrete event simulation lowers the cost of developing models for different assembly and test equipment. To allow the model to be used at runtime, we integrate graph machine learning methods that can learn fast-to-execute surrogates, which can in turn inform system-level policies. Finally, to address the sustainability of the architecture, we integrate methodologies for auto-generation, auto-simplification, and auto-repair of the DES. By leveraging this methodology, deviations between the DES-simulated performance and the physical equipment can be highlighted, and the auto-repair routine can be invoked to correct the DES so that it matches the physical equipment. We believe this will allow the simulation models to “learn” the policies and procedures of the equipment from production data and generate high-fidelity models without user interaction. For the proof-of-concept demonstration, the team focuses on the Burn-In equipment at Intel; however, this approach is applicable to all assembly and test equipment (System-Level Testing, Structured Testing, Packaging, and Assembly).
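To illustrate the kind of modular DES building block the architecture relies on, the following is a minimal sketch of a SimPy model of a simplified Burn-In station. It is not the authors' implementation; the number of ovens, burn-in duration, arrival rate, and names such as BURN_IN_OVENS and lot_arrival are hypothetical placeholders chosen only to show the modeling pattern (equipment as a shared resource, lots as processes).

```python
# Illustrative sketch only: a simplified Burn-In station as a SimPy resource.
# All parameters and names below are hypothetical, not from the paper.
import random
import simpy

BURN_IN_OVENS = 2          # assumed number of parallel ovens
BURN_IN_TIME = 120.0       # assumed burn-in duration (minutes)
MEAN_INTERARRIVAL = 45.0   # assumed mean lot inter-arrival time (minutes)

def lot(env, name, ovens, log):
    """A single lot: queue for an oven, run burn-in, record its cycle time."""
    arrival = env.now
    with ovens.request() as req:
        yield req                      # wait until an oven is free
        yield env.timeout(BURN_IN_TIME)
    log.append((name, env.now - arrival))

def lot_arrival(env, ovens, log):
    """Generate lots with exponential inter-arrival times."""
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))
        i += 1
        env.process(lot(env, f"lot_{i}", ovens, log))

random.seed(0)
env = simpy.Environment()
ovens = simpy.Resource(env, capacity=BURN_IN_OVENS)
cycle_times = []
env.process(lot_arrival(env, ovens, cycle_times))
env.run(until=8 * 60)  # simulate one 8-hour shift
print(f"completed lots: {len(cycle_times)}")
```

In such a modular setup, swapping in a different piece of assembly or test equipment would amount to replacing the resource definition and process logic, while the surrounding simulation harness stays unchanged.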