Voice of the Experts: Signal Integrity

The articles and columns contained in this section come from members of the Signal Integrity Journal’s Editorial Advisory Board (EAB) and acknowledged experts in the SI field. These authors are often sought after for their advice. In this column area, the EAB and other SI experts will talk about issues related to signal integrity.


Generative Solutions vs. Discriminative Models

June 7, 2025

Perhaps the biggest difference between traditional SI/PI analysis and the new AI/ML approach is the concept of generative solutions vs. discriminative models. SPICE and IBIS models are discriminative in the sense that they look for the conditional probability of the output given an input. In other words, enter the input parameters and the simulation model provides the expected output. Generative solutions look for the joint distribution between the output and input, attempting to cover the entire solution space, even under unseen conditions. Not only can a generative model provide the output for a known input, but it can also search for the maximum-likelihood input that yields a desirable output, provided one sets up the right search boundary. This is an intriguing aspect of AI/ML that previous methods do not offer. Sometimes, this is referred to as an invertible solution.
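
The "invertible" use of a model can be sketched in a few lines. The forward map below is a made-up quadratic stand-in for a channel model (the peak at 8 mils is illustrative only, not real channel behavior); the point is the contrast between querying the model with an input and searching the input space for the best output within a stated boundary.

```python
import numpy as np

# Hypothetical forward map from a design parameter (trace width in mils)
# to a performance metric (eye height). The quadratic is illustrative only.
def forward_model(width):
    return 1.0 - 0.05 * (width - 8.0) ** 2  # eye height peaks at width = 8

# Discriminative use: given an input, predict the output.
eye = forward_model(7.0)

# "Invertible" use: search the input space for the design that maximizes
# the desired output, within a stated search boundary (4 to 12 mils here).
candidates = np.linspace(4.0, 12.0, 801)
best_width = candidates[np.argmax(forward_model(candidates))]
```

A trained AI surrogate plays the role of `forward_model` in practice; because inference is cheap, dense searches like this become affordable.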

Physics-Aware AI, Multi-Physics Digital Twins, and the Foundational Model Approach

AI models are inherently asymmetrical: it takes a lot of data collection to build a training set and enormous computational resources to train the model, but once the model is trained, inference is very fast. Hewlett-Packard conducted a live demo with Tektronix at a previous DesignCon to generate a bit-by-bit captured waveform and produce a PAM4 eye diagram in near real time. In the AI community, these models are referred to as digital twins because, with proper training, they can capture multi-physics phenomena such as electrical-to-optical or power-to-thermal coupling. With digital twins, there is the potential for real-time system performance tuning as system conditions change. This is beyond conventional simulation model capabilities. Advances in LLM modelling in recent years also introduce very efficient ways to add capabilities to these AI models.

Two of the most notable techniques are fine-tuning, such as QLoRA, and transfer learning. It has been demonstrated that when system conditions change due to factors such as speed bumps, protocol changes (NRZ to PAM4), or extended ranges, fine-tuning can quickly and efficiently upgrade the original model with very modest computation. Even when new technologies are introduced into the system, such as new equalizer methods or additional features that change the model structure, the model can be refitted with transfer learning by adding network layers. In a way, a foundational model approach can future-proof models. This is something current conventional models, such as IBIS or SPICE, cannot do.
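
The efficiency of LoRA-style fine-tuning comes from replacing the full weight update with a trainable low-rank product. A minimal NumPy sketch of the idea (dimensions and scaling are illustrative; real implementations live in libraries such as PEFT):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight matrix of a trained layer (d_out x d_in).
W = rng.standard_normal((64, 64))

# LoRA adds a trainable low-rank update W + (alpha / r) * B @ A, so
# fine-tuning touches only r * (d_in + d_out) parameters instead of
# d_in * d_out.
r, alpha = 4, 8.0
A = rng.standard_normal((r, 64)) * 0.01  # trainable
B = np.zeros((64, r))                    # trainable; zero-init so the
                                         # adapter starts as a no-op

def adapted_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(64)
# With B zero-initialized, the adapted layer matches the base layer exactly.
assert np.allclose(adapted_forward(x), W @ x)
```

Here the adapter holds 512 trainable parameters against the base layer's 4096, and only `A` and `B` need to be stored or shipped, which is what makes stacking and swapping adapters cheap.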

Another intriguing aspect of LoRA is that one can fine-tune an already fine-tuned model. In other words, the model can be continually upgraded while security is preserved: all proprietary information can be kept in the LoRA adapter, while the main model can remain in the public domain, such as on Hugging Face. One can attach the LoRA adapter at run time without leaking proprietary information, and access to a model can even be compartmentalized within a department of a given company.

Despite the common misconception that AI is a purely data-driven model with no sense of the underlying physics, it has been demonstrated that deep inside the embedding, or bottleneck, of our AI models, there is rich information about the underlying physics. Long before LLM experts began pointing out that LLMs do not understand the world, it was realized that the simplistic approach of training pictures against pictures or text against text would not succeed for engineering models. We have been training physics-informed AI models for the past few years, and the results are significantly better than those of standard AI training methods. This also enables rapid modelling of multi-physics problems such as power-to-thermal or electrical-to-optical analysis.
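
One common way to make training physics-informed is to add a penalty on the residual of a governing equation to the ordinary data loss. The sketch below uses an RC discharge, dv/dt = -v/(R*C), as the "physics"; the candidate curve stands in for a network's prediction, and all constants are illustrative.

```python
import numpy as np

# Physics-informed loss sketch: data term + governing-equation residual.
R, C = 50.0, 2e-12            # 50 ohms, 2 pF -> tau = 100 ps (illustrative)
tau = R * C
t = np.linspace(0, 5 * tau, 200)
v_meas = np.exp(-t / tau)     # "measured" waveform (noise-free here)

def pinn_loss(v_pred, lam=1.0):
    data = np.mean((v_pred - v_meas) ** 2)
    dvdt = np.gradient(v_pred, t)                 # finite-difference derivative
    physics = np.mean((dvdt + v_pred / tau) ** 2) * tau**2  # nondimensionalized
    return data + lam * physics

# A curve satisfying the physics scores lower than a straight line that
# merely connects the same endpoints.
good = pinn_loss(np.exp(-t / tau))
bad = pinn_loss(np.linspace(1.0, v_meas[-1], 200))
```

In an actual training loop, `v_pred` comes from the network and the physics term shapes the loss landscape even where measured data is sparse, which is one reason these models extrapolate better than purely data-driven ones.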

High Dimensional Optimization

As system performance increases, one starts to see problems with very high-dimensional parameter spaces and multiple, conflicting objectives. Examples include DDR5+ with tap routing, neckdown, and open-area geometries, or PCIe 6+ via tuning in high layer-count PCBs. Most of the time, one must simultaneously optimize return loss (RL) and integrated crosstalk noise (ICN), where improving one hurts the other. This becomes very challenging for even the most experienced engineer to complete in a short time. AI/ML is the ideal solution for this type of high-dimensional problem, provided one knows how to manage the curse of dimensionality.
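
When two objectives genuinely conflict, the useful output is a Pareto front rather than a single optimum. The sketch below uses two toy quadratic stand-ins for RL and ICN (real values would come from a field solver or a trained surrogate) and filters random samples of a 6-dimensional geometry vector down to the non-dominated set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for two conflicting objectives over a 6-dimensional
# geometry vector x. Improving one degrades the other by construction.
def return_loss(x):      # lower is better
    return np.sum((x - 0.3) ** 2, axis=-1)

def crosstalk_noise(x):  # lower is better, pulls x the other way
    return np.sum((x - 0.7) ** 2, axis=-1)

# Random sampling + Pareto filtering: keep designs no other design dominates
# (dominate = at least as good on both objectives, strictly better on one).
X = rng.uniform(0.0, 1.0, size=(400, 6))
f = np.stack([return_loss(X), crosstalk_noise(X)], axis=1)
pareto = [i for i in range(len(X))
          if not any((f[j] <= f[i]).all() and (f[j] < f[i]).any()
                     for j in range(len(X)))]
```

Real workflows replace random sampling with Bayesian or evolutionary search to fight the curse of dimensionality, but the dominance test is the same.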

Reinforcement and Imitation Learning

Another unique capability of AI is its effectively unlimited memory and its ability to learn from mistakes. It is well known that reinforcement learning on complicated tasks such as driving can result in better-than-human performance. There is increasing evidence that this is the case for routing solutions such as 3DIC, chip layout, and PCB design, especially when routing is tied to end-to-end performance rather than geometry alone. In almost all the experimental research, AI layout solutions outperform human designs 80% to 90% of the time.
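
The learn-from-mistakes loop can be seen in miniature with tabular Q-learning on a toy routing problem: find a path across a 4x4 grid with blocked cells, with no routing knowledge beyond trial-and-error rewards. This is a pedagogical stand-in, not a real router; the grid, rewards, and hyperparameters are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Route from (0,0) to (3,3) on a 4x4 grid, avoiding blocked cells.
N, goal, blocked = 4, (3, 3), {(1, 1), (2, 2)}
actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = np.zeros((N, N, 4))  # Q[row, col, action]

def step(s, a):
    nxt = (s[0] + actions[a][0], s[1] + actions[a][1])
    if not (0 <= nxt[0] < N and 0 <= nxt[1] < N) or nxt in blocked:
        return s, -1.0, False                      # penalize hitting obstacles
    return nxt, (10.0 if nxt == goal else -0.1), nxt == goal

for ep in range(500):
    s, eps = (0, 0), max(0.05, 1.0 - ep / 400)     # decaying exploration
    for _ in range(50):
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * np.max(Q[nxt]) * (not done) - Q[s][a])
        s = nxt
        if done:
            break

# Extract the learned route by following the greedy policy.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    s, _, done = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if done:
        break
```

The mistakes (wall bumps, detours) are never repeated once their negative value is recorded in `Q`; scaling this idea to real layout requires deep networks and performance-aware rewards, as the research cited above does.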

Multi-Modal AI Agent

This brings us back to Yuri's original observation about AI agents for SI/PI. To clarify, this is not a matter of an LLM suddenly knowing transmission line models, having the ability to do EM analysis, and spitting out the correct answer. Instead, it involves taking advantage of recent advances in LLM technology:

  • (a) Multi-modal information ingestion. Newer generations of LLMs are multi-modal; not only can they understand large quantities of text, but they can also be made to understand pictures, diagrams, tables, and so on. This makes them extremely useful for digesting huge amounts of design information, such as product design guides and DFM rules. Typical high-end processor design guides run anywhere from 400 to 1200+ pages, with 100+ pages of DFM rules. Human engineers have a hard time reading, digesting, and retaining this volume of information, yet even a small LLM in the 1-3B parameter range can easily ingest all this documentation and provide correct answers when the information is needed. No fancy hardware is required; a gamer GPU is more than sufficient. An AI agent can be used to read and understand these huge volumes of design information.
  • (b) Once the design parameters above are captured, high-dimensional optimization can be done with a properly planned AI agent. If humans can plan the optimization steps, so can the AI agent.
  • (c) By now, it is well established that LLMs are extremely good at generating program code. With most EDA tools controllable through APIs, it is straightforward for the LLM agent to take the design information from (a), perform the optimization in (b), and send the results to be implemented by EDA layout tools through those APIs.
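
The three-step loop above can be sketched as a pipeline of stubs. Every function here is hypothetical: `retrieve_design_rules` stands in for a multi-modal LLM ingesting a design guide, `optimize_geometry` for the planned high-dimensional search, and `push_to_layout` for an EDA tool's scripting API. No real EDA or LLM API is referenced, and the rule values are invented for illustration.

```python
def retrieve_design_rules(query):
    # (a) In practice: retrieval over the ingested design guide / DFM docs.
    return {"diff_pair_impedance_ohms": (85, 95), "min_trace_mils": 3.5}

def optimize_geometry(rules):
    # (b) In practice: high-dimensional search constrained by the rules.
    lo, hi = rules["diff_pair_impedance_ohms"]
    return {"target_impedance": (lo + hi) / 2,
            "trace_mils": max(4.0, rules["min_trace_mils"])}

def push_to_layout(design):
    # (c) In practice: a call into the layout tool's automation API.
    return f"routed diff pair at {design['target_impedance']} ohms"

design = optimize_geometry(retrieve_design_rules("PCIe diff pair rules"))
status = push_to_layout(design)
```

The agent's value is in orchestrating these steps end to end, retrying and re-planning when a step fails, rather than in any single call.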

If you believe in the promise of AI being able to do the above, many repetitive and mundane tasks, such as reading specs, performing calculations, and finishing off designs, can be planned and carried out by AI agents. This is the concept of co-piloting. Rather than replacing experienced engineers, it will make their jobs much more efficient and, to a large extent, allow the future-proofing of their designs.

However, much like the challenges currently faced by graduating computer science majors, this will likely make life more difficult for future SI/PI students. Given that most entry-level SI/PI tasks can be efficiently picked up by AI/ML, recent graduates may find it harder to gain entry-level experience doing work an AI agent cannot do. This is the main reason I spend so much time with CAEML preparing the next generation of students to be proficient in both SI/PI and AI/ML knowledge. It is up to educators to prepare students for these future challenges.
