The temptation is all too familiar. Weeks are spent justifying the need for an s-parameter model from a vendor, or days extracting a model of an interconnect — only to start over because a power outage glitched the company HPC or license server. Finally, after getting the long-awaited s-parameter file, it is run through the s-parameter checker and there are a few red points outside of the causality bounds.

The natural reaction: “It’s just a few points out of the whole file. It’s probably fine to use, right?”

This article explores that question and more broadly:

  • Causality in relation to signal and power integrity simulations,
  • How causality affects time-domain simulation fidelity,
  • How non-causal s-parameters can result from real measurements and simulations,
  • How to avoid non-causal data,
  • And the consequences of using a non-causal model in time-domain simulations.

First, a quick refresher on some background theory of causality.

What is Causality?

Plenty of excellent resources explain causality in the context of s-parameter models for signal and power integrity.1,2,3 Still, when measurement and modeling techniques that use s-parameters are taught, the concept often remains elusive. That’s partly because s-parameter blocks appear in schematics as boxes with ports — behavioral models that only require inputs to produce believable outputs.

Another reason may be that it’s often overlooked that s-parameters live in the frequency domain. To use them in transient simulations, the simulator must translate them into the time domain, and it’s this translation that makes causality a critical figure of merit.

At its core, causality means the output cannot respond before the input arrives in time. The cause must precede the effect. If a signal enters one end of a 1 ns transmission line, the far end should not show any response until 1 ns later.

The idea seems obvious. After all, physical systems (like the interconnect between a driver and receiver) obey the laws of physics. Shouldn’t their models naturally be causal?

Sure — if one had infinite, continuous data.

In a standard Signals and Systems course, linear time-invariant (LTI) systems are studied because they follow predictable rules and closely model many physical systems. Circuits, filters, and communication channels are typically LTI or can be approximated as such over a limited range of operating conditions. Crucially, the mathematics of LTI systems are nicely structured: convolution in time becomes multiplication in frequency, eigenfunctions are sinusoids, and powerful tools like the Fourier and Laplace transforms rely on the assumptions of linearity and time invariance. And one property every physical LTI system must also exhibit is (you guessed it) causality.
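That time–frequency duality is easy to verify numerically. Below is a minimal numpy sketch (the signals are arbitrary random sequences, not from any model in this article) showing that linear convolution in time matches multiplication of zero-padded FFTs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)   # arbitrary input signal
h = rng.standard_normal(32)   # arbitrary impulse response

y_time = np.convolve(x, h)    # linear convolution in time (length 63)

n = 64                        # zero-pad so circular convolution equals linear
y_freq = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real[:y_time.size]

print(np.allclose(y_time, y_freq))  # True
```

The zero-padding matters: without it, the FFT product corresponds to circular, not linear, convolution.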

These transforms are the mechanism by which simulation tools translate between the frequency and time domains, and therefore the means by which frequency-defined s-parameters are used in time-domain simulations. They only produce physically meaningful results when the underlying model is LTI and, by extension, causal.

Put more rigorously, a system is causal if its impulse response is zero for all times before the impulse is applied. The impulse response, h(t), is the output of a system when the input is a Dirac delta function, δ(t), applied at t = 0. To test for causality, one can examine h(t): the system is causal if h(t) = 0 for all t < 0.
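In sampled form, that test is nearly a one-liner. The sketch below uses an arbitrary time grid and decaying-exponential responses purely for illustration:

```python
import numpy as np

def is_strictly_causal(t, h, tol=1e-12):
    """True if the sampled impulse response is (numerically) zero for all t < 0."""
    return bool(np.all(np.abs(h[t < 0]) < tol))

t = np.linspace(-1.0, 5.0, 601)                           # time grid spanning t = 0
h = np.where(t >= 0.0, np.exp(-t), 0.0)                   # causal decaying exponential
h_shifted = np.where(t >= -0.5, np.exp(-(t + 0.5)), 0.0)  # responds before the impulse

print(is_strictly_causal(t, h))          # True
print(is_strictly_causal(t, h_shifted))  # False
```

Real s-parameter data never yields an exactly-zero pre-impulse region, which is why practical checkers use thresholds rather than this strict test, as discussed later.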

How Does Sampling in the Frequency Domain Affect Time-Domain Simulations?

System characterization is typically performed in the frequency domain. For example, a vector network analyzer (VNA) applies a sine wave stimulus at one port and measures the response at all ports. By sweeping the frequency of the input signal and recording the resulting behavior, the VNA builds a picture of how the system responds across a defined frequency range. These measurements are commonly saved in an s-parameter file, which can then be directly used by most circuit simulators as a behavioral model of the system.

Hybrid solvers like Keysight’s SIPro in ADS and full 3D solvers like Keysight’s RFPro in ADS operate similarly: they solve Maxwell’s equations (or simplified versions of them) in the frequency domain and return a dataset that characterizes system behavior across frequency. The result can also be exported as an s-parameter file.

With such a model, any arbitrary time-domain input — like a pulse, step, clock, FM, or digital bitstream — can be applied. The simulator will compute the corresponding output using inverse Fourier techniques.

But if the real world operates in the time-domain, why bother with the frequency domain to begin with? As signal integrity evangelist Eric Bogatin puts it: “We go to another domain when it’s easier to get the answer there.”2

The frequency domain is preferred both in hardware and computation because it’s often more efficient and practical. Time-domain measurements require a very high bandwidth and sampling rate. A time-domain reflectometer (TDR) or high-speed oscilloscope must capture very sharp edges with high dynamic range, which becomes expensive and technically challenging at multi-gigahertz speeds. In contrast, frequency domain measurements can average over many cycles of sine waves at each point, reducing noise and improving dynamic range. It’s also much easier to precisely control the frequency content, impedance, and power level with a VNA — something far more difficult in the time domain.

Even full-wave EM solvers like RFPro and Ansys HFSS solve systems in the frequency domain. Although Maxwell’s equations are inherently time-dependent, solving them under steady-state sinusoidal excitation transforms the problem into algebraic equations. Time derivatives become multiplications by jω, allowing the solver to work frequency-by-frequency. This approach is often more stable and computationally efficient than solving directly in the time domain, especially when high-frequency accuracy is needed.

However, the system models created in the frequency domain by these methods are inherently discrete and bandwidth-limited. They describe the system only at sampled frequency points — meaning the true behavior across the full continuum of frequencies is approximated. The spacing and extent of these samples directly affect accuracy in both domains.

To simulate time-domain behavior, the simulator must reconstruct a time-domain response from this discrete frequency data. And like any conversion from a finite set, something is always at risk of being lost in translation.

If the frequency sampling is too sparse, bandwidth too narrow, or phase response inconsistent, the resulting time-domain signal can deviate from physical reality.

This is where phenomena like ringing, overshoot, and the Gibbs phenomenon can appear — not as properties of the physical system, but as artifacts of poor frequency-domain resolution or truncation.

When frequency sampling is limited — especially in systems with sharp transitions like steps or square waves — the reconstructed time-domain waveform may exhibit ripples near those edges. This isn’t due to reflections or real energy in the system, but a mathematical artifact known as the Gibbs phenomenon. It arises when discontinuities are approximated by a finite number of sinusoidal components, as in a truncated Fourier series. It isn’t inherently non-causal or unstable, but highlights why proper bandwidth, frequency resolution, and windowing are critical when preparing s-parameter models for time-domain analysis.
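The effect is easy to reproduce: approximate a ±1 square wave with a finite number of odd harmonics and watch the edge overshoot persist at roughly 9% of the total jump (about 0.18 for a ±1 wave) no matter how many terms are added. (A standalone sketch, unrelated to any s-parameter dataset.)

```python
import numpy as np

def square_wave_partial_sum(t, n_harmonics):
    """Truncated Fourier series of a +/-1 square wave (odd harmonics only)."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):
        s += (4 / np.pi) * np.sin(2 * np.pi * k * t) / k
    return s

t = np.linspace(0, 1, 20001)
for n in (5, 20, 100):
    overshoot = square_wave_partial_sum(t, n).max() - 1.0
    print(f"{n:3d} harmonics: overshoot = {overshoot:.3f}")
```

Adding harmonics narrows the ripple and pushes it closer to the edge, but the peak overshoot converges to the Gibbs constant rather than to zero; windowing, not more terms, is what suppresses it.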

In other words:

  • Causality ensures the model behaves in accordance with physical laws (no output before input)
  • Sufficient bandwidth ensures the model can resolve fast signal transitions with minimal distortion
  • Together, they result in more accurate, physically realistic time-domain behavior

What are the Consequences of Using a “Non-Causal” Model?

To illustrate how a non-causal s-parameter model can arise, I set up a simple channel simulation experiment in Keysight’s Advanced Design System (ADS). I used three representations of the same system — a 1-in. 50 Ω lossy transmission line:

  1. An ideal transmission line component (TLINP), with the default parameters producing a line delay of 123 ps
  2. A causal s-parameter model, created with a dense frequency sweep of the ideal transmission line component: 1000 linear samples from 1 MHz to 35 GHz
  3. A non-causal s-parameter model, created with a sparse frequency sweep of the ideal transmission line component: 5 linear samples from 1 MHz to 35 GHz

The simulation schematic is shown in Figure 1.

Figure 1. Causal and non-causal model generation setup in Keysight ADS.

The results of the under-sampled and adequately sampled s-parameter simulations are shown below in Figure 2. The 5 under-sampled points are marked with circles on the red plot, with linear interpolation connecting them. By under-sampling, many of the dynamic characteristics of the transmission line are excluded, as made evident by the deviation of the under-sampled plot from the well-sampled one.

Figure 2. S-parameter simulation results of the causal and non-causal model generation in ADS.

To verify that the models are indeed causal and non-causal as intended, they were checked in the ADS S-Parameter Toolkit. The causality results of the causal model are shown in Figure 3 and the non-causal results in Figure 4. When a frequency point fails the causality check, the failing s-parameters in the matrix selector are colored red and the failing point is plotted in the real and imaginary graphs in the viewer. Note that the non-causal points fall outside of the green limit lines.

Figure 3. Causality check of the causal s-parameter file in the S-Parameter Toolkit.


Figure 4. Causality check of the non-causal s-parameter file in the S-Parameter Toolkit.

With the ideal model, a causal s-parameter model, and a non-causal s-parameter model in hand, the three can be compared in the time domain. Doing so essentially analyzes the response of a system to a given input, where each model’s transfer function determines how the input is transformed into the output.

The simulation applies the same pulses to the three versions of the system model, as shown in the schematic in Figure 5. The pulses each have 10 ps rise and fall times and 1 ps, 10 ps, and 300 ps widths, which are defined in, but not shown on, the SRC1 piecewise-linear voltage source.

Figure 5. Ideal, non-causal, and causal model transient simulation schematic.

The transient simulation results in Figure 6 show that the ideal model and causal s-parameter model behave identically: the output is the same as the input, but delayed by the transmission line’s base delay of 123 ps. Properties such as delay, shape preservation, and the magnitudes of the signals remain consistent with the input. This is exactly what one would expect from a properly terminated, ideal transmission line.

Figure 6. Transient simulation of ideal, non-causal, and causal models.

The non-causal s-parameter model behaves quite differently. Without the ideal case for reference, at first glance it may not be obvious there’s a problem. The non-causal model output has roughly the same shape as the input, there’s some delay from the input, and there’s some ringing. Someone unfamiliar with the model’s limitations might assume the ringing reflects imperfections in the physical system.

A closer look reveals the issue, though. The ringing begins as soon as the input pulse starts to rise, implying an instantaneous output response, which is physically impossible. More importantly, the rising edge of the output arrives with only around 10 ps of delay instead of the expected 123 ps.

Looking back at the s-parameter phase responses, it’s clear that the non-causal model is missing much of the phase information. The loss of phase fidelity is a key contributor to the erroneous, instantaneous-looking output. It’s the phase response that encodes the system’s delay and dispersion characteristics. Without accurate phase data, the reconstructed time-domain response cannot respect causality.
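To see how sparse sampling destroys the delay information, consider a simplified sketch: assume a lossless line with the same 123 ps delay, so S21(f) = exp(−j2πf·td), and recover the delay from the unwrapped phase slope. (The lossless assumption and the two sample grids are illustrative; this is not the extracted ADS model.)

```python
import numpy as np

td = 123e-12  # line delay; S21 of an ideal lossless delay line is exp(-j*2*pi*f*td)
f_dense = np.linspace(1e6, 35e9, 1000)
f_sparse = np.linspace(1e6, 35e9, 5)

s21_dense = np.exp(-1j * 2 * np.pi * f_dense * td)
s21_sparse = np.exp(-1j * 2 * np.pi * f_sparse * td)

# The phase wraps every 1/td (about 8.1 GHz). A ~35 MHz step resolves each
# wrap; an 8.75 GHz step does not, so unwrapping the sparse phase misses them.
delay_dense = -np.unwrap(np.angle(s21_dense))[-1] / (2 * np.pi * f_dense[-1])
delay_sparse = -np.unwrap(np.angle(s21_sparse))[-1] / (2 * np.pi * f_sparse[-1])
print(f"recovered delay, 1000 points: {delay_dense * 1e12:.1f} ps")
print(f"recovered delay,    5 points: {delay_sparse * 1e12:.1f} ps")
```

Dense sampling recovers the full 123 ps; the 5-point sweep implies a delay of only about 9 ps, consistent with the ~10 ps delay seen in the non-causal transient results.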

From some initial experiments with this setup, I extracted the non-causal s-parameter model to 20 GHz with 5 samples and got yet another answer in the transient simulation shown in Figure 7.

Figure 7. Transient simulation results with 20 GHz causal and non-causal models.

The 20 GHz non-causal model has more delay than the 35 GHz model and more ringing. The output also begins to change as soon as the input changes. This further demonstrates that one cannot predict results with an inaccurate model.

What About an “Almost” Causal Model?

After some trial and error, it was found that 99 linearly spaced points from 1 MHz to 35 GHz produced an s-parameter model that just barely failed the causality test at the lowest threshold setting of 0.01. However, all points passed when the threshold was relaxed to 0.02. This 99-sample-point model will be referred to as the “almost” causal model.

Figure 8. Causality results with 99 linear samples.

Figure 9 shows the transient results with the ideal, causal, and almost causal models superimposed on one another to emphasize the match.

Figure 9. 500 ps pulse response to the ideal, almost causal, and causal models.

The output from the almost causal model shows minor ringing at the rising and falling edges. On the rising edge, there’s an overshoot of about 1.65% above the ideal model’s steady-state value (just under 1 V). This is a small discrepancy that likely wouldn’t impact most analyses, but it’s still measurable.

This is just one case, and the ideal model is a reference to assess the error. But when modeling an unknown system, how does one know the results are accurate? Or more realistically, accurate enough?

Here’s another interesting find: when the sample count is increased by one, to 100 sample points, the model (which can be referred to as the “barely causal” model) passes the causality test. However, the resulting transient waveform is nearly identical to the almost causal output.

Figure 10. Causality results with 100 linear samples.


Figure 11. 500 ps pulse response to the ideal, barely causal, and causal models.

What does this mean? Perhaps this model is technically causal: it passes at the 0.01 threshold, but might not pass at a stricter threshold like 0.009. This raises the question: What does it really mean for a model to “pass” causality?

One takeaway here is that the causality test doesn’t just validate physics — it also provides a practical measure of accuracy. A model that passes causality (at a reasonable threshold) likely contains enough frequency information to produce a trustworthy time-domain response.

What Does a Causality Test Actually Check?

When causality is tested in ADS or another simulator, what is actually being checked?

At a high level, the goal is to verify whether the system violates physical causality. For s-parameter models, this check can be done in either the frequency or time domain, and different tools use different methods.

Some simulators, including ADS, use frequency domain techniques based on the Kramers-Kronig relations. These relations connect the real and imaginary parts of the s-parameter response, making sure they’re mathematically consistent with a causal system (which is why the S-Parameter Toolkit displays causality bounds on the real and imaginary plots across frequency). If the measured or simulated data doesn’t obey these integral relations within tolerance, the system may exhibit non-physical behavior.
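A discrete analog of that interdependence can be sketched in a few lines: for a causal impulse response, the even part of h, recovered from Re{H} alone, is enough to rebuild all of h and hence Im{H}. (The exponential h[n] below is illustrative; this is the idea behind the check, not the Toolkit’s implementation.)

```python
import numpy as np

n = 256
h = np.exp(-0.3 * np.arange(n))   # causal: effectively zero well before n/2
H = np.fft.fft(h)

# ifft of Re{H} gives the even part of h; for a causal h, that even part
# is enough to rebuild the full response, and with it Im{H}.
h_even = np.fft.ifft(H.real).real
h_rebuilt = np.zeros(n)
h_rebuilt[0] = h_even[0]
h_rebuilt[1:n // 2] = 2 * h_even[1:n // 2]
h_rebuilt[n // 2] = h_even[n // 2]
H_rebuilt = np.fft.fft(h_rebuilt)

# Im{H} comes back even though only Re{H} was used
print(np.max(np.abs(H_rebuilt.imag - H.imag)))
```

For non-causal data, the imaginary part rebuilt from the real part would disagree with the measured one; in spirit, that disagreement is what the causality bounds on the real and imaginary plots represent.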

However, for practical insight (especially when visualizing the issue), the time-domain methods will be explored. One common approach is to convert the s-parameters to a time-domain impulse response using an inverse Fourier transform, then check whether any significant energy exists before time zero (or before the base delay, when the output response is delayed as in the case of a transmission line). The energy ratio (pre-t = 0 energy divided by the total energy) becomes a practical metric to quantify causality violations.

This ratio is compared to a user-defined threshold, such as 0.01. If the ratio exceeds the threshold, the model is flagged as non-causal; if not, it's considered causal within acceptable bounds.
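A sketch of that metric is below. The uniform, DC-anchored frequency grid and the synthetic lossy-line S21 are assumptions for illustration, not the article’s ADS data; discarding the phase stands in for a badly non-causal dataset.

```python
import numpy as np

def pre_zero_energy_ratio(s21):
    """Fraction of impulse-response energy landing at t < 0, given one-sided
    S21 samples on a uniform frequency grid that starts at DC."""
    # Mirror into a conjugate-symmetric two-sided spectrum so h(t) is real
    spectrum = np.concatenate([s21, np.conj(s21[-2:0:-1])])
    h = np.fft.ifft(spectrum).real
    h_neg = h[spectrum.size // 2:]  # second half of the iFFT aliases to t < 0
    return np.sum(h_neg**2) / np.sum(h**2)

f = np.linspace(0, 35e9, 1000)
td = 123e-12
s21 = np.exp(-f / 8e9) * np.exp(-1j * 2 * np.pi * f * td)  # lossy, delayed line

threshold = 0.01
for label, data in [("with phase", s21), ("phase discarded", np.abs(s21))]:
    r = pre_zero_energy_ratio(data)
    print(f"{label}: ratio = {r:.4f} ({'pass' if r < threshold else 'fail'})")
```

The phase-stripped dataset fails badly: with no phase, nothing pushes the response to positive time, so a large fraction of the energy lands before t = 0.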

It's important to note that this isn't a strict pass/fail test. The Kramers-Kronig integrals yield bounds rather than exact values when applied to real-world data, which is discrete and bandwidth-limited. For ideal, continuous, and infinitely wideband input data, one would get a definitive yes/no answer for causality. But in practice, one deals with finite data, so the test results reflect a range of acceptable values, not a binary outcome.

Some tools allow the adjustment of the causality test threshold, which determines how much deviation from ideal behavior is tolerated. In time-domain-based methods, this might correspond to the ratio of energy before t = 0 to total energy. In frequency domain methods, it often defines the tolerance band around the expected Kramers-Kronig constrained response. Either way, the threshold provides a way to balance numerical precision against practical usability. It’s a tradeoff between numerical sensitivity and practical accuracy — just like convergence tolerance in a simulator.

To illustrate this, the through response (S21) of both the almost causal and causal models was converted to the time domain using ADS's ts() function. This provides an impulse-like view of the system's behavior and allows for the comparison of the pre-t = 0 activity directly. Note that a windowing function is typically applied with the iFFT to reduce Gibbs ringing, and ADS enables a Kaiser window by default. For demonstration purposes, windowing has been disabled here to emphasize the ringing. A deeper look at windowing techniques and best practices would make for a valuable future exploration.

Figure 12. Impulse responses of almost causal vs. causal models show pre-t = 0 activity.

Although both responses show non-zero amplitude before t = 0 (which in this case is at 123 ps accounting for the line delay), the almost causal model has slightly more activity in this region as seen by the little bits of red behind the blue trace.

To quantify the difference, I calculated the energy ratio before t = 0 by summing the square of the normalized impulse response samples in that time window for each model's response (since energy is proportional to the integral of the squared magnitude over time). The almost causal model had 1.001x the pre-t = 0 energy of the causal model — meaning it contained about 0.1% more energy in the non-causal region.

Now suppose that one would like to reduce the ripple and total energy leading up to t = 0. There's only a 0.1% difference between using 99 samples and 1000, so adding more samples isn't going to help much. But the time resolution of this converted impulse response is limited by the maximum frequency point in the dataset. Let's see what happens when the max frequency is increased, comparing the conversions of two s-parameter datasets of 1000 linear samples: one out to 35 GHz and the other out to 350 GHz.

Figure 13. Time-domain conversion of 1000-point s-parameters out to 35 GHz and 350 GHz.

The pre-t = 0 ripple has been significantly decreased. The pre-t = 0 energy in the 35 GHz case is 5x that of the 350 GHz case.

So what's the takeaway here? Model the system to the highest frequency imaginable? Not necessarily. It's already been observed that the 1000-sample, 35 GHz model is very accurate.

The takeaway is that it's not always about cranking up the sample count — after a certain point, increasing the number of frequency points yields diminishing returns. But bandwidth is a different story. The maximum frequency in an S-parameter dataset directly limits the time-domain resolution, especially near rapid transitions like t=0.

It was observed that increasing the max frequency from 35 GHz to 350 GHz — while keeping the number of samples constant — significantly reduced pre-t = 0 ringing. This demonstrates that bandwidth, not just sample count, governs time-domain fidelity.
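The underlying relation is simple: a two-sided spectrum extending to fmax reconstructs an impulse response sampled at dt = 1/(2·fmax), so the 10x bandwidth increase buys 10x finer time resolution near sharp transitions:

```python
# Two-sided bandwidth fmax sets the impulse-response sample spacing: dt = 1/(2*fmax)
for fmax in (35e9, 350e9):
    dt = 1.0 / (2.0 * fmax)
    print(f"fmax = {fmax / 1e9:5.0f} GHz -> dt = {dt * 1e12:5.2f} ps")
```

At 35 GHz the samples land roughly 14 ps apart, coarser than the 10 ps edges simulated earlier; at 350 GHz they land about 1.4 ps apart.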

That doesn't mean one needs to simulate to 350 GHz for every interconnect. The appropriate bandwidth depends on the sharpness of the transitions one wants to model, not some arbitrary upper limit. If spurious ringing or causality violations are seen, it may not be a frequency point density issue. It may be a bandwidth issue. Make sure the model extends far enough in frequency to capture the critical transitions.

Conclusion

A causality test is a practical indicator of whether a model contains enough frequency content to accurately represent the system's behavior in the time domain.

Even if the system being modeled is inherently causal, one can still end up with a non-causal s-parameter model due to under-sampling, bandwidth limitations, or interpolation artifacts. And while a non-causal model might look reasonable without a known reference, especially for small input signals or broad pulses, it can lead to inaccurate delay, artificial ringing, or even simulation instability in more sensitive applications.

Trustworthy time-domain simulation begins with a physically valid frequency-domain model, and that means it must be causal, or at least close enough to pass a meaningful causality check. Interpreting the pass/fail of that check may require some engineering judgment of the causality results.

The answer to the question, “can I use a model with some causality violations?” is the engineering standard: it depends.

If there are a handful of points just outside the causality limits at the strictest tolerance like the 99-point almost causal model, the model is very likely okay to use. If the results do not meet expectations, then those non-causal points may have to be addressed.

If a model fails a causality check:

  • Increase the number of frequency points — if possible, focus additional points around the frequencies where violations were flagged
  • Extend the bandwidth — especially if the signals have fast edges
  • Follow solver-specific best practices for frequency sweep resolution

Acknowledgements

Special thanks to Jan Vanhese, the R&D Director at Keysight, for providing expert feedback during the writing of this post. His insight helped refine and clarify the explanation of the causality check, particularly regarding the use of Kramers-Kronig relations.

REFERENCES

  1. S. Sercu, C. Kocuba, and J. Nadolny, “Causality demystified,” Samtec, in Proc. DesignCon, Jan. 2015, pp. 1–25. 
  2. Eric Bogatin, Signal and Power Integrity – Simplified, 3rd ed., Pearson, 2018.
  3. J. J. Ellison, “How to Verify Signal Integrity Causality in S‑parameters,” Altium Resources, Altium Ltd, August 2020.
  4. Keysight Technologies, Advanced Design System (ADS) Documentation.
  5. Keysight Technologies, "Verifying S-Parameter Data," Keysight ADS Built-In Documentation.
  6. Keysight Technologies, Keysight ADS SIPro.
  7. Keysight Technologies, Keysight ADS RFPro.