Satellite designers have crossed a threshold. We are no longer putting “a bit more DSP” in orbit; we are fielding on-board compute that rivals terrestrial edge servers, running AI inference, adaptive beamforming, dynamic spectrum management and real-time analytics at hundreds of TOPS. At that point, the hard limit on mission capability is not the processor data sheet — it is whether the power architecture can deliver sub-volt rails at hundreds of amperes, with microsecond-class transient performance, inside a small, thermally constrained, radiation-exposed enclosure.
Traditional space power topologies were not designed for this regime. They assumed modest current density, slow load steps and static operating profiles. AI workloads violate all three assumptions at once. The result is straightforward: with conventional bus and point-of-load schemes, power delivery — not compute capability — often becomes the gating function for new space missions targeting AI-class processing.
The New Reality of On-orbit Compute
Modern LEO payloads are expected to perform on-board inference, compression, routing, and autonomous control that were once strictly ground functions. Doing that efficiently pushes designers toward ultra-deep-submicron FPGAs, adaptive compute acceleration platforms, and custom ASICs whose core voltages continue to drop even as current demand grows.
The Spacechips AI1 transponder provides a concrete stress test for any power architecture. It is a radiation-tolerant, AI-enabled processor card delivering up to 133 TOPS of in-orbit AI throughput, with signal processing and machine learning tasks performed directly on-orbit rather than on the ground. Its main AI rail operates around 0.8 V with dynamic load currents on the order of 130 to 150 A, precisely the regime where conventional bus-plus-POL infrastructures become inefficient and difficult to cool.
The load is not benign. AI accelerators swing between idle, inference, and burst modes, generating fast, discontinuous current steps. Voltage droop, ringing, or noise in that environment translates directly into de-rated clocks, reduced margins, and thrown-away compute capacity. With local decoupling limited by volume, mass, and MLCC reliability, the upstream power delivery network must do the heavy lifting on transient response.
System and Environmental Constraints
In new space, the usual design knobs are locked together. To control launch costs, platforms get smaller, and smaller satellites harvest less power, even as workloads demand more. Every watt wasted in the power chain is a watt lost to payload function, and operators are increasingly unwilling to donate that margin.
Thermal constraints impose high current density on the power delivery network (PDN). I²R losses in planes, vias, and wire harnesses rise with the square of current; copper that was acceptable at a few tens of amps becomes a significant heat source at hundreds. There is no forced air to save you. All waste heat must conduct through the structure and radiate away, so PDN loss appears as mechanical and mass penalties with larger radiator panels, heavier conduction paths, and more elaborate packaging.
Radiation tightens the constraints. Total ionizing dose-induced parameter shifts eat into already tight voltage margins on sub-1 V rails, while single-event effects can coincide with load transients to produce worst-case conditions exactly when the system is most stressed. Power components must tolerate radiation exposure typical of LEO and MEO missions without dragging in the cost, mass, and schedule overhead associated with fully rad-hard custom solutions.
Why Conventional Space Power Architectures Are Inefficient at High Currents
Most heritage designs follow a familiar pattern: an isolated DC-DC converter steps down the spacecraft bus; a low-voltage distribution plane fans out; local PWM point-of-load regulators generate final core rails near the load. At modest current and voltage, this is workable. At 0.8 V and >100 A it becomes self-defeating.
Distribution losses scale with the square of current. Pushing hundreds of amperes across even carefully optimized planes produces non-trivial I²R heating and several tens of millivolts of droop that designers cannot afford to spend.
Hard-switched POLs inject rich harmonic content that demands aggressive filtering — more inductors, more capacitance, and more copper — which, in turn, raises impedance and lengthens current loops.
Classical control loops struggle with deep, fast load steps. Designers are forced to blunt the problem with large, low-ESL capacitor banks, trading away size, weight, and power while still falling short of ideal transient behavior.
At some point, incremental fixes stop paying off: the architecture itself is poorly suited to the load.
Current Multiplication and Factorized Power Architecture
Spacechips decided to replace the old topology rather than try to optimize it. They partnered with Vicor to implement a current-multiplication approach using the Vicor Factorized Power Architecture (FPA™) and radiation-tolerant modules specifically targeted at LEO and MEO missions, as shown in Figure 1.
Fig. 1 Vicor Factorized Power Architecture.
Current multiplication separates the functions of DC-DC conversion into independent power modules. The Vicor rad-tolerant topology splits the power conversion task into three sections: voltage transformation and isolation (BCM), load voltage regulation (PRM) and power delivery (VTM). This configuration reduces the last-centimeter copper losses and allows layout engineers more flexibility in placing modules where space is available.
The Vicor FPA deliberately splits the power conversion task into two distinct functions, regulation and transformation:
- A PRM™ regulator that generates a well-controlled intermediate voltage from an isolated BCM® bus converter. In the AI1 case, a rad-tolerant BCM steps the 100 V bus down to one-third (33 V) and delivers several hundred watts at high efficiency; a PRM then regulates that rail to an optimal intermediate bus for the load.
- A VTM™ current multiplier, located physically close to the processor, that converts the intermediate rail directly to the 0.8 V AI core while multiplying current by 32.
For a given power level, increasing the distribution voltage reduces current in proportion to the voltage ratio and therefore reduces I²R loss by the square of that ratio. As a simple example, delivering the same power on a 100 V bus instead of a 12 V bus reduces current by about 8.3× and idealized copper losses by roughly 69×, assuming the same conductor resistance.
In practice, spacecraft power systems commonly use a 28 V bus, and higher-power new space designs increasingly adopt elevated bus voltages such as 100 V to further reduce distribution losses. Even moving from 12 to 28 V cuts distribution current by a bit more than 2× and I²R loss by more than 5×, while the step from 28 to 100 V yields a further ~3.6× reduction in current and ~13× reduction in resistive loss for the same power. That is the difference between a hot, marginal backplane and a tractable thermal design.
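The scaling argument above can be made concrete with a short calculation. The 300 W load and 10 mΩ conductor resistance here are illustrative assumptions, not measured values from any flight design:

```python
# Idealized distribution-loss scaling for a fixed power level and a fixed
# conductor resistance. The 300 W load and 10 mOhm path are illustrative
# assumptions chosen to make the bus-voltage comparison concrete.

def distribution_loss(power_w: float, bus_v: float, r_ohms: float):
    """Return (current_a, i2r_loss_w, voltage_drop_v) for a DC distribution bus."""
    i = power_w / bus_v      # I = P / V
    loss = i * i * r_ohms    # P_loss = I^2 * R
    drop = i * r_ohms        # V_drop = I * R
    return i, loss, drop

P, R = 300.0, 0.010  # 300 W load, 10 mOhm effective path (assumed)
for v in (12.0, 28.0, 100.0):
    i, loss, drop = distribution_loss(P, v, R)
    print(f"{v:5.0f} V bus: {i:6.2f} A, {loss:6.3f} W lost, {drop * 1e3:6.1f} mV drop")
```

Running the loop reproduces the ratios in the text: the 28 V bus cuts loss by (28/12)² ≈ 5.4× relative to 12 V, and the 100 V bus cuts it by a further (100/28)² ≈ 12.8×.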
100-to-33-to-0.8 V Example
To make the architectural impact tangible, consider a representative case consistent with the AI1 power path: a 100 V spacecraft bus feeding two AI rails, one at 0.8 V and the other at 3.3 V, each delivered by a 150 W VTM, for a total of 300 W of on-orbit AI compute (see Figure 2). With voltage transformation designed in a very compact form factor and placed close to the load, designers gain added flexibility in component placement and values.

Fig. 2 Energy storage in a Factorized Power Architecture.
The 150 W VTM at 0.8 V supplies about 187 A; together, the two rails represent 300 W of AI-class power near the die, even before accounting for headroom and conversion loss. One cannot route that current around a backplane; the point of the architecture is to confine those hundreds of amperes to a very short, explicitly engineered region near the processors.
In a more conventional bus-plus-PoL approach with, say, a 12 V intermediate rail feeding two local multiphase regulators, that same 300 W of AI implies about 25 A on the 12 V plane. With realistic copper and via resistance on a dense RF-plus-digital card, a 10 mΩ effective distribution resistance is not an optimistic estimate. At 25 A, that path dissipates about 6.25 W in I²R loss and drops 0.25 V, more than 2% of the 12 V rail, before the current even arrives at the PoLs. Scaling to 400 W or beyond for future variants makes the loss and droop climb quadratically.
The AI1 chain looks different. A rad-tolerant BCM takes the 100 V bus and delivers up to 400 W at an intermediate voltage of one-third the input (33 V), with enough headroom to drive two PRM + VTM rails at 150 W each. Each PRM regulates its own 33 V-class intermediate node; each VTM then converts to the appropriate point-of-load voltage. At 300 W total, the 33 V distribution bus carries only about 9 A instead of 25 A. With the same 10 mΩ effective path resistance, I²R loss falls to roughly 0.8 W and the DC drop to about 0.09 V, less than 0.3% of the intermediate voltage. That is roughly a 7.5× reduction in distribution loss and a meaningful recovery of voltage margin, gained purely by moving the heavy lifting to a higher-voltage, lower-current domain.
Downstream, each VTM trades its intermediate rail back down to the point-of-load voltage, multiplying current by 32× on the 0.8 V rail or 8× on the 3.3 V rail, so the high-current loops (approximately 187 A on the 32× rail) are confined to a few centimeters of copper, via fields, and decoupling specifically designed for that density.
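The legacy-versus-FPA comparison above reduces to a few lines of arithmetic. The 300 W load and 10 mΩ effective resistance follow the article's illustrative example rather than measured AI1 data:

```python
# Sketch of the 12 V-legacy vs. 33 V-FPA distribution comparison. The 300 W
# load and 10 mOhm effective plane/via resistance are the article's
# illustrative figures, not measured AI1 values.

R_DIST = 0.010  # ohms, assumed effective distribution resistance

def bus_stats(power_w, bus_v, r=R_DIST):
    """Current, I^2R loss, and DC drop for a given distribution voltage."""
    i = power_w / bus_v
    return {"current_A": i, "loss_W": i ** 2 * r, "drop_V": i * r}

legacy = bus_stats(300, 12)  # conventional 12 V intermediate plane
fpa = bus_stats(300, 33)     # FPA 33 V intermediate bus

print(f"12 V plane: {legacy['current_A']:.1f} A, "
      f"{legacy['loss_W']:.2f} W lost, {legacy['drop_V'] * 1e3:.0f} mV drop")
print(f"33 V bus:   {fpa['current_A']:.1f} A, "
      f"{fpa['loss_W']:.2f} W lost, {fpa['drop_V'] * 1e3:.0f} mV drop")
print(f"loss reduction: {legacy['loss_W'] / fpa['loss_W']:.1f}x")

# At the point of load, a 150 W VTM output at 0.8 V carries I = P / V:
print(f"0.8 V rail current: {150 / 0.8:.1f} A")  # ~187.5 A, confined near the die
```

The loss reduction works out to (33/12)² ≈ 7.6×, independent of the assumed resistance, because both paths see the same copper.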
The net result is that the same 300 W AI workload either taxes a hot, poorly regulated 12 V plane in a legacy architecture or rides on a relatively cool 33 V intermediate bus feeding two localized current-multiplier stages in the FPA implementation. The AI1 transponder takes the latter route and, in doing so, recovers both thermal and voltage headroom that would otherwise constrain on-orbit compute.
Electrical Advantages for AI-class Loads
Placing the current-multiplying stage centimeters from the FPGA or ACAP collapses the high-current path. The intervening copper is short and wide, and the effective output impedance at the die is dramatically lower than in any equivalent 12 V distribution-plus-PoL scheme. For single-event effect mitigation, internally redundant power trains integrate and automate monitoring and control of the power rails, transparently relieving the design engineer from implementing this logic in external circuitry, as shown in Figure 3.
Fig. 3 Internally redundant power trains integrate and automate monitoring and control of the power rails.
Resonant, soft-switched conversion in the FPA modules further reduces conducted and radiated EMI compared to hard-switched alternatives. Lower harmonic content means lighter filters, shorter loops and better design margins against DO-160-style conducted emissions constraints when applicable to new space platforms.
Crucially, the transformation stage does not rely on a slow global feedback loop to manage transient behavior. Its fixed-ratio operation and very low intrinsic impedance allow it to source abrupt current steps in microseconds instead of tens or hundreds of microseconds, keeping a 0.8 V rail inside a narrow tolerance window even under worst-case AI bursts.
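A back-of-the-envelope droop estimate shows why that low intrinsic impedance matters on a sub-volt rail. The 100 A burst step, the ±3% tolerance window, and the impedance values below are assumptions for illustration, not Vicor module specifications:

```python
# First-order transient droop: V_droop ~ dI * Z_out. The step size, tolerance
# window, and impedance values are illustrative assumptions, not data-sheet
# figures for any specific module.

V_RAIL = 0.8                      # volts, AI core rail
BUDGET_MV = V_RAIL * 0.03 * 1e3   # assumed +/-3% window -> 24 mV droop budget

def droop_mv(i_step_a, z_out_ohm):
    """Droop in millivolts for a load step into an effective source impedance."""
    return i_step_a * z_out_ohm * 1e3

for z_mohm in (1.0, 0.5, 0.2):        # assumed effective PDN impedances
    d = droop_mv(100, z_mohm * 1e-3)  # assumed 100 A inference burst
    verdict = "within" if d <= BUDGET_MV else "exceeds"
    print(f"Z_out = {z_mohm} mOhm: {d:.0f} mV droop ({verdict} {BUDGET_MV:.0f} mV budget)")
```

Under these assumptions, even 1 mΩ of effective source impedance blows through a 24 mV budget on a 100 A step, which is why collapsing the high-current path matters more than any amount of downstream filtering.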
There is a second-order benefit: capacitance multiplication. A current-multiplying transformer effectively scales the apparent load-side capacitance seen by the source by approximately the square of the turns ratio. In practice, that means a modest bank of carefully placed MLCCs at the AI rail behaves like a much larger reservoir as seen by the upstream PRM, allowing designers to meet transient requirements without covering the card in fragile, mechanically risky large-case ceramics.
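The K² scaling above is easy to quantify. The 32× ratio comes from the text; the 470 µF MLCC bank is an assumed illustrative value, not the actual AI1 decoupling:

```python
# Capacitance multiplication through a fixed-ratio current multiplier: with an
# ideal transformer model, load-side capacitance appears at the input scaled
# by roughly K^2. K = 32 is from the text; the 470 uF bank is an assumption.

K = 32            # VTM current-multiplication ratio
c_load = 470e-6   # assumed MLCC bank at the 0.8 V AI rail, farads

c_reflected = c_load * K ** 2  # apparent capacitance seen by the upstream PRM
print(f"{c_load * 1e6:.0f} uF at the rail looks like ~{c_reflected:.2f} F upstream")
```

With K = 32, the multiplier is 1024, so a few hundred microfarads at the die presents roughly half a farad to the regulation stage, which is the mechanism that lets the design avoid large-case ceramics.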
Radiation Tolerance at the Architecture Level
Current multiplication also changes how designers approach radiation in the PDN. Rad-tolerant Vicor FPA modules for new space are qualified to levels typical of LEO and MEO missions (tens of krad total ionizing dose) and use redundant powertrains and SEE-tolerant control to maintain operation under single-event stress. But the more interesting aspect is architectural.
By keeping tight voltage regulation at a higher, more forgiving intermediate voltage and handling high current via a fixed-ratio transformer near the load, the design reduces the number of nodes where a radiation-induced parameter shift can immediately threaten a 0.8 V margin. The regulation stage sits at tens of volts with ample headroom; its drift and SEE behavior can be managed with conventional de-rating and system-level mitigation. The transformation stage, lacking a complex fast feedback loop, is inherently less susceptible to control-loop upset.
The AI1 implementation adds another layer: radiation-tolerant modules incorporate dual powertrains, allowing the load to be driven at full power from either side, with autonomous fault detection and switchover. Instead of hardening every discrete component in a sprawling PDN, Spacechips can treat radiation as a system-level problem with well-characterized, modular building blocks.
For many LEO and MEO missions, this rad-tolerant, modular approach avoids the cost, mass, and procurement complexity of fully rad-hard parts while still satisfying mission reliability objectives. It is a pragmatic middle path between consumer-grade COTS and bespoke rad-hard ASICs.
Raising the Ceiling on On-orbit AI
As satellites adopt higher levels of on-board autonomy, the practical limit on useful AI is increasingly set by the ability of the power architecture to deliver clean, tightly regulated, high-current rails under worst-case transients in a radiation environment and within tight size, weight, and power budgets. By moving from legacy low-voltage distribution to current multiplication with Factorized Power Architecture, Spacechips and Vicor effectively raised that ceiling. The AI1 transponder is not just another RF board. It demonstrates that the bottleneck has shifted from computing capability to optimized power delivery in new space.
If you design the power delivery network as if the processor were the scarce resource, you will leave compute capability on the table. If you design the processor board as if power density, current distribution, and radiation behavior are the scarce resources, you can put server-class AI in a small LEO satellite and still close the thermal and reliability budgets. That is the architectural shift this generation of missions is forcing, and current multiplication is one way to achieve it.