Conventional advice says never mix battery brands. That guidance is well-founded for series strings, but there’s surprisingly little data on purely parallel configurations.
I built a 12 V, 500 Ah LiFePO₄ battery bank (1S5P) using mixed-brand cells and instrumented it for continuous monitoring over 73+ days, including high-frequency voltage sampling. The goal was to see whether cell-level differences actually manifest over time in a parallel topology.
What the data shows:
- No progressive voltage divergence across the observation period
- Voltage spread remained within ~10–15 mV
- Measured Peukert exponent ≈ 1.00 (a back-of-envelope fit is sketched below)
- Thermal effects were small relative to instrumentation noise
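For context, here is a minimal sketch of how a Peukert exponent can be back-calculated from two constant-current discharge runs. The currents and runtimes are made-up illustration values, not numbers from the dataset; the point is only that k ≈ 1.00 means delivered capacity is essentially rate-independent.

```
import math

# Peukert's relation: I^k * t = constant, so for two constant-current
# discharge runs (I1, t1) and (I2, t2):
#   k = ln(t1 / t2) / ln(I2 / I1)
# Hypothetical illustration values, NOT measurements from this bank:
I1, t1 = 25.0, 20.0    # amps, hours (~0.05C on a 500 Ah bank)
I2, t2 = 100.0, 5.0    # amps, hours (~0.2C)

k = math.log(t1 / t2) / math.log(I2 / I1)
print(f"Peukert exponent k ≈ {k:.2f}")  # 1.00 when capacity is rate-independent
```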
In practice, the parallel architecture appears to force electrical convergence when interconnect resistance is low. I’ve been referring to this as “architectural immunity” — the idea that topology can dominate cell-level mismatch under specific conditions.
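To illustrate the mechanism (a toy calculation under assumed values, not the study's model or measured parameters): in a hard-paralleled bank, a battery whose open-circuit voltage sits slightly high simply sources more current until the terminals agree, so mismatch shows up as current sharing rather than as a measurable voltage difference. The only voltage divergence that can persist at the terminals is the drop across the interconnects.

```
# Toy 2-battery parallel model: each battery as an ideal source behind an
# internal resistance, joined by bus-bar links. All values are assumptions
# for illustration, not measurements from the bank.
V1, R1 = 13.32, 0.020   # battery 1: open-circuit volts, internal ohms
V2, R2 = 13.28, 0.025   # battery 2: slightly lower OCV, higher resistance
R_link = 0.0005         # per-link interconnect resistance, ohms

# With no external load, a circulating current flows until the terminals agree:
I_circ = (V1 - V2) / (R1 + R2 + 2 * R_link)

# The voltage difference that actually appears between the two sets of
# terminals is only the drop across the links:
dV_terminals = I_circ * 2 * R_link

print(f"circulating current ≈ {I_circ * 1000:.0f} mA")
print(f"terminal-to-terminal difference ≈ {dV_terminals * 1000:.2f} mV")
```

With these assumed numbers, a 40 mV open-circuit mismatch collapses to a sub-millivolt terminal difference, which is the sense in which low interconnect resistance lets topology dominate cell-level mismatch.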
This is not a recommendation to mix batteries casually, and it’s not a safety guarantee. It’s an attempt to replace folklore with measurements and to define the boundary conditions where this does or does not hold.
Everything is public:
- Raw CSV data
- Analysis scripts
- Full PDF report
- Replication protocol
Repo: https://github.com/wkcollis1-eng/Lifepo4-Battery-Banks
I’m posting this to invite critique — especially around failure modes, instrumentation limits, or cases where this model would break down (e.g., higher C-rates, aging asymmetry, thermal gradients, different chemistries).
Happy to answer technical questions.
theamk•1h ago
(It is easy to calculate in series packs, but parallel ones would be tricky, since the bus links equalize the voltage. Did you manually remove the links and then measure each battery's voltage? Or did you estimate spread by measuring the voltage drop across the bus links?)
wkcollis1•13m ago
Because the cells are hard‑paralleled, their terminals are forced to the same potential, so true inter‑battery divergence can’t be measured without isolation taps. Instead, “spread” refers to:
*Voltage_Max – Voltage_Min of the pack‑level voltage within each hourly window.*
This captures short‑term variation in the measured pack voltage (ADC noise, EMI artifacts, inverter mode shifts, temperature coefficient), not cell‑to‑cell imbalance.
The raw data is in `Data/combined_output.csv` with columns:
```
Timestamp, Voltage_Min, Voltage_Max
```
Those come from 60‑second samples aggregated hourly. The analysis scripts compute:
```
Spread = Voltage_Max - Voltage_Min
```
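If you want to reproduce that number without digging through the repo, something along these lines should work, assuming pandas and the column names above (the actual analysis scripts may differ in detail):

```
import pandas as pd

# Load the hourly-aggregated pack voltage extremes.
df = pd.read_csv("Data/combined_output.csv", parse_dates=["Timestamp"])

# Per-hour measurement envelope of the pack voltage
# (NOT cell-to-cell imbalance, for the reasons above).
df["Spread"] = df["Voltage_Max"] - df["Voltage_Min"]

print(df["Spread"].describe())                      # distribution of hourly spread
print(f'median spread: {df["Spread"].median() * 1000:.1f} mV')
```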
So the ~10–15 mV “spread” in the report reflects the measurement envelope of the pack, not divergence between individual batteries. Measuring true per‑battery drift would require either per‑cell taps or momentary isolation, which wasn’t part of this study.
Happy to go deeper if you want details on sampling or noise characterization.