
Why your DSP assumptions drive 60-80% of fermentation COGS

Downstream processing typically accounts for 60-80% of the cost of goods for fermentation products. This post walks through where DSP costs actually live, why naive assumptions produce 3x COGS errors, and how to stress-test the three assumptions that matter most.

RenewVerse Research · 9 min read

If you're building a fermentation business and your COGS model says fermentation is 80% of your cost, your model is probably wrong. For most products below 100 g/L titer, downstream processing drives 60-80% of the final cost per kg. This post walks through where DSP cost actually lives, why naive yield assumptions produce 3x errors, and the three stress tests that separate a credible COGS estimate from a pitch-deck number.

The 60-80% claim, unpacked

Take a hypothetical sophorolipid process: S. bombicola at 150 g/L titer, 20,000 L fed-batch, 30 batches per year. That's 3 tonnes of product per batch, 90 tonnes per year. Fermentation cost per kg sits around $8-12 at that scale (feedstock is dominant because sophorolipids are fat-fed). DSP cost runs $25-45 per kg depending on whether you use gravity phase separation or solvent extraction, and how efficiently you recycle solvent. Total COGS: $33-57/kg, of which DSP is 68-85% depending on how the two ranges pair up.
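The share arithmetic is a simple cost ratio; a minimal sketch using the hypothetical sophorolipid numbers above:

```python
def dsp_share(ferm_cost_kg: float, dsp_cost_kg: float) -> float:
    """Fraction of total COGS per kg attributable to downstream processing."""
    return dsp_cost_kg / (ferm_cost_kg + dsp_cost_kg)

# Extreme pairings of the ranges above: $8-12/kg fermentation, $25-45/kg DSP
print(round(dsp_share(12, 25), 2))  # 0.68 — low-end DSP against high-end fermentation
print(round(dsp_share(8, 45), 2))   # 0.85 — high-end DSP against low-end fermentation
```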

The same math plays out across categories. Rhamnolipids in P. putida at 30-50 g/L run roughly 55-70% DSP. MEL in M. aphidis at 35-50 g/L is often 70-80% DSP because solvent extraction is mandatory. Even CHO mAbs, which have much higher-value product, end up with DSP at 50-65% of COGS once you account for Protein A resin lifecycle cost, viral clearance, and polishing chromatography.

Where the cost actually lives

Break DSP cost into four buckets and the picture becomes clear:

  • Yield loss. A 5-step DSP train at 90% per-step yield gives 59% overall recovery. At 85% per-step, it's 44%. Every percentage point of yield loss is paid twice: once in lost product and once in capacity dedicated to product that won't reach the customer. If your model says $20/kg assuming 85% DSP yield, the same process at 70% actual yield delivers at about $24.3/kg; to still land at $20, everything upstream of the extra loss would need to cost $16.5/kg (20 × 70/85).
  • Solvent and consumables. Extraction, chromatography, and filtration all consume volumes of material that scale with broth volume, not product mass. This means low-titer processes are doubly punished: more broth per kg of product, more solvent per kg of product.
  • Labor. Plant labor for DSP at 20,000 L scale runs 200-400 person-hours per batch depending on unit-operation count and degree of automation. At a 30-batch-per-year cadence and a loaded labor rate of $60/hour, that's $360-720k/year in direct labor alone.
  • Equipment depreciation and utilities. DSP equipment often costs more than the bioreactor itself. A complete DSP train for a 20,000 L process can run $8-20M installed. Depreciated over 10 years against 30 batches/year, that's $27-67k per batch in capital cost alone.
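The yield arithmetic in the first bullet compounds multiplicatively, and the cost rescaling is a simple ratio of yields. A short sketch, using the numbers from the example above:

```python
def overall_yield(per_step_yield: float, n_steps: int) -> float:
    """Overall recovery through a DSP train of n sequential steps."""
    return per_step_yield ** n_steps

def delivered_cost(cogs_at_ref: float, ref_yield: float, actual_yield: float) -> float:
    """Rescale a per-kg COGS quoted at one DSP yield to a different actual yield."""
    return cogs_at_ref * ref_yield / actual_yield

print(round(overall_yield(0.90, 5), 2))            # 0.59 — five steps at 90% each
print(round(overall_yield(0.85, 5), 2))            # 0.44 — five steps at 85% each
print(round(delivered_cost(20.0, 0.85, 0.70), 2))  # 24.29 $/kg if yield slips to 70%
```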

The three assumptions that matter

1. Overall yield at scale

Lab-scale DSP yields of 85-95% almost never carry over to production. Real drivers of the gap:

  • Centrifuge residence time is fixed at scale (set by throughput requirement) and usually shorter than lab bench-top protocols allow, so clarification is less complete.
  • Filter cake thickness is larger at scale, which means higher cake resistance and more product trapped in the cake. Lab filters usually operate well below the cake-resistance regime.
  • Chromatography at scale runs faster (shorter residence time per kg of resin) than bench columns because throughput matters, reducing dynamic binding capacity.

A rule of thumb for first-pass modeling: multiply lab-bench yields by 0.85 to get production yields. Then validate with pilot data if at all possible. Any COGS estimate that uses lab yields as production yields is off by 15-30%.
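As a sketch, the rule of thumb is a single derate factor (0.85 is the heuristic from the text, not a measured constant):

```python
LAB_TO_PLANT_DERATE = 0.85  # heuristic from the text; replace with pilot data when available

def production_yield_estimate(lab_yield: float) -> float:
    """First-pass production DSP yield from a lab-bench measurement."""
    return lab_yield * LAB_TO_PLANT_DERATE

print(round(production_yield_estimate(0.90), 3))  # 0.765
```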

2. Solvent recycling efficiency

For extraction-heavy processes (MEL, some rhamnolipids, many small molecules), solvent cost can be the single biggest DSP line item if fresh solvent is assumed. A single-pass ethyl acetate extraction at 20,000 L, using a 2:1 solvent-to-broth ratio by mass, consumes roughly 40,000 kg of ethyl acetate per batch. At $1.50/kg, that's $60k per batch in solvent alone, or about $20/kg at 3,000 kg of product per batch.

Industrial solvent recycling recovers 90-97% of used solvent via distillation. At 95% recovery, fresh solvent cost drops to $3k per batch, about $1/kg. That 20-to-1 difference is why the recycling assumption is the single most leveraged input in an extraction-based COGS model. Building a recycling loop costs capital (distillation column, storage tanks, steam for recovery), but for anyone targeting commercial biosurfactant production it's mandatory.
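The make-up solvent arithmetic is worth writing down, because the recycling rate enters linearly in the loss term. The figures below are the hypothetical ethyl acetate example above:

```python
def fresh_solvent_cost(broth_kg: float, solvent_ratio: float,
                       price_per_kg: float, recycle_rate: float) -> float:
    """Fresh (make-up) solvent cost per batch at a given recycling efficiency."""
    solvent_kg = broth_kg * solvent_ratio          # total solvent charged per batch
    makeup_kg = solvent_kg * (1.0 - recycle_rate)  # fraction lost and repurchased
    return makeup_kg * price_per_kg

print(round(fresh_solvent_cost(20_000, 2.0, 1.50, 0.00), 2))  # 60000.0 — single pass
print(round(fresh_solvent_cost(20_000, 2.0, 1.50, 0.95), 2))  # 3000.0 — 95% recovery
```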

3. Labor and batch-to-batch variability

Labor hours in a COGS model are typically specified as an average per batch. In reality, the first 10-20 batches of a new process take 50-100% more labor than steady-state, because operators are still debugging, process deviations are more frequent, and QA cycles are longer. If your TEA model uses steady-state labor hours to project Year 1 COGS, you're going to undershoot by 20-30%.

Separately: variability matters. A COGS point estimate doesn't capture the fact that one bad batch in ten can drive annual effective COGS up by 15%. Any credible model should report a COGS distribution, not a point estimate.
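A toy Monte Carlo illustrates the point; the batch cost, output, and failure rate below are hypothetical placeholders, not calibrated values:

```python
import random
import statistics

random.seed(42)  # reproducible draws for this sketch

def cogs_distribution(batch_cost: float, nominal_kg: float, batches: int = 30,
                      fail_rate: float = 0.10, runs: int = 5_000) -> list[float]:
    """Annual effective COGS when a failed batch incurs full cost but ships nothing."""
    samples = []
    for _ in range(runs):
        good = sum(1 for _ in range(batches) if random.random() > fail_rate)
        good = max(good, 1)  # guard the pathological all-fail draw
        samples.append(batch_cost * batches / (good * nominal_kg))
    return samples

dist = cogs_distribution(batch_cost=120_000, nominal_kg=3_000)
print(round(statistics.mean(dist), 1))                 # mean effective COGS, $/kg
print(round(statistics.quantiles(dist, n=10)[-1], 1))  # p90 — the number to plan around
```

The nominal point estimate here is $40/kg (120,000 / 3,000), but a 10% failure rate pushes the mean to roughly $44-45/kg, with the p90 higher still.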

The three stress tests

  1. Bad-case yield. Drop every DSP operation's yield by 10 percentage points. Does COGS still hit your target? If not, you have no margin and a single scale-up surprise will break the process economics.
  2. Realistic solvent recycling. Rerun with 85% solvent recycling instead of 95%. For extraction-heavy processes this will often double DSP cost. If the process survives this, you're robust. If not, build the recycling loop into the plan before you commit to a market price.
  3. Year-1 labor multiplier. 1.5x or 2x the labor hours for the first 12 months of production. Roll this into a 3-year average cost. Your marketing-ready COGS is the 3-year average, not the steady-state number.
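The three tests above can be scripted against a per-kg COGS breakdown. This sketch simplifies test 1 to a 10-point drop in overall DSP yield (rather than per operation), and all input numbers are hypothetical:

```python
def stress_tests(cogs_kg: float, dsp_yield: float,
                 solvent_kg: float, labor_kg: float) -> dict[str, float]:
    """Apply the three stress tests to a baseline per-kg COGS breakdown.

    cogs_kg    -- baseline total COGS per kg delivered
    dsp_yield  -- baseline overall DSP yield (fraction)
    solvent_kg -- fresh-solvent cost per kg inside cogs_kg (assumes 95% recycling)
    labor_kg   -- labor cost per kg inside cogs_kg
    """
    return {
        # 1. Ten points lower overall yield inflates every cost component.
        "bad_yield": cogs_kg * dsp_yield / (dsp_yield - 0.10),
        # 2. 85% instead of 95% recycling: make-up solvent triples (5% -> 15% loss).
        "bad_solvent": cogs_kg + solvent_kg * 2.0,
        # 3. Year 1 labor at 1.5x, rolled into a 3-year average (years 2-3 at 1.0x).
        "year1_labor": cogs_kg + labor_kg * 0.5 / 3.0,
    }

for name, value in stress_tests(cogs_kg=40.0, dsp_yield=0.80,
                                solvent_kg=4.0, labor_kg=8.0).items():
    print(f"{name}: ${value:.2f}/kg")
```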

What Augur does about this

The platform treats DSP as a first-class part of the prediction. Each of the 8-9 unit operations is registered with a parameter schema (yield, solvent ratio, cycle time, labor hours, consumable cost) and organism-appropriate defaults. The user can override any parameter, and Monte Carlo sampling over the top 15-20 sensitivity inputs produces a COGS confidence interval plus a tornado chart that shows which inputs drive the most variance.

For biosurfactants specifically, the extraction operation exposes solvent recycling rate as a configurable parameter. Teams that plan to run solvent-free (e.g., Holiferm-style gravity phase separation for sophorolipids) can swap the extraction step for a phase-separation step and the pipeline recomputes COGS accordingly. Teams that plan to outsource DSP entirely can model a flat contract-DSP cost and skip the unit-operation chain.

Bottom line

For anyone building a fermentation business, DSP is the most dangerous line item in your TEA. It's the part that gets the least attention during strain and process optimization, it's the part where scale-up surprises are the biggest, and it's the part where vendor spec sheets over-promise yields that production reality doesn't deliver. Any credible COGS model treats DSP assumptions with the same rigor as fermentation assumptions and stress-tests all three of the inputs above.

If you want to see your own process modeled with calibrated DSP unit ops and a COGS confidence interval, request access. We're onboarding pilot users this quarter.

Frequently asked questions

01. Is the 60-80% DSP cost share really accurate? What drives it that high?

For most biomanufactured products below ~100 g/L titer, yes. At 50 g/L and 20,000 L batch volume, you're recovering 1,000 kg of product from 20,000 kg of water, cells, media salts, and byproducts. That volume of material has to be centrifuged, filtered, concentrated, purified, and often solvent-extracted. Each step has its own equipment, consumables, labor, and yield loss. Cumulative yield losses through a 5-step DSP train easily drop recovery to 50-60%, which doubles effective cost per kg of final product. High-value low-titer products (mAbs at $100+/g) flip this: the product is worth so much that even 40-50% DSP yield is acceptable and chromatography resin is the dominant cost.

02. Why do customers consistently underestimate DSP cost?

Three reasons. First, lab-scale DSP is easy. Spin, filter, purify. Physics at 20,000 L is different. Filter cake thickness matters, centrifuge residence time matters, chromatography column height matters. Second, DSP yield at scale is almost always lower than at lab scale. Customers quote lab yields (often 80-90%) and build a COGS model on that, then discover production yield is 55-65%. Third, solvent cost and disposal. A single-pass solvent extraction at 20,000 L can consume 40,000+ kg of ethyl acetate per batch. Industrial recycling recovers 90-97% of that, but the capital and energy to run the recovery loop aren't free.

03. Which DSP assumption is the most sensitive to get wrong?

Overall yield, by a wide margin. A change from 60% to 70% DSP yield is a ~14% drop in COGS/kg. A change from 80% to 90% is an 11% drop. Within yield, the biggest lever is usually the first concentration step (centrifugation or filtration), because any product that ends up in the supernatant or filter cake is gone for the rest of the train. The second-most-sensitive assumption is solvent-to-product ratio for extraction-heavy processes. The third is labor hours, which scale roughly linearly with batch count and whether the facility is single-product or shared.

04. How should I stress-test my DSP model before committing to a COGS number?

Run three sensitivity scenarios. Bad-case: drop each DSP operation's yield by 10 percentage points and see if COGS still works. Realistic-case: use published yields from literature for your specific unit operations rather than vendor spec sheets. Stressed-case: assume 2x labor hours and 1.5x solvent consumption for the first year of production (before process optimization catches up). If your COGS target still works in the stressed case, you have real margin. If it only works in the vendor-spec best case, you don't.

05. Does Augur actually model all these DSP sensitivities?

Yes. The platform chains 8-9 DSP unit operations, each with a parameter schema (yield, solvent ratio, cycle time, labor hours, consumable cost). Monte Carlo sampling across the top 15-20 sensitive inputs produces a COGS confidence interval and a tornado chart showing which inputs drive the most variance. For biosurfactants specifically, extraction operations include solvent recycling rate as a configurable parameter, because a 95% recycling efficiency assumption versus 85% triples the fresh-solvent cost (5% versus 15% make-up) for solvent-heavy processes.