Viscosity data can tell you a ton about your sample up and down the development pipeline. In the world of antibody engineering and formulation, viscosity raises the alarm when your antibody is prone to sticky self-interactions or when your formulation isn’t going to cut it. In process development, high viscosity can lead to issues with tangential-flow filtration (TFF), sterile filtration, or high sample loss in other fill/finish processes. Viscosity is also the single biggest predictor of whether your sample will struggle with syringeability or injectability.
For such a useful measurement, viscosity tech is enough of a hassle that hardly anyone gets as much viscosity data as they need. High-volume, classic techniques can demand hours of hands-on time in painfully slow, one-sample-at-a-time instruments. Even modern tech that needs only microliters of sample is often paired with a ridiculously expensive chip that creates a one-by-one bottleneck and a constant cycle of clogging, cleaning, and calibration.
The right tool for the job
Push it to the limit
Characterize every construct, formulation, and concentration by gathering as much viscosity data as you really want. Know which combinations behave the best, and exactly how high you can go in concentration before you’re over your viscosity limit.
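To make that concrete, here is a minimal sketch of how you might find your concentration ceiling from viscosity data. It assumes a common exponential concentration-viscosity model and uses made-up readings and a hypothetical 20 cP injectability limit; your model, data, and limit will differ.

```python
import numpy as np

# Hypothetical viscosity readings (cP) at increasing antibody
# concentrations (mg/mL); real values come from your instrument.
conc = np.array([50.0, 100.0, 150.0, 200.0])
visc = np.array([1.8, 3.5, 8.0, 21.0])

# Fit the common exponential model eta = eta0 * exp(k * c)
# via a log-linear least-squares fit.
k, log_eta0 = np.polyfit(conc, np.log(visc), 1)

# Solve for the highest concentration that stays under a 20 cP
# injectability limit: c_max = (ln(limit) - ln(eta0)) / k.
limit = 20.0
c_max = (np.log(limit) - log_eta0) / k
print(f"max concentration under {limit} cP: {c_max:.0f} mg/mL")
```

With these made-up numbers the fit puts the ceiling near 200 mg/mL; the point is that a handful of concentration points is enough to interpolate your limit instead of guessing at it.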
Sweep up piles of data
Every read in Honeybun is done with a sweep of shear rates, so you’ll see whether your viscosity stays rock-solid across the sweep or whether there’s some non-Newtonian behavior going on.
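One common way to read a shear sweep like this is to fit the power-law (Ostwald-de Waele) model and check the flow behavior index. The sketch below uses hypothetical sweep data, not output from any particular instrument.

```python
import numpy as np

# Hypothetical shear sweep: shear rates (1/s) and measured viscosities (cP).
# A Newtonian sample would read the same viscosity at every shear rate.
shear = np.array([100.0, 300.0, 1000.0, 3000.0])
visc = np.array([12.0, 9.5, 7.2, 5.6])

# Power-law model: eta = K * shear**(n - 1). On log-log axes this is a
# straight line whose slope is (n - 1).
slope, log_K = np.polyfit(np.log(shear), np.log(visc), 1)
n = slope + 1  # n = 1: Newtonian; n < 1: shear-thinning; n > 1: shear-thickening

if n < 0.95:
    print(f"shear-thinning (n = {n:.2f})")
elif n > 1.05:
    print(f"shear-thickening (n = {n:.2f})")
else:
    print(f"approximately Newtonian (n = {n:.2f})")
```

For this made-up sample, viscosity falls as shear rate rises, so the fit flags shear-thinning behavior, exactly the kind of thing a single-point viscosity read would miss.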
Cool temperatures can send viscosity straight to the moon. Honeybun lets you explore how samples behave at room temperature or when cooled down. If your manufacturing process has a warm spot, you can also heat up Honeybun to get a spot-on value at higher temps.
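Temperature dependence is also easy to model once you have reads at a few temperatures. Protein-solution viscosity often follows an Arrhenius-type law, so a sketch like this (with made-up data for one hypothetical formulation) can extrapolate to a cold-chain temperature:

```python
import numpy as np

# Hypothetical viscosity (cP) of one formulation at three temperatures.
temp_C = np.array([5.0, 25.0, 40.0])
visc = np.array([28.0, 11.0, 6.5])

# Arrhenius-type law: eta = A * exp(Ea / (R * T)). Fitting ln(eta)
# against 1/T (in kelvin) gives the slope Ea/R.
T = temp_C + 273.15
slope, log_A = np.polyfit(1.0 / T, np.log(visc), 1)

# Predict viscosity at a 2-8 C cold-chain temperature, e.g. 4 C.
eta_4C = np.exp(log_A + slope / (4.0 + 273.15))
print(f"predicted viscosity at 4 C: {eta_4C:.1f} cP")
```

The same fit works in the other direction, too: if your manufacturing process has a warm spot, plug in that temperature instead of 4 C.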
Want more info?
Want to learn more about how Honeybun rolls out tons of sweet viscosity data?