The path from raw data to biological insight depends on the analytical decisions made between upload and interpretation. Accounting for batch effects, choosing the right processing approach, evaluating the results, and connecting those results back to the biological context all influence how quickly teams can understand the true biological effect.
This month’s updates focus on making that journey easier to work through, helping you move from data evaluation to biological interpretation with more clarity and confidence.
Everything here builds on the personalized support you already get in your Member Success sessions. Feel free to raise any of these updates in your next session if you want help making the most of them.
Here's what's new.
1. Account for batch effects with a method the field already trusts
Batch effects are one of the most common sources of technical variation in proteomics data: they can distort comparisons and blur biological signals. ComBat (Johnson et al., 2007) is one of the most widely accepted post-acquisition approaches for handling them, and it's now available directly on the platform.
ComBat batch effect correction. Apply ComBat to your dataset to adjust for known batch variables without leaving the workspace.
Histogram of p-values. A newly added diagnostic plot that works alongside existing visualizations, such as dimensionality reduction and boxplots. Use it to inspect the p-value distribution from differential expression analysis and confirm that residual confounding has been reduced.
Where to find it. Batch correction runs as part of your standard normalization and imputation step, so the corrected dataset stays linked to its source.
A faster way to remove a known source of noise before you start interpreting results.
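To make the idea concrete, here is a minimal Python sketch of the two steps above, assuming a features-by-samples matrix of log intensities and known batch labels. The function names are hypothetical, and the adjustment shown is location-only mean-centering: full ComBat additionally shrinks batch parameters with empirical Bayes and adjusts per-batch variance, so treat this as an illustration of the concept, not the platform's implementation.

```python
import numpy as np

def center_batches(X, batches):
    """Simplified batch adjustment: for each feature (row), remove the
    per-batch mean shift and restore the overall feature mean.
    (Full ComBat also applies empirical-Bayes shrinkage and a per-batch
    variance adjustment; this sketch shows only the location step.)"""
    X = np.asarray(X, dtype=float)
    batches = np.asarray(batches)
    out = X.copy()
    grand_mean = X.mean(axis=1, keepdims=True)  # per-feature overall mean
    for b in np.unique(batches):
        idx = batches == b
        batch_mean = X[:, idx].mean(axis=1, keepdims=True)
        out[:, idx] = X[:, idx] - batch_mean + grand_mean
    return out

def pvalue_histogram(pvals, bins=20):
    """Bin counts for a p-value histogram on [0, 1]. After a successful
    correction, the distribution should look roughly uniform, with at
    most a spike near zero from genuinely differential features."""
    return np.histogram(pvals, bins=bins, range=(0.0, 1.0))
```

A strong slope or U-shape in the histogram after correction is a hint that confounding (or a misspecified model) remains.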
2. Make upstream processing choices easier to see and revisit
Most analyses live or die on what happens before the differential expression step. Missing values are imputed, entities are filtered, and p-value distributions are inspected. A set of updates makes those upstream choices easier to make, easier to see, and easier to revisit.
New normalization and imputation form. A rebuilt form with cleaner, conditional dropdowns - options now appear only when they apply to your entity type. Localization-probability filtering, for example, only shows up for peptides.
KNN-TN and MinDet imputation methods. Two new imputation options join the existing set, giving you more flexibility depending on the missingness pattern in your data.
Filter entities in the normalized and imputed dataset. Create filtered protein and peptide datasets to remove entities with too many missing values before downstream analysis and visualizations.
Less time managing prep, more time on the parts that actually drive interpretation.
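For intuition, here is a sketch of two of the operations above, with hypothetical function names: MinDet-style imputation, which replaces missing values with a low deterministic value (a small quantile of each sample's observed intensities, matching the assumption that values are missing because they fall below the detection limit), and filtering entities by their fraction of missing values. Details of the platform's implementations may differ.

```python
import numpy as np

def impute_mindet(X, quantile=0.01):
    """MinDet-style imputation: replace missing values in each sample
    (column) with a low deterministic value -- here, a small quantile of
    that sample's observed intensities. Suited to left-censored
    missingness, i.e. values absent because they were below detection."""
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        observed = col[~np.isnan(col)]
        col[np.isnan(col)] = np.quantile(observed, quantile)
    return X

def filter_by_missingness(X, max_missing_frac=0.5):
    """Keep only entities (rows) whose fraction of missing values does
    not exceed the threshold; returns the filtered matrix plus the
    boolean mask of kept rows so the filter step stays traceable."""
    X = np.asarray(X, dtype=float)
    frac_missing = np.isnan(X).mean(axis=1)
    keep = frac_missing <= max_missing_frac
    return X[keep], keep
```

Filtering before imputation, as ordered here, avoids fabricating values for entities that are mostly missing anyway; KNN-based imputation, by contrast, is the better fit when values are missing at random.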
3. Connect changed entities to the pathways they affect
Once you have a list of entities that are changing, the next question is always the same: what does this mean biologically? Reactome over-representation analysis and STRING protein–protein interaction profiles already help with parts of that picture. What's been missing is a deeper, gene-set-centric view that accounts for inter-gene correlation - and that's now in the platform via CAMERA (Wu et al., 2012) gene set enrichment analysis (GSEA).
GSEA dataset using CAMERA. Run GSEA using CAMERA, a competitive gene-set test that accounts for inter-gene correlation across pairwise comparisons.
Multiple databases in one run. Pull gene sets from Reactome, Gene Ontology (via QuickGO), and MSigDB collections in a single GSEA dataset.
GSEA dot plot for visualization. A new dot plot module shows enriched pathways across comparisons, with significance and average fold change visualized together.
Connects back to your existing analyses. GSEA results sit alongside your Reactome ORA and STRING outputs, so you can layer different views of the same biology.
A way to move from "these entities changed" to "these pathways and processes are affected" - without leaving the workspace.
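CAMERA's key insight is that a competitive gene-set test becomes anti-conservative when genes in a set are correlated, so the test variance is inflated by a factor that depends on the average inter-gene correlation. The sketch below illustrates that idea only: a normal-approximation comparison of mean per-gene statistics inside versus outside a set, with the variance inflation factor 1 + (m - 1) * rho. It is not limma's implementation, and the function name and fixed rho are assumptions for illustration.

```python
import math

def camera_like_test(stats, in_set, rho=0.05):
    """Simplified competitive gene-set test in the spirit of CAMERA
    (Wu et al., 2012): compare the mean per-gene statistic inside the
    set against the rest, inflating the variance by 1 + (m - 1) * rho
    to account for the average inter-gene correlation rho within the
    set. Illustrative sketch only -- not limma's implementation."""
    set_stats = [s for s, member in zip(stats, in_set) if member]
    bg_stats = [s for s, member in zip(stats, in_set) if not member]
    m = len(set_stats)
    mean_set = sum(set_stats) / m
    mean_bg = sum(bg_stats) / len(bg_stats)
    # pooled variance of the per-gene statistics
    overall = sum(stats) / len(stats)
    var = sum((s - overall) ** 2 for s in stats) / (len(stats) - 1)
    vif = 1 + (m - 1) * rho          # variance inflation factor
    se = math.sqrt(var * vif / m)
    z = (mean_set - mean_bg) / se
    # two-sided p-value from the normal approximation
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p
```

Note how a larger rho shrinks the test statistic: ignoring correlation (rho = 0) is exactly what makes naive competitive tests report too many "significant" pathways.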
4. Trace how your workspace evolved with a full audit log
As workspaces grow - more analysts, layered analyses, evolving tabs and modules - it gets harder to keep track of what changed, when, and why. A new audit log gives you a full revision history of any workspace, plus clearer provenance on the datasets it depends on.
Workspace-level audit log. Every action - module settings updated, tabs renamed, workspaces edited - is recorded with timestamp, user, and a field-level diff so you can see exactly what changed.
Filter by user, type, or date. Narrow the log to a particular collaborator, action type (Workspace, Module Settings, Tab), or time window.
Inline change details. Expand any entry to see the before-and-after for the field that changed - useful for tracing back unexpected results to the action that caused them.
Dataset creator visibility. See who created each dataset directly in the dataset list, so provenance is clear without asking around.
A clearer record of how your analyses evolved, and a faster way to debug when results don't match expectations.
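To show how the pieces above fit together, here is a small sketch of how an audit log with field-level diffs and user/type/time filtering might be modeled. All class, field, and method names are hypothetical, not the platform's API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any

@dataclass
class AuditEntry:
    """One recorded action, with a field-level before/after diff."""
    timestamp: datetime
    user: str
    action_type: str                    # e.g. "Workspace", "Module Settings", "Tab"
    diff: dict[str, tuple[Any, Any]]    # field -> (before, after)

class AuditLog:
    def __init__(self):
        self.entries: list[AuditEntry] = []

    def record(self, user, action_type, before: dict, after: dict):
        """Store only the fields whose values actually changed."""
        diff = {k: (before.get(k), after.get(k))
                for k in set(before) | set(after)
                if before.get(k) != after.get(k)}
        self.entries.append(AuditEntry(datetime.now(), user, action_type, diff))

    def filter(self, user=None, action_type=None, since=None):
        """Narrow the log by collaborator, action type, or time window."""
        return [e for e in self.entries
                if (user is None or e.user == user)
                and (action_type is None or e.action_type == action_type)
                and (since is None or e.timestamp >= since)]
```

Storing the diff rather than full snapshots is what makes the inline before-and-after view cheap to render, at the cost of needing the full entry chain to reconstruct an arbitrary past state.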