Exploratory analysis (Jupyter)
We will do some exploratory data analysis on the adult mouse hippocampus dataset that you just preprocessed.
Here we show our code and results as a Jupyter notebook, but you can copy and paste the code into a standalone script and it will still work.
Loading the modules
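The imports below are a minimal sketch of what the rest of the notebook assumes (scanpy, anndata, numpy, and matplotlib); the exact module list is our assumption, so adjust it to your environment.

```python
# Minimal module stack assumed by the code cells in this notebook (sketch)
import scanpy as sc
import anndata as ad
import numpy as np
import matplotlib.pyplot as plt

sc.settings.verbosity = 1  # keep scanpy's logging quiet
```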
Loading the data
After transferring the segmentation information, the cell with identifier 0 holds all the transcripts assigned to the background. Therefore, make sure to omit it from the dataset.
Tip
We all know that views in anndata can be a bit problematic sometimes... You might want to append .copy() at the end of the subsetting line below if you encounter issues.
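A minimal sketch of the loading and subsetting step. We assume the preprocessed dataset was written to an .h5ad file (the filename below is hypothetical) and that the segmentation cell identifiers are used as observation names; adapt the subsetting if your identifiers live in an obs column instead.

```python
# Load the preprocessed dataset (hypothetical filename -- replace with yours)
adata = sc.read_h5ad("adult_mouse_hippocampus_preprocessed.h5ad")

# Cell "0" collects all background transcripts after the segmentation transfer,
# so we drop it; .copy() materializes the view to avoid anndata view quirks
adata = adata[adata.obs_names != "0"].copy()
```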
Calculating QC metrics of the sample
Now we can plot histograms of the number of unique transcripts (UMIs), the number of genes, and the percentage of mitochondrial transcripts per segmented cell, as a quick way of assessing the quality of the dataset. This will help us decide on filtering thresholds to remove potential low-quality cells (e.g., cells with poor coverage, or background wrongly segmented as cells).
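A sketch of how these metrics could be computed and plotted with scanpy and matplotlib, assuming mouse mitochondrial genes are recognizable by the mt- prefix of their symbols:

```python
# Flag mitochondrial genes (mouse gene symbols typically start with "mt-")
adata.var["mt"] = adata.var_names.str.lower().str.startswith("mt-")

# Per-cell QC metrics, including the percentage of mitochondrial counts
sc.pp.calculate_qc_metrics(
    adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True
)

# Histograms of UMIs, detected genes, and % mitochondrial counts per cell
fig, axs = plt.subplots(1, 3, figsize=(12, 3))
axs[0].hist(adata.obs["total_counts"], bins=100)
axs[0].set_xlabel("UMIs per cell")
axs[1].hist(adata.obs["n_genes_by_counts"], bins=100)
axs[1].set_xlabel("Genes per cell")
axs[2].hist(adata.obs["pct_counts_mt"], bins=100)
axs[2].set_xlabel("% mitochondrial counts")
plt.tight_layout()
plt.show()
```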
Here we apply the following filters, chosen by inspecting the histograms. You can be more conservative; we chose these thresholds to keep most of the cells while removing the most obvious low-quality data points.
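The filtering itself could look like the sketch below; the thresholds are placeholders for illustration, not the exact values we used, so read yours off the histograms above.

```python
# Placeholder thresholds -- choose your own from the histograms
min_counts = 10     # minimum UMIs per cell
min_genes = 5       # minimum detected genes per cell
max_pct_mt = 20.0   # maximum % of mitochondrial counts per cell

sc.pp.filter_cells(adata, min_counts=min_counts)
sc.pp.filter_cells(adata, min_genes=min_genes)
adata = adata[adata.obs["pct_counts_mt"] < max_pct_mt].copy()

adata  # remaining cells x genes after filtering
```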
Normalization
From this point on, some of the analysis decisions are heuristics borrowed from single-cell analysis. We would not say these are the best possible practices for analyzing this kind of data, but they are the most common ones, especially regarding normalization (see the Warning below).
Let's apply the common normalization to \(10^4\) counts per cell, followed by log-transformation with a pseudocount. This is meant to stabilize the variance of genes and remove potential biases from differences in sequencing depth per cell.
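In scanpy terms, this corresponds to something like the following; stashing the raw counts in a layer first is optional but convenient if you want to revisit them later.

```python
# Keep the raw counts around before normalizing (optional)
adata.layers["counts"] = adata.X.copy()

# Scale each cell to 10,000 total counts, then log1p-transform
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
```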
Warning
Normalization is necessary, e.g., to account for differences in depth per cell and remove potential biases, or to stabilize the variance of genes and remove its dependency on the counts.
This is still under study, but normalization strategies developed for single-cell data are most likely not well suited for this kind of high-resolution, single-cell spatial data, especially since counts have an additional source of (spatial) covariance that does not exist in single-cell datasets. It is therefore possible that the covariance and errors from spatial components propagate through the analysis pipeline, and that some apparent signal or significant results are just noise from spatial autocorrelation. Here, we use these common practices simply as a way of getting a feeling for the data and for which genes might play a role in specific regions in space.
Now we detect highly variable genes. All of the available flavor options are designed with single-cell data in mind, so the list of highly variable genes they select will likely be affected by the spatial autocorrelation structure. An alternative strategy could be to select genes based on spatially variable genes instead (e.g., ranked by Moran's I).
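A sketch of the selection with one of the single-cell flavors (the scanpy default, seurat); a spatially aware alternative would be to rank genes by Moran's I instead, e.g. with squidpy's sq.gr.spatial_autocorr if you have it installed.

```python
# Single-cell style selection of highly variable genes (seurat flavor)
sc.pp.highly_variable_genes(adata, flavor="seurat")
print(int(adata.var["highly_variable"].sum()), "genes flagged as highly variable")
```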
Dimensionality reduction and clustering
We perform dimensionality reduction with PCA and community clustering of the nearest-neighbors graph using the Leiden algorithm. For this exploratory analysis, we keep the default parameters.
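With default parameters, this step can be sketched as:

```python
# PCA on the log-normalized expression matrix
sc.pp.pca(adata)

# k-nearest-neighbors graph in PCA space, then Leiden community detection
sc.pp.neighbors(adata)
sc.tl.leiden(adata, key_added="leiden")
```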
Now we can take a look at the clusters in space. It seems that we recapitulate the major morphological structures of this tissue. We also show Ttr, a marker of the choroid plexus, whose expression is restricted to a specific location in space that also clusters separately.
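A sketch of the spatial plots, assuming the coordinates of the segmented cells are stored in adata.obsm["spatial"] so that scanpy's generic embedding plot can be used (squidpy's spatial plotting functions would be an alternative):

```python
# Leiden clusters and Ttr expression on the spatial coordinates of the cells
sc.pl.embedding(
    adata,
    basis="spatial",
    color=["leiden", "Ttr"],
    size=2,        # small points, since there are many cells
    frameon=False,
)
```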