


Last updated: 2020-10-06


Checks: 7 passed, 0 failed

Knit directory: STUtility_web_site/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.

Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20191031) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 36493b2. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.

These are the previous versions of the repository in which changes were made to the R Markdown (analysis/Quality_Control.Rmd) and HTML (docs/Quality_Control.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File   Version   Author           Date         Message
html   d13dabd   Ludvig Larsson   2020-06-05   Build site.
html   60407bb   Ludvig Larsson   2020-06-05   Build site.
html   e339ebc   Ludvig Larsson   2020-06-05   Build site.
html   6606127   Ludvig Larsson   2020-06-05   Build site.
html   f0ea9c1   Ludvig Larsson   2020-06-05   Build site.
html   651f315   Ludvig Larsson   2020-06-05   Build site.
html   a6a0c51   Ludvig Larsson   2020-06-05   Build site.
html   f3c5cb4   Ludvig Larsson   2020-06-05   Build site.
html   f89df7e   Ludvig Larsson   2020-06-05   Build site.
html   1e67a7c   Ludvig Larsson   2020-06-04   Build site.
html   f71b0c0   Ludvig Larsson   2020-06-04   Build site.
Rmd    90a45cc   Ludvig Larsson   2020-06-04   Fixed dataset in Load data
html   8a54a4d   Ludvig Larsson   2020-06-04   Build site.
html   5e466eb   Ludvig Larsson   2020-06-04   Build site.
Rmd    4819d2b   Ludvig Larsson   2020-06-04   update website
html   377408d   Ludvig Larsson   2020-06-04   Build site.
html   ed54ffb   Ludvig Larsson   2020-06-04   Build site.
html   f14518c   Ludvig Larsson   2020-06-04   Build site.
html   efd885b   Ludvig Larsson   2020-06-04   Build site.
html   4f42429   Ludvig Larsson   2020-06-04   Build site.
Rmd    3660612   Ludvig Larsson   2020-06-04   update website

Quality Control

Here we’ll go through some basic steps to assess the quality of your data and show how to apply filters to remove low-abundance genes and poor-quality spots.

Include all spots

If you suspect that you have over-permeabilized your tissue, it can be useful to look at the expression patterns outside the tissue region as well. This can be done by loading the raw count matrices ('raw_feature_bc_matrix.h5') instead of the filtered ones.

Here we have a new infoTable data.frame where the file paths in the “samples” column have been set to the '*raw_feature_bc_matrix.h5' matrices instead of the filtered ones. Now we can load all spots into our Seurat object by setting disable.subset = TRUE.
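As a sketch, an infoTable pointing at the raw matrices could look like this (paths are placeholders for your own spaceranger output):

```r
library(STUtility)

infoTable <- data.frame(
  samples   = "sample1/raw_feature_bc_matrix.h5",  # raw matrix, not the filtered one
  spotfiles = "sample1/spatial/tissue_positions_list.csv",
  imgs      = "sample1/spatial/tissue_hires_image.png",
  json      = "sample1/spatial/scalefactors_json.json",
  stringsAsFactors = FALSE
)

# disable.subset = TRUE keeps all spots, including those outside the tissue
se.all <- InputFromTable(infotable = infoTable, disable.subset = TRUE)
```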

The tissue borders are quite easy to see in the plot, but you can also see that transcripts have been captured outside of the tissue. During library preparation, transcripts can diffuse out into the solution and end up anywhere outside the tissue, but we know from tissue optimization (TO) experiments that the transcripts captured under the tissue form a cDNA footprint that accurately reflects the tissue morphology, and that these transcripts have diffused vertically from the cells in the tissue down onto the capture area surface.

It can be good to keep this in mind when you see that you have holes in your tissue with no cells. You might detect quite a lot of transcripts in such holes and it is therefore important to carefully remove spots that are not covered by cells. If the automatic tissue detection algorithm run by spaceranger fails to find such holes, it could be a good idea to manually remove them using Loupe Browser before running spaceranger.

Version   Author           Date
f71b0c0   Ludvig Larsson   2020-06-04
4f42429   Ludvig Larsson   2020-06-04

Now let’s load the data with the subsetting enabled. Here we can use either the raw matrices or the filtered matrices, as long as we have spotfiles available in our infoTable data.frame, which will be used to select the spots under the tissue.

Sometimes it can be a good idea to filter the data to remove low quality spots or low abundant genes. When running InputFromTable, spots with 0 counts will automatically be removed but you also have the option to filter the data directly using one of the following arguments:

  • min.gene.count : sets a threshold for the minimum allowed UMI counts of a gene across the whole dataset
  • min.gene.spots : sets a threshold for the minimum allowed number of spots where a gene is detected across the whole dataset
  • min.spot.feature.count : sets a threshold for the minimum allowed number of unique genes in a spot
  • min.spot.count : sets a threshold for the minimum allowed UMI counts in a spot
  • topN : subset the expression matrix to include only the topN most expressed genes
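A minimal sketch of applying these arguments at load time (thresholds are illustrative only; infoTable is the data.frame described in the text):

```r
se <- InputFromTable(
  infotable = infoTable,
  min.gene.count = 100,         # keep genes with >= 100 UMIs across the dataset
  min.gene.spots = 5,           # ... and detected in at least 5 spots
  min.spot.feature.count = 500  # keep spots with >= 500 unique genes
)
```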

You can also apply filters after the Seurat object has been created, which gives you more freedom to explore what a good threshold could be. Below we have plotted some basic features that you can use to define your filtering thresholds when running InputFromTable.


Filter out spots

Let’s say that we want to remove all spots with fewer than 500 unique genes. We can simply subset the object using the SubsetSTData function and an expression.

NOTE: The Seurat package provides a subset method for Seurat objects, but unfortunately this method will not work when using STUtility.
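A sketch of this filter, assuming "se" is the Seurat object returned by InputFromTable and the default assay naming (so that nFeature_RNA holds the number of unique genes per spot):

```r
# Keep only spots with at least 500 unique genes; SubsetSTData (not
# Seurat's subset) preserves the STUtility image data
se.subset <- SubsetSTData(se, expression = nFeature_RNA >= 500)
```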

Mitochondrial content

It can also be useful to explore other features of the dataset to use for filtering, for example mitochondrial transcript content or ribosomal protein coding transcript content. Mitochondrial genes are prefixed with “mt-” in MGI nomenclature so we can collect these genes and then calculate the percentage of mitochondrial content per spot and add this information to our meta.data.
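This step can be sketched as follows (assuming mouse data and the default assay; GetAssayData is the standard Seurat accessor for the count matrix):

```r
# Collect genes prefixed with "mt-" (MGI nomenclature for mitochondrial genes)
mt.genes <- grep(pattern = "^mt-", x = rownames(se), value = TRUE)

# Percentage of mitochondrial counts per spot, stored in the meta data
counts <- GetAssayData(se, slot = "counts")
se$percent.mito <- Matrix::colSums(counts[mt.genes, ]) / Matrix::colSums(counts) * 100
```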


We can also combine different thresholds to filter the data. Let’s say that we want to remove all spots with fewer than 500 unique genes and also spots with a high mitochondrial transcript content (>30%).
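Combining the two criteria in a single expression could look like this (assuming percent.mito has been added to the meta data as described above):

```r
# Keep spots with >= 500 unique genes and at most 30% mitochondrial content
se.subset <- SubsetSTData(se, expression = nFeature_RNA >= 500 & percent.mito <= 30)
```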

Removing genes

If you have good reason to remove a certain type of gene, this can be done quite easily as well. For example, you might want to keep only protein coding genes in your dataset. Here we demonstrate how to subset a Seurat object to include only protein coding genes using our predefined conversion table, but you could also get this information elsewhere, e.g. from biomaRt.
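A hedged sketch of this idea; "annotation" is a hypothetical data.frame with one row per gene and assumed columns "gene_name" and "gene_biotype" (e.g. fetched via biomaRt), not an object shipped with STUtility:

```r
# Keep only protein coding genes present in the object
keep.genes <- annotation$gene_name[annotation$gene_biotype == "protein_coding"]
se.subset <- SubsetSTData(se, features = intersect(rownames(se), keep.genes))
```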

A work by Joseph Bergenstråhle and Ludvig Larsson



These are the previous versions of the repository in which changes were made to the R Markdown (analysis/getting_started.Rmd) and HTML (docs/getting_started.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File   Version   Author           Date         Message
Rmd    36493b2   Ludvig Larsson   2020-10-06   wflow_publish(files = paste0(“analysis/”, c(“index.Rmd”, “about.Rmd”,
html   d13dabd   Ludvig Larsson   2020-06-05   Build site.
html   60407bb   Ludvig Larsson   2020-06-05   Build site.
html   e339ebc   Ludvig Larsson   2020-06-05   Build site.
html   6606127   Ludvig Larsson   2020-06-05   Build site.
html   f0ea9c1   Ludvig Larsson   2020-06-05   Build site.
html   651f315   Ludvig Larsson   2020-06-05   Build site.
html   a6a0c51   Ludvig Larsson   2020-06-05   Build site.
html   f3c5cb4   Ludvig Larsson   2020-06-05   Build site.
html   f89df7e   Ludvig Larsson   2020-06-05   Build site.
html   1e67a7c   Ludvig Larsson   2020-06-04   Build site.
html   f71b0c0   Ludvig Larsson   2020-06-04   Build site.
html   8a54a4d   Ludvig Larsson   2020-06-04   Build site.
Rmd    3c0ea68   Ludvig Larsson   2020-06-04   update website
html   5e466eb   Ludvig Larsson   2020-06-04   Build site.
Rmd    4819d2b   Ludvig Larsson   2020-06-04   update website
html   377408d   Ludvig Larsson   2020-06-04   Build site.
html   ed54ffb   Ludvig Larsson   2020-06-04   Build site.
html   f14518c   Ludvig Larsson   2020-06-04   Build site.
html   efd885b   Ludvig Larsson   2020-06-04   Build site.
html   4f42429   Ludvig Larsson   2020-06-04   Build site.
Rmd    3660612   Ludvig Larsson   2020-06-04   update website
html   cd4bf7d   jbergenstrahle   2020-01-17   Build site.
Rmd    74b6860   jbergenstrahle   2020-01-17   wflow_publish(“analysis/getting_started.Rmd”)
html   7d3885a   jbergenstrahle   2020-01-12   Build site.
Rmd    2eb9ead   jbergenstrahle   2020-01-12   Updated InputFromTable info
html   fb06450   jbergenstrahle   2020-01-11   Build site.
Rmd    baff98e   jbergenstrahle   2019-12-16   adding
Rmd    5cb8ab1   jbergenstrahle   2019-12-02   update2
Rmd    8f9876e   jbergenstrahle   2019-11-29   Update
Rmd    03c9b7c   jbergenstrahle   2019-11-19   Added JB
html   03c9b7c   jbergenstrahle   2019-11-19   Added JB
html   be8be1d   Ludvig Larsson   2019-10-31   Build site.
html   f42af97   Ludvig Larsson   2019-10-31   Build site.
html   0754921   Ludvig Larsson   2019-10-31   Build site.
html   908fe2c   Ludvig Larsson   2019-10-31   Build site.
html   54787b5   Ludvig Larsson   2019-10-31   Build site.
html   d96da86   Ludvig Larsson   2019-10-31   Build site.
Rmd    e4e84dc   Ludvig Larsson   2019-10-31   Added theme
html   0c79353   Ludvig Larsson   2019-10-31   Build site.
Rmd    6257541   Ludvig Larsson   2019-10-31   Publish the initial files for myproject
html   2241363   Ludvig Larsson   2019-10-31   Build site.
Rmd    8bae4fa   Ludvig Larsson   2019-10-31   Publish the initial files for myproject
html   a53305c   Ludvig Larsson   2019-10-31   Build site.
html   6f61b95   Ludvig Larsson   2019-10-31   Build site.
Rmd    429c12c   Ludvig Larsson   2019-10-31   Publish the initial files for myproject

First you need to load the library into your R session.

10X Visium platform

Input files

10X Visium data output is produced with the spaceranger command line tool from raw fastq files. The output includes a number of files, and the ones that need to be imported into R for STUtility are the following:

  1. “filtered_feature_bc_matrix.h5” or “raw_feature_bc_matrix.h5” : Gene expression matrices in .h5 format containing the raw UMI counts for each spot and each gene
  2. “tissue_positions_list.csv” : contains capture-spot barcode spatial information and pixel coordinates
  3. “tissue_hires_image.png” : H&E image in PNG format
  4. “scalefactors_json.json” : contains the scaling factors that relate the spot coordinates to the H&E images of different resolutions. For example, “tissue_hires_scalef”: 0.063 implies that the pixel coordinates in the position list should be scaled by 0.063 to match the size of the tissue_hires_image.png file.

To use the full range of functions within STUtility, all four files are needed for each sample. However, all data analysis steps that do not involve the H&E image can be performed with only the count file as input. To read in the 10x Visium .h5 files, the package hdf5r needs to be installed (BiocManager::install('hdf5r')).

To follow along this tutorial with a test data set, go to the 10x Dataset repo and download the following two files:

  • Feature / cell matrix HDF5 (filtered)
  • Spatial imaging data (.zip)
    • tissue_hires_image
    • tissue_positions_list
    • scalefactors_json


The .zip file contains the H&E image (in two resolutions; “tissue_lowres_image” and “tissue_hires_image”), the “tissue_positions_list” with pixel coordinates for the original .tif image, and the “scalefactors_json.json” that contains the scale factors used to derive the pixel coordinates for the hires image. There are a few alternatives for handling the scale factors. Either you manually open the .json file, note the scale factor and enter these numbers in a column of the infoTable named “scaleVisium” (see below), or you add a column named “json” with paths to the “scalefactors_json.json” files. A third option is to manually input the values to the function InputFromTable (see ?InputFromTable).
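The first two alternatives can be sketched as follows (paths are placeholders; jsonlite is assumed to be installed for reading the .json file):

```r
# Option 1: read the scale factor yourself and store it in "scaleVisium"
sf <- jsonlite::fromJSON("sample1/spatial/scalefactors_json.json")
infoTable$scaleVisium <- sf$tissue_hires_scalef

# Option 2: let STUtility read the file via a "json" column
infoTable$json <- "sample1/spatial/scalefactors_json.json"
```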

In this vignette we use the data sets from Mouse Brain Serial Section 1 and 2 (Sagittal-Posterior).

Prepare data

The recommended method to read the files into R is via the creation of an “infoTable”; there are three columns that the package looks for: “samples”, “spotfiles” and “imgs”.

These contain the paths to the files. Any number of extra columns can be added to the infoTable data.frame if you want to include them as meta data in your Seurat object, e.g. “gender”, “age”, “slide_id” etc. These columns can be named as you like, but not “samples”, “spotfiles” or “imgs”.

We are now ready to load our samples and create a “Seurat” object using our infoTable data.frame.


Here, we demonstrate the creation of the Seurat object, while also including some filtering (see section “Quality Control” for more information on filtering):

  • Keeping the genes that are found in at least 5 capture spots and have a total count value >= 100.
  • Keeping the capture spots that contain >= 500 total transcripts.
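These two filters map onto the InputFromTable arguments like so (a sketch; infoTable is the data.frame described above):

```r
se <- InputFromTable(
  infotable = infoTable,
  min.gene.count = 100,  # genes: total UMI count >= 100
  min.gene.spots = 5,    # genes: detected in >= 5 capture spots
  min.spot.count = 500,  # spots: >= 500 total transcripts
  platform = "Visium"
)
```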

Note that you have to specify which platform the data comes from. The default platform is 10X Visium, but if you wish to run data from the older ST platforms, there is support for “1k” and “2k” arrays. You can also mix datasets from different platforms by specifying one of “Visium”, “1k” or “2k” in a separate column of the infoTable named “platform”. You just have to make sure that the datasets have gene symbols that follow the same nomenclature.


Once you have created a Seurat object you can process and visualize your data just like in a scRNA-seq experiment and make use of the plethora of functions provided in the Seurat package. There are many vignettes to get started available at the Seurat web site.

For example, if you wish to explore the spatial distribution of various features on the array coordinates you can do this using the ST.FeaturePlot() function. Features include any column stored in the “meta.data” slot, dimensionality reduction objects or gene expression vectors.
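For instance (the gene name below is just an example; use any gene present in your object):

```r
# A meta data column plotted on the array coordinates
ST.FeaturePlot(se, features = "nFeature_RNA")

# Gene expression works the same way
ST.FeaturePlot(se, features = "Nrgn")
```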


Version   Author           Date
4f42429   Ludvig Larsson   2020-06-04
fb06450   jbergenstrahle   2020-01-11
03c9b7c   jbergenstrahle   2019-11-19
d96da86   Ludvig Larsson   2019-10-31


Original ST platform

In general, using STUtility for the old ST platform data follows the same workflow as for the 10X Visium arrays. The only difference is when loading the data into R.

Input files

The original ST workflow produces the following three output files:

  1. Count file : raw counts (UMI filtered) for each gene and capture spot
  2. Spot detector output : file with spatial pixel coordinate information, produced via the Spot Detector webtool
  3. H&E image

Prepare data

The recommended method to read the files into R is via the creation of an “infoTable”, which is a table with at least three columns: “samples”, “spotfiles” and “imgs”.

Test data is provided:

Load data and convert from Ensembl IDs to gene symbols

The provided count matrices use Ensembl IDs (with version id) for the gene symbols. Gene symbols are often preferred for easier reading, and we have therefore included an option to directly convert the gene IDs when creating the Seurat object. The data.frame object required for conversion should have one column called “gene_id” matching the original gene IDs and a column called “gene_name” with the desired symbols. You also need to make sure that these columns have unique values, otherwise the conversion will not work. We have provided such a table that you can use to convert between Ensembl IDs and MGI symbols (Mus musculus gene nomenclature).


We are now ready to load our samples and create a “Seurat” object.

Here, we demonstrate the creation of the Seurat object, while also including some filtering:

  • Keeping the genes that are found in at least 5 capture spots and has a total count value >= 100.
  • Keeping the capture-spots that contains >= 500 total transcripts.

Note that we specify the “2k” array platform and, since the genes are in the columns in this case, we also set transpose=TRUE.
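Putting the pieces together, the call could look roughly like this. Here "annotation" is the conversion data.frame described above (columns "gene_id" and "gene_name"); the annotation argument name is an assumption, so check ?InputFromTable for the exact interface:

```r
se <- InputFromTable(
  infotable = infoTable,
  annotation = annotation,  # Ensembl ID -> gene symbol conversion table
  min.gene.count = 100,
  min.gene.spots = 5,
  min.spot.count = 500,
  platform = "2k",
  transpose = TRUE          # genes are in the columns of the count files
)
```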


Once you have created a Seurat object you can process and visualize your data just like in a scRNA-seq experiment and make use of the plethora of functions provided in the Seurat package. There are many vignettes to get started available at the Seurat web site.

Some of the functionalities provided in the Seurat package are not yet supported by STUtility, such as dataset integration and multimodal analysis. These methods should in principle work if you treat the data like a scRNA-seq experiment, but you will not be able to make use of the image related data or the spatial visualization functions.

For example, if you wish to explore the spatial distribution of various features on the array coordinates you can do this using the ST.FeaturePlot() function.



Navigating and accessing data

We recommend that inexperienced users have a look at the Seurat website and tutorials for basic navigation of the Seurat object, such as getting and setting identities and accessing various method outputs.

However, specific to STUtility, there is another S4 object stored within the Seurat object’s “tools” slot, called “Staffli”. This object contains all the STUtility specific meta data, like pixel coordinates, sample IDs, platform types etc.

You can reach this via:

It can for example be useful to access the spot coordinates and images if you want to write your own plotting functions.
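A sketch of accessing it; GetStaffli() is assumed to be the accessor exported by STUtility, and the meta data slot name is an assumption (inspect the object with str() if it differs):

```r
st.object <- GetStaffli(se)

# Pixel coordinates, sample IDs etc. live in the object's meta data
head(st.object@meta.data)
```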

A work by Joseph Bergenstråhle and Ludvig Larsson
