Seurat Dataset Processing Workflow

2023-10-08 07:40
Tags: data, workflow, processing, seurat

This post walks through a Seurat dataset processing workflow; I hope it serves as a useful reference for developers working on similar problems.

Multiple datasets

The pancreas dataset

suppressMessages(require(Seurat))
suppressMessages(require(ggplot2))
suppressMessages(require(cowplot))
suppressMessages(require(scater))
suppressMessages(require(scran))
suppressMessages(require(BiocParallel))
suppressMessages(require(BiocNeighbors))
setwd("/Users/xiaokangyu/Desktop/程序学习总结/库学习/Seurat/data/pancreas_v3_file")
pancreas.data <- readRDS(file = "pancreas_expression_matrix.rds")
metadata <- readRDS(file = "pancreas_metadata.rds")
pancreas <- CreateSeuratObject(pancreas.data, meta.data = metadata)
# Note that this case differs from the others: here the annotation comes in as a separate metadata object,
# whereas the earlier datasets passed only the count matrix to CreateSeuratObject without any meta.data.
# Normalize the data (cell filtering is omitted here; it has little impact on this example)
# Normalize and find variable features
pancreas <- NormalizeData(pancreas, verbose = FALSE)
pancreas <- FindVariableFeatures(pancreas, selection.method = "vst", nfeatures = 2000, verbose = FALSE)
# Run the standard workflow for visualization and clustering
pancreas <- ScaleData(pancreas, verbose = FALSE)
pancreas <- RunPCA(pancreas, npcs = 30, verbose = FALSE)
pancreas <- RunUMAP(pancreas, reduction = "pca", dims = 1:30)
p1 <- DimPlot(pancreas, reduction = "umap", group.by = "tech")  # plot colored by batch (tech)
p2 <- DimPlot(pancreas, reduction = "umap", group.by = "celltype", label = TRUE, repel = TRUE) + NoLegend()  # plot colored by celltype
print(p1+p2)
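
As noted in the comments above, meta.data was passed directly to CreateSeuratObject here. If you only have the count matrix at first, the annotation can also be attached afterwards; a minimal sketch (assuming, as the CreateSeuratObject call above already does, that the rownames of metadata match the cell barcodes):

# Alternative (illustrative): create the object from counts only, then attach the labels
pancreas_alt <- CreateSeuratObject(pancreas.data)
pancreas_alt <- AddMetaData(pancreas_alt, metadata = metadata)
head(pancreas_alt@meta.data)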

The resulting UMAP plots (p1 + p2) look like this:
[Figure: UMAP of the merged pancreas data, colored by tech (batch) and by celltype]

Single dataset

rm(list=ls())
setwd("/Users/xiaokangyu/Desktop/程序学习总结/库学习/Seurat/")
library(Seurat)
library(dplyr)
library(aricode)
## load data (./Seurat/Koh.Rdata)
## Unnormalized data such as raw counts or TPMs
dataname = "./data/Koh.Rdata"
load(dataname)
colnames(label) = "celltype"
## Create the Seurat object
pbmc_small <- CreateSeuratObject(data, meta.data = label)
## Normalize the count data present in a given assay
pbmc_small <- NormalizeData(object = pbmc_small)
## Identify features that are outliers on a 'mean variability plot'
pbmc_small <- FindVariableFeatures(object = pbmc_small)
## Scale and center features in the dataset; variables in vars.to.regress are regressed out of each feature first
pbmc_small <- ScaleData(object = pbmc_small)
## Run PCA on the variable features (features = VariableFeatures() replaces the older pc.genes argument in Seurat v3)
pbmc_small <- RunPCA(object = pbmc_small, features = VariableFeatures(object = pbmc_small))
# RunPCA and RunUMAP sit at the same level: each adds a dimensionality reduction to the object
pbmc_small <- RunUMAP(pbmc_small, reduction = "pca", dims = 1:30)
## JackStraw permutes a subset of the data, recomputes PCA scores for the 'random' genes, and compares them with the observed scores to assign a p-value to each gene's association with each principal component
pbmc_small <- JackStraw(object = pbmc_small)
## Construct a Shared Nearest Neighbor (SNN) graph for the dataset
pbmc_small <- FindNeighbors(pbmc_small)
## Clustering
res = FindClusters(object = pbmc_small)
#res$seurat_clusters
# Note: the object returned by FindClusters is assigned to res here.
# pbmc_small is still the unclustered object, so it has no $seurat_clusters column,
# while res keeps everything pbmc_small had plus the clustering results.
p1 <- DimPlot(res, reduction = "umap", label = T)  # plot colored by cluster
p2 <- DimPlot(res, reduction = "umap", group.by = "celltype")  # plot colored by celltype
print(p1 + p2)  # plots can be combined with + to display them side by side
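
The JackStraw call above only computes the permutation p-values; they are usually scored and visualized before deciding how many PCs to keep. A minimal sketch of that follow-up (the dims range is an arbitrary choice for illustration):

# Score the JackStraw permutations and inspect significant PCs (illustrative follow-up)
pbmc_small <- ScoreJackStraw(pbmc_small, dims = 1:20)
JackStrawPlot(pbmc_small, dims = 1:20)
ElbowPlot(pbmc_small)  # an alternative heuristic for choosing the number of PCs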

[Figure: UMAP of the Koh data, colored by Seurat cluster and by celltype]
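
Since aricode is loaded at the top of this script, the agreement between the Seurat clusters and the celltype annotation can also be quantified rather than only judged from the UMAP; a small sketch:

# Compare the clustering with the celltype labels using ARI / NMI from aricode
ari <- ARI(res$seurat_clusters, res$celltype)
nmi <- NMI(res$seurat_clusters, res$celltype)
print(c(ARI = ari, NMI = nmi))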

Showing labels

To get the same effect as scanpy's sc.pl.umap(adata, color=["celltype"], legend_loc="on data"),
set the label-related parameters of DimPlot in Seurat (here data_seurat is a Seurat object that already has a UMAP computed):

pp=DimPlot(data_seurat, reduction = "umap", group.by = "celltype", label.size = 5,label=T)+ggtitle("Integrated Celltype")+NoLegend()
print(pp)

The result is shown below.
[Figure: UMAP colored by celltype, with the labels drawn on the plot and the legend hidden]

Integration

rm(list=ls())
options(future.globals.maxSize = 8000 * 1024^2)
suppressMessages(require(Seurat))
suppressMessages(require(ggplot2))
suppressMessages(require(cowplot))
#suppressMessages(require(scater))
#suppressMessages(require(scran))
#suppressMessages(require(BiocParallel))
#suppressMessages(require(BiocNeighbors))
setwd("/home/zhangjingxiao/yxk/Seurat3")
start.time <- Sys.time()
pancreas.data <- readRDS(file = "/DATA2/zhangjingxiao/yxk/dataset/pancreas_v3/pancreas_expression_matrix.rds")
metadata <- readRDS(file = "/DATA2/zhangjingxiao/yxk/dataset/pancreas_v3/pancreas_metadata.rds")
#pancreas <- CreateSeuratObject(pancreas.data, meta.data = metadata)
# Note that this case differs from the others: here the annotation comes in as a separate metadata object,
# whereas the earlier datasets passed only the count matrix without meta.data.
# Normalize the data (cell filtering is omitted here; it has little impact)
# Normalize and find variable features
# pancreas <- NormalizeData(pancreas, verbose = FALSE)
# pancreas <- FindVariableFeatures(pancreas, selection.method = "vst", nfeatures = 2000, verbose = FALSE)
# 
# # Run the standard workflow for visualization and clustering
# pancreas <- ScaleData(pancreas, verbose = FALSE)
# pancreas <- RunPCA(pancreas, npcs = 30, verbose = FALSE)
# pancreas <- RunUMAP(pancreas, reduction = "pca", dims = 1:30)
# 
# p1 <- DimPlot(pancreas, reduction = "umap", group.by = "tech")  # plot colored by batch
# p2 <- DimPlot(pancreas, reduction = "umap", group.by = "celltype", label = TRUE, repel = TRUE) +
#   NoLegend()  # plot colored by celltype
# print(p1+p2)
# # ggsave("vis_pancras.png", plot = p1 + p2)
print("===================Creating SeuratObject==========")
data_seurat= CreateSeuratObject(pancreas.data, meta.data = metadata)
print("===================Split SeuratOject============")
scRNAlist <- SplitObject(data_seurat, split.by = "tech")
print("===================Normalize SeuratObject=======")
scRNAlist <- lapply(scRNAlist, FUN = function(x) NormalizeData(x,verbose=F))
print("===================Find HVG=====================")
scRNAlist <- lapply(scRNAlist, FUN = function(x) FindVariableFeatures(x,verbose=F))
print("preprecessing done")print(scRNAlist)
data.anchors <- FindIntegrationAnchors(object.list =scRNAlist, dims = 1:20,verbose = F)
data.combined <- IntegrateData(anchorset = data.anchors, dims = 1:20, verbose = F)
DefaultAssay(data.combined) <- "integrated"
################### scale data  =====================
data.combined <- ScaleData(data.combined, verbose = FALSE)
data.combined <- RunPCA(data.combined, npcs = 30, verbose = FALSE)
# t-SNE and Clustering
data.combined <- RunUMAP(data.combined, reduction = "pca", dims = 1:20, verbose = F)
p1 = DimPlot(data.combined, reduction = "umap", group.by = "tech", label.size = 10) + ggtitle("Integrated Batch")
p2=DimPlot(data.combined, reduction = "umap", group.by = "celltype", label.size = 10)+ggtitle("Integrated Celltype")
p= p1 + p2 
print(p1 + p2)
# adata <- CreateSeuratObject(pancreas.data, meta.data = metadata)
# message('Preprocessing...')
# adata.list <- SplitObject(adata, split.by = "tech")
# 
# print(dim(adata)[2])
# if(dim(adata)[2] < 50000){
#   for (i in 1:length(adata.list)) {
#     adata.list[[i]] <- NormalizeData(adata.list[[i]], verbose = FALSE)
#     adata.list[[i]] <- FindVariableFeatures(adata.list[[i]], selection.method = "vst", nfeatures = args$n_top_features, verbose = FALSE)
#   }
#   message('FindIntegrationAnchors...')
#   adata.anchors <- FindIntegrationAnchors(object.list = adata.list, dims = 1:30,verbose =FALSE,k.filter = 30)
#   #     adata.anchors <- FindIntegrationAnchors(object.list = adata.list, dims = 1:30,verbose =FALSE,k.filter = 100)
#   
#   message('IntegrateData...')
#   adata.integrated <- IntegrateData(anchorset = adata.anchors, dims = 1:30, verbose = FALSE)
# }else{
#   adata.list <- future_lapply(X = adata.list, FUN = function(x) {
#     x <- NormalizeData(x, verbose = FALSE)
#     x <- FindVariableFeatures(x, nfeatures = args$n_top_features, verbose = FALSE)
#   })
#   
#   features <- SelectIntegrationFeatures(object.list = adata.list)
#   adata.list <- future_lapply(X = adata.list, FUN = function(x) {
#     x <- ScaleData(x, features = features, verbose = FALSE)
#     x <- RunPCA(x, features = features, verbose = FALSE)
#   })
#   message('FindIntegrationAnchors...')
#   adata.anchors <- FindIntegrationAnchors(object.list = adata.list, dims = 1:30, verbose =FALSE, reduction = 'rpca', reference = c(1, 2))
#   message('IntegrateData...')
#   adata.integrated <- IntegrateData(anchorset = adata.anchors, dims = 1:30, verbose = FALSE)
# }
# 
# if (!file.exists(args$output_path)){
#   dir.create(file.path(args$output_path),recursive = TRUE)
# }
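
Returning to the active workflow above: after IntegrateData, the "integrated" assay is what is typically used for clustering as well as for the UMAP. A minimal sketch of that downstream step (the resolution value is an arbitrary choice for illustration):

# Cluster on the integrated PCA and plot the resulting clusters (illustrative downstream step)
data.combined <- FindNeighbors(data.combined, reduction = "pca", dims = 1:20)
data.combined <- FindClusters(data.combined, resolution = 0.5)
p3 <- DimPlot(data.combined, reduction = "umap", group.by = "seurat_clusters", label = TRUE) + ggtitle("Integrated Clusters")
print(p3)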

This wraps up the introduction to the Seurat dataset processing workflow; I hope the material above is helpful.


