Is Go's map really that random? A study of map iteration

2024-01-26 04:52


When I needed to pick a random element from a map, my first idea was to use map iteration to return one, but that approach did not pass the test.

So is map iteration not as random as it seems?
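For reference, here is a minimal sketch of the kind of approach in question; the helper pickOne and the tiny test map are illustrative, not taken from the original test. The idea is to return the first element produced by a range loop and hope that the runtime's iteration randomness makes it a uniform choice.

// A minimal sketch (not from the original post) of the naive approach:
// treat the first element produced by range as a random pick.
package main

import "fmt"

// pickOne returns the first key produced by ranging over m.
// The hope is that Go's randomized iteration order makes this a
// uniform random choice; the rest of the article examines why
// that hope does not hold.
func pickOne(m map[string]int) string {
    for k := range m {
        return k
    }
    return ""
}

func main() {
    m := map[string]int{"a": 1, "b": 2, "c": 3}
    counts := map[string]int{}
    for i := 0; i < 10000; i++ {
        counts[pickOne(m)]++
    }
    // with a small map like this, the counts are typically far from uniform
    fmt.Println(counts)
}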

The code below is based on go1.18.

hiter is the structure used for map iteration. It records the element currently being visited, the starting position, and the other state needed to drive the whole traversal.

// A hash iteration structure.
// If you modify hiter, also change cmd/compile/internal/reflectdata/reflect.go
// and reflect/value.go to match the layout of this structure.
type hiter struct {
    // address of the next key to be returned
    key         unsafe.Pointer // Must be in first position.  Write nil to indicate iteration end (see cmd/compile/internal/walk/range.go).
    // address of the next value to be returned
    elem        unsafe.Pointer // Must be in second position (see cmd/compile/internal/walk/range.go).
    // the map's type
    t           *maptype
    // the map header
    h           *hmap
    // the buckets pointer captured at initialization time
    buckets     unsafe.Pointer // bucket ptr at hash_iter initialization time
    // the bmap currently being iterated
    bptr        *bmap          // current bucket
    overflow    *[]*bmap       // keeps overflow buckets of hmap.buckets alive
    oldoverflow *[]*bmap       // keeps overflow buckets of hmap.oldbuckets alive
    // the bucket where iteration started
    startBucket uintptr        // bucket iteration started at
    // intra-bucket slot offset to start from
    offset      uint8          // intra-bucket offset to start from during iteration (should be big enough to hold bucketCnt-1)
    // whether we have already wrapped around past the end of the bucket array
    wrapped     bool           // already wrapped around from end of bucket array to beginning
    B           uint8
    // the slot currently being iterated
    i           uint8
    // the bucket currently being iterated
    bucket      uintptr
    // used for checks while the map is growing
    checkBucket uintptr
}

mapiterinit is the function that starts an iteration; its main job is to decide where the traversal begins.

// mapiterinit initializes the hiter struct used for ranging over maps.
// The hiter struct pointed to by 'it' is allocated on the stack
// by the compilers order pass or on the heap by reflect_mapiterinit.
// Both need to have zeroed hiter since the struct contains pointers.
func mapiterinit(t *maptype, h *hmap, it *hiter) {
    // if the map is empty, skip the whole iteration
    it.t = t
    if h == nil || h.count == 0 {
        return
    }

    if unsafe.Sizeof(hiter{})/goarch.PtrSize != 12 {
        throw("hash_iter size incorrect") // see cmd/compile/internal/reflectdata/reflect.go
    }
    it.h = h

    // grab snapshot of bucket state
    // the iterator takes a snapshot of the map's bucket information
    it.B = h.B
    it.buckets = h.buckets
    if t.bucket.ptrdata == 0 {
        // Allocate the current slice and remember pointers to both current and old.
        // This preserves all relevant overflow buckets alive even if
        // the table grows and/or overflow buckets are added to the table
        // while we are iterating.
        h.createOverflow()
        it.overflow = h.extra.overflow
        it.oldoverflow = h.extra.oldoverflow
    }

    // decide where to start
    // the starting bucket is taken from the low B bits of the random number;
    // the intra-bucket offset comes from the bits above them; note that the
    // candidate starting buckets clearly do not include overflow buckets
    r := uintptr(fastrand())
    if h.B > 31-bucketCntBits {
        r += uintptr(fastrand()) << 31
    }
    it.startBucket = r & bucketMask(h.B)
    it.offset = uint8(r >> h.B & (bucketCnt - 1))

    // iterator state
    // set the iterator's current bucket to the starting bucket
    it.bucket = it.startBucket

    // Remember we have an iterator.
    // Can run concurrently with another mapiterinit().
    // mark that an iterator may be using buckets and oldbuckets
    if old := h.flags; old&(iterator|oldIterator) != iterator|oldIterator {
        atomic.Or8(&h.flags, iterator|oldIterator)
    }

    mapiternext(it)
}
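To make the start-position math concrete, here is a small standalone sketch. bucketCnt, bucketCntBits and bucketMask are redefined locally to mirror the runtime's constants and helper, so it runs outside the runtime package.

package main

import (
    "fmt"
    "math/rand"
)

const (
    bucketCntBits = 3
    bucketCnt     = 1 << bucketCntBits // 8 slots per bucket
)

// bucketMask returns 2^B - 1, i.e. a mask selecting the low B bits.
func bucketMask(B uint8) uintptr { return (1 << B) - 1 }

func main() {
    B := uint8(3) // the map has 2^3 = 8 regular buckets
    r := uintptr(rand.Uint32())

    // the low B bits pick the starting bucket: only regular buckets
    // 0..2^B-1 are candidates, an overflow bucket can never be the start
    startBucket := r & bucketMask(B)

    // the next bucketCntBits bits pick the starting slot inside each bucket
    offset := uint8(r>>B) & (bucketCnt - 1)

    fmt.Printf("r=%#x  startBucket=%d  offset=%d\n", r, startBucket, offset)
}

Because startBucket is drawn only from the 2^B regular buckets, an element sitting in an overflow bucket can never be the starting point of a traversal.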

From the code above we can already see that the "randomly" chosen starting element is not truly random: overflow buckets are not included in the range of the random choice.

For the traversal itself, a few questions remain:

  • If the map is in the middle of growing, how does iteration proceed?
  • How is it guaranteed that no element is missed?
  • How are repeated visits prevented?

func mapiternext(it *hiter) {
    h := it.h
    // if the writing flag is set, report a concurrent iteration and write error
    if h.flags&hashWriting != 0 {
        throw("concurrent map iteration and map write")
    }
    t := it.t
    bucket := it.bucket
    b := it.bptr
    i := it.i
    checkBucket := it.checkBucket

next:
    if b == nil {
        // if we meet the starting bucket again and have already wrapped around,
        // iteration is finished
        if bucket == it.startBucket && it.wrapped {
            // end of iteration
            it.key = nil
            it.elem = nil
            return
        }
        // if the map is growing and the old bucket has not been evacuated, use the old bucket
        if h.growing() && it.B == h.B {
            // Iterator was started in the middle of a grow, and the grow isn't done yet.
            // If the bucket we're looking at hasn't been filled in yet (i.e. the old
            // bucket hasn't been evacuated) then we need to iterate through the old
            // bucket and only return the ones that will be migrated to this bucket.
            oldbucket := bucket & it.h.oldbucketmask()
            b = (*bmap)(add(h.oldbuckets, oldbucket*uintptr(t.bucketsize)))
            // the bucket has not been evacuated, so remember it:
            // checkBucket is the current bucket when the map is growing and this
            // bucket has not been evacuated; otherwise it is noCheck
            if !evacuated(b) {
                checkBucket = bucket
            } else {
                b = (*bmap)(add(it.buckets, bucket*uintptr(t.bucketsize)))
                checkBucket = noCheck
            }
        } else {
            // the map is not growing, or this bucket has already been evacuated: use the new bucket
            b = (*bmap)(add(it.buckets, bucket*uintptr(t.bucketsize)))
            checkBucket = noCheck
        }
        // advance to the next bucket
        bucket++
        // after the last bucket, wrap around to bucket 0 and keep going
        if bucket == bucketShift(it.B) {
            bucket = 0
            it.wrapped = true
        }
        i = 0
    }
    // iterate over the slots in this bucket
    for ; i < bucketCnt; i++ {
        // start from slot offset
        offi := (i + it.offset) & (bucketCnt - 1)
        // skip empty slots
        if isEmpty(b.tophash[offi]) || b.tophash[offi] == evacuatedEmpty {
            // TODO: emptyRest is hard to use here, as we start iterating
            // in the middle of a bucket. It's feasible, just tricky.
            continue
        }
        // fetch this slot's key and value
        k := add(unsafe.Pointer(b), dataOffset+uintptr(offi)*uintptr(t.keysize))
        if t.indirectkey() {
            k = *((*unsafe.Pointer)(k))
        }
        e := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+uintptr(offi)*uintptr(t.elemsize))
        // during a grow, filter out old-bucket entries that do not belong to the new bucket we are on
        if checkBucket != noCheck && !h.sameSizeGrow() {
            // Special case: iterator was started during a grow to a larger size
            // and the grow is not done yet. We're working on a bucket whose
            // oldbucket has not been evacuated yet. Or at least, it wasn't
            // evacuated when we started the bucket. So we're iterating
            // through the oldbucket, skipping any keys that will go
            // to the other new bucket (each oldbucket expands to two
            // buckets during a grow).
            // if the key is valid (k == k)
            if t.reflexivekey() || t.key.equal(k, k) {
                // If the item in the oldbucket is not destined for
                // the current new bucket in the iteration, skip it.
                hash := t.hasher(k, uintptr(h.hash0))
                if hash&bucketMask(it.B) != checkBucket {
                    continue
                }
            } else {
                // for keys where k != k (e.g. NaN), decide whether the entry
                // belongs to this new bucket; if not, skip it
                // Hash isn't repeatable if k != k (NaNs).  We need a
                // repeatable and randomish choice of which direction
                // to send NaNs during evacuation. We'll use the low
                // bit of tophash to decide which way NaNs go.
                // NOTE: this case is why we need two evacuate tophash
                // values, evacuatedX and evacuatedY, that differ in
                // their low bit.
                if checkBucket>>(it.B-1) != uintptr(b.tophash[offi]&1) {
                    continue
                }
            }
        }
        // if this entry has not been evacuated, or the key hashes differently
        // every time (k != k), hand the key and value to the iterator directly
        if (b.tophash[offi] != evacuatedX && b.tophash[offi] != evacuatedY) ||
            !(t.reflexivekey() || t.key.equal(k, k)) {
            // This is the golden data, we can return it.
            // OR
            // key!=key, so the entry can't be deleted or updated, so we can just return it.
            // That's lucky for us because when key!=key we can't look it up successfully.
            it.key = k
            if t.indirectelem() {
                e = *((*unsafe.Pointer)(e))
            }
            it.elem = e
        } else {
            // the data has been evacuated: handle the case where the key has been
            // deleted, updated, or deleted and reinserted; locate the current data
            // and hand that key and value to the iterator
            // The hash table has grown since the iterator was started.
            // The golden data for this key is now somewhere else.
            // Check the current hash table for the data.
            // This code handles the case where the key
            // has been deleted, updated, or deleted and reinserted.
            // NOTE: we need to regrab the key as it has potentially been
            // updated to an equal() but not identical key (e.g. +0.0 vs -0.0).
            rk, re := mapaccessK(t, h, k)
            if rk == nil {
                continue // key has been deleted
            }
            it.key = rk
            it.elem = re
        }
        // record the iterator's progress
        it.bucket = bucket
        if it.bptr != b { // avoid unnecessary write barrier; see issue 14921
            it.bptr = b
        }
        it.i = i + 1
        it.checkBucket = checkBucket
        return
    }
    // move on to the overflow bucket
    b = b.overflow(t)
    i = 0
    goto next
}
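The visiting order itself is easier to see in isolation. The following sketch is a pure simulation (no runtime types, overflow chains ignored) of the order in which mapiternext walks regular buckets and slots for a map that is not growing:

package main

import "fmt"

const bucketCnt = 8

// visitOrder returns, for a non-growing map with 2^B regular buckets,
// the (bucket, slot) pairs in the order mapiternext would touch them:
// buckets from startBucket upward with wrap-around to 0, and inside
// each bucket slots from offset upward modulo bucketCnt.
func visitOrder(B uint8, startBucket uintptr, offset uint8) [][2]int {
    nbuckets := uintptr(1) << B
    var order [][2]int
    for n := uintptr(0); n < nbuckets; n++ {
        bucket := (startBucket + n) % nbuckets
        for i := uint8(0); i < bucketCnt; i++ {
            slot := (i + offset) & (bucketCnt - 1)
            order = append(order, [2]int{int(bucket), int(slot)})
        }
    }
    return order
}

func main() {
    // e.g. B=2 (4 buckets), starting at bucket 2 with slot offset 5
    for _, p := range visitOrder(2, 2, 5) {
        fmt.Printf("bucket %d slot %d\n", p[0], p[1])
    }
}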

From the code above, we can conclude:

  • When iterating while the map is growing:

    • if the bucket currently being visited has already been evacuated, the new bucket is used;

    • if the data still lives in the old bucket, the old bucket is used.

      Note, however, that entries of the old bucket that do not belong to the current new bucket must be filtered out: during a grow each old bucket is split across two new buckets, and the new bucket the iterator is pointing at corresponds to only one of them.

  • bucket advances one at a time from the starting bucket, wrapping around at the end, which guarantees that every regular bucket is visited. Each bucket's overflow chain is also walked in full, until the overflow bucket is nil.

  • Repeated visits are prevented by recording the wrapped flag together with the starting bucket, and, during a grow, by filtering out the entries that do not belong to the current new bucket.
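Coming back to the opening question: since the first element of a range loop is biased by bucket and slot layout rather than uniform over the elements, a common workaround is to collect the keys into a slice and pick an index with math/rand. The sketch below is illustrative; the helper name randomKey is not from any particular library.

package main

import (
    "fmt"
    "math/rand"
)

// randomKey picks a uniformly random key by materializing the key set
// into a slice first; this costs O(n) per call, but the choice is uniform,
// unlike taking the first element of a range loop.
func randomKey(m map[string]int) (string, bool) {
    if len(m) == 0 {
        return "", false
    }
    keys := make([]string, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    return keys[rand.Intn(len(keys))], true
}

func main() {
    m := map[string]int{"a": 1, "b": 2, "c": 3}
    if k, ok := randomKey(m); ok {
        fmt.Println("picked", k)
    }
}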

