Table of Contents
- JDK9 ConcurrentHashMap Implementation (Part 1)
- Data Structure
- Private Fields
- Static Fields
- Node Classes
- Constructors
- Hash Computation
- Adding Elements
- Table Initialization
JDK9 ConcurrentHashMap Implementation (Part 1)
Data Structure
JDK 1.7 implements the map with Segment + HashEntry and uses ReentrantLock for locking.
JDK 1.8 abandons the bulky Segment design and instead relies on Node + CAS + synchronized to guarantee thread safety. The resulting structure is similar to HashMap: an array of bins, each holding a linked list or a red-black tree.
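As a small usage sketch of that thread-safety guarantee (the class name and counts below are purely illustrative), several threads update the same key concurrently via merge, which performs an atomic per-key update, and no increments are lost:

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    // merge is an atomic per-key update, so no increment is lost
                    map.merge("counter", 1, Integer::sum);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println(map.get("counter")); // 40000
    }
}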
Private Fields
Static Fields
- private static final int MAXIMUM_CAPACITY = 1 << 30;
  The maximum capacity; it must be a power of two.
- private static final int DEFAULT_CAPACITY = 16;
  The default initial capacity, also a power of two.
- static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
  The largest possible array size.
- private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
  The default concurrency level; only used in writeObject.
- private static final float LOAD_FACTOR = 0.75f;
  The load factor; only used in writeObject, unlike its role in HashMap.
- static final int TREEIFY_THRESHOLD = 8;
  When the number of nodes in a bin reaches 8, the linked list is converted into a red-black tree.
- static final int UNTREEIFY_THRESHOLD = 6;
  When the number of nodes in a bin drops to 6 or fewer, the red-black tree is converted back into a linked list.
- static final int MIN_TREEIFY_CAPACITY = 64;
  Before a bin is converted to a tree, the current table length is checked against MIN_TREEIFY_CAPACITY; if it is smaller, the table is resized instead of treeified (see the sketch after this list).
- private static final int MIN_TRANSFER_STRIDE = 16;
  The minimum number of bins each thread handles per step during a transfer (resize).
- private static final int RESIZE_STAMP_BITS = 16;
  The number of bits used for the generation stamp recorded in sizeCtl during resizing.
- private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
  The maximum number of threads that can help with resizing.
- private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
  The bit shift used to store the resize stamp in sizeCtl.
- Special node hash values:
  static final int MOVED     = -1; // hash for forwarding nodes
  static final int TREEBIN   = -2; // hash for roots of trees
  static final int RESERVED  = -3; // hash for transient reservations
  static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
- static final int NCPU = Runtime.getRuntime().availableProcessors();
  The number of CPU cores on the current machine, used by transfer during resizing.
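A minimal sketch (not JDK source; the method name and messages are illustrative) of how TREEIFY_THRESHOLD and MIN_TREEIFY_CAPACITY work together when a bin grows, as referenced in the list above:

public class TreeifyDecisionDemo {
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    // Mirrors the decision made by putVal/treeifyBin: a long bin only becomes a
    // red-black tree if the table itself is already reasonably large.
    static String onBinGrown(int binCount, int tableLength) {
        if (binCount < TREEIFY_THRESHOLD)
            return "keep the linked list";
        if (tableLength < MIN_TREEIFY_CAPACITY)
            return "resize the table to " + (tableLength << 1) + " instead of treeifying";
        return "convert the bin into a TreeBin (red-black tree)";
    }

    public static void main(String[] args) {
        System.out.println(onBinGrown(9, 16)); // small table: resize to 32
        System.out.println(onBinGrown(9, 64)); // large enough: treeify
    }
}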
Node Classes
- Node: the class used to build table[]. It is a read-only entry (setValue is not supported) and forms a singly linked list.
static class Node<K,V> implements Map.Entry<K,V> {
    // hash of the node
    final int hash;
    // key
    final K key;
    // value
    volatile V val;
    // pointer to the next node, so this is a singly linked list
    volatile Node<K,V> next;

    Node(int hash, K key, V val) {
        this.hash = hash;
        this.key = key;
        this.val = val;
    }

    Node(int hash, K key, V val, Node<K,V> next) {
        this(hash, key, val);
        this.next = next;
    }

    public final K getKey()     { return key; }
    public final V getValue()   { return val; }
    public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
    public final String toString() {
        return Helpers.mapEntryToString(key, val);
    }
    public final V setValue(V value) {
        throw new UnsupportedOperationException();
    }

    public final boolean equals(Object o) {
        Object k, v, u; Map.Entry<?,?> e;
        return ((o instanceof Map.Entry) &&
                (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                (v = e.getValue()) != null &&
                (k == key || k.equals(key)) &&
                (v == (u = val) || v.equals(u)));
    }

    /**
     * Virtualized support for map.get(); overridden in subclasses.
     */
    Node<K,V> find(int h, Object k) {
        Node<K,V> e = this;
        if (k != null) {
            do {
                K ek;
                if (e.hash == h &&
                    ((ek = e.key) == k || (ek != null && k.equals(ek))))
                    return e;
            } while ((e = e.next) != null);
        }
        return null;
    }
}
- TreeBin: the red-black tree structure that sits in a bin.
- TreeNode: a red-black tree node.
static final class TreeNode<K,V> extends Node<K,V> {
    // parent node
    TreeNode<K,V> parent;  // red-black tree links
    // left child
    TreeNode<K,V> left;
    // right child
    TreeNode<K,V> right;
    TreeNode<K,V> prev;    // needed to unlink next upon deletion
    // color of the node
    boolean red;

    TreeNode(int hash, K key, V val, Node<K,V> next,
             TreeNode<K,V> parent) {
        super(hash, key, val, next);
        this.parent = parent;
    }

    Node<K,V> find(int h, Object k) {
        return findTreeNode(h, k, null);
    }

    // Searches for the target node, using this tree node as the root.
    final TreeNode<K,V> findTreeNode(int h, Object k, Class<?> kc) {
        // body omitted in this part
    }
}
- ForwardingNode: a transient node placed in bins during resizing.
Constructors
- No-arg constructor
public ConcurrentHashMap() {}
- Constructor with an initial capacity
public ConcurrentHashMap(int initialCapacity) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}
Here tableSizeFor rounds the requested capacity up to a power of two.
private static final int tableSizeFor(int c) {
    int n = c - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
Input capacity: 3, output: 4.
Input capacity: 11, output: 16.
Input capacity: 17, output: 32.
It may not be obvious why the constructor passes initialCapacity + (initialCapacity >>> 1) + 1 rather than initialCapacity itself: the adjustment is roughly initialCapacity / 0.75 + 1, so that initialCapacity mappings fit without triggering an immediate resize. The result is not always the same as using initialCapacity directly; for example, initialCapacity = 16 gives a table of 32 with the adjustment but only 16 without it.
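A small sketch to illustrate that difference. Since tableSizeFor is private, its rounding logic is copied into a standalone helper here (the class and method names are illustrative, and the MAXIMUM_CAPACITY clamp is omitted):

public class TableSizeDemo {
    // Standalone copy of the power-of-two rounding performed by tableSizeFor.
    static int roundUpToPowerOfTwo(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        int initialCapacity = 16;
        // What the constructor computes: room for 16 mappings at a 0.75 load factor.
        int withAdjustment = roundUpToPowerOfTwo(initialCapacity + (initialCapacity >>> 1) + 1);
        // What rounding the raw capacity would give.
        int withoutAdjustment = roundUpToPowerOfTwo(initialCapacity);
        System.out.println(withAdjustment + " vs " + withoutAdjustment); // 32 vs 16
    }
}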
- The load factor and concurrencyLevel can also be specified.
In the newer versions these two parameters are only used during initialization and nowhere else; note the difference from HashMap.
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
        MAXIMUM_CAPACITY : tableSizeFor((int)size);
    this.sizeCtl = cap;
}
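For example, new ConcurrentHashMap<>(16, 0.75f, 16) computes size = (long)(1.0 + 16 / 0.75f) = 22, and tableSizeFor(22) rounds that up to 32, so sizeCtl starts at 32.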
- Like most collections, it can be initialized with the elements of another map.
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    this.sizeCtl = DEFAULT_CAPACITY;
    putAll(m);
}
Hash Computation
As shown below, the computed hash is always non-negative, because the sign bit is masked off by HASH_BITS.
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}
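A short sketch to make that visible. It copies the constant and logic shown above into a standalone class (the class name is illustrative) and also shows how the hash is turned into a bin index:

public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        System.out.println(spread(-1));               // 2147418112: the sign bit is masked off
        System.out.println(spread("key".hashCode())); // always >= 0
        int n = 16;                                   // table length, a power of two
        System.out.println(spread("key".hashCode()) & (n - 1)); // bin index in [0, n)
    }
}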
Adding Elements
1. Neither the key nor the value may be null.
2. onlyIfAbsent = true: if the key already exists, the existing value is not replaced; false: the value is always written.
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    // compute the hash
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh; K fk; V fv;
        // if the table is empty, no element has been inserted yet
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // the target bin is empty
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // try to insert with CAS; only a successful CAS breaks the loop
            if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
                break;                   // no lock when adding to empty bin
        }
        // the bin head is a ForwardingNode: a resize is in progress,
        // so the current thread helps with the transfer
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else if (onlyIfAbsent       // onlyIfAbsent: never overwrite an existing key
                 && fh == hash      // check first node without acquiring the lock
                 && ((fk = f.key) == key || (fk != null && key.equals(fk)))
                 && (fv = f.val) != null)
            return fv;
        else {
            V oldVal = null;
            // lock the bin head and insert into the linked list or the tree
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key, value);
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                    else if (f instanceof ReservationNode)
                        throw new IllegalStateException("Recursive update");
                }
            }
            if (binCount != 0) {
                // treeify the bin if it has grown too long
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    addCount(1L, binCount);
    return null;
}
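Seen from the public API, put calls putVal with onlyIfAbsent = false, while putIfAbsent calls it with true. A short usage sketch:

import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);               // putVal(..., false): always writes, returns null (no old value)
        Integer old = map.put("a", 2); // overwrites and returns the previous value 1
        map.putIfAbsent("a", 3);       // putVal(..., true): key exists, so 2 is kept
        System.out.println(old + " " + map.get("a")); // 1 2

        // Both the key and the value must be non-null:
        // map.put(null, 1);    -> NullPointerException
        // map.put("b", null);  -> NullPointerException
    }
}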
Table Initialization
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        // sizeCtl < 0 means another thread is already initializing (or resizing)
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // CAS sizeCtl to -1 to claim the right to initialize the table
        else if (U.compareAndSetInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    // next resize threshold = n - n/4 = 0.75 * n
                    sc = n - (n >>> 2);
                }
            } finally {
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
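To make the sizeCtl handshake easier to see, here is a hedged, standalone sketch of the same pattern; LazyTable and its fields are illustrative only, and AtomicInteger stands in for the Unsafe-based CAS on sizeCtl:

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the initTable() pattern: many threads may race to build
// the array, but only the thread that wins the CAS allocates it; losers yield.
class LazyTable {
    private volatile Object[] table;
    // > 0: the desired initial capacity; -1: some thread is currently initializing
    private final AtomicInteger sizeCtl = new AtomicInteger(16);

    Object[] initTable() {
        Object[] tab;
        while ((tab = table) == null) {
            int sc = sizeCtl.get();
            if (sc < 0) {
                Thread.yield();                      // lost the initialization race; just spin
            } else if (sizeCtl.compareAndSet(sc, -1)) {
                try {
                    if ((tab = table) == null) {     // re-check after winning the CAS
                        tab = new Object[sc];
                        table = tab;
                        sc = sc - (sc >>> 2);        // next threshold = 0.75 * capacity
                    }
                } finally {
                    sizeCtl.set(sc);                 // publish the threshold (or restore sc)
                }
                break;
            }
        }
        return tab;
    }
}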
To be continued...