This article is a detailed walkthrough of the per-zone watermark values (watermark) and reserved-memory values (lowmem_reserve) in [kernel memory] [arm64].
Table of contents
- 1 Watermark overview
- 2 Watermark-related structures
- 3 Watermark initialization
- 3.1 Meaning of managed_pages, spanned_pages, and present_pages
- 3.2 What is min_free_kbytes
- 3.3 Initialization of the min, low, and high watermark levels
- 3.3.1 Watermark initialization walkthrough
- 3.3.2 Watermark initialization kernel code
- The init_per_zone_wmark_min function
- The nr_free_buffer_pages function
- The setup_per_zone_wmarks function
- The refresh_zone_stat_thresholds function
- The setup_per_zone_lowmem_reserve function
- What is a zone's lowmem_reserve?
- How lowmem_reserve[MAX_NR_ZONES] in struct zone is calculated
- Viewing and controlling lowmem_reserve from user space
- setup_per_zone_lowmem_reserve kernel source
- The calculate_totalreserve_pages function
- 4 Tuning watermark values from user space
- 4.1 /proc/sys/vm/min_free_kbytes
- 4.2 /proc/sys/vm/watermark_scale_factor
- 4.2.1 How watermark_scale_factor adjusts the watermarks
- 4.2.2 Why watermark_scale_factor was introduced
- 4.2.3 watermark_scale_factor use cases
- Scenario 1: latency caused by heavy page cache use in storage services
- Scenario 2: interference in mixed online/offline deployments
- 5 Watermark checks
- __zone_watermark_ok
- zone_watermark_ok
- zone_watermark_fast
- zone_watermark_ok_safe
1 Watermark overview
Every zone of physical memory in Linux has its own three independent watermark levels:
- Minimum watermark (WMARK_MIN): if the zone's free page count falls below this level, the zone is critically short of memory
- Low watermark (WMARK_LOW): free pages below this level mean the zone is mildly short of memory. By default this is 125% of WMARK_MIN
- High watermark (WMARK_HIGH): free pages above this level mean the zone has ample memory. By default this is 150% of WMARK_MIN
During allocation, if the allocator (e.g. the buddy allocator) finds that free memory is below "low" but still above "min", the zone is under some pressure; once the allocation completes, kswapd is woken to reclaim memory. In this case the allocation triggers reclaim but is never blocked by it: the two run asynchronously.
How much must kswapd reclaim before it is done? It only needs to bring free memory back up to the "high" watermark. How much work that takes depends on the gap between current free memory and "high": the larger the gap, the more must be reclaimed. "low" can be thought of as a warning level, and "high" as a safe level.
If the allocator finds free memory below "min", the zone is critically short of memory. Two cases must be distinguished. The default behavior is that the allocator waits synchronously for memory reclaim to finish before allocating, i.e. direct reclaim. The special case is an allocation request carrying the PF_MEMALLOC flag: if current free memory can satisfy the request, the allocation happens first and reclaim afterwards.
2 Watermark-related structures
How does each zone record its watermark levels, and how are those levels read?

struct zone {
	/* Read-mostly fields */
	/*
	 * Watermark values WMARK_MIN/WMARK_LOW/WMARK_HIGH, used by the page
	 * allocator and by kswapd page reclaim; access them through the
	 * *_wmark_pages(zone) macros.
	 */
	unsigned long watermark[NR_WMARK];
	unsigned long nr_reserved_highatomic;	/* pages reserved within this zone */
	long lowmem_reserve[MAX_NR_ZONES];
	.........
}

#define min_wmark_pages(z) (z->watermark[WMARK_MIN])
#define low_wmark_pages(z) (z->watermark[WMARK_LOW])
#define high_wmark_pages(z) (z->watermark[WMARK_HIGH])

enum zone_watermarks {
	WMARK_MIN,
	WMARK_LOW,
	WMARK_HIGH,
	NR_WMARK
};
3 Watermark initialization
How are the three watermark levels of each zone computed?
Before walking through the calculation we need to understand what a few global values in the kernel mean.
3.1 Meaning of managed_pages, spanned_pages, and present_pages
/*
 * spanned_pages is the total pages spanned by the zone, including
 * holes, which is calculated as:
 * 	spanned_pages = zone_end_pfn - zone_start_pfn;
 *
 * present_pages is physical pages existing within the zone, which
 * is calculated as:
 * 	present_pages = spanned_pages - absent_pages(pages in holes);
 *
 * managed_pages is present pages managed by the buddy system, which
 * is calculated as (reserved_pages includes pages allocated by the
 * bootmem allocator):
 * 	managed_pages = present_pages - reserved_pages;
 */
unsigned long managed_pages;
unsigned long spanned_pages;
unsigned long present_pages;
- spanned_pages: all pages spanned by the zone, including holes; computed as zone_end_pfn - zone_start_pfn
- present_pages: physical pages actually present in the zone; computed as spanned_pages - hole_pages
- managed_pages: present pages managed by the buddy system; computed as present_pages - reserved_pages
- Their relationship: spanned_pages >= present_pages >= managed_pages
3.2 What is min_free_kbytes
min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.
From the above:
- min_free_kbytes is the lower bound on the free memory the system keeps in reserve
- watermark[WMARK_MIN] is derived from min_free_kbytes
The kernel initializes min_free_kbytes in init_per_zone_wmark_min. The value is clamped to a lower and upper bound: no lower than 128 KiB and no higher than 65536 KiB. In practice, a value of at least 1024 KiB is usually recommended.

/* Count the pages in ZONE_DMA and ZONE_NORMAL above the high watermark */
lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
min_free_kbytes = int_sqrt(lowmem_kbytes * 16);
3.3 Initialization of the min, low, and high watermark levels
3.3.1 Watermark initialization walkthrough
The watermarks of ZONE_HIGHMEM are a special case; since 64-bit systems no longer use ZONE_HIGHMEM we will not go into much detail here. The calculation method is shown in the figure below.
For non-ZONE_HIGHMEM zones, the min watermark level is derived from the computed min_free_kbytes, and the low and high levels are then derived from that min level. The full calculation is shown in the figure below.
ps: as the figure shows, the lowmem_pages calculation involves each zone's high watermark; but at this point in initialization no zone's high watermark has been set yet, so presumably every zone's high watermark is still 0, i.e. lowmem_pages = ZONE_NORMAL->managed_pages + ZONE_DMA->managed_pages (correctness to be verified).
3.3.2 Watermark initialization kernel code
First, the function flow chart.
The init_per_zone_wmark_min function
//mm/page_alloc.c
/*
 * Initialise min_free_kbytes.
 *
 * For small machines we want it small (128k min). For large machines
 * we want it large (64MB max). But it is not linear, because network
 * bandwidth does not increase linearly with machine size. We use
 *
 *	min_free_kbytes = 4 * sqrt(lowmem_kbytes), for better accuracy:
 *	min_free_kbytes = sqrt(lowmem_kbytes * 16)
 */
int __meminit init_per_zone_wmark_min(void)
{
	unsigned long lowmem_kbytes;
	int new_min_free_kbytes;

	/*
	 * nr_free_buffer_pages() returns the total number of pages above the
	 * high watermark in ZONE_DMA and ZONE_NORMAL:
	 * nr_free_buffer_pages = managed_pages - high_pages
	 */
	lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
	/* Compute new_min_free_kbytes as described in the comment above */
	new_min_free_kbytes = int_sqrt(lowmem_kbytes * 16);

	/*
	 * Pick the final min_free_kbytes (user_min_free_kbytes is the value
	 * the user set through the proc interface):
	 * 1. If new_min_free_kbytes > user_min_free_kbytes, use
	 *    new_min_free_kbytes, clamped to the range [128k, 65536k].
	 * 2. Otherwise keep user_min_free_kbytes.
	 */
	if (new_min_free_kbytes > user_min_free_kbytes) {
		min_free_kbytes = new_min_free_kbytes;
		if (min_free_kbytes < 128)
			min_free_kbytes = 128;
		if (min_free_kbytes > 65536)
			min_free_kbytes = 65536;
	} else {
		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
			new_min_free_kbytes, user_min_free_kbytes);
	}
	/* Derive each zone's min/low/high watermark from min_free_kbytes */
	setup_per_zone_wmarks();
	refresh_zone_stat_thresholds();
	setup_per_zone_lowmem_reserve();

#ifdef CONFIG_NUMA
	setup_min_unmapped_ratio();
	setup_min_slab_ratio();
#endif
	return 0;
}
core_initcall(init_per_zone_wmark_min)
The nr_free_buffer_pages function
/*
 * Count the pages in ZONE_DMA and ZONE_NORMAL above the high watermark;
 * at initialization time each zone's high watermark is still 0.
 */
unsigned long nr_free_buffer_pages(void)
{
	return nr_free_zone_pages(gfp_zone(GFP_USER));
}
EXPORT_SYMBOL_GPL(nr_free_buffer_pages);

/*
 * For each zone, add the pages above the high watermark to sum. Pages above
 * the high watermark are counted as managed_pages - watermark[HIGH]; summing
 * over the zones gives the system-wide total above the high watermarks.
 */
static unsigned long nr_free_zone_pages(int offset)
{
	struct zoneref *z;
	struct zone *zone;

	/* Just pick one node, since fallback list is circular */
	unsigned long sum = 0;
	struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL);

	for_each_zone_zonelist(zone, z, zonelist, offset) {
		unsigned long size = zone->managed_pages;
		unsigned long high = high_wmark_pages(zone);

		if (size > high)
			sum += size - high;
	}
	return sum;
}
The setup_per_zone_wmarks function
Computes the three watermark levels of every zone from min_free_kbytes.
ps: setup_per_zone_wmarks is called at watermark initialization, whenever min_free_kbytes changes, and on memory hotplug.

void setup_per_zone_wmarks(void)
{
	mutex_lock(&zonelists_mutex);
	__setup_per_zone_wmarks();
	mutex_unlock(&zonelists_mutex);
}

static void __setup_per_zone_wmarks(void)
{
	/* The minimum reserved memory, converted to pages */
	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
	/* Sum of managed_pages over every zone except ZONE_HIGHMEM */
	unsigned long lowmem_pages = 0;
	struct zone *zone;
	unsigned long flags;

	/* Calculate total number of !ZONE_HIGHMEM pages */
	for_each_zone(zone) {
		if (!is_highmem(zone))
			lowmem_pages += zone->managed_pages;
	}

	for_each_zone(zone) {
		u64 tmp;

		spin_lock_irqsave(&zone->lock, flags);
		tmp = (u64)pages_min * zone->managed_pages;
		do_div(tmp, lowmem_pages);
		if (is_highmem(zone)) {
			/* min watermark calculation for ZONE_HIGHMEM */
			/*
			 * __GFP_HIGH and PF_MEMALLOC allocations usually don't
			 * need highmem pages, so cap pages_min to a small
			 * value here.
			 *
			 * The WMARK_HIGH-WMARK_LOW and (WMARK_LOW-WMARK_MIN)
			 * deltas control asynch page reclaim, and so should
			 * not be capped for highmem.
			 */
			unsigned long min_pages;

			min_pages = zone->managed_pages / 1024;
			min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
			zone->watermark[WMARK_MIN] = min_pages;
		} else {
			/* min watermark calculation for non-highmem zones */
			/*
			 * If it's a lowmem zone, reserve a number of pages
			 * proportionate to the zone's size.
			 */
			zone->watermark[WMARK_MIN] = tmp;
		}

		/*
		 * Set the kswapd watermarks distance according to the
		 * scale factor in proportion to available memory, but
		 * ensure a minimum size on small systems.
		 */
		/*
		 * Derive the low and high levels from the zone's min level;
		 * note that watermark_scale_factor can be changed through the
		 * proc interface to control the ratio between min and
		 * low/high over a wider range.
		 */
		tmp = max_t(u64, tmp >> 2,
			    mult_frac(zone->managed_pages,
				      watermark_scale_factor, 10000));

		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;

		spin_unlock_irqrestore(&zone->lock, flags);
	}

	/* Update the total reserved page count, totalreserve_pages */
	calculate_totalreserve_pages();
}
The refresh_zone_stat_thresholds function
Both nodes and zones carry a per-CPU statistics structure.
Node:
typedef struct pglist_data {
...
	struct per_cpu_nodestat __percpu *per_cpu_nodestats;
...
} pg_data_t;

struct per_cpu_nodestat {
	s8 stat_threshold;
	s8 vm_node_stat_diff[NR_VM_NODE_STAT_ITEMS];
};
Zone:
struct zone {
...
	struct per_cpu_pageset __percpu *pageset;
...
}

struct per_cpu_pageset {
	struct per_cpu_pages pcp;
#ifdef CONFIG_NUMA
	s8 expire;
	u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
#endif
#ifdef CONFIG_SMP
	s8 stat_threshold;
	s8 vm_stat_diff[NR_VM_ZONE_STAT_ITEMS];
#endif
};
refresh_zone_stat_thresholds updates the stat_threshold field in both structures. The exact role of this field is not yet entirely clear to me; I will analyze it once I have fully digested the source.
refresh_zone_stat_thresholds also computes percpu_drift_mark, a value that will be needed later in the watermark checks.
Both stat_threshold and percpu_drift_mark are thresholds on memory usage; their main job is to act as trigger conditions for some behavior, such as memory compaction or migration.
The setup_per_zone_lowmem_reserve function
This function iterates over all nodes in the system and, for every zone of each node, computes the minimum reserved memory (lowmem_reserve).
What is a zone's lowmem_reserve?
In the Linux memory model a node is divided by attribute into several zones, e.g. ZONE_DMA and ZONE_NORMAL, plus ZONE_HIGHMEM on 32-bit systems. If the allocation's GFP flags impose no restriction, then when ZONE_HIGHMEM runs short of memory, allocations can fall back to the lower ZONE_NORMAL, and when ZONE_NORMAL runs short, to the still lower ZONE_DMA. "Short" here means the zone's free memory is below the size of the current allocation request. Every zone except the topmost ZONE_HIGHMEM sets aside a slice of memory for the zones above it; this slice is called the "lowmem reserve". If the watermarks are a zone's own private plot, the lowmem reserve is the back garden it keeps for the higher zones. In short, lowmem_reserve exists to prevent cross-zone fallback allocations from exhausting a lower zone's memory.
How lowmem_reserve[MAX_NR_ZONES] in struct zone is calculated
For ease of explanation, take a UMA system running a 32-bit arm kernel as an example.
This kernel has a single node containing three zones:
DMA_zone of size A
NORMAL_zone of size B
HIGHMEM_zone of size C
Enter the following command on the system:
# cat /proc/sys/vm/lowmem_reserve_ratio
256 32 0
- The DMA zone reserves B/256 for the NORMAL zone and (B+C)/256 for the HIGHMEM zone
- The NORMAL zone reserves C/32 for the HIGHMEM zone
- ZONE_HIGHMEM reserves nothing
struct zone records the memory it reserves for other zones, in pages, in its lowmem_reserve[MAX_NR_ZONES] member. Suppose DMA_zone, NORMAL_zone and HIGHMEM_zone are 16 MB, 784 MB and 200 MB, with a 4 KB page size; the three zones then contain 4096, 200704 and 51200 pages respectively.
- For DMA_zone
- DMA_zone->lowmem_reserve[ZONE_DMA] = 0
- DMA_zone->lowmem_reserve[ZONE_NORMAL] = 200704/256 = 784
- DMA_zone->lowmem_reserve[ZONE_HIGHMEM] = (200704+51200)/256 = 984
- For NORMAL_zone
- NORMAL_zone->lowmem_reserve[ZONE_DMA] = 0
- NORMAL_zone->lowmem_reserve[ZONE_NORMAL] = 0
- NORMAL_zone->lowmem_reserve[ZONE_HIGHMEM] = 51200/32 = 1600
- For HIGHMEM_zone
- HIGHMEM_zone->lowmem_reserve[ZONE_DMA] = 0
- HIGHMEM_zone->lowmem_reserve[ZONE_NORMAL] = 0
- HIGHMEM_zone->lowmem_reserve[ZONE_HIGHMEM] = 0
Viewing and controlling lowmem_reserve from user space
In a shell, the "protection" line of "cat /proc/zoneinfo" shows how many pages each zone reserves for the others.
/ # cat /proc/sys/vm/lowmem_reserve_ratio
256 32 0
//UMA, arm64
/ # cat /proc/zoneinfo
Node 0, zone    DMA32
  pages free     483593
        min      2507
        low      3133
        high     3759
        spanned  524288
        present  507392
        managed  484089
        protection: (0, 5958, 5958)
Node 0, zone   Normal
  pages free     1398441
        min      8756
        low      10945
        high     13134
        spanned  9435136
        present  1558528
        managed  1525492
        protection: (0, 0, 0)
Node 0, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0)
We can also write to /proc/sys/vm/lowmem_reserve_ratio to change lowmem_reserve_ratio, and thereby control the zones' lowmem_reserve values from user space.
/ # echo 100 10 0 >/proc/sys/vm/lowmem_reserve_ratio
/ # cat /proc/zoneinfo
Node 0, zone    DMA32
  pages free     483593
        min      2507
        low      3133
        high     3759
        spanned  524288
        present  507392
        managed  484089
        protection: (0, 15254, 15254)
Node 0, zone   Normal
  pages free     1398393
        min      8756
        low      10945
        high     13134
        spanned  9435136
        present  1558528
        managed  1525492
        protection: (0, 0, 0)
Node 0, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0)
/ #
setup_per_zone_lowmem_reserve kernel source
/*
 * results with 256, 32 in the lowmem_reserve sysctl:
 *	1G machine -> (16M dma, 800M-16M normal, 1G-800M high)
 *	1G machine -> (16M dma, 784M normal, 224M high)
 *	NORMAL allocation will leave 784M/256 of ram reserved in the ZONE_DMA
 *	HIGHMEM allocation will leave 224M/32 of ram reserved in ZONE_NORMAL
 *	HIGHMEM allocation will leave (224M+784M)/256 of ram reserved in ZONE_DMA
 *
 * TBD: should special case ZONE_DMA32 machines here - in those we normally
 * don't need any ZONE_NORMAL reservation
 */
int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES-1] = {
#ifdef CONFIG_ZONE_DMA
	256,
#endif
#ifdef CONFIG_ZONE_DMA32
	256,
#endif
#ifdef CONFIG_HIGHMEM
	32,
#endif
	32,
};

static void setup_per_zone_lowmem_reserve(void)
{
	struct pglist_data *pgdat;
	enum zone_type j, idx;

	/* Iterate over every node */
	for_each_online_pgdat(pgdat) {
		/*
		 * Iterate over every zone. The comments below assume three
		 * zone types: ZONE_DMA=0, ZONE_DMA32=1, ZONE_NORMAL=2.
		 */
		for (j = 0; j < MAX_NR_ZONES; j++) {
			struct zone *zone = pgdat->node_zones + j;
			/* total usable pages of the current zone */
			unsigned long managed_pages = zone->managed_pages;

			/* a zone reserves nothing for itself */
			zone->lowmem_reserve[j] = 0;
			idx = j;

			/*
			 * Walk the zones below the current one, filling in the
			 * matching lowmem_reserve slot of each lower zone:
			 * (1) j=0: loop not entered; no zone sits below
			 *     ZONE_DMA to reserve memory for it.
			 * (2) j=1: idx=0,
			 *     DMA_zone->lowmem_reserve[ZONE_DMA32] =
			 *         DMA32_zone->managed_pages /
			 *         sysctl_lowmem_reserve_ratio[ZONE_DMA]
			 * (3) j=2: idx=1,
			 *     DMA32_zone->lowmem_reserve[ZONE_NORMAL] =
			 *         NORMAL_zone->managed_pages /
			 *         sysctl_lowmem_reserve_ratio[ZONE_DMA32]
			 *     idx=0,
			 *     DMA_zone->lowmem_reserve[ZONE_NORMAL] =
			 *         (NORMAL_zone->managed_pages +
			 *          DMA32_zone->managed_pages) /
			 *         sysctl_lowmem_reserve_ratio[ZONE_DMA]
			 */
			while (idx) {
				struct zone *lower_zone;

				idx--;
				if (sysctl_lowmem_reserve_ratio[idx] < 1)
					sysctl_lowmem_reserve_ratio[idx] = 1;
				lower_zone = pgdat->node_zones + idx;
				lower_zone->lowmem_reserve[j] = managed_pages /
					sysctl_lowmem_reserve_ratio[idx];
				managed_pages += lower_zone->managed_pages;
			}
		}
	}

	/* Update totalreserve_pages once more */
	calculate_totalreserve_pages();
}
The calculate_totalreserve_pages function
After __setup_per_zone_wmarks has set each zone's watermark values it calls calculate_totalreserve_pages to update totalreserve_pages. setup_per_zone_lowmem_reserve sets each zone's lowmem_reserve values and then calls calculate_totalreserve_pages again to refresh totalreserve_pages.
/*
 * calculate_totalreserve_pages - called when sysctl_lowmem_reserve_ratio
 *	or min_free_kbytes changes.
 */
static void calculate_totalreserve_pages(void)
{
	struct pglist_data *pgdat;
	unsigned long reserve_pages = 0;
	enum zone_type i, j;

	/* Iterate over every node */
	for_each_online_pgdat(pgdat) {
		pgdat->totalreserve_pages = 0;
		/* Iterate over every zone */
		for (i = 0; i < MAX_NR_ZONES; i++) {
			struct zone *zone = pgdat->node_zones + i;
			long max = 0;

			/*
			 * Find valid and maximum lowmem_reserve in the zone.
			 * Each zone reserves some memory for the zones above
			 * it in its lowmem_reserve array; here we take the
			 * largest of those entries.
			 */
			for (j = i; j < MAX_NR_ZONES; j++) {
				if (zone->lowmem_reserve[j] > max)
					max = zone->lowmem_reserve[j];
			}

			/* we treat the high watermark as reserved pages. */
			/*
			 * The zone's high watermark plus its largest reserve
			 * is treated as the zone's runtime reserve threshold.
			 */
			max += high_wmark_pages(zone);
			if (max > zone->managed_pages)
				max = zone->managed_pages;
			pgdat->totalreserve_pages += max;
			reserve_pages += max;
		}
	}
	totalreserve_pages = reserve_pages;
}
4 Tuning watermark values from user space
From the preceding sections we know that computing each zone's min, low and high watermark levels involves two global variables: min_free_kbytes and watermark_scale_factor. min_free_kbytes directly determines the min level; watermark_scale_factor controls the ratio of the low and high levels to the min level.
We can therefore update every zone's watermark levels by changing min_free_kbytes and watermark_scale_factor. Linux exposes the two interfaces /proc/sys/vm/min_free_kbytes and /proc/sys/vm/watermark_scale_factor for viewing or changing these values from user space.
4.1 /proc/sys/vm/min_free_kbytes
/*
 * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
 *	that we can call two helper functions whenever min_free_kbytes
 *	changes.
 */
int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
	void __user *buffer, size_t *length, loff_t *ppos)
{
	int rc;

	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
	if (rc)
		return rc;

	if (write) {
		/* Recompute the min/low/high watermarks from the user's value */
		user_min_free_kbytes = min_free_kbytes;
		setup_per_zone_wmarks();
	}
	return 0;
}
At initialization min_free_kbytes is restricted to the range 128 KB to 65536 KB. On servers with 256 GB or even 1 TB of RAM, a cap of 64 MB is clearly far too small, rounding to practically nothing; through this proc interface and the handler above we can break out of that range limit.
4.2 /proc/sys/vm/watermark_scale_factor
4.2.1 How watermark_scale_factor adjusts the watermarks
Linux kernel 4.6 introduced a new way to adjust the watermarks: a coefficient called "watermark_scale_factor", whose default value of 10 corresponds to 0.1% of memory (10/10000). It can be set through "/proc/sys/vm/watermark_scale_factor", up to a maximum of 1000. When set to 1000, the distance from "min" to "low", and from "low" to "high", each becomes 10% of memory (1000/10000).
/* ZONE_HIGHMEM ignored */
unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
...
tmp = (u64)pages_min * zone->managed_pages;
do_div(tmp, lowmem_pages);
...
zone->watermark[WMARK_MIN] = tmp;
...
/*
 * Derive low and high from the zone's min watermark; note that
 * watermark_scale_factor can be changed through the proc interface to
 * control the ratio between min and low/high over a wider range.
 */
tmp = max_t(u64, tmp >> 2,
	    mult_frac(zone->managed_pages,
		      watermark_scale_factor, 10000));

zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
As the code shows, a zone's min watermark level is always governed by min_free_kbytes, while the low and high levels fall into two cases:
- If min_free_kbytes is relatively large, low and high are determined by min_free_kbytes, and the three levels satisfy min:low:high = 1:1.25:1.5
- If min_free_kbytes is relatively small, low and high are determined by watermark_scale_factor (the comparatively small min term being negligible):
- low ≈ zone->managed_pages * watermark_scale_factor/10000
- high ≈ 2*low
4.2.2 Why watermark_scale_factor was introduced
The kernel documentation describes this parameter as follows:

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.

Roughly, introducing watermark_scale_factor makes it easy to control from user space when the kernel reclaims memory: once free memory drops below the low watermark, kswapd wakes up and the kernel starts reclaiming; once free memory reaches or exceeds the high watermark, kswapd goes back to sleep and reclaim ends. watermark_scale_factor adjusts the low and high watermark levels proportionally.
The other reason for introducing watermark_scale_factor: on systems with a large amount of memory, ignoring the min watermark, a zone's low watermark becomes watermark_scale_factor ten-thousandths of the zone's total memory, and its high watermark roughly twice the low watermark. In effect watermark_scale_factor computes the low and high watermark levels as a proportion of memory rather than as an absolute size, which still yields sensible watermark values when total memory is very large.
4.2.3 watermark_scale_factor use cases
Scenario source: Tencent Cloud, 皮振伟's column
Before describing the scenarios, a word on the relationship between MemFree and MemAvailable. The output of cat /proc/meminfo contains both:
- MemFree is genuinely free memory: pages never used by the kernel or any process
- MemAvailable is computed in a more involved way; roughly it is free memory - reserved memory + part of the page cache + part of the slab. The reserved memory is almost exactly the high watermark mentioned above. So when the high watermark is large, a situation can arise where free memory is plentiful but available memory is not, leading to process OOMs.
Scenario 1: latency caused by heavy page cache use in storage services
Object storage and other storage workloads use a great deal of page cache, and Linux's memory policy is greedy: allocate as much as possible and reclaim only when short. With the default reclaim thresholds being low, on machines with large total memory the buddy system ends up short of large contiguous blocks. In certain paths, for example while TCP is sending data, the kernel needs a large contiguous block; it must first reclaim memory and run compaction before the send can continue, causing latency jitter. Raising watermark_scale_factor makes the kernel reclaim more aggressively and maintain more free memory, and service latency improves accordingly. The flip side to consider: a high watermark_scale_factor reduces available memory and makes OOM more likely.
Scenario 2: interference in mixed online/offline deployments
Typically, an online service answers network requests, and its disk IO is mostly a small amount of logging, producing little page cache. An offline job fetches data from remote nodes, computes, saves intermediate results, and returns the final result to a central node, producing a large amount of page cache. That page cache eats into free memory and pushes the online service's allocations into the slow path, hurting its latency. Here too watermark_scale_factor should be tuned appropriately: the idea is again to have the kernel hold a higher free-memory watermark, while still avoiding OOM.
5 Watermark checks
What is a zone watermark check? It is simply deciding whether the zone, subject to the watermark and related thresholds, is able to supply the requested amount of free memory.
Zone watermark checks rely mainly on three functions: zone_watermark_ok, zone_watermark_fast and zone_watermark_ok_safe; their call relationships are shown below.
zone_watermark_fast and zone_watermark_ok are called from get_page_from_freelist in the buddy free-memory allocation path described earlier. As the figure shows, all three check functions eventually call __zone_watermark_ok.
__zone_watermark_ok
This function first checks whether, after handing out an order-sized block, the zone's free_pages would still satisfy the threshold; if so, it then checks whether the zone's buddy system can actually supply a contiguous block of that order (free_pages may consist entirely of blocks smaller than order). If both checks pass, the function returns true and the watermark check succeeds. The details are in the code below.
bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
			 int classzone_idx, unsigned int alloc_flags,
			 long free_pages)
{
	/*
	 * Check against the requested mark; the caller obtained it as:
	 * mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK]
	 */
	long min = mark;
	int o;
	const bool alloc_harder = (alloc_flags & ALLOC_HARDER);

	/* free_pages may go negative - that's OK */
	/* Would the free pages left after handing out 2^order pages still
	 * satisfy the mark? */
	free_pages -= (1 << order) - 1;

	/*
	 * ALLOC_HIGH == __GFP_HIGH: a very urgent allocation, so lower the
	 * watermark threshold. Do not confuse it with __GFP_HIGHMEM, which
	 * means "allocate from the highmem zone".
	 */
	if (alloc_flags & ALLOC_HIGH)
		min -= min / 2;

	/*
	 * If the caller does not have rights to ALLOC_HARDER then subtract
	 * the high-atomic reserves. This will over-estimate the size of the
	 * atomic reserve but it avoids a search.
	 */
	/*
	 * Only ALLOC_HARDER allocations may allocate from
	 * free_list[MIGRATE_HIGHATOMIC]; otherwise subtract that reserve
	 * from the free pages.
	 */
	if (likely(!alloc_harder))
		free_pages -= z->nr_reserved_highatomic;
	else
		/* With ALLOC_HARDER, lower the threshold further */
		min -= min / 4;

#ifdef CONFIG_CMA
	/* If allocation can't use CMA areas don't use free CMA pages */
	if (!(alloc_flags & ALLOC_CMA))
		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/*
	 * If free pages fall to or below the reserved memory plus min, this
	 * request fails the watermark check.
	 */
	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
		return false;

	/* If this is an order-0 request then the watermark is fine */
	if (!order)
		return true;

	/* For a high-order request, check at least one suitable page is free */
	/*
	 * The watermark check has passed; now verify that the zone can really
	 * supply a contiguous block of this order: starting at 'order', walk
	 * the buddy system's free_area lists upward and look for a non-empty
	 * free list. Finding one means the zone can serve this allocation.
	 */
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];
		int mt;

		if (!area->nr_free)
			continue;

		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (!list_empty(&area->free_list[mt]))
				return true;
		}

#ifdef CONFIG_CMA
		if ((alloc_flags & ALLOC_CMA) &&
		    !list_empty(&area->free_list[MIGRATE_CMA])) {
			return true;
		}
#endif
		if (alloc_harder &&
		    !list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
			return true;
	}
	/* No block in this zone's free_area[] can satisfy an order-'order'
	 * allocation */
	return false;
}
zone_watermark_ok
bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
		       int classzone_idx, unsigned int alloc_flags)
{
	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
				   zone_page_state(z, NR_FREE_PAGES));
}
zone_watermark_ok simply calls straight through to __zone_watermark_ok.
zone_watermark_fast
static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
				       unsigned long mark, int classzone_idx,
				       unsigned int alloc_flags)
{
	long free_pages = zone_page_state(z, NR_FREE_PAGES);
	long cma_pages = 0;

#ifdef CONFIG_CMA
	/* ALLOC_CMA: allocating from the CMA area is allowed */
	/* If allocation can't use CMA areas don't use free CMA pages */
	if (!(alloc_flags & ALLOC_CMA))
		cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	/*
	 * Fast check for order-0 only. If this fails then the reserves
	 * need to be calculated. There is a corner case where the check
	 * passes but only the high-order atomic reserve are free. If
	 * the caller is !atomic then it'll uselessly search the free
	 * list. That corner case is then slower but it is harmless.
	 */
	if (!order && (free_pages - cma_pages) >
			mark + z->lowmem_reserve[classzone_idx])
		return true;

	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
				   free_pages);
}
Compared with __zone_watermark_ok, zone_watermark_fast is a fast-path check. The shortcut applies only to order-0 requests: if the quick comparison passes, the check succeeds immediately; otherwise it falls back to __zone_watermark_ok for the full decision.
zone_watermark_ok_safe
bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
			    unsigned long mark, int classzone_idx)
{
	long free_pages = zone_page_state(z, NR_FREE_PAGES);

	/*
	 * When the zone's free_pages drop below its percpu_drift_mark
	 * threshold, call zone_page_state_snapshot to obtain a more precise
	 * free page count.
	 */
	if (z->percpu_drift_mark && free_pages < z->percpu_drift_mark)
		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);

	return __zone_watermark_ok(z, order, mark, classzone_idx, 0,
				   free_pages);
}
Compared with zone_watermark_ok, the main difference in zone_watermark_ok_safe is that it computes free_pages more strictly and precisely. zone_watermark_ok obtains the zone's free page count as
long free_pages = zone_page_state(z, NR_FREE_PAGES);
whereas zone_watermark_ok_safe, when the free_pages computed by zone_page_state is below the zone's percpu_drift_mark threshold, additionally calls zone_page_state_snapshot to make free_pages more precise
free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
As discussed earlier, the zone maintains three fields for page statistics:
struct zone {
...
	struct per_cpu_pageset __percpu *pageset;
...
	/*
	 * When free pages are below this point, additional steps are taken
	 * when reading the number of free pages to avoid per-cpu counter
	 * drift allowing watermarks to be breached
	 */
	unsigned long percpu_drift_mark;
...
	/* Zone statistics */
	atomic_long_t vm_stat[NR_VM_ZONE_STAT_ITEMS];
}
The percpu_drift_mark field is set in refresh_zone_stat_thresholds during zone watermark initialization, as discussed above.
During memory management, when comparing free pages against a watermark threshold, the kernel must read both the zone's vm_stat and pageset members if it wants a truly precise free page count (free_pages). Doing this on every read would be slow, so the kernel gives each zone a percpu_drift_mark threshold: only when free_pages drops below percpu_drift_mark does the kernel call zone_page_state_snapshot for the more precise count.
So how do the free_pages values computed by zone_page_state_snapshot and zone_page_state actually differ?
static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);

#ifdef CONFIG_SMP
	int cpu;

	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}
As the code shows, zone_page_state_snapshot adds to the zone_page_state result each CPU's per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item] delta for this zone. The reason this improves the precision of free_pages lies in how the zone's vm_stat is updated.
When a zone's __percpu *pageset counters are updated, their values are folded into vm_stat[] only once a counter exceeds stat_threshold. So at any given moment vm_stat[item] may be missing the deltas still sitting below stat_threshold in the __percpu *pageset counters; zone_page_state_snapshot's job is to add them back in.
Then why does zone_page_state_snapshot fold in each CPU's per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item] only when free_pages is below the percpu_drift_mark threshold? My understanding is that when free_pages is above percpu_drift_mark, the per-CPU vm_stat_diff values are negligible relative to free_pages, so they can simply be skipped to improve performance.