A Deep Dive into Linux Kernel Interrupts and Exceptions (4): the Non-Maskable Interrupt (NMI), Floating-Point Exceptions, and SIMD



rtoax
March 2021

This article introduces the following traps:

//* External hardware asserts the non-maskable interrupt pin on the CPU.
//* The processor receives a message on the system bus or the APIC serial bus with a delivery mode `NMI`.
#define X86_TRAP_NMI	 2	/*  2, Non-maskable Interrupt: a serious problem */
/*
 * hardware interrupt
 *      exc_nmi : arch/x86/kernel/nmi.c
 */

#define X86_TRAP_BR	 5	/*  5, Bound Range Exceeded */
/*
 *      exc_bounds : arch/x86/kernel/traps.c
 */

#define X86_TRAP_MF	16	/* 16, x87 Floating-Point Exception */
/*
 *      exc_coprocessor_error : arch/x86/kernel/traps.c
 */

#define X86_TRAP_XF	19	/* 19, SIMD Floating-Point Exception */
/*
 *      exc_simd_coprocessor_error : arch/x86/kernel/traps.c
 *
 * `SSE` or `SSE2` or `SSE3` SIMD floating-point exception.
 * There are six classes of numeric exception conditions that
 * can occur while executing an SIMD floating-point instruction:
 *
 *      - Invalid operation
 *      - Divide-by-zero
 *      - Denormal operand
 *      - Numeric overflow
 *      - Numeric underflow
 *      - Inexact result (Precision)
 */

First, let's look at where X86_TRAP_NMI appears in the tree:

arch/x86/mm/extable.c:194:	if (trapnr == X86_TRAP_NMI)
arch/x86/kernel/idt.c:77:	INTG(X86_TRAP_NMI,	asm_exc_nmi),	// arch/x86/entry/entry_64.S
arch/x86/kernel/idt.c:238:	ISTG(X86_TRAP_NMI,	asm_exc_nmi,	IST_INDEX_NMI),	// arch/x86/entry/entry_64.S
arch/x86/platform/uv/uv_nmi.c:905:	ret = kgdb_nmicallin(cpu, X86_TRAP_NMI, regs, reason,
arch/x86/include/asm/trapnr.h:52:	#define X86_TRAP_NMI	 2	/* Non-maskable Interrupt: a serious problem */
arch/x86/include/asm/idtentry.h:594:	DECLARE_IDTENTRY_NMI(X86_TRAP_NMI,	exc_nmi);
arch/x86/include/asm/idtentry.h:596:	DECLARE_IDTENTRY_RAW(X86_TRAP_NMI,	xenpv_exc_nmi);

1. Non-maskable interrupt handler

This is the sixth part of the Interrupts and Interrupt Handling in the Linux kernel chapter. In the previous part we saw the implementation of some exception handlers: the General Protection Fault exception, the divide error, invalid opcode exceptions, and so on. As I wrote there, this part covers the implementations of the remaining exceptions. We will see the implementation of the following handlers:

  • Non-Maskable interrupt;
  • BOUND Range Exceeded Exception;
  • Coprocessor exception;
  • SIMD coprocessor exception.

in this part. So, let’s start.

2. Non-Maskable interrupt handling

A Non-Maskable Interrupt is a hardware interrupt that cannot be ignored by standard masking techniques. In general, a non-maskable interrupt can be generated in either of two ways:

  • **External hardware asserts** the non-maskable interrupt pin on the CPU.
  • The processor receives a message on the system bus or the APIC serial bus with a delivery mode NMI.

When the processor receives an NMI from one of these sources, it handles it immediately by calling the NMI handler installed at interrupt vector 2 (see the table in the first part).

#define X86_TRAP_NMI	 2

We already filled the Interrupt Descriptor Table with the vector number, address of the nmi interrupt handler and NMI_STACK Interrupt Stack Table entry:

set_intr_gate_ist(X86_TRAP_NMI, &nmi, NMI_STACK);

In 5.10.13:

static const __initconst struct idt_data def_idts[] = {	/* the default Interrupt Descriptor Table */
	...
	INTG(X86_TRAP_NMI,	asm_exc_nmi),	// arch/x86/entry/entry_64.S
	...
};
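For reference, the INTG and ISTG entries above are thin wrappers that build an idt_data record. Roughly, from arch/x86/kernel/idt.c in 5.10.13:

#define G(_vector, _addr, _ist, _type, _dpl, _segment)	\
	{						\
		.vector		= _vector,		\
		.bits.ist	= _ist,			\
		.bits.type	= _type,		\
		.bits.dpl	= _dpl,			\
		.bits.p		= 1,			\
		.addr		= _addr,		\
		.segment	= _segment,		\
	}

/* Interrupt gate on the default kernel stack */
#define INTG(_vector, _addr)				\
	G(_vector, _addr, DEFAULT_STACK, GATE_INTERRUPT, DPL0, __KERNEL_CS)

/* Interrupt gate on an IST stack; the descriptor's ist field starts at 1 */
#define ISTG(_vector, _addr, _ist)			\
	G(_vector, _addr, _ist + 1, GATE_INTERRUPT, DPL0, __KERNEL_CS)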

in the trap_init function, which is defined in the arch/x86/kernel/traps.c source code file. In the previous parts we saw that the entry points of all interrupt handlers are defined with the:

.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
ENTRY(\sym)
...
...
...
END(\sym)
.endm

macro from the arch/x86/entry/entry_64.S assembly source code file. But the handler of the Non-Maskable interrupt is not defined with this macro; it has its own entry point:

ENTRY(nmi)
...
...
...
END(nmi)

in the same arch/x86/entry/entry_64.S assembly file. Let's dive into it and try to understand how the Non-Maskable interrupt handler works. The nmi handler starts with a call of the:

PARAVIRT_ADJUST_EXCEPTION_FRAME

macro, but we will not dive into its details in this part, because it is related to paravirtualization, which we will see in another chapter. After this we save the content of the rdx register on the stack:

pushq	%rdx

Then we check whether cs was the kernel code segment when the non-maskable interrupt occurred:

cmpl	$__KERNEL_CS, 16(%rsp)
jne	first_nmi

The __KERNEL_CS macro is defined in arch/x86/include/asm/segment.h and represents the second descriptor in the Global Descriptor Table:

#define GDT_ENTRY_KERNEL_CS	2
#define __KERNEL_CS	(GDT_ENTRY_KERNEL_CS*8)

You can read more about the GDT in the second part of the Linux kernel booting process chapter. Note that __KERNEL_CS is 2 * 8 = 16, and after the pushq %rdx the saved cs sits at offset 16 from rsp, which is why the comparison reads 16(%rsp). If cs is not the kernel segment, the NMI is not nested and we jump to the first_nmi label. Let's consider this case. First of all we restore rdx from the stack and push 1 at the first_nmi label:

first_nmi:
	movq	(%rsp), %rdx
	pushq	$1

Why do we push 1 on the stack?

In 5.10.13 the value pushed is 0:

first_nmi:
	/* Restore rdx. */
	movq	(%rsp), %rdx

	/* Make room for "NMI executing". */
	pushq	$0

In short, all of this exists to cope with another NMI arriving while an NMI is already being handled.

As the comment says: we allow breakpoints in NMIs. On x86_64, as on other architectures, the CPU will not execute another NMI until the first NMI is complete. An NMI finishes with the iret instruction, like other interrupts and exceptions do. But the NMI handler may trigger a page fault, a breakpoint, or another exception, which also return with iret. If that happens while in NMI context, the CPU leaves NMI context and a new NMI may come in.

The iret used to return from those exceptions re-enables NMIs, so we can get nested non-maskable interrupts. The problem is that the NMI handler will not return to the state it was in when the exception triggered; instead it returns to a state that allows new NMIs to preempt the running NMI handler.

If another NMI comes in before the first NMI handler is complete, the new NMI will write all over the preempted NMI's stack. **We can have nested NMIs where the next NMI is using the top of the stack of the previous NMI. We cannot allow that, because a nested non-maskable interrupt would corrupt the stack of the previous non-maskable interrupt.**


That's why we allocated space on the stack for a temporary variable. We check this variable to see whether a previous NMI is executing, and clear it when the NMI is not nested. We push 1 into the previously allocated slot to denote that a non-maskable interrupt is currently executing. Remember that when an NMI or another exception occurs, we have the following stack frame:

+------------------------+
|         SS             |
|         RSP            |
|        RFLAGS          |
|         CS             |
|         RIP            |
+------------------------+

and also an error code if an exception has it. So, after all of these manipulations our stack frame will look like this:

+------------------------+
|         SS             |
|         RSP            |
|        RFLAGS          |
|         CS             |
|         RIP            |
|         RDX            |
|          1             |
+------------------------+

In the next step we allocate yet another 40 bytes on the stack:

subq	$(5*8), %rsp

and push a copy of the original stack frame after the allocated space:

.rept 5
pushq	11*8(%rsp)
.endr

with the .rept assembly directive. We need a copy of the original stack frame; in fact, we need two copies of the interrupt stack:

  • The first is the saved stack frame: we push the original stack frame into the save area that sits after the just-allocated 40 bytes (the copied stack frame). This copy is used to fix up the copied stack frame that a nested NMI may change.
  • The second is the copied stack frame, which any nested NMI may modify to let the first NMI know that a second NMI was triggered and that the first NMI handler should be repeated. OK, we have made the first copy of the original stack frame; now it is time to make the second one:
addq	$(10*8), %rsp
.rept 5
pushq	-6*8(%rsp)
.endr
subq	$(5*8), %rsp

After all of these manipulations, our stack frame will look like this:

+-------------------------+
| original SS             |
| original Return RSP     |
| original RFLAGS         |
| original CS             |
| original RIP            |
+-------------------------+
| temp storage for rdx    |
+-------------------------+
| NMI executing variable  |
+-------------------------+
| copied SS               |
| copied Return RSP       |
| copied RFLAGS           |
| copied CS               |
| copied RIP              |
+-------------------------+
| Saved SS                |
| Saved Return RSP        |
| Saved RFLAGS            |
| Saved CS                |
| Saved RIP               |
+-------------------------+

After this we push a dummy error code on the stack, as we already did in the previous exception handlers, and allocate space for the general purpose registers:

pushq	$-1
ALLOC_PT_GPREGS_ON_STACK

We already saw the implementation of the ALLOC_PT_GPREGS_ON_STACK macro in the third part of the interrupts chapter. This macro is defined in arch/x86/entry/calling.h and allocates another 120 bytes on the stack for the general purpose registers, from rdi to r15:

.macro ALLOC_PT_GPREGS_ON_STACK addskip=0
addq	$-(15*8+\addskip), %rsp
.endm

After allocating space for the general registers, we see a call of paranoid_entry:

call	paranoid_entry

We remember this label from the previous parts. It pushes the general purpose registers on the stack, reads the MSR_GS_BASE model-specific register, and checks its value. If the value of MSR_GS_BASE is negative, we came from kernel mode and just return from paranoid_entry; otherwise we came from user mode and need to execute the swapgs instruction, which exchanges the user gs with the kernel gs:

ENTRY(paranoid_entry)
	cld
	SAVE_C_REGS 8
	SAVE_EXTRA_REGS 8
	movl	$1, %ebx
	movl	$MSR_GS_BASE, %ecx
	rdmsr
	testl	%edx, %edx
	js	1f
	SWAPGS
	xorl	%ebx, %ebx
1:	ret
END(paranoid_entry)

Note that after the swapgs instruction we zeroed the ebx register.

Later we will check the content of this register: if we executed swapgs, ebx contains 0; otherwise it contains 1. In the next step we store the value of the cr2 control register in the r12 register, because the NMI handler may cause a page fault and corrupt the value of this control register:

movq	%cr2, %r12

Now it is time to call the actual NMI handler. We put the address of the pt_regs into rdi, the error code into rsi, and call the do_nmi handler:

movq	%rsp, %rdi
movq	$-1, %rsi
call	do_nmi

In 5.10.13:

call	exc_nmi

We will come back to do_nmi a little later in this part, but first let's look at what happens after do_nmi finishes its execution.

After the do_nmi handler finishes, we check the cr2 register, because a page fault may have occurred while do_nmi ran; if so, we restore the original cr2, otherwise we jump to label 1. After this we test the content of the ebx register (remember, it contains 0 if we used the swapgs instruction and 1 if we didn't) and execute SWAPGS_UNSAFE_STACK if it contains 0, or jump to the nmi_restore label otherwise.

The SWAPGS_UNSAFE_STACK macro just expands to the swapgs instruction.

In the nmi_restore label we restore the general purpose registers, release the stack space allocated for them, clear our temporary variable, and exit from the interrupt handler with the INTERRUPT_RETURN macro:

	movq	%cr2, %rcx
	cmpq	%rcx, %r12
	je	1f
	movq	%r12, %cr2
1:
	testl	%ebx, %ebx
	jnz	nmi_restore
nmi_swapgs:
	SWAPGS_UNSAFE_STACK
nmi_restore:
	RESTORE_EXTRA_REGS
	RESTORE_C_REGS
	/* Pop the extra iret frame at once */
	REMOVE_PT_GPREGS_FROM_STACK 6*8
	/* Clear the NMI executing stack variable */
	movq	$0, 5*8(%rsp)
	INTERRUPT_RETURN

In 5.10.13 it looks like this:

nmi_restore:	// EBX contains `0`
	POP_REGS

	/*
	 * Skip orig_ax and the "outermost" frame to point RSP at the "iret"
	 * frame.
	 */
	addq	$6*8, %rsp

	/*
	 * Clear "NMI executing".  Set DF first so that we can easily
	 * distinguish the remaining code between here and IRET from
	 * the SYSCALL entry and exit paths.
	 *
	 * We arguably should just inspect RIP instead, but I (Andy) wrote
	 * this code when I had the misapprehension that Xen PV supported
	 * NMIs, and Xen PV would break that approach.
	 */
	std
	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */

	/*
	 * iretq reads the "iret" frame and exits the NMI stack in a
	 * single instruction.  We are returning to kernel mode, so this
	 * cannot result in a fault.  Similarly, we don't need to worry
	 * about espfix64 on the way back to kernel mode.
	 */
	iretq

where INTERRUPT_RETURN is defined in the arch/x86/include/asm/irqflags.h and just expands to the iret instruction. That’s all.

What happens if another NMI occurs before the first one has finished?

Now let's consider the case where another NMI occurs while a previous NMI has not finished executing. Recall from the beginning of this part that we check whether we came from userspace and jump to first_nmi in that case:

cmpl	$__KERNEL_CS, 16(%rsp)
jne	first_nmi

Note that in this case it is always a first NMI, because if the first NMI caught a page fault, breakpoint, or another exception, it would be executing in kernel mode. If we did not come from userspace, first of all we test our temporary variable:

cmpl	$1, -8(%rsp)
je	nested_nmi
	/* This is a nested NMI. */
nested_nmi:	// nested NMI handling begins
	/*
	 * Modify the "iret" frame to point to repeat_nmi, forcing another
	 * iteration of NMI handling.
	 */
	subq	$8, %rsp
	leaq	-10*8(%rsp), %rdx
	pushq	$__KERNEL_DS
	pushq	%rdx
	pushfq
	pushq	$__KERNEL_CS
	pushq	$repeat_nmi

	/* Put stack back */
	addq	$(6*8), %rsp

nested_nmi_out:	// nested NMI handling ends

and if it is set to 1 we jump to the nested_nmi label. If it is not 1, we test the IST stack: if we are executing above repeat_nmi we ignore the NMI; otherwise we check whether we are above end_repeat_nmi and, if so, jump to the nested_nmi_out label.

Now let's look at the do_nmi exception handler. This function is defined in the arch/x86/kernel/nmi.c source code file and takes two parameters:

  • address of the pt_regs;
  • error code.

as all exception handlers do.

In 5.10.13 this is clearly no longer the case: some exception handlers take an error code and some do not.
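In 5.10.13 this distinction is captured by the DEFINE_IDTENTRY* macro family in arch/x86/include/asm/idtentry.h. Roughly, the plain variant expands as below; DEFINE_IDTENTRY_ERRORCODE has the same shape, but its handler also receives an unsigned long error_code:

#define DEFINE_IDTENTRY(func)						\
static __always_inline void __##func(struct pt_regs *regs);		\
									\
__visible noinstr void func(struct pt_regs *regs)			\
{									\
	irqentry_state_t state = irqentry_enter(regs);			\
									\
	instrumentation_begin();					\
	__##func (regs);						\
	instrumentation_end();						\
	irqentry_exit(regs, state);					\
}									\
									\
static __always_inline void __##func(struct pt_regs *regs)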

The do_nmi starts with a call of the nmi_nesting_preprocess function and ends with a call of nmi_nesting_postprocess. The nmi_nesting_preprocess function checks whether we are on the debug stack; if we are, it sets the update_debug_stack per-cpu variable to 1 and calls the debug_stack_set_zero function from arch/x86/kernel/cpu/common.c. That function increments the debug_stack_use_ctr per-cpu variable and loads a new Interrupt Descriptor Table:

static inline void nmi_nesting_preprocess(struct pt_regs *regs)
{
	if (unlikely(is_debug_stack(regs->sp))) {
		debug_stack_set_zero();
		this_cpu_write(update_debug_stack, 1);
	}
}

The nmi_nesting_postprocess function checks the update_debug_stack per-cpu variable that we set in nmi_nesting_preprocess and, if it is set, resets the debug stack, in other words loads the original Interrupt Descriptor Table. After the call of nmi_nesting_preprocess, we see the call of nmi_enter in do_nmi. nmi_enter increases the lockdep_recursion field of the interrupted process, updates the preempt counter, and informs the RCU subsystem about the NMI. There is also an nmi_exit function that does the same as nmi_enter, but in reverse. After nmi_enter we increment __nmi_count in the irq_stat structure and call the default_do_nmi function. First of all, in default_do_nmi we check the address of the previous nmi and update the address of the last nmi to the current one:

if (regs->ip == __this_cpu_read(last_nmi_rip))
	b2b = true;
else
	__this_cpu_write(swallow_nmi, false);

__this_cpu_write(last_nmi_rip, regs->ip);

After this, we first need to handle CPU-specific NMIs:

handled = nmi_handle(NMI_LOCAL, regs, b2b);
__this_cpu_add(nmi_stats.normal, handled);

And then non-specific NMIs, depending on their reason:

reason = x86_platform.get_nmi_reason();
if (reason & NMI_REASON_MASK) {
	if (reason & NMI_REASON_SERR)
		pci_serr_error(reason, regs);
	else if (reason & NMI_REASON_IOCHK)
		io_check_error(reason, regs);

	__this_cpu_add(nmi_stats.external, 1);
	return;
}

In 5.10.13 the handler is no longer called do_nmi; it is the following function:

void exc_nmi(struct pt_regs *regs){/* added by the author to show the effective signature */}
DEFINE_IDTENTRY_RAW(exc_nmi)
{
	bool irq_state;

	/*
	 * Re-enable NMIs right here when running as an SEV-ES guest. This might
	 * cause nested NMIs, but those can be handled safely.
	 */
	sev_es_nmi_complete();

	if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
		return;

	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
		this_cpu_write(nmi_state, NMI_LATCHED);
		return;
	}
	this_cpu_write(nmi_state, NMI_EXECUTING);
	this_cpu_write(nmi_cr2, read_cr2());

nmi_restart:
	/*
	 * Needs to happen before DR7 is accessed, because the hypervisor can
	 * intercept DR7 reads/writes, turning those into #VC exceptions.
	 */
	sev_es_ist_enter(regs);

	this_cpu_write(nmi_dr7, local_db_save());

	irq_state = idtentry_enter_nmi(regs);

	inc_irq_stat(__nmi_count);

	if (!ignore_nmis)
		default_do_nmi(regs);

	idtentry_exit_nmi(regs, irq_state);

	local_db_restore(this_cpu_read(nmi_dr7));

	sev_es_ist_exit();

	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
		write_cr2(this_cpu_read(nmi_cr2));

	if (this_cpu_dec_return(nmi_state))
		goto nmi_restart;

	if (user_mode(regs))
		mds_user_clear_cpu_buffers();
}

Drastically simplified, it becomes:

void exc_nmi(struct pt_regs *regs)
{
	if (!ignore_nmis)
		default_do_nmi(regs);
}

And default_do_nmi, drastically simplified:

static noinstr void default_do_nmi(struct pt_regs *regs)
{
	nmi_handle(NMI_LOCAL, regs);
}

This involves the following data structure:

struct nmiaction {
	struct list_head	list;
	nmi_handler_t		handler;
	u64			max_duration;
	unsigned long		flags;
	const char		*name;
};

nmiaction structures are chained into a doubly linked list whose head lives in the following structure:

struct nmi_desc {	/* head of the list of NMI handler descriptors */
	raw_spinlock_t lock;
	struct list_head head;	/* struct nmiaction->list */
};
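There is one such list per NMI type (NMI_LOCAL, NMI_UNKNOWN, NMI_SERR, NMI_IO_CHECK), and nmi_to_desc simply indexes a static array. Roughly, from arch/x86/kernel/nmi.c (the middle entries are elided here):

static struct nmi_desc nmi_desc[NMI_MAX] =
{
	{
		.lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[0].lock),
		.head = LIST_HEAD_INIT(nmi_desc[0].head),
	},
	/* ... one entry per NMI type, NMI_LOCAL through NMI_IO_CHECK ... */
};

#define nmi_to_desc(type) (&nmi_desc[type])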

All nmi_handle does is walk this list and invoke each registered callback:

static int nmi_handle(unsigned int type, struct pt_regs *regs)
{
	struct nmi_desc *desc = nmi_to_desc(type);
	struct nmiaction *a;
	int handled = 0;

	rcu_read_lock();

	/*
	 * NMIs are edge-triggered, which means if you have enough
	 * of them concurrently, you can lose some because only one
	 * can be latched at any given time.  Walk the whole list
	 * to handle those situations.
	 */
	list_for_each_entry_rcu(a, &desc->head, list) {
		int thishandled;
		u64 delta;

		delta = sched_clock();
		thishandled = a->handler(type, regs);
		handled += thishandled;
		delta = sched_clock() - delta;
		trace_nmi_handler(a->handler, (int)delta, thishandled);

		nmi_check_duration(a, delta);
	}
	rcu_read_unlock();

	/* return total number of NMI events handled */
	return handled;
}
NOKPROBE_SYMBOL(nmi_handle);
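Handlers get onto this list through register_nmi_handler and are removed with unregister_nmi_handler, both from arch/x86/include/asm/nmi.h. Here is a minimal sketch of a module hooking NMI_LOCAL; the handler function and the name "my_nmi" are illustrative, not kernel code:

#include <linux/module.h>
#include <asm/nmi.h>

/* Illustrative callback: claim the NMI only if our device raised it. */
static int my_nmi_handler(unsigned int type, struct pt_regs *regs)
{
	/* Inspect hardware state here; return NMI_HANDLED if it was ours. */
	return NMI_DONE;	/* not ours: let the other handlers run */
}

static int __init my_nmi_init(void)
{
	/* Adds an nmiaction to the NMI_LOCAL nmi_desc list shown above. */
	return register_nmi_handler(NMI_LOCAL, my_nmi_handler, 0, "my_nmi");
}

static void __exit my_nmi_exit(void)
{
	unregister_nmi_handler(NMI_LOCAL, "my_nmi");
}

module_init(my_nmi_init);
module_exit(my_nmi_exit);
MODULE_LICENSE("GPL");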

That’s all.

3. Range Exceeded Exception


The next exception is the BOUND range exceeded exception. The BOUND instruction determines whether the first operand (an array index) is within the bounds of an array specified by the second operand (the bounds operand). If the index is not within bounds, a BOUND range exceeded exception, or #BR, is raised. The handler of the #BR exception is the do_bounds function, defined in arch/x86/kernel/traps.c.
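Conceptually, BOUND performs the following check; this is a C sketch of the CPU's behavior, not kernel code, and bound_check and the trap stand-in are hypothetical:

#include <stdint.h>

/* The memory operand of the 32-bit BOUND instruction: a pair of signed bounds. */
struct bnd_pair {
	int32_t lower;
	int32_t upper;
};

/* What `bound %eax, mem` does, expressed in C: raise #BR (vector 5,
 * X86_TRAP_BR) when the index falls outside [lower, upper]. */
static void bound_check(int32_t index, const struct bnd_pair *b)
{
	if (index < b->lower || index > b->upper)
		__builtin_trap();	/* stands in for the CPU raising #BR */
}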

In 5.10.13:

void exc_bounds(struct pt_regs *regs){/* added by the author to show the effective signature */}
DEFINE_IDTENTRY(exc_bounds)
{
	if (notify_die(DIE_TRAP, "bounds", regs, 0,
		       X86_TRAP_BR, SIGSEGV) == NOTIFY_STOP)
		return;
	cond_local_irq_enable(regs);

	if (!user_mode(regs))
		die("bounds", regs, 0);

	do_trap(X86_TRAP_BR, SIGSEGV, "bounds", regs, 0, 0, NULL);

	cond_local_irq_disable(regs);
}

The do_bounds handler starts with the call of the exception_enter function and ends with the call of the exception_exit:

prev_state = exception_enter();

if (notify_die(DIE_TRAP, "bounds", regs, error_code,
	       X86_TRAP_BR, SIGSEGV) == NOTIFY_STOP)
	goto exit;
...
...
...
exception_exit(prev_state);
return;

After we have got the state of the previous context, we add the exception to the notify_die chain, and if it returns NOTIFY_STOP we return from the exception. You can read more about notify chains and the context tracking functions in the previous part. In the next step we enable interrupts, if they were disabled, with the conditional_sti function, which checks the IF flag and calls local_irq_enable depending on its value:

conditional_sti(regs);

if (!user_mode(regs))
	die("bounds", regs, error_code);

If we did not come from user mode, we call the die function. After this we check whether MPX is enabled or not; if the feature is disabled, we jump to the exit_trap label:

if (!cpu_feature_enabled(X86_FEATURE_MPX)) {
	goto exit_trap;
}

where we execute the do_trap function (you can find more about it in the previous part):

exit_trap:
	do_trap(X86_TRAP_BR, SIGSEGV, "bounds", regs, error_code, NULL);
	exception_exit(prev_state);

If the MPX feature is enabled, we check BNDSTATUS with the get_xsave_field_ptr function; if it is zero, it means MPX was not responsible for this exception:

bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
if (!bndcsr)
	goto exit_trap;

After all of this, only one case remains in which MPX is responsible for this exception. We will not dive into the details of Intel Memory Protection Extensions in this part; that belongs to another chapter.

In 5.10.13, the handler calls do_trap, and do_trap carries out the series of operations described above.

4. Coprocessor exception and SIMD exception

The next two exceptions are the x87 FPU Floating-Point Error exception, or #MF, and the SIMD Floating-Point Exception, or #XF. The first occurs when the x87 FPU has detected a floating-point error, for example divide by zero, numeric overflow, and so on. The second occurs when the processor has detected an SSE/SSE2/SSE3 SIMD floating-point exception; the exception classes are the same as for the x87 FPU. The handlers for these exceptions, do_coprocessor_error and do_simd_coprocessor_error, are defined in arch/x86/kernel/traps.c and are very similar to each other. Both call the math_error function from the same source file, but pass different vector numbers. do_coprocessor_error passes the X86_TRAP_MF vector number to math_error:

dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code)
{
	enum ctx_state prev_state;

	prev_state = exception_enter();
	math_error(regs, error_code, X86_TRAP_MF);
	exception_exit(prev_state);
}

and do_simd_coprocessor_error passes X86_TRAP_XF to the math_error function:

dotraplinkage void
do_simd_coprocessor_error(struct pt_regs *regs, long error_code)
{
	enum ctx_state prev_state;

	prev_state = exception_enter();
	math_error(regs, error_code, X86_TRAP_XF);
	exception_exit(prev_state);
}

In 5.10.13:

void exc_coprocessor_error(struct pt_regs *regs){/* added by the author to show the effective signature */}
DEFINE_IDTENTRY(exc_coprocessor_error)
{
	math_error(regs, X86_TRAP_MF);
}

void exc_simd_coprocessor_error(struct pt_regs *regs){/* added by the author to show the effective signature */}
DEFINE_IDTENTRY(exc_simd_coprocessor_error)
{
	if (IS_ENABLED(CONFIG_X86_INVD_BUG)) {
		/* AMD 486 bug: INVD in CPL 0 raises #XF instead of #GP */
		if (!static_cpu_has(X86_FEATURE_XMM)) {
			__exc_general_protection(regs, 0);
			return;
		}
	}
	math_error(regs, X86_TRAP_XF);
}

First of all, the math_error function gets the currently interrupted task and the address of its fpu, builds a string describing the exception, adds the exception to the notify_die chain, and returns from the exception handler if that chain returns NOTIFY_STOP:

	struct task_struct *task = current;
	struct fpu *fpu = &task->thread.fpu;
	siginfo_t info;
	char *str = (trapnr == X86_TRAP_MF) ? "fpu exception" :
					      "simd exception";

	if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, SIGFPE) == NOTIFY_STOP)
		return;

After this we check whether we came from kernel mode; if so, we try to fix the exception with the fixup_exception function. If we cannot, we fill the task with the exception's error code and vector number and die:

if (!user_mode(regs)) {
	if (!fixup_exception(regs)) {
		task->thread.error_code = error_code;
		task->thread.trap_nr = trapnr;
		die(str, regs, error_code);
	}
	return;
}

If we came from user mode, we save the fpu state, fill the task structure with the vector number of the exception, and fill the siginfo_t with the signal number, errno, the address where the exception occurred, and the signal code:

fpu__save(fpu);

task->thread.trap_nr	= trapnr;
task->thread.error_code = error_code;
info.si_signo		= SIGFPE;
info.si_errno		= 0;
info.si_addr		= (void __user *)uprobe_get_trap_addr(regs);
info.si_code = fpu__exception_code(fpu, trapnr);
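fpu__exception_code translates the FPU status bits into a POSIX FPE_* code. A simplified sketch of its X86_TRAP_MF branch (the helper name x87_exception_code is mine; the real function also handles the MXCSR-based #XF case):

/*
 * swd is the x87 status word, cwd the control word.  Exception bits
 * masked in cwd are filtered out before choosing a signal code.
 */
static int x87_exception_code(unsigned short cwd, unsigned short swd)
{
	unsigned short err = swd & ~cwd;	/* only unmasked exceptions */

	if (err & 0x001)		/* Invalid operation */
		return FPE_FLTINV;
	else if (err & 0x004)		/* Divide by zero */
		return FPE_FLTDIV;
	else if (err & 0x008)		/* Overflow */
		return FPE_FLTOVF;
	else if (err & 0x012)		/* Denormal operand or underflow */
		return FPE_FLTUND;
	else if (err & 0x020)		/* Precision (inexact result) */
		return FPE_FLTRES;

	return 0;	/* spurious exception */
}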

After this we check the signal code; if it is zero, the exception was spurious and we simply return:

if (!info.si_code)
	return;

Otherwise, we send the SIGFPE signal at the end:

force_sig_info(SIGFPE, &info, task);
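To watch this path from user space, a small program can unmask the divide-by-zero exception so the SSE divider raises #XF and the kernel delivers SIGFPE with si_code == FPE_FLTDIV. feenableexcept is a glibc extension; compile with gcc fpe.c -o fpe -lm:

#define _GNU_SOURCE
#include <fenv.h>
#include <stdio.h>

int main(void)
{
	volatile double x = 1.0, y = 0.0;

	/* Unmask the divide-by-zero exception in MXCSR (and the x87 CW). */
	feenableexcept(FE_DIVBYZERO);

	/* On x86_64 this division uses SSE, so it raises #XF; the kernel
	 * turns it into a SIGFPE for this process. */
	printf("%f\n", x / y);
	return 0;
}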

That’s all.

5. Conclusion

This is the end of the sixth part of the Interrupts and Interrupt Handling chapter; we saw the implementation of several exception handlers in this part: the non-maskable interrupt and the SIMD and x87 FPU floating-point exceptions. With this we have finally finished with the trap_init function and will move ahead in the next part. Our next stop is external interrupts and the early_irq_init function from init/main.c.

If you have any questions or suggestions write me a comment or ping me at twitter.

Please note that English is not my first language, and I am really sorry for any inconvenience. If you find any mistakes, please send me a PR to linux-insides.

6. Links

  • General Protection Fault
  • opcode
  • Non-Maskable
  • BOUND instruction
  • CPU socket
  • Interrupt Descriptor Table
  • Interrupt Stack Table
  • Paravirtualization
  • .rept
  • SIMD
  • Coprocessor
  • x86_64
  • iret
  • page fault
  • breakpoint
  • Global Descriptor Table
  • stack frame
  • Model Specific register
  • percpu
  • RCU
  • MPX
  • x87 FPU
  • Previous part
